Feb 16 16:58:15.500655 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 16 16:58:16.074488 master-0 kubenswrapper[4155]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 16:58:16.074488 master-0 kubenswrapper[4155]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 16 16:58:16.074488 master-0 kubenswrapper[4155]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 16:58:16.074488 master-0 kubenswrapper[4155]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 16:58:16.074488 master-0 kubenswrapper[4155]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 16 16:58:16.074488 master-0 kubenswrapper[4155]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 16:58:16.467795 master-0 kubenswrapper[4155]: I0216 16:58:16.467398 4155 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 16 16:58:16.472849 master-0 kubenswrapper[4155]: W0216 16:58:16.472790 4155 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 16:58:16.472849 master-0 kubenswrapper[4155]: W0216 16:58:16.472824 4155 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 16:58:16.472849 master-0 kubenswrapper[4155]: W0216 16:58:16.472837 4155 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:58:16.472849 master-0 kubenswrapper[4155]: W0216 16:58:16.472849 4155 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:58:16.472849 master-0 kubenswrapper[4155]: W0216 16:58:16.472859 4155 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.472874 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.472888 4155 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.472902 4155 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.472913 4155 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.472955 4155 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.472967 4155 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.472978 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.472989 4155 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.472998 4155 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473008 4155 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473017 4155 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473042 4155 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473053 4155 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473094 4155 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473106 4155 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473116 4155 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473125 4155 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473144 4155 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473155 4155 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 16:58:16.473322 master-0 kubenswrapper[4155]: W0216 16:58:16.473165 4155 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473175 4155 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473184 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473195 4155 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473206 4155 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473216 4155 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473231 4155 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473242 4155 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473253 4155 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473264 4155 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473273 4155 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473284 4155 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473294 4155 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473305 4155 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473322 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473332 4155 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473343 4155 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473352 4155 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473361 4155 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473371 4155 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:58:16.474491 master-0 kubenswrapper[4155]: W0216 16:58:16.473386 4155 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473399 4155 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473410 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473420 4155 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473431 4155 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473444 4155 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473456 4155 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473467 4155 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473477 4155 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473488 4155 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473498 4155 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473544 4155 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473555 4155 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473564 4155 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473574 4155 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473588 4155 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473600 4155 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473610 4155 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473620 4155 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:58:16.475573 master-0 kubenswrapper[4155]: W0216 16:58:16.473630 4155 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: W0216 16:58:16.473641 4155 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: W0216 16:58:16.473651 4155 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: W0216 16:58:16.473661 4155 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: W0216 16:58:16.473672 4155 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: W0216 16:58:16.473682 4155 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: W0216 16:58:16.473692 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: W0216 16:58:16.473707 4155 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: W0216 16:58:16.473717 4155 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.474852 4155 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.474893 4155 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.474912 4155 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.474966 4155 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.474983 4155 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.474995 4155 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.475010 4155 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.475024 4155 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.475037 4155 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.475049 4155 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.475061 4155 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.475075 4155 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.475088 4155 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 16:58:16.476667 master-0 kubenswrapper[4155]: I0216 16:58:16.475099 4155 flags.go:64] FLAG: --cgroup-root=""
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.475112 4155 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.475123 4155 flags.go:64] FLAG: --client-ca-file=""
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.475135 4155 flags.go:64] FLAG: --cloud-config=""
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.475146 4155 flags.go:64] FLAG: --cloud-provider=""
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.475157 4155 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476742 4155 flags.go:64] FLAG: --cluster-domain=""
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476758 4155 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476771 4155 flags.go:64] FLAG: --config-dir=""
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476783 4155 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476796 4155 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476812 4155 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476824 4155 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476836 4155 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476848 4155 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476860 4155 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476872 4155 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476884 4155 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476897 4155 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476910 4155 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476961 4155 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476975 4155 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.476987 4155 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.477002 4155 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.477014 4155 flags.go:64] FLAG: --enable-server="true"
Feb 16 16:58:16.478235 master-0 kubenswrapper[4155]: I0216 16:58:16.477025 4155 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477042 4155 flags.go:64] FLAG: --event-burst="100"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477055 4155 flags.go:64] FLAG: --event-qps="50"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477066 4155 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477078 4155 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477089 4155 flags.go:64] FLAG: --eviction-hard=""
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477104 4155 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477117 4155 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477128 4155 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477149 4155 flags.go:64] FLAG: --eviction-soft=""
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477160 4155 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477172 4155 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477184 4155 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477196 4155 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477207 4155 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477218 4155 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477229 4155 flags.go:64] FLAG: --feature-gates=""
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477244 4155 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477256 4155 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477268 4155 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477280 4155 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477292 4155 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477304 4155 flags.go:64] FLAG: --help="false"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477316 4155 flags.go:64] FLAG: --hostname-override=""
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477327 4155 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477339 4155 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 16:58:16.480007 master-0 kubenswrapper[4155]: I0216 16:58:16.477352 4155 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477363 4155 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477374 4155 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477386 4155 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477397 4155 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477408 4155 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477419 4155 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477431 4155 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477446 4155 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477457 4155 flags.go:64] FLAG: --kube-reserved=""
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477470 4155 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477481 4155 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477493 4155 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477547 4155 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477561 4155 flags.go:64] FLAG: --lock-file=""
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477577 4155 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477588 4155 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477600 4155 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477618 4155 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477630 4155 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477641 4155 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477653 4155 flags.go:64] FLAG: --logging-format="text"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477664 4155 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477677 4155 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477689 4155 flags.go:64] FLAG: --manifest-url=""
Feb 16 16:58:16.481599 master-0 kubenswrapper[4155]: I0216 16:58:16.477700 4155 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477715 4155 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477727 4155 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477742 4155 flags.go:64] FLAG: --max-pods="110"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477755 4155 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477766 4155 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477778 4155 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477789 4155 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477802 4155 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477814 4155 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477827 4155 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477854 4155 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477865 4155 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477877 4155 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477891 4155 flags.go:64] FLAG: --pod-cidr=""
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477902 4155 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477954 4155 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477968 4155 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477982 4155 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.477995 4155 flags.go:64] FLAG: --port="10250"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.478007 4155 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.478019 4155 flags.go:64] FLAG: --provider-id=""
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.478031 4155 flags.go:64] FLAG: --qos-reserved=""
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.478047 4155 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 16:58:16.483070 master-0 kubenswrapper[4155]: I0216 16:58:16.478059 4155 flags.go:64] FLAG: --register-node="true"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478071 4155 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478083 4155 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478103 4155 flags.go:64] FLAG: --registry-burst="10"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478115 4155 flags.go:64] FLAG: --registry-qps="5"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478127 4155 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478139 4155 flags.go:64] FLAG: --reserved-memory=""
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478153 4155 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478165 4155 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478176 4155 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478188 4155 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478199 4155 flags.go:64] FLAG: --runonce="false"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478211 4155 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478223 4155 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478235 4155 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478247 4155 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478258 4155 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478270 4155 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478281 4155 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478293 4155 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478304 4155 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478317 4155 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478333 4155 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478344 4155 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478356 4155 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 16:58:16.484349 master-0 kubenswrapper[4155]: I0216 16:58:16.478368 4155 flags.go:64] FLAG: --system-cgroups=""
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478380 4155 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478400 4155 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478411 4155 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478423 4155 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478437 4155 flags.go:64] FLAG: --tls-min-version=""
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478455 4155 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478466 4155 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478478 4155 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478489 4155 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478500 4155 flags.go:64] FLAG: --v="2"
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478516 4155 flags.go:64] FLAG: --version="false"
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478538 4155 flags.go:64] FLAG: --vmodule=""
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478552 4155 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: I0216 16:58:16.478564 4155 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: W0216 16:58:16.478854 4155 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: W0216 16:58:16.478871 4155 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: W0216 16:58:16.478883 4155 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: W0216 16:58:16.478893 4155 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: W0216 16:58:16.478903 4155 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: W0216 16:58:16.478913 4155 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: W0216 16:58:16.478955 4155 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: W0216 16:58:16.478967 4155 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:58:16.486001 master-0 kubenswrapper[4155]: W0216 16:58:16.478977 4155 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.478988 4155 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479002 4155 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479014 4155 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479025 4155 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479035 4155 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479053 4155 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479067 4155 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479079 4155 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479090 4155 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479101 4155 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479113 4155 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479124 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479137 4155 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479147 4155 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479162 4155 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479172 4155 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479182 4155 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479192 4155 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479220 4155 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:58:16.487092 master-0 kubenswrapper[4155]: W0216 16:58:16.479232 4155 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479242 4155 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479252 4155 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479262 4155 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479273 4155 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479285 4155 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479295 4155 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479305 4155 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479315 4155 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479328 4155 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479341 4155 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479353 4155 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479365 4155 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479375 4155 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479386 4155 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479446 4155 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479458 4155 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479468 4155 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479484 4155 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 16:58:16.488273 master-0 kubenswrapper[4155]: W0216 16:58:16.479493 4155 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479507 4155 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479521 4155 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479532 4155 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479542 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479552 4155 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479562 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479573 4155 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479589 4155 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479599 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479609 4155 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479619 4155 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479629 4155 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479639 4155 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479649 4155 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479659 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479673 4155 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479683 4155 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479693 4155 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:58:16.489581 master-0 kubenswrapper[4155]: W0216 16:58:16.479703 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:58:16.490690 master-0 kubenswrapper[4155]: W0216 16:58:16.479713 4155 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:58:16.490690 master-0 kubenswrapper[4155]: W0216 16:58:16.479723 4155 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:58:16.490690 master-0 kubenswrapper[4155]: W0216 16:58:16.479734 4155 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:58:16.490690 master-0 kubenswrapper[4155]: W0216 16:58:16.479744 4155 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:58:16.490690 master-0 kubenswrapper[4155]: W0216 16:58:16.479753 4155 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:58:16.490690 master-0 kubenswrapper[4155]: I0216 16:58:16.479782 4155 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: I0216 16:58:16.490814 4155 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: I0216 16:58:16.490853 4155 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490942 4155 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490951 4155 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490956 4155 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490961 4155 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490965 4155 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490970 4155 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490975 4155 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490980 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490984 4155 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490989 4155 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490993 4155 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.490998 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.491002 4155 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.491006 4155 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.491010 4155 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.491016 4155 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:58:16.491037 master-0 kubenswrapper[4155]: W0216 16:58:16.491024 4155 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491029 4155 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491034 4155 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491040 4155 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491046 4155 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491053 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491058 4155 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491064 4155 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491070 4155 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491075 4155 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491081 4155 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491087 4155 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491092 4155 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491097 4155 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491103 4155 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491110 4155 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491116 4155 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491121 4155 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491127 4155 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 16:58:16.492215 master-0 kubenswrapper[4155]: W0216 16:58:16.491132 4155 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491137 4155 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491143 4155 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491148 4155 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491153 4155 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491158 4155 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491163 4155 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491168 4155 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491173 4155 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491177 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491181 4155 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491186 4155 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491192 4155 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491197 4155 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491201 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491207 4155 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491211 4155 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491216 4155 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491221 4155 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 16:58:16.493246 master-0 kubenswrapper[4155]: W0216 16:58:16.491225 4155 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491229 4155 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491234 4155 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491238 4155 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491242 4155 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491247 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491251 4155 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491255 4155 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491260 4155 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491264 4155 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491269 4155 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491273 4155 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491277 4155 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491281 4155 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491285 4155 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491289 4155 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491294 4155 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:58:16.494456 master-0 kubenswrapper[4155]: W0216 16:58:16.491298 4155 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: I0216 16:58:16.491319 4155 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491487 4155 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491499 4155 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491506 4155 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491512 4155 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491517 4155 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491522 4155 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491527 4155 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491532 4155 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491537 4155 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491543 4155 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491550 4155 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491566 4155 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 16:58:16.495361 master-0 kubenswrapper[4155]: W0216 16:58:16.491571 4155 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491576 4155 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491581 4155 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491585 4155 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491590 4155 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491595 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491600 4155 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491606 4155 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491612 4155 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491618 4155 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491622 4155 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491627 4155 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491632 4155 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491637 4155 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491641 4155 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491646 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491651 4155 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491656 4155 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491661 4155 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 16:58:16.496118 master-0 kubenswrapper[4155]: W0216 16:58:16.491668 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491673 4155 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491678 4155 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491684 4155 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491688 4155 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491693 4155 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491698 4155 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491703 4155 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491707 4155 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491712 4155 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491716 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491721 4155 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491726 4155 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491731 4155 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491736 4155 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491740 4155 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491745 4155 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491749 4155 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491753 4155 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491758 4155 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491763 4155 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 16:58:16.497250 master-0 kubenswrapper[4155]: W0216 16:58:16.491767 4155 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491772 4155 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491776 4155 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491780 4155 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491783 4155 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491787 4155 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491791 4155 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491794 4155 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491798 4155 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491802 4155 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491806 4155 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491809 4155 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491813 4155 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491816 4155 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491820 4155 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491824 4155 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491827 4155 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491831 4155 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491836 4155 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
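[Annotation] The same "unrecognized feature gate" set is emitted more than once in quick succession (the effective map is dumped at feature_gate.go:386 at 16:58:16.491319 and again at .491846 below), so the noise dominates the signal when scanning this section. For triage it is usually enough to reduce the stream to the distinct gate names; a small stand-alone sketch of such tooling (hypothetical helper, not part of the kubelet), fed with something like journalctl -u kubelet.service on stdin:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "sort"
        "strings"
    )

    // Reads a journal capture on stdin and prints each distinct gate name
    // flagged as unrecognized, so repeated dumps of the same map collapse
    // to one line apiece.
    func main() {
        const marker = "unrecognized feature gate: "
        seen := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // captured lines can be long
        for sc.Scan() {
            if i := strings.Index(sc.Text(), marker); i >= 0 {
                seen[strings.TrimSpace(sc.Text()[i+len(marker):])] = true
            }
        }
        names := make([]string, 0, len(seen))
        for n := range seen {
            names = append(names, n)
        }
        sort.Strings(names)
        for _, n := range names {
            fmt.Println(n)
        }
    }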
Feb 16 16:58:16.498408 master-0 kubenswrapper[4155]: W0216 16:58:16.491841 4155 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 16:58:16.499470 master-0 kubenswrapper[4155]: I0216 16:58:16.491846 4155 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 16:58:16.499470 master-0 kubenswrapper[4155]: I0216 16:58:16.492031 4155 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 16:58:16.499470 master-0 kubenswrapper[4155]: I0216 16:58:16.493661 4155 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 16 16:58:16.499470 master-0 kubenswrapper[4155]: I0216 16:58:16.495321 4155 server.go:997] "Starting client certificate rotation"
Feb 16 16:58:16.499470 master-0 kubenswrapper[4155]: I0216 16:58:16.495337 4155 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 16:58:16.499470 master-0 kubenswrapper[4155]: I0216 16:58:16.496424 4155 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 16:58:16.520778 master-0 kubenswrapper[4155]: I0216 16:58:16.520704 4155 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 16:58:16.525037 master-0 kubenswrapper[4155]: I0216 16:58:16.524147 4155 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 16:58:16.528866 master-0 kubenswrapper[4155]: E0216 16:58:16.528803 4155 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 16:58:16.544287 master-0 kubenswrapper[4155]: I0216 16:58:16.544247 4155 log.go:25] "Validated CRI v1 runtime API"
Feb 16 16:58:16.549860 master-0 kubenswrapper[4155]: I0216 16:58:16.549800 4155 log.go:25] "Validated CRI v1 image API"
Feb 16 16:58:16.552262 master-0 kubenswrapper[4155]: I0216 16:58:16.552227 4155 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 16:58:16.557301 master-0 kubenswrapper[4155]: I0216 16:58:16.557261 4155 fs.go:135] Filesystem UUIDs: map[35a0b0cc-84b1-4374-a18a-0f49ad7a8333:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 16 16:58:16.557301 master-0 kubenswrapper[4155]: I0216 16:58:16.557291 4155 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Feb 16 16:58:16.577569 master-0 kubenswrapper[4155]: I0216 16:58:16.577049 4155 manager.go:217] Machine: {Timestamp:2026-02-16 16:58:16.573177729 +0000 UTC m=+0.912231323 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f BootID:4b8043a6-19a9-42c4-a3dd-d330b8dbba91 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:b9:e2 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:4a:2e:ce Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:6a:29:7e:a8:c6:78 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 16:58:16.577569 master-0 kubenswrapper[4155]: I0216 16:58:16.577527 4155 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
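[Annotation] The one hard failure so far is the CSR POST to https://api-int.sno.openstack.lab:6443 failing with "connection refused". On a bootstrapping single node this is expected: the kubelet comes up before the bootstrap kube-apiserver static pod it is about to launch, and the certificate manager keeps retrying in the background. A quick way to check that the failure is plain TCP unreachability rather than TLS or auth is to dial the endpoint directly; a minimal probe sketch (illustrative only, host and port taken from the log):

    package main

    import (
        "errors"
        "fmt"
        "net"
        "syscall"
        "time"
    )

    func main() {
        // api-int VIP and port exactly as they appear in the kubelet errors.
        conn, err := net.DialTimeout("tcp", "api-int.sno.openstack.lab:6443", 3*time.Second)
        if err != nil {
            if errors.Is(err, syscall.ECONNREFUSED) {
                fmt.Println("TCP refused: nothing listening yet (apiserver not up)")
            } else {
                fmt.Printf("dial failed: %v (DNS or routing problem?)\n", err)
            }
            return
        }
        conn.Close()
        fmt.Println("TCP open: a failure would be above layer 4 (TLS/auth)")
    }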
Feb 16 16:58:16.577916 master-0 kubenswrapper[4155]: I0216 16:58:16.577877 4155 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 16:58:16.579537 master-0 kubenswrapper[4155]: I0216 16:58:16.579500 4155 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 16:58:16.579874 master-0 kubenswrapper[4155]: I0216 16:58:16.579812 4155 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 16:58:16.580237 master-0 kubenswrapper[4155]: I0216 16:58:16.579871 4155 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 16:58:16.581814 master-0 kubenswrapper[4155]: I0216 16:58:16.581764 4155 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 16:58:16.581814 master-0 kubenswrapper[4155]: I0216 16:58:16.581802 4155 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 16:58:16.582242 master-0 kubenswrapper[4155]: I0216 16:58:16.582193 4155 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 16:58:16.582242 master-0 kubenswrapper[4155]: I0216 16:58:16.582225 4155 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 16:58:16.582891 master-0 kubenswrapper[4155]: I0216 16:58:16.582835 4155 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 16:58:16.583065 master-0 kubenswrapper[4155]: I0216 16:58:16.583023 4155 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 16:58:16.586664 master-0 kubenswrapper[4155]: I0216 16:58:16.586616 4155 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 16:58:16.586664 master-0 kubenswrapper[4155]: I0216 16:58:16.586662 4155 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 16:58:16.586747 master-0 kubenswrapper[4155]: I0216 16:58:16.586688 4155 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 16:58:16.586747 master-0 kubenswrapper[4155]: I0216 16:58:16.586710 4155 kubelet.go:324] "Adding apiserver pod source"
Feb 16 16:58:16.586747 master-0 kubenswrapper[4155]: I0216 16:58:16.586739 4155 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 16:58:16.591850 master-0 kubenswrapper[4155]: I0216 16:58:16.591814 4155 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1"
Feb 16 16:58:16.598192 master-0 kubenswrapper[4155]: I0216 16:58:16.598023 4155 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 16:58:16.598302 master-0 kubenswrapper[4155]: W0216 16:58:16.598212 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 16:58:16.598766 master-0 kubenswrapper[4155]: E0216 16:58:16.598591 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 16:58:16.598821 master-0 kubenswrapper[4155]: W0216 16:58:16.598719 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 16:58:16.599275 master-0 kubenswrapper[4155]: E0216 16:58:16.599180 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 16:58:16.601517 master-0 kubenswrapper[4155]: I0216 16:58:16.601482 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 16:58:16.601605 master-0 kubenswrapper[4155]: I0216 16:58:16.601545 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 16:58:16.601652 master-0 kubenswrapper[4155]: I0216 16:58:16.601612 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 16:58:16.601708 master-0 kubenswrapper[4155]: I0216 16:58:16.601675 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 16:58:16.601738 master-0 kubenswrapper[4155]: I0216 16:58:16.601727 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 16:58:16.601772 master-0 kubenswrapper[4155]: I0216 16:58:16.601743 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 16:58:16.601772 master-0 kubenswrapper[4155]: I0216 16:58:16.601758 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 16:58:16.601821 master-0 kubenswrapper[4155]: I0216 16:58:16.601773 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 16:58:16.601821 master-0 kubenswrapper[4155]: I0216 16:58:16.601791 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 16:58:16.601821 master-0 kubenswrapper[4155]: I0216 16:58:16.601806 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 16:58:16.601887 master-0 kubenswrapper[4155]: I0216 16:58:16.601847 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 16:58:16.601887 master-0 kubenswrapper[4155]: I0216 16:58:16.601873 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 16:58:16.601971 master-0 kubenswrapper[4155]: I0216 16:58:16.601918 4155 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 16:58:16.602753 master-0 kubenswrapper[4155]: I0216 16:58:16.602730 4155 server.go:1280] "Started kubelet"
Feb 16 16:58:16.603015 master-0 kubenswrapper[4155]: I0216 16:58:16.602902 4155 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 16:58:16.603301 master-0 kubenswrapper[4155]: I0216 16:58:16.603164 4155 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 16:58:16.603386 master-0 kubenswrapper[4155]: I0216 16:58:16.603354 4155 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 16 16:58:16.604181 master-0 kubenswrapper[4155]: I0216 16:58:16.604149 4155 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 16:58:16.604269 master-0 kubenswrapper[4155]: I0216 16:58:16.604225 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 16:58:16.604490 master-0 systemd[1]: Started Kubernetes Kubelet.
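[Annotation] The nodeConfig dump above fixes this node's resource bookkeeping: SystemReserved of 500m CPU / 1Gi memory / 1Gi ephemeral-storage, KubeReserved null, and a hard eviction threshold of memory.available < 100Mi. Node allocatable memory is capacity minus those reservations, so against the 50514149376-byte MemoryCapacity in the cAdvisor Machine line it comes to roughly 45.9Gi. A worked check of that arithmetic (standard allocatable formula; the variable names are mine):

    package main

    import "fmt"

    func main() {
        const (
            capacity       = 50514149376     // MemoryCapacity from the cAdvisor Machine line
            systemReserved = 1 << 30         // SystemReserved memory: 1Gi
            kubeReserved   = 0               // KubeReserved is null in nodeConfig
            hardEviction   = 100 * (1 << 20) // memory.available threshold: 100Mi
        )
        // allocatable = capacity - system-reserved - kube-reserved - hard eviction
        alloc := capacity - systemReserved - kubeReserved - hardEviction
        fmt.Printf("allocatable: %d bytes (%.1f Gi)\n", alloc, float64(alloc)/(1<<30))
        // prints: allocatable: 49335549952 bytes (45.9 Gi)
    }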
Feb 16 16:58:16.607122 master-0 kubenswrapper[4155]: I0216 16:58:16.607088 4155 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 16:58:16.607186 master-0 kubenswrapper[4155]: I0216 16:58:16.607141 4155 server.go:449] "Adding debug handlers to kubelet server"
Feb 16 16:58:16.607306 master-0 kubenswrapper[4155]: I0216 16:58:16.607144 4155 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 16:58:16.607430 master-0 kubenswrapper[4155]: I0216 16:58:16.607406 4155 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 16:58:16.607430 master-0 kubenswrapper[4155]: I0216 16:58:16.607422 4155 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 16:58:16.607516 master-0 kubenswrapper[4155]: E0216 16:58:16.607420 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 16:58:16.607516 master-0 kubenswrapper[4155]: I0216 16:58:16.607496 4155 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 16 16:58:16.608521 master-0 kubenswrapper[4155]: E0216 16:58:16.608426 4155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 16 16:58:16.608581 master-0 kubenswrapper[4155]: W0216 16:58:16.608432 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 16:58:16.608634 master-0 kubenswrapper[4155]: E0216 16:58:16.608562 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 16:58:16.608864 master-0 kubenswrapper[4155]: I0216 16:58:16.608822 4155 reconstruct.go:97] "Volume reconstruction finished"
Feb 16 16:58:16.608864 master-0 kubenswrapper[4155]: I0216 16:58:16.608853 4155 reconciler.go:26] "Reconciler: start to sync state"
Feb 16 16:58:16.609476 master-0 kubenswrapper[4155]: I0216 16:58:16.609245 4155 factory.go:55] Registering systemd factory
Feb 16 16:58:16.609476 master-0 kubenswrapper[4155]: I0216 16:58:16.609283 4155 factory.go:221] Registration of the systemd container factory successfully
Feb 16 16:58:16.609629 master-0 kubenswrapper[4155]: I0216 16:58:16.609602 4155 factory.go:153] Registering CRI-O factory
Feb 16 16:58:16.609681 master-0 kubenswrapper[4155]: I0216 16:58:16.609634 4155 factory.go:221] Registration of the crio container factory successfully
Feb 16 16:58:16.609743 master-0 kubenswrapper[4155]: I0216 16:58:16.609719 4155 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 16:58:16.609789 master-0 kubenswrapper[4155]: I0216 16:58:16.609758 4155 factory.go:103] Registering Raw factory
Feb 16 16:58:16.609826 master-0 kubenswrapper[4155]: I0216 16:58:16.609797 4155 manager.go:1196] Started watching for new ooms in manager
Feb 16 16:58:16.610540 master-0 kubenswrapper[4155]: I0216 16:58:16.610511 4155 manager.go:319] Starting recovery of all containers
Feb 16 16:58:16.611164 master-0 kubenswrapper[4155]: E0216 16:58:16.609530 4155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1894c8953378d617 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.602686999 +0000 UTC m=+0.941740543,LastTimestamp:2026-02-16 16:58:16.602686999 +0000 UTC m=+0.941740543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:16.619241 master-0 kubenswrapper[4155]: E0216 16:58:16.619193 4155 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 16 16:58:16.633531 master-0 kubenswrapper[4155]: I0216 16:58:16.633492 4155 manager.go:324] Recovery completed
Feb 16 16:58:16.645679 master-0 kubenswrapper[4155]: I0216 16:58:16.645644 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 16:58:16.647324 master-0 kubenswrapper[4155]: I0216 16:58:16.647215 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 16:58:16.647324 master-0 kubenswrapper[4155]: I0216 16:58:16.647269 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 16:58:16.647324 master-0 kubenswrapper[4155]: I0216 16:58:16.647281 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 16:58:16.648711 master-0 kubenswrapper[4155]: I0216 16:58:16.648690 4155 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 16 16:58:16.648711 master-0 kubenswrapper[4155]: I0216 16:58:16.648707 4155 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 16 16:58:16.648800 master-0 kubenswrapper[4155]: I0216 16:58:16.648728 4155 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 16:58:16.653227 master-0 kubenswrapper[4155]: I0216 16:58:16.653180 4155 policy_none.go:49] "None policy: Start"
Feb 16 16:58:16.654222 master-0 kubenswrapper[4155]: I0216 16:58:16.654192 4155 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 16 16:58:16.654222 master-0 kubenswrapper[4155]: I0216 16:58:16.654233 4155 state_mem.go:35] "Initializing new in-memory state store"
Feb 16 16:58:16.707903 master-0 kubenswrapper[4155]: E0216 16:58:16.707585 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 16:58:16.725418 master-0 kubenswrapper[4155]: I0216 16:58:16.725294 4155 manager.go:334] "Starting Device Plugin manager"
Feb 16 16:58:16.725418 master-0 kubenswrapper[4155]: I0216 16:58:16.725384 4155 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 16 16:58:16.725418 master-0 kubenswrapper[4155]: I0216 16:58:16.725407 4155 server.go:79] "Starting device plugin registration server"
Feb 16 16:58:16.726167 master-0 kubenswrapper[4155]: I0216 16:58:16.726123 4155 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 16 16:58:16.726229 master-0 kubenswrapper[4155]: I0216 16:58:16.726157 4155 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 16 16:58:16.727066 master-0 kubenswrapper[4155]: I0216 16:58:16.726829 4155 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 16 16:58:16.727066 master-0 kubenswrapper[4155]: I0216 16:58:16.727050 4155 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 16 16:58:16.727066 master-0 kubenswrapper[4155]: I0216 16:58:16.727067 4155 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 16 16:58:16.728719 master-0 kubenswrapper[4155]: E0216 16:58:16.728628 4155 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 16 16:58:16.778005 master-0 kubenswrapper[4155]: I0216 16:58:16.777732 4155 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 16 16:58:16.787576 master-0 kubenswrapper[4155]: I0216 16:58:16.779516 4155 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 16 16:58:16.787576 master-0 kubenswrapper[4155]: I0216 16:58:16.779583 4155 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 16 16:58:16.787576 master-0 kubenswrapper[4155]: I0216 16:58:16.779624 4155 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 16 16:58:16.787576 master-0 kubenswrapper[4155]: E0216 16:58:16.779869 4155 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 16 16:58:16.787576 master-0 kubenswrapper[4155]: W0216 16:58:16.782894 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 16:58:16.787576 master-0 kubenswrapper[4155]: E0216 16:58:16.782980 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 16:58:16.810848 master-0 kubenswrapper[4155]: E0216 16:58:16.810754 4155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 16 16:58:16.827011 master-0 kubenswrapper[4155]: I0216 16:58:16.826853 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 16:58:16.828423 master-0 kubenswrapper[4155]: I0216 16:58:16.828353 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 16:58:16.828423 master-0 kubenswrapper[4155]: I0216 16:58:16.828422 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
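[Annotation] Note the node-lease retry interval doubling from 200ms (controller.go:145 at 16:58:16.608426) to 400ms above: while api-int stays unreachable the kubelet backs off exponentially rather than hammering the endpoint. A sketch of that doubling-with-a-cap retry shape (a generic illustration; the cap value is arbitrary, and this is not the kubelet's actual lease code):

    package main

    import (
        "fmt"
        "time"
    )

    // retry calls op until it succeeds, doubling the wait after each
    // failure up to a cap: the pattern behind the growing "interval=" values.
    func retry(op func() error, start, max time.Duration) {
        interval := start
        for {
            if err := op(); err == nil {
                return
            }
            fmt.Printf("failed, will retry; interval=%s\n", interval)
            time.Sleep(interval)
            if interval *= 2; interval > max {
                interval = max
            }
        }
    }

    func main() {
        attempts := 0
        retry(func() error {
            if attempts++; attempts < 4 {
                return fmt.Errorf("connection refused")
            }
            return nil
        }, 200*time.Millisecond, 7*time.Second)
    }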
event="NodeHasNoDiskPressure" Feb 16 16:58:16.828655 master-0 kubenswrapper[4155]: I0216 16:58:16.828442 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.828655 master-0 kubenswrapper[4155]: I0216 16:58:16.828527 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:16.829796 master-0 kubenswrapper[4155]: E0216 16:58:16.829722 4155 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 16:58:16.862195 master-0 kubenswrapper[4155]: E0216 16:58:16.861972 4155 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1894c8953378d617 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.602686999 +0000 UTC m=+0.941740543,LastTimestamp:2026-02-16 16:58:16.602686999 +0000 UTC m=+0.941740543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:16.880111 master-0 kubenswrapper[4155]: I0216 16:58:16.880001 4155 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 16:58:16.880382 master-0 kubenswrapper[4155]: I0216 16:58:16.880135 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.881822 master-0 kubenswrapper[4155]: I0216 16:58:16.881746 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.881822 master-0 kubenswrapper[4155]: I0216 16:58:16.881798 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.881822 master-0 kubenswrapper[4155]: I0216 16:58:16.881817 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.882151 master-0 kubenswrapper[4155]: I0216 16:58:16.881989 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.882258 master-0 kubenswrapper[4155]: I0216 16:58:16.882217 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 16:58:16.882335 master-0 kubenswrapper[4155]: I0216 16:58:16.882301 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.883348 master-0 kubenswrapper[4155]: I0216 16:58:16.883288 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.883348 master-0 kubenswrapper[4155]: I0216 16:58:16.883330 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.883348 master-0 kubenswrapper[4155]: I0216 16:58:16.883346 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.883677 master-0 kubenswrapper[4155]: I0216 16:58:16.883388 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.883677 master-0 kubenswrapper[4155]: I0216 16:58:16.883442 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.883677 master-0 kubenswrapper[4155]: I0216 16:58:16.883458 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.883677 master-0 kubenswrapper[4155]: I0216 16:58:16.883466 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.884043 master-0 kubenswrapper[4155]: I0216 16:58:16.883793 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 16:58:16.884043 master-0 kubenswrapper[4155]: I0216 16:58:16.883883 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.884725 master-0 kubenswrapper[4155]: I0216 16:58:16.884654 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.885039 master-0 kubenswrapper[4155]: I0216 16:58:16.884747 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.885039 master-0 kubenswrapper[4155]: I0216 16:58:16.884990 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.885286 master-0 kubenswrapper[4155]: I0216 16:58:16.885236 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.885522 master-0 kubenswrapper[4155]: I0216 16:58:16.885469 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:16.885824 master-0 kubenswrapper[4155]: I0216 16:58:16.885763 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.885824 master-0 kubenswrapper[4155]: I0216 16:58:16.885823 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.886051 master-0 kubenswrapper[4155]: I0216 16:58:16.885914 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.886142 master-0 kubenswrapper[4155]: I0216 16:58:16.885914 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.886863 master-0 kubenswrapper[4155]: I0216 16:58:16.886800 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.886863 master-0 kubenswrapper[4155]: I0216 16:58:16.886853 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.887131 master-0 kubenswrapper[4155]: I0216 16:58:16.886874 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.887131 master-0 kubenswrapper[4155]: I0216 16:58:16.887084 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.887131 master-0 kubenswrapper[4155]: I0216 16:58:16.887113 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.887381 master-0 kubenswrapper[4155]: I0216 16:58:16.887205 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:16.887381 master-0 kubenswrapper[4155]: I0216 16:58:16.887251 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.887531 master-0 kubenswrapper[4155]: I0216 16:58:16.887121 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.887531 master-0 kubenswrapper[4155]: I0216 16:58:16.887456 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.888270 master-0 kubenswrapper[4155]: I0216 16:58:16.888226 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.888270 master-0 kubenswrapper[4155]: I0216 16:58:16.888274 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.888461 master-0 kubenswrapper[4155]: I0216 16:58:16.888291 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.888527 master-0 kubenswrapper[4155]: I0216 16:58:16.888512 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.888586 master-0 kubenswrapper[4155]: I0216 16:58:16.888546 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.888586 master-0 kubenswrapper[4155]: I0216 16:58:16.888565 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.888744 master-0 kubenswrapper[4155]: I0216 16:58:16.888702 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 16:58:16.888744 master-0 kubenswrapper[4155]: I0216 16:58:16.888743 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:16.889708 master-0 kubenswrapper[4155]: I0216 16:58:16.889622 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:16.889708 master-0 kubenswrapper[4155]: I0216 16:58:16.889704 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:16.889959 master-0 kubenswrapper[4155]: I0216 16:58:16.889727 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:16.910991 master-0 kubenswrapper[4155]: I0216 16:58:16.910910 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:16.911120 master-0 kubenswrapper[4155]: I0216 16:58:16.911001 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:16.911120 master-0 kubenswrapper[4155]: I0216 16:58:16.911034 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:16.911120 master-0 kubenswrapper[4155]: I0216 16:58:16.911066 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 16:58:16.911365 master-0 kubenswrapper[4155]: I0216 16:58:16.911145 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:16.911365 master-0 kubenswrapper[4155]: I0216 16:58:16.911219 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:16.911365 master-0 kubenswrapper[4155]: I0216 16:58:16.911250 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
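[Annotation] The file-source pods in the "SyncLoop ADD" above illustrate the kubelet's static-pod naming rule: the pod surfaces under the manifest's metadata.name with the node name appended, which is why the etcd manifest named etcd-master-0 shows up as etcd-master-0-master-0 on node master-0. A sketch of that rule (the helper name is mine; the kubelet applies this internally when it ingests /etc/kubernetes/manifests):

    package main

    import (
        "fmt"
        "strings"
    )

    // staticPodName mirrors how the kubelet renames pods read from the
    // static manifest path: manifest name + "-" + lowercased node name.
    func staticPodName(manifestName, nodeName string) string {
        return fmt.Sprintf("%s-%s", manifestName, strings.ToLower(nodeName))
    }

    func main() {
        for _, m := range []string{
            "kube-rbac-proxy-crio",     // -> kube-rbac-proxy-crio-master-0
            "etcd-master-0",            // -> etcd-master-0-master-0 (doubled suffix)
            "bootstrap-kube-apiserver", // -> bootstrap-kube-apiserver-master-0
        } {
            fmt.Println(staticPodName(m, "master-0"))
        }
    }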
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 16:58:16.911365 master-0 kubenswrapper[4155]: I0216 16:58:16.911317 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 16:58:16.911365 master-0 kubenswrapper[4155]: I0216 16:58:16.911349 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:16.911715 master-0 kubenswrapper[4155]: I0216 16:58:16.911380 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:16.911715 master-0 kubenswrapper[4155]: I0216 16:58:16.911483 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:16.912108 master-0 kubenswrapper[4155]: I0216 16:58:16.911591 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:16.912231 master-0 kubenswrapper[4155]: I0216 16:58:16.912133 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 16:58:16.912320 master-0 kubenswrapper[4155]: I0216 16:58:16.912234 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 16:58:16.912320 master-0 kubenswrapper[4155]: I0216 16:58:16.912300 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: 
\"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:16.912517 master-0 kubenswrapper[4155]: I0216 16:58:16.912458 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:16.912615 master-0 kubenswrapper[4155]: I0216 16:58:16.912553 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 16:58:17.013818 master-0 kubenswrapper[4155]: I0216 16:58:17.013745 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.013818 master-0 kubenswrapper[4155]: I0216 16:58:17.013795 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.013818 master-0 kubenswrapper[4155]: I0216 16:58:17.013813 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 16:58:17.013818 master-0 kubenswrapper[4155]: I0216 16:58:17.013829 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.013843 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.013860 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.013879 4155 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014043 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014058 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014086 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014103 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014091 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014132 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014149 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014155 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 
16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014172 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014191 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014213 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.014216 master-0 kubenswrapper[4155]: I0216 16:58:17.014201 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014293 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014303 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014332 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014362 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014392 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 
16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014484 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014533 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014573 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014580 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014623 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014639 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014670 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014710 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.015031 master-0 kubenswrapper[4155]: I0216 16:58:17.014761 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.015031 
master-0 kubenswrapper[4155]: I0216 16:58:17.014786 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.030349 master-0 kubenswrapper[4155]: I0216 16:58:17.030241 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:17.031744 master-0 kubenswrapper[4155]: I0216 16:58:17.031675 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:17.031870 master-0 kubenswrapper[4155]: I0216 16:58:17.031757 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:17.031870 master-0 kubenswrapper[4155]: I0216 16:58:17.031781 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:17.031870 master-0 kubenswrapper[4155]: I0216 16:58:17.031846 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:17.033195 master-0 kubenswrapper[4155]: E0216 16:58:17.033127 4155 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 16:58:17.213161 master-0 kubenswrapper[4155]: E0216 16:58:17.213041 4155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 16 16:58:17.222153 master-0 kubenswrapper[4155]: I0216 16:58:17.222049 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 16:58:17.250888 master-0 kubenswrapper[4155]: I0216 16:58:17.250721 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 16:58:17.264082 master-0 kubenswrapper[4155]: I0216 16:58:17.263970 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:17.289370 master-0 kubenswrapper[4155]: I0216 16:58:17.289265 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:17.300675 master-0 kubenswrapper[4155]: I0216 16:58:17.300611 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 16:58:17.433789 master-0 kubenswrapper[4155]: I0216 16:58:17.433687 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:17.435007 master-0 kubenswrapper[4155]: I0216 16:58:17.434970 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:17.435121 master-0 kubenswrapper[4155]: I0216 16:58:17.435012 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:17.435121 master-0 kubenswrapper[4155]: I0216 16:58:17.435027 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:17.435121 master-0 kubenswrapper[4155]: I0216 16:58:17.435080 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:17.436067 master-0 kubenswrapper[4155]: E0216 16:58:17.435997 4155 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 16:58:17.588284 master-0 kubenswrapper[4155]: W0216 16:58:17.588022 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:17.588284 master-0 kubenswrapper[4155]: E0216 16:58:17.588161 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:17.606589 master-0 kubenswrapper[4155]: I0216 16:58:17.606485 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:17.625398 master-0 kubenswrapper[4155]: W0216 16:58:17.625280 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:17.625398 master-0 kubenswrapper[4155]: E0216 16:58:17.625400 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:17.700266 master-0 kubenswrapper[4155]: W0216 16:58:17.700108 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:17.700266 master-0 kubenswrapper[4155]: E0216 16:58:17.700226 4155 
reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:17.889787 master-0 kubenswrapper[4155]: W0216 16:58:17.889671 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177 WatchSource:0}: Error finding container be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177: Status 404 returned error can't find the container with id be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177 Feb 16 16:58:17.892171 master-0 kubenswrapper[4155]: W0216 16:58:17.892114 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d1e91e5a1fed5cf7076a92d2830d36f.slice/crio-a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e WatchSource:0}: Error finding container a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e: Status 404 returned error can't find the container with id a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e Feb 16 16:58:17.897227 master-0 kubenswrapper[4155]: I0216 16:58:17.896490 4155 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 16:58:17.929599 master-0 kubenswrapper[4155]: W0216 16:58:17.929474 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod400a178a4d5e9a88ba5bbbd1da2ad15e.slice/crio-0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497 WatchSource:0}: Error finding container 0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497: Status 404 returned error can't find the container with id 0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497 Feb 16 16:58:17.953699 master-0 kubenswrapper[4155]: W0216 16:58:17.953601 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3322fd3717f4aec0d8f54ec7862c07e.slice/crio-a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb WatchSource:0}: Error finding container a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb: Status 404 returned error can't find the container with id a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb Feb 16 16:58:17.974425 master-0 kubenswrapper[4155]: W0216 16:58:17.974344 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9460ca0802075a8a6a10d7b3e6052c4d.slice/crio-040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88 WatchSource:0}: Error finding container 040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88: Status 404 returned error can't find the container with id 040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88 Feb 16 16:58:18.014457 master-0 kubenswrapper[4155]: E0216 16:58:18.014334 4155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 
192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 16 16:58:18.152343 master-0 kubenswrapper[4155]: W0216 16:58:18.152177 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:18.152343 master-0 kubenswrapper[4155]: E0216 16:58:18.152314 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:18.236366 master-0 kubenswrapper[4155]: I0216 16:58:18.236268 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:18.237622 master-0 kubenswrapper[4155]: I0216 16:58:18.237572 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:18.237622 master-0 kubenswrapper[4155]: I0216 16:58:18.237600 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:18.237622 master-0 kubenswrapper[4155]: I0216 16:58:18.237611 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:18.237898 master-0 kubenswrapper[4155]: I0216 16:58:18.237680 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:18.238679 master-0 kubenswrapper[4155]: E0216 16:58:18.238606 4155 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 16:58:18.570132 master-0 kubenswrapper[4155]: I0216 16:58:18.570058 4155 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 16:58:18.571387 master-0 kubenswrapper[4155]: E0216 16:58:18.571348 4155 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:18.606009 master-0 kubenswrapper[4155]: I0216 16:58:18.605899 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:18.787220 master-0 kubenswrapper[4155]: I0216 16:58:18.787114 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e"} Feb 16 16:58:18.789901 master-0 kubenswrapper[4155]: I0216 16:58:18.789857 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177"} Feb 16 16:58:18.790966 master-0 kubenswrapper[4155]: I0216 16:58:18.790939 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88"} Feb 16 16:58:18.792065 master-0 kubenswrapper[4155]: I0216 16:58:18.792039 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb"} Feb 16 16:58:18.793090 master-0 kubenswrapper[4155]: I0216 16:58:18.793058 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497"} Feb 16 16:58:19.606176 master-0 kubenswrapper[4155]: I0216 16:58:19.606088 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:19.615972 master-0 kubenswrapper[4155]: E0216 16:58:19.615906 4155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 16:58:19.839129 master-0 kubenswrapper[4155]: I0216 16:58:19.839091 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:19.840144 master-0 kubenswrapper[4155]: I0216 16:58:19.840113 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:19.840144 master-0 kubenswrapper[4155]: I0216 16:58:19.840151 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:19.840144 master-0 kubenswrapper[4155]: I0216 16:58:19.840161 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:19.840301 master-0 kubenswrapper[4155]: I0216 16:58:19.840214 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:19.841198 master-0 kubenswrapper[4155]: E0216 16:58:19.841167 4155 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 16:58:19.954008 master-0 kubenswrapper[4155]: W0216 16:58:19.953843 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:19.954008 master-0 kubenswrapper[4155]: E0216 16:58:19.953961 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:20.310629 master-0 kubenswrapper[4155]: W0216 16:58:20.310581 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:20.310809 master-0 kubenswrapper[4155]: E0216 16:58:20.310637 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:20.340092 master-0 kubenswrapper[4155]: W0216 16:58:20.340046 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:20.340186 master-0 kubenswrapper[4155]: E0216 16:58:20.340103 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:20.361580 master-0 kubenswrapper[4155]: W0216 16:58:20.361489 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:20.361580 master-0 kubenswrapper[4155]: E0216 16:58:20.361557 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:20.605914 master-0 kubenswrapper[4155]: I0216 16:58:20.605792 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:20.800537 master-0 kubenswrapper[4155]: I0216 16:58:20.800469 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841"} Feb 16 16:58:20.803089 master-0 kubenswrapper[4155]: I0216 16:58:20.803035 4155 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="aa5ecc6a98445fdbcf4dc0a764b1f3d8e109d603e5ddc36d010d08e31acfcc8f" exitCode=0 Feb 16 16:58:20.803159 master-0 kubenswrapper[4155]: I0216 16:58:20.803108 4155 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"aa5ecc6a98445fdbcf4dc0a764b1f3d8e109d603e5ddc36d010d08e31acfcc8f"} Feb 16 16:58:20.803385 master-0 kubenswrapper[4155]: I0216 16:58:20.803327 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:20.806403 master-0 kubenswrapper[4155]: I0216 16:58:20.806367 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:20.806462 master-0 kubenswrapper[4155]: I0216 16:58:20.806436 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:20.806507 master-0 kubenswrapper[4155]: I0216 16:58:20.806461 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:21.605903 master-0 kubenswrapper[4155]: I0216 16:58:21.605851 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:21.807613 master-0 kubenswrapper[4155]: I0216 16:58:21.807558 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/0.log" Feb 16 16:58:21.808124 master-0 kubenswrapper[4155]: I0216 16:58:21.808065 4155 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="d162a948db8c7e463a55e2209938f37cb8236b99770b40ae3044d22b8bb35ada" exitCode=1 Feb 16 16:58:21.808167 master-0 kubenswrapper[4155]: I0216 16:58:21.808115 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"d162a948db8c7e463a55e2209938f37cb8236b99770b40ae3044d22b8bb35ada"} Feb 16 16:58:21.808167 master-0 kubenswrapper[4155]: I0216 16:58:21.808150 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:21.808840 master-0 kubenswrapper[4155]: I0216 16:58:21.808812 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:21.808874 master-0 kubenswrapper[4155]: I0216 16:58:21.808845 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:21.808874 master-0 kubenswrapper[4155]: I0216 16:58:21.808861 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:21.809186 master-0 kubenswrapper[4155]: I0216 16:58:21.809163 4155 scope.go:117] "RemoveContainer" containerID="d162a948db8c7e463a55e2209938f37cb8236b99770b40ae3044d22b8bb35ada" Feb 16 16:58:21.810143 master-0 kubenswrapper[4155]: I0216 16:58:21.810106 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9"} Feb 16 16:58:21.810210 master-0 kubenswrapper[4155]: I0216 16:58:21.810193 4155 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:21.831583 master-0 kubenswrapper[4155]: I0216 16:58:21.831529 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:21.831683 master-0 kubenswrapper[4155]: I0216 16:58:21.831596 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:21.831683 master-0 kubenswrapper[4155]: I0216 16:58:21.831624 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:22.606426 master-0 kubenswrapper[4155]: I0216 16:58:22.606330 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:22.804862 master-0 kubenswrapper[4155]: I0216 16:58:22.804803 4155 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 16:58:22.806037 master-0 kubenswrapper[4155]: E0216 16:58:22.805992 4155 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:22.813777 master-0 kubenswrapper[4155]: I0216 16:58:22.813748 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log" Feb 16 16:58:22.814211 master-0 kubenswrapper[4155]: I0216 16:58:22.814124 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/0.log" Feb 16 16:58:22.814477 master-0 kubenswrapper[4155]: I0216 16:58:22.814440 4155 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="bb1c3f01d999be4a5e6538cfab5c68176952f4d5ac11927cec1e80502847a35e" exitCode=1 Feb 16 16:58:22.814552 master-0 kubenswrapper[4155]: I0216 16:58:22.814521 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:22.814601 master-0 kubenswrapper[4155]: I0216 16:58:22.814550 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:22.814671 master-0 kubenswrapper[4155]: I0216 16:58:22.814544 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"bb1c3f01d999be4a5e6538cfab5c68176952f4d5ac11927cec1e80502847a35e"} Feb 16 16:58:22.814719 master-0 kubenswrapper[4155]: I0216 16:58:22.814670 4155 scope.go:117] "RemoveContainer" containerID="d162a948db8c7e463a55e2209938f37cb8236b99770b40ae3044d22b8bb35ada" Feb 16 16:58:22.815282 master-0 kubenswrapper[4155]: I0216 16:58:22.815258 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:22.815348 master-0 
kubenswrapper[4155]: I0216 16:58:22.815290 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:22.815348 master-0 kubenswrapper[4155]: I0216 16:58:22.815291 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:22.815348 master-0 kubenswrapper[4155]: I0216 16:58:22.815302 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:22.815348 master-0 kubenswrapper[4155]: I0216 16:58:22.815312 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:22.815348 master-0 kubenswrapper[4155]: I0216 16:58:22.815321 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:22.815686 master-0 kubenswrapper[4155]: I0216 16:58:22.815665 4155 scope.go:117] "RemoveContainer" containerID="bb1c3f01d999be4a5e6538cfab5c68176952f4d5ac11927cec1e80502847a35e" Feb 16 16:58:22.815832 master-0 kubenswrapper[4155]: E0216 16:58:22.815797 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 16:58:22.816715 master-0 kubenswrapper[4155]: E0216 16:58:22.816683 4155 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 16 16:58:23.042310 master-0 kubenswrapper[4155]: I0216 16:58:23.042262 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:23.043316 master-0 kubenswrapper[4155]: I0216 16:58:23.043286 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:23.043368 master-0 kubenswrapper[4155]: I0216 16:58:23.043321 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:23.043368 master-0 kubenswrapper[4155]: I0216 16:58:23.043334 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:23.043440 master-0 kubenswrapper[4155]: I0216 16:58:23.043430 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:23.044271 master-0 kubenswrapper[4155]: E0216 16:58:23.044227 4155 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 16:58:23.606709 master-0 kubenswrapper[4155]: I0216 16:58:23.606594 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:23.633562 master-0 kubenswrapper[4155]: W0216 16:58:23.633431 4155 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:23.633765 master-0 kubenswrapper[4155]: E0216 16:58:23.633558 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:23.816629 master-0 kubenswrapper[4155]: I0216 16:58:23.816557 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:23.817545 master-0 kubenswrapper[4155]: I0216 16:58:23.817493 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:23.817597 master-0 kubenswrapper[4155]: I0216 16:58:23.817555 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:23.817597 master-0 kubenswrapper[4155]: I0216 16:58:23.817569 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:23.818051 master-0 kubenswrapper[4155]: I0216 16:58:23.818020 4155 scope.go:117] "RemoveContainer" containerID="bb1c3f01d999be4a5e6538cfab5c68176952f4d5ac11927cec1e80502847a35e" Feb 16 16:58:23.818238 master-0 kubenswrapper[4155]: E0216 16:58:23.818205 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 16:58:24.562855 master-0 kubenswrapper[4155]: W0216 16:58:24.562758 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:24.562855 master-0 kubenswrapper[4155]: E0216 16:58:24.562855 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:24.606657 master-0 kubenswrapper[4155]: I0216 16:58:24.606559 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:24.814353 master-0 kubenswrapper[4155]: W0216 16:58:24.814245 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection 
refused Feb 16 16:58:24.814353 master-0 kubenswrapper[4155]: E0216 16:58:24.814307 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:24.821438 master-0 kubenswrapper[4155]: I0216 16:58:24.821393 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e"} Feb 16 16:58:24.823651 master-0 kubenswrapper[4155]: I0216 16:58:24.823612 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098"} Feb 16 16:58:24.823746 master-0 kubenswrapper[4155]: I0216 16:58:24.823726 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:24.824641 master-0 kubenswrapper[4155]: I0216 16:58:24.824610 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:24.824701 master-0 kubenswrapper[4155]: I0216 16:58:24.824655 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:24.824701 master-0 kubenswrapper[4155]: I0216 16:58:24.824673 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:24.825846 master-0 kubenswrapper[4155]: I0216 16:58:24.825797 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log" Feb 16 16:58:25.606038 master-0 kubenswrapper[4155]: I0216 16:58:25.605986 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:25.728723 master-0 kubenswrapper[4155]: W0216 16:58:25.728615 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 16:58:25.728903 master-0 kubenswrapper[4155]: E0216 16:58:25.728748 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 16:58:25.831995 master-0 kubenswrapper[4155]: I0216 16:58:25.831881 4155 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa" exitCode=0 Feb 16 16:58:25.831995 master-0 kubenswrapper[4155]: I0216 16:58:25.831997 4155 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:25.832810 master-0 kubenswrapper[4155]: I0216 16:58:25.832049 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerDied","Data":"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa"} Feb 16 16:58:25.833119 master-0 kubenswrapper[4155]: I0216 16:58:25.833074 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:25.833119 master-0 kubenswrapper[4155]: I0216 16:58:25.833113 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:25.833297 master-0 kubenswrapper[4155]: I0216 16:58:25.833126 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:25.835040 master-0 kubenswrapper[4155]: I0216 16:58:25.834703 4155 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e" exitCode=1 Feb 16 16:58:25.835040 master-0 kubenswrapper[4155]: I0216 16:58:25.834810 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:25.835269 master-0 kubenswrapper[4155]: I0216 16:58:25.835223 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e"} Feb 16 16:58:25.835761 master-0 kubenswrapper[4155]: I0216 16:58:25.835720 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:25.835761 master-0 kubenswrapper[4155]: I0216 16:58:25.835753 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:25.835761 master-0 kubenswrapper[4155]: I0216 16:58:25.835765 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:25.839844 master-0 kubenswrapper[4155]: I0216 16:58:25.839771 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:25.841545 master-0 kubenswrapper[4155]: I0216 16:58:25.841044 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:25.841545 master-0 kubenswrapper[4155]: I0216 16:58:25.841102 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:25.841545 master-0 kubenswrapper[4155]: I0216 16:58:25.841121 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:26.729148 master-0 kubenswrapper[4155]: E0216 16:58:26.729067 4155 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 16:58:26.839802 master-0 kubenswrapper[4155]: I0216 16:58:26.839662 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b"} Feb 16 16:58:26.841706 master-0 kubenswrapper[4155]: I0216 16:58:26.841634 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f"} Feb 16 16:58:26.841791 master-0 kubenswrapper[4155]: I0216 16:58:26.841767 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:26.842371 master-0 kubenswrapper[4155]: I0216 16:58:26.842344 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:26.842446 master-0 kubenswrapper[4155]: I0216 16:58:26.842377 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:26.842446 master-0 kubenswrapper[4155]: I0216 16:58:26.842388 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:26.842684 master-0 kubenswrapper[4155]: I0216 16:58:26.842656 4155 scope.go:117] "RemoveContainer" containerID="c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e" Feb 16 16:58:27.335274 master-0 kubenswrapper[4155]: E0216 16:58:27.334928 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953378d617 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.602686999 +0000 UTC m=+0.941740543,LastTimestamp:2026-02-16 16:58:16.602686999 +0000 UTC m=+0.941740543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:27.336000 master-0 kubenswrapper[4155]: I0216 16:58:27.335936 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:27.336403 master-0 kubenswrapper[4155]: E0216 16:58:27.336235 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953620d807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647251975 +0000 UTC m=+0.986305489,LastTimestamp:2026-02-16 16:58:16.647251975 +0000 UTC m=+0.986305489,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:27.344956 master-0 kubenswrapper[4155]: E0216 16:58:27.341721 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953621359f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647275935 +0000 UTC m=+0.986329449,LastTimestamp:2026-02-16 16:58:16.647275935 +0000 UTC m=+0.986329449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:27.360134 master-0 kubenswrapper[4155]: E0216 16:58:27.359969 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found]" event="&Event{ObjectMeta:{master-0.1894c895362161af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647287215 +0000 UTC m=+0.986340739,LastTimestamp:2026-02-16 16:58:16.647287215 +0000 UTC m=+0.986340739,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:27.367800 master-0 kubenswrapper[4155]: E0216 16:58:27.367498 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953affb2f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.728965879 +0000 UTC m=+1.068019393,LastTimestamp:2026-02-16 16:58:16.728965879 +0000 UTC m=+1.068019393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:27.374158 master-0 kubenswrapper[4155]: E0216 16:58:27.374048 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953620d807\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953620d807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
Feb 16 16:58:27.374158 master-0 kubenswrapper[4155]: E0216 16:58:27.374048 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953620d807\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953620d807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647251975 +0000 UTC m=+0.986305489,LastTimestamp:2026-02-16 16:58:16.828399369 +0000 UTC m=+1.167452913,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.379336 master-0 kubenswrapper[4155]: E0216 16:58:27.379229 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953621359f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953621359f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647275935 +0000 UTC m=+0.986329449,LastTimestamp:2026-02-16 16:58:16.828434368 +0000 UTC m=+1.167487912,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.383237 master-0 kubenswrapper[4155]: E0216 16:58:27.383148 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c895362161af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c895362161af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647287215 +0000 UTC m=+0.986340739,LastTimestamp:2026-02-16 16:58:16.828452338 +0000 UTC m=+1.167505882,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.387112 master-0 kubenswrapper[4155]: E0216 16:58:27.386905 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953620d807\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953620d807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647251975 +0000 UTC m=+0.986305489,LastTimestamp:2026-02-16 16:58:16.881783369 +0000 UTC m=+1.220836903,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.391667 master-0 kubenswrapper[4155]: E0216 16:58:27.391604 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953621359f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953621359f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647275935 +0000 UTC m=+0.986329449,LastTimestamp:2026-02-16 16:58:16.881809039 +0000 UTC m=+1.220862573,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.397066 master-0 kubenswrapper[4155]: E0216 16:58:27.396995 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c895362161af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c895362161af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647287215 +0000 UTC m=+0.986340739,LastTimestamp:2026-02-16 16:58:16.881826879 +0000 UTC m=+1.220880423,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.400849 master-0 kubenswrapper[4155]: E0216 16:58:27.400617 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953620d807\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953620d807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647251975 +0000 UTC m=+0.986305489,LastTimestamp:2026-02-16 16:58:16.883318703 +0000 UTC m=+1.222372237,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.404824 master-0 kubenswrapper[4155]: E0216 16:58:27.404736 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953621359f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953621359f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647275935 +0000 UTC m=+0.986329449,LastTimestamp:2026-02-16 16:58:16.883340362 +0000 UTC m=+1.222393896,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.409548 master-0 kubenswrapper[4155]: E0216 16:58:27.409420 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c895362161af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c895362161af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647287215 +0000 UTC m=+0.986340739,LastTimestamp:2026-02-16 16:58:16.883356122 +0000 UTC m=+1.222409666,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.413523 master-0 kubenswrapper[4155]: E0216 16:58:27.413451 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953620d807\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953620d807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647251975 +0000 UTC m=+0.986305489,LastTimestamp:2026-02-16 16:58:16.883412222 +0000 UTC m=+1.222465756,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.418661 master-0 kubenswrapper[4155]: E0216 16:58:27.417839 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953621359f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953621359f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647275935 +0000 UTC m=+0.986329449,LastTimestamp:2026-02-16 16:58:16.883456441 +0000 UTC m=+1.222509985,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.421777 master-0 kubenswrapper[4155]: E0216 16:58:27.421690 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c895362161af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c895362161af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647287215 +0000 UTC m=+0.986340739,LastTimestamp:2026-02-16 16:58:16.883477911 +0000 UTC m=+1.222531455,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.425740 master-0 kubenswrapper[4155]: E0216 16:58:27.425664 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953620d807\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953620d807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647251975 +0000 UTC m=+0.986305489,LastTimestamp:2026-02-16 16:58:16.884720208 +0000 UTC m=+1.223773752,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.429532 master-0 kubenswrapper[4155]: E0216 16:58:27.429465 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953621359f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953621359f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647275935 +0000 UTC m=+0.986329449,LastTimestamp:2026-02-16 16:58:16.884981315 +0000 UTC m=+1.224034849,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.446998 master-0 kubenswrapper[4155]: E0216 16:58:27.446858 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c895362161af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c895362161af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647287215 +0000 UTC m=+0.986340739,LastTimestamp:2026-02-16 16:58:16.884999824 +0000 UTC m=+1.224053368,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.450874 master-0 kubenswrapper[4155]: E0216 16:58:27.450764 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953620d807\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953620d807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647251975 +0000 UTC m=+0.986305489,LastTimestamp:2026-02-16 16:58:16.885801586 +0000 UTC m=+1.224855120,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.454746 master-0 kubenswrapper[4155]: E0216 16:58:27.454672 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953621359f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953621359f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647275935 +0000 UTC m=+0.986329449,LastTimestamp:2026-02-16 16:58:16.885838275 +0000 UTC m=+1.224891819,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.458281 master-0 kubenswrapper[4155]: E0216 16:58:27.458050 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c895362161af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c895362161af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647287215 +0000 UTC m=+0.986340739,LastTimestamp:2026-02-16 16:58:16.886079273 +0000 UTC m=+1.225132807,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.462205 master-0 kubenswrapper[4155]: E0216 16:58:27.462132 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953620d807\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953620d807 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647251975 +0000 UTC m=+0.986305489,LastTimestamp:2026-02-16 16:58:16.886829445 +0000 UTC m=+1.225882989,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.466225 master-0 kubenswrapper[4155]: E0216 16:58:27.466134 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.1894c8953621359f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.1894c8953621359f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:16.647275935 +0000 UTC m=+0.986329449,LastTimestamp:2026-02-16 16:58:16.886866684 +0000 UTC m=+1.225920228,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.470820 master-0 kubenswrapper[4155]: E0216 16:58:27.470724 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c89580957941 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:17.896409409 +0000 UTC m=+2.235462913,LastTimestamp:2026-02-16 16:58:17.896409409 +0000 UTC m=+2.235462913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.474149 master-0 kubenswrapper[4155]: E0216 16:58:27.474061 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c895809c1900 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:17.89684352 +0000 UTC m=+2.235897064,LastTimestamp:2026-02-16 16:58:17.89684352 +0000 UTC m=+2.235897064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
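
From here the rejected events are pod-lifecycle notifications (image pulls, container creates and starts) for the bootstrap static pods; the kubelet keeps running those pods regardless, since static pods do not depend on the API server. The anonymous identity normally disappears once the kubelet's TLS bootstrap completes and it authenticates as system:node:master-0. A sketch for checking that from an admin kubeconfig, with <csr-name> standing in for whatever request shows up as Pending:

  $ oc get csr
  $ oc adm certificate approve <csr-name>

In a normal OpenShift bootstrap the node-client CSRs are approved automatically and no manual step is needed; the commands are only a way to verify progress.
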
Feb 16 16:58:27.477494 master-0 kubenswrapper[4155]: E0216 16:58:27.477367 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894c89583229303 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:17.939211011 +0000 UTC m=+2.278264515,LastTimestamp:2026-02-16 16:58:17.939211011 +0000 UTC m=+2.278264515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.481498 master-0 kubenswrapper[4155]: E0216 16:58:27.481419 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c895845d4ae1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:17.959836385 +0000 UTC m=+2.298889939,LastTimestamp:2026-02-16 16:58:17.959836385 +0000 UTC m=+2.298889939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.484953 master-0 kubenswrapper[4155]: E0216 16:58:27.484861 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894c8958583fb8d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:17.979149197 +0000 UTC m=+2.318202721,LastTimestamp:2026-02-16 16:58:17.979149197 +0000 UTC m=+2.318202721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.488424 master-0 kubenswrapper[4155]: E0216 16:58:27.488329 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8960a9323fc openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" in 2.251s (2.251s including waiting). Image size: 459915626 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.211512316 +0000 UTC m=+4.550565820,LastTimestamp:2026-02-16 16:58:20.211512316 +0000 UTC m=+4.550565820,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.492065 master-0 kubenswrapper[4155]: E0216 16:58:27.491996 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894c8960ba451a0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\" in 2.29s (2.29s including waiting). Image size: 524042902 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.229415328 +0000 UTC m=+4.568468832,LastTimestamp:2026-02-16 16:58:20.229415328 +0000 UTC m=+4.568468832,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.495694 master-0 kubenswrapper[4155]: E0216 16:58:27.495496 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c896193b1b65 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.457401189 +0000 UTC m=+4.796454693,LastTimestamp:2026-02-16 16:58:20.457401189 +0000 UTC m=+4.796454693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.498735 master-0 kubenswrapper[4155]: E0216 16:58:27.498622 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894c8961eeda133 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.552986931 +0000 UTC m=+4.892040435,LastTimestamp:2026-02-16 16:58:20.552986931 +0000 UTC m=+4.892040435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.501828 master-0 kubenswrapper[4155]: E0216 16:58:27.501689 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c89627ed433c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.70395782 +0000 UTC m=+5.043011334,LastTimestamp:2026-02-16 16:58:20.70395782 +0000 UTC m=+5.043011334,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.505011 master-0 kubenswrapper[4155]: E0216 16:58:27.504942 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894c89629c23b52 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.734692178 +0000 UTC m=+5.073745682,LastTimestamp:2026-02-16 16:58:20.734692178 +0000 UTC m=+5.073745682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.508683 master-0 kubenswrapper[4155]: E0216 16:58:27.508573 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894c89629e58b62 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.737006434 +0000 UTC m=+5.076059938,LastTimestamp:2026-02-16 16:58:20.737006434 +0000 UTC m=+5.076059938,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.512416 master-0 kubenswrapper[4155]: E0216 16:58:27.512326 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8962e366010 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.809412624 +0000 UTC m=+5.148466128,LastTimestamp:2026-02-16 16:58:20.809412624 +0000 UTC m=+5.148466128,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.515956 master-0 kubenswrapper[4155]: E0216 16:58:27.515861 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894c896420077b5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:21.141424053 +0000 UTC m=+5.480477557,LastTimestamp:2026-02-16 16:58:21.141424053 +0000 UTC m=+5.480477557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.519896 master-0 kubenswrapper[4155]: E0216 16:58:27.519657 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894c8964a8f75f3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:21.285012979 +0000 UTC m=+5.624066483,LastTimestamp:2026-02-16 16:58:21.285012979 +0000 UTC m=+5.624066483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.537185 master-0 kubenswrapper[4155]: E0216 16:58:27.532044 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8964ff34843 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:21.375440963 +0000 UTC m=+5.714494467,LastTimestamp:2026-02-16 16:58:21.375440963 +0000 UTC m=+5.714494467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.541982 master-0 kubenswrapper[4155]: E0216 16:58:27.537266 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8965b037b1e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:21.561051934 +0000 UTC m=+5.900105438,LastTimestamp:2026-02-16 16:58:21.561051934 +0000 UTC m=+5.900105438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.544005 master-0 kubenswrapper[4155]: E0216 16:58:27.543120 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894c8962e366010\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8962e366010 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.809412624 +0000 UTC m=+5.148466128,LastTimestamp:2026-02-16 16:58:21.812182348 +0000 UTC m=+6.151235852,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.564679 master-0 kubenswrapper[4155]: E0216 16:58:27.564479 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894c8964ff34843\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8964ff34843 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:21.375440963 +0000 UTC m=+5.714494467,LastTimestamp:2026-02-16 16:58:22.069764684 +0000 UTC m=+6.408818188,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.572395 master-0 kubenswrapper[4155]: E0216 16:58:27.572198 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894c8965b037b1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8965b037b1e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:21.561051934 +0000 UTC m=+5.900105438,LastTimestamp:2026-02-16 16:58:22.08352064 +0000 UTC m=+6.422574154,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.577491 master-0 kubenswrapper[4155]: E0216 16:58:27.577310 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c896a5ccea82 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:22.81576717 +0000 UTC m=+7.154820674,LastTimestamp:2026-02-16 16:58:22.81576717 +0000 UTC m=+7.154820674,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.582717 master-0 kubenswrapper[4155]: E0216 16:58:27.582601 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894c896a5ccea82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c896a5ccea82 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:22.81576717 +0000 UTC m=+7.154820674,LastTimestamp:2026-02-16 16:58:23.81817184 +0000 UTC m=+8.157225354,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
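
The two Warning events above show kube-rbac-proxy-crio crash-looping on the node; the restart back-off is local kubelet behavior and independent of the event rejections. With no working API access at this point, the container can still be inspected directly on the host through CRI-O; a sketch, with <container-id> taken from the first command's output:

  $ crictl ps -a --name kube-rbac-proxy-crio
  $ crictl logs <container-id>

crictl ps -a lists exited containers as well, which is what a back-off loop leaves behind.
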
Feb 16 16:58:27.587614 master-0 kubenswrapper[4155]: E0216 16:58:27.587449 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c8970ac21185 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 6.612s (6.612s including waiting). Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.509555077 +0000 UTC m=+8.848608611,LastTimestamp:2026-02-16 16:58:24.509555077 +0000 UTC m=+8.848608611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.591688 master-0 kubenswrapper[4155]: E0216 16:58:27.591536 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894c8970bcf11a8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 6.547s (6.548s including waiting). Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.527184296 +0000 UTC m=+8.866237840,LastTimestamp:2026-02-16 16:58:24.527184296 +0000 UTC m=+8.866237840,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.596246 master-0 kubenswrapper[4155]: E0216 16:58:27.596076 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c8970f0ba377 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" in 6.685s (6.685s including waiting). Image size: 938665460 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.581485431 +0000 UTC m=+8.920538945,LastTimestamp:2026-02-16 16:58:24.581485431 +0000 UTC m=+8.920538945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.600959 master-0 kubenswrapper[4155]: E0216 16:58:27.600819 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894c8971724fff8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.71736524 +0000 UTC m=+9.056418744,LastTimestamp:2026-02-16 16:58:24.71736524 +0000 UTC m=+9.056418744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.604800 master-0 kubenswrapper[4155]: E0216 16:58:27.604675 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c89717307df4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.718118388 +0000 UTC m=+9.057171892,LastTimestamp:2026-02-16 16:58:24.718118388 +0000 UTC m=+9.057171892,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.609609 master-0 kubenswrapper[4155]: E0216 16:58:27.609462 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.1894c897179d047f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:9460ca0802075a8a6a10d7b3e6052c4d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.725230719 +0000 UTC m=+9.064284223,LastTimestamp:2026-02-16 16:58:24.725230719 +0000 UTC m=+9.064284223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.614402 master-0 kubenswrapper[4155]: E0216 16:58:27.614252 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c89717d8e88b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.729155723 +0000 UTC m=+9.068209227,LastTimestamp:2026-02-16 16:58:24.729155723 +0000 UTC m=+9.068209227,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.614689 master-0 kubenswrapper[4155]: I0216 16:58:27.614646 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
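
This csi_plugin message repeats the one logged at 16:58:27.336000: at startup the kubelet polls the API server for its own CSINode object before publishing CSI drivers, and the poll fails for the same anonymous-identity reason as the event writes. Once the node authenticates, the object should become retrievable; a sketch, assuming an admin kubeconfig:

  $ kubectl get csinode master-0

csinode is the standard storage.k8s.io resource named in the denial, so a successful get here is a direct check that this wait has cleared.
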
Feb 16 16:58:27.619215 master-0 kubenswrapper[4155]: E0216 16:58:27.619070 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c89717e9a5f2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.730252786 +0000 UTC m=+9.069306290,LastTimestamp:2026-02-16 16:58:24.730252786 +0000 UTC m=+9.069306290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.623085 master-0 kubenswrapper[4155]: E0216 16:58:27.622980 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c8971ce3cf8d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.813756301 +0000 UTC m=+9.152809805,LastTimestamp:2026-02-16 16:58:24.813756301 +0000 UTC m=+9.152809805,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.626786 master-0 kubenswrapper[4155]: E0216 16:58:27.626647 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c8971d84b686 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.82430119 +0000 UTC m=+9.163354694,LastTimestamp:2026-02-16 16:58:24.82430119 +0000 UTC m=+9.163354694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.630704 master-0 kubenswrapper[4155]: E0216 16:58:27.630560 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c8975a0aaca3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:25.839713443 +0000 UTC m=+10.178766957,LastTimestamp:2026-02-16 16:58:25.839713443 +0000 UTC m=+10.178766957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.635095 master-0 kubenswrapper[4155]: E0216 16:58:27.634916 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c89765d1a844 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:26.037303364 +0000 UTC m=+10.376356858,LastTimestamp:2026-02-16 16:58:26.037303364 +0000 UTC m=+10.376356858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.638770 master-0 kubenswrapper[4155]: E0216 16:58:27.638650 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c8976661dd99 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:26.046754201 +0000 UTC m=+10.385807735,LastTimestamp:2026-02-16 16:58:26.046754201 +0000 UTC m=+10.385807735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.642625 master-0 kubenswrapper[4155]: E0216 16:58:27.642539 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c897667329ad openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:26.047887789 +0000 UTC m=+10.386941293,LastTimestamp:2026-02-16 16:58:26.047887789 +0000 UTC m=+10.386941293,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.645938 master-0 kubenswrapper[4155]: E0216 16:58:27.645821 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c8978ae4dcce kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\" in 1.929s (1.929s including waiting). Image size: 500068323 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:26.65931899 +0000 UTC m=+10.998372494,LastTimestamp:2026-02-16 16:58:26.65931899 +0000 UTC m=+10.998372494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.666466 master-0 kubenswrapper[4155]: E0216 16:58:27.666325 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c89793bbc5f9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:26.807621113 +0000 UTC m=+11.146674617,LastTimestamp:2026-02-16 16:58:26.807621113 +0000 UTC m=+11.146674617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.672773 master-0 kubenswrapper[4155]: E0216 16:58:27.672663 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c89794657fb2 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:26.818744242 +0000 UTC m=+11.157797746,LastTimestamp:2026-02-16 16:58:26.818744242 +0000 UTC m=+11.157797746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 16:58:27.680316 master-0 kubenswrapper[4155]: E0216 16:58:27.680177 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c89795f01c1f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:26.844605471 +0000 UTC m=+11.183658975,LastTimestamp:2026-02-16 16:58:26.844605471 +0000 UTC
m=+11.183658975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:27.685074 master-0 kubenswrapper[4155]: E0216 16:58:27.684981 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1894c89717307df4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c89717307df4 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.718118388 +0000 UTC m=+9.057171892,LastTimestamp:2026-02-16 16:58:27.072038551 +0000 UTC m=+11.411092055,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:27.689191 master-0 kubenswrapper[4155]: E0216 16:58:27.689110 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.1894c89717d8e88b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.1894c89717d8e88b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:80420f2e7c3cdda71f7d0d6ccbe6f9f3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:24.729155723 +0000 UTC m=+9.068209227,LastTimestamp:2026-02-16 16:58:27.080623587 +0000 UTC m=+11.419677091,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:27.847638 master-0 kubenswrapper[4155]: I0216 16:58:27.847502 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26"} Feb 16 16:58:27.848149 master-0 kubenswrapper[4155]: I0216 16:58:27.847679 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:27.848715 master-0 kubenswrapper[4155]: I0216 16:58:27.848670 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:27.848771 master-0 kubenswrapper[4155]: I0216 16:58:27.848719 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:27.848771 master-0 kubenswrapper[4155]: I0216 16:58:27.848733 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:28.609495 master-0 
kubenswrapper[4155]: I0216 16:58:28.609450 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:28.849808 master-0 kubenswrapper[4155]: I0216 16:58:28.849760 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:28.850810 master-0 kubenswrapper[4155]: I0216 16:58:28.850730 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:28.850879 master-0 kubenswrapper[4155]: I0216 16:58:28.850820 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:28.850879 master-0 kubenswrapper[4155]: I0216 16:58:28.850840 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:29.223368 master-0 kubenswrapper[4155]: E0216 16:58:29.223312 4155 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 16:58:29.234752 master-0 kubenswrapper[4155]: E0216 16:58:29.234554 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c898240c5aa5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" in 3.18s (3.18s including waiting). 
Image size: 509806416 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:29.228821157 +0000 UTC m=+13.567874671,LastTimestamp:2026-02-16 16:58:29.228821157 +0000 UTC m=+13.567874671,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:29.451307 master-0 kubenswrapper[4155]: I0216 16:58:29.451191 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:29.452861 master-0 kubenswrapper[4155]: I0216 16:58:29.452798 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:29.452861 master-0 kubenswrapper[4155]: I0216 16:58:29.452844 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:29.452861 master-0 kubenswrapper[4155]: I0216 16:58:29.452858 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:29.453384 master-0 kubenswrapper[4155]: I0216 16:58:29.452945 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:29.461842 master-0 kubenswrapper[4155]: E0216 16:58:29.461798 4155 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 16 16:58:29.476846 master-0 kubenswrapper[4155]: E0216 16:58:29.476659 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c89832737507 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:29.470459143 +0000 UTC m=+13.809512687,LastTimestamp:2026-02-16 16:58:29.470459143 +0000 UTC m=+13.809512687,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:29.488832 master-0 kubenswrapper[4155]: E0216 16:58:29.488641 4155 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.1894c89833234c06 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5d1e91e5a1fed5cf7076a92d2830d36f,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:29.481982982 +0000 UTC 
m=+13.821036506,LastTimestamp:2026-02-16 16:58:29.481982982 +0000 UTC m=+13.821036506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:29.612051 master-0 kubenswrapper[4155]: I0216 16:58:29.611982 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:29.856662 master-0 kubenswrapper[4155]: I0216 16:58:29.856583 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31"} Feb 16 16:58:29.857529 master-0 kubenswrapper[4155]: I0216 16:58:29.856733 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:29.857879 master-0 kubenswrapper[4155]: I0216 16:58:29.857845 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:29.857968 master-0 kubenswrapper[4155]: I0216 16:58:29.857893 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:29.857968 master-0 kubenswrapper[4155]: I0216 16:58:29.857911 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:29.964849 master-0 kubenswrapper[4155]: I0216 16:58:29.964719 4155 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:29.965197 master-0 kubenswrapper[4155]: I0216 16:58:29.964969 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:29.966443 master-0 kubenswrapper[4155]: I0216 16:58:29.966417 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:29.966615 master-0 kubenswrapper[4155]: I0216 16:58:29.966601 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:29.966749 master-0 kubenswrapper[4155]: I0216 16:58:29.966736 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:29.972367 master-0 kubenswrapper[4155]: I0216 16:58:29.972328 4155 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:29.973434 master-0 kubenswrapper[4155]: I0216 16:58:29.973405 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:30.611759 master-0 kubenswrapper[4155]: I0216 16:58:30.611724 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:30.784745 master-0 kubenswrapper[4155]: I0216 16:58:30.784700 4155 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
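Every "Server rejected event (will not retry!)" entry above shares one root cause: until its client certificate is issued, the kubelet reaches the bootstrap API server as system:anonymous, a user that RBAC forbids from creating or patching events (and from reading leases, csinodes, and nodes, as the other rejections show). A minimal client-go sketch for confirming such a denial with a SelfSubjectAccessReview follows; the kubeconfig path is an assumption, and the check mirrors one verb/resource pair from the log rather than anything the kubelet itself runs.

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative path; the kubelet's kubeconfig location varies by distro.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server whether the current credentials may create Events in
	// the namespace that is rejecting the kubelet above. For system:anonymous
	// this comes back allowed=false, matching the "events is forbidden" errors.
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "openshift-kube-apiserver",
				Verb:      "create",
				Resource:  "events",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v denied=%v reason=%q\n",
		resp.Status.Allowed, resp.Status.Denied, resp.Status.Reason)
}
```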
Feb 16 16:58:30.789877 master-0 kubenswrapper[4155]: I0216 16:58:30.789837 4155 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:30.858801 master-0 kubenswrapper[4155]: I0216 16:58:30.858758 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:30.858801 master-0 kubenswrapper[4155]: I0216 16:58:30.858782 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:30.859383 master-0 kubenswrapper[4155]: I0216 16:58:30.858851 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:30.859498 master-0 kubenswrapper[4155]: I0216 16:58:30.859467 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:30.859498 master-0 kubenswrapper[4155]: I0216 16:58:30.859485 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:30.859498 master-0 kubenswrapper[4155]: I0216 16:58:30.859493 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:30.859690 master-0 kubenswrapper[4155]: I0216 16:58:30.859666 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:30.859744 master-0 kubenswrapper[4155]: I0216 16:58:30.859697 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:30.859744 master-0 kubenswrapper[4155]: I0216 16:58:30.859709 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:30.863406 master-0 kubenswrapper[4155]: I0216 16:58:30.863313 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 16:58:31.239993 master-0 kubenswrapper[4155]: I0216 16:58:31.239754 4155 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 16 16:58:31.264356 master-0 kubenswrapper[4155]: I0216 16:58:31.264264 4155 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 16:58:31.610151 master-0 kubenswrapper[4155]: I0216 16:58:31.610083 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:31.865212 master-0 kubenswrapper[4155]: I0216 16:58:31.865106 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:31.865989 master-0 kubenswrapper[4155]: I0216 16:58:31.865712 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:31.866360 master-0 kubenswrapper[4155]: I0216 16:58:31.866315 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:31.866415 master-0 kubenswrapper[4155]: I0216 16:58:31.866369 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" 
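The "kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates" entry above marks the kubelet submitting a CertificateSigningRequest for its API server client credential; the log later records csr-2fdpd approved at 16:58:35 and issued at 16:58:44. As a rough illustration of the approver's side under that signer name, a client-go sketch follows; the kubeconfig path is an assumption, and a production approver would also verify the requesting identity and key usages before approving anything.

```go
package main

import (
	"context"
	"fmt"

	certificatesv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative admin credentials; an in-cluster approver would use rest.InClusterConfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/admin.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	csrs, err := cs.CertificatesV1().CertificateSigningRequests().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range csrs.Items {
		csr := &csrs.Items[i]
		// A CSR with no conditions is still pending, as csr-2fdpd was above.
		// Only the kubelet client signer seen in this log is considered here.
		if len(csr.Status.Conditions) != 0 ||
			csr.Spec.SignerName != certificatesv1.KubeAPIServerClientKubeletSignerName {
			continue
		}
		csr.Status.Conditions = append(csr.Status.Conditions, certificatesv1.CertificateSigningRequestCondition{
			Type:    certificatesv1.CertificateApproved,
			Status:  corev1.ConditionTrue,
			Reason:  "BootstrapApproval",
			Message: "kubelet client CSR approved during bootstrap",
		})
		// Approval is recorded through the dedicated approval subresource.
		if _, err := cs.CertificatesV1().CertificateSigningRequests().
			UpdateApproval(ctx, csr.Name, csr, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("approved", csr.Name)
	}
}
```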
Feb 16 16:58:31.866415 master-0 kubenswrapper[4155]: I0216 16:58:31.866384 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:31.866558 master-0 kubenswrapper[4155]: I0216 16:58:31.866511 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:31.866613 master-0 kubenswrapper[4155]: I0216 16:58:31.866571 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:31.866613 master-0 kubenswrapper[4155]: I0216 16:58:31.866585 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:32.609505 master-0 kubenswrapper[4155]: I0216 16:58:32.609459 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:32.866897 master-0 kubenswrapper[4155]: I0216 16:58:32.866796 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:32.867500 master-0 kubenswrapper[4155]: I0216 16:58:32.867483 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:32.867580 master-0 kubenswrapper[4155]: I0216 16:58:32.867570 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:32.867641 master-0 kubenswrapper[4155]: I0216 16:58:32.867632 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:33.505144 master-0 kubenswrapper[4155]: W0216 16:58:33.505035 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 16 16:58:33.505144 master-0 kubenswrapper[4155]: E0216 16:58:33.505143 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 16 16:58:33.612364 master-0 kubenswrapper[4155]: I0216 16:58:33.612302 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:33.723241 master-0 kubenswrapper[4155]: W0216 16:58:33.723190 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:33.723241 master-0 kubenswrapper[4155]: E0216 16:58:33.723259 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 16 16:58:34.611880 master-0 
kubenswrapper[4155]: I0216 16:58:34.611823 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:35.202318 master-0 kubenswrapper[4155]: I0216 16:58:35.202137 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:35.202552 master-0 kubenswrapper[4155]: I0216 16:58:35.202337 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:35.204420 master-0 kubenswrapper[4155]: I0216 16:58:35.203538 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:35.204420 master-0 kubenswrapper[4155]: I0216 16:58:35.203614 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:35.204420 master-0 kubenswrapper[4155]: I0216 16:58:35.203638 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:35.288472 master-0 kubenswrapper[4155]: I0216 16:58:35.288412 4155 csr.go:261] certificate signing request csr-2fdpd is approved, waiting to be issued Feb 16 16:58:35.609827 master-0 kubenswrapper[4155]: I0216 16:58:35.609773 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:36.232553 master-0 kubenswrapper[4155]: E0216 16:58:36.232122 4155 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 16:58:36.463289 master-0 kubenswrapper[4155]: I0216 16:58:36.463217 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:36.464579 master-0 kubenswrapper[4155]: I0216 16:58:36.464538 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:36.464777 master-0 kubenswrapper[4155]: I0216 16:58:36.464753 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:36.464954 master-0 kubenswrapper[4155]: I0216 16:58:36.464895 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:36.465500 master-0 kubenswrapper[4155]: I0216 16:58:36.465385 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:36.473127 master-0 kubenswrapper[4155]: E0216 16:58:36.473068 4155 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 16 16:58:36.610893 master-0 kubenswrapper[4155]: I0216 16:58:36.610837 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Feb 16 16:58:36.730301 master-0 kubenswrapper[4155]: E0216 16:58:36.730225 4155 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 16:58:36.780300 master-0 kubenswrapper[4155]: I0216 16:58:36.780219 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:36.781709 master-0 kubenswrapper[4155]: I0216 16:58:36.781645 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:36.781799 master-0 kubenswrapper[4155]: I0216 16:58:36.781719 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:36.781799 master-0 kubenswrapper[4155]: I0216 16:58:36.781743 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:36.782394 master-0 kubenswrapper[4155]: I0216 16:58:36.782327 4155 scope.go:117] "RemoveContainer" containerID="bb1c3f01d999be4a5e6538cfab5c68176952f4d5ac11927cec1e80502847a35e" Feb 16 16:58:36.787461 master-0 kubenswrapper[4155]: I0216 16:58:36.787385 4155 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:36.787766 master-0 kubenswrapper[4155]: I0216 16:58:36.787658 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:36.788918 master-0 kubenswrapper[4155]: I0216 16:58:36.788865 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:36.788918 master-0 kubenswrapper[4155]: I0216 16:58:36.788932 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:36.788918 master-0 kubenswrapper[4155]: I0216 16:58:36.788945 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:36.792840 master-0 kubenswrapper[4155]: I0216 16:58:36.792810 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:36.793025 master-0 kubenswrapper[4155]: E0216 16:58:36.792765 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894c8962e366010\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8962e366010 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:20.809412624 +0000 UTC m=+5.148466128,LastTimestamp:2026-02-16 16:58:36.787086084 +0000 UTC m=+21.126139618,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:36.794189 master-0 kubenswrapper[4155]: I0216 16:58:36.794032 4155 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:36.876672 master-0 kubenswrapper[4155]: I0216 16:58:36.876127 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:36.877330 master-0 kubenswrapper[4155]: I0216 16:58:36.877210 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:36.877330 master-0 kubenswrapper[4155]: I0216 16:58:36.877327 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:36.877457 master-0 kubenswrapper[4155]: I0216 16:58:36.877339 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:37.009674 master-0 kubenswrapper[4155]: E0216 16:58:37.009561 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894c8964ff34843\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8964ff34843 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:21.375440963 +0000 UTC m=+5.714494467,LastTimestamp:2026-02-16 16:58:37.002995785 +0000 UTC m=+21.342049329,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:37.027848 master-0 kubenswrapper[4155]: E0216 16:58:37.027691 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894c8965b037b1e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c8965b037b1e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:21.561051934 +0000 UTC m=+5.900105438,LastTimestamp:2026-02-16 16:58:37.021255313 +0000 UTC m=+21.360308827,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:37.150587 master-0 kubenswrapper[4155]: W0216 16:58:37.150427 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 16 16:58:37.150587 master-0 kubenswrapper[4155]: E0216 16:58:37.150474 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 16 16:58:37.612777 master-0 kubenswrapper[4155]: I0216 16:58:37.612674 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:37.881127 master-0 kubenswrapper[4155]: I0216 16:58:37.880549 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 16:58:37.881905 master-0 kubenswrapper[4155]: I0216 16:58:37.881856 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/1.log" Feb 16 16:58:37.882864 master-0 kubenswrapper[4155]: I0216 16:58:37.882803 4155 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="6bb85739a7a836abfdb346023915e77abb0b10b023f88b2b7e7c9536a35657a8" exitCode=1 Feb 16 16:58:37.882967 master-0 kubenswrapper[4155]: I0216 16:58:37.882904 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"6bb85739a7a836abfdb346023915e77abb0b10b023f88b2b7e7c9536a35657a8"} Feb 16 16:58:37.883026 master-0 kubenswrapper[4155]: I0216 16:58:37.882990 4155 scope.go:117] "RemoveContainer" containerID="bb1c3f01d999be4a5e6538cfab5c68176952f4d5ac11927cec1e80502847a35e" Feb 16 16:58:37.883072 master-0 kubenswrapper[4155]: I0216 16:58:37.883022 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:37.883268 master-0 kubenswrapper[4155]: I0216 16:58:37.883180 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:37.884060 master-0 kubenswrapper[4155]: I0216 16:58:37.884012 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:37.884138 master-0 kubenswrapper[4155]: I0216 16:58:37.884069 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:37.884138 master-0 kubenswrapper[4155]: I0216 16:58:37.884087 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:37.884853 master-0 kubenswrapper[4155]: I0216 16:58:37.884809 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:37.884853 master-0 kubenswrapper[4155]: I0216 16:58:37.884845 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:37.884986 master-0 kubenswrapper[4155]: I0216 16:58:37.884861 4155 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:37.885264 master-0 kubenswrapper[4155]: I0216 16:58:37.885228 4155 scope.go:117] "RemoveContainer" containerID="6bb85739a7a836abfdb346023915e77abb0b10b023f88b2b7e7c9536a35657a8" Feb 16 16:58:37.885759 master-0 kubenswrapper[4155]: E0216 16:58:37.885407 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 16:58:37.888047 master-0 kubenswrapper[4155]: E0216 16:58:37.887833 4155 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.1894c896a5ccea82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.1894c896a5ccea82 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:b3322fd3717f4aec0d8f54ec7862c07e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 16:58:22.81576717 +0000 UTC m=+7.154820674,LastTimestamp:2026-02-16 16:58:37.885377807 +0000 UTC m=+22.224431351,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 16:58:37.891212 master-0 kubenswrapper[4155]: I0216 16:58:37.891165 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 16:58:37.894319 master-0 kubenswrapper[4155]: W0216 16:58:37.894226 4155 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 16 16:58:37.894412 master-0 kubenswrapper[4155]: E0216 16:58:37.894323 4155 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 16 16:58:38.610830 master-0 kubenswrapper[4155]: I0216 16:58:38.610755 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:38.888004 master-0 kubenswrapper[4155]: I0216 16:58:38.887839 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 
16:58:38.888731 master-0 kubenswrapper[4155]: I0216 16:58:38.888681 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:38.889768 master-0 kubenswrapper[4155]: I0216 16:58:38.889714 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:38.889863 master-0 kubenswrapper[4155]: I0216 16:58:38.889771 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:38.889863 master-0 kubenswrapper[4155]: I0216 16:58:38.889792 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:39.612876 master-0 kubenswrapper[4155]: I0216 16:58:39.612765 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:40.612800 master-0 kubenswrapper[4155]: I0216 16:58:40.612746 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:41.612549 master-0 kubenswrapper[4155]: I0216 16:58:41.612439 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:42.614817 master-0 kubenswrapper[4155]: I0216 16:58:42.614707 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:43.241411 master-0 kubenswrapper[4155]: E0216 16:58:43.241270 4155 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 16:58:43.473761 master-0 kubenswrapper[4155]: I0216 16:58:43.473673 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:43.475085 master-0 kubenswrapper[4155]: I0216 16:58:43.475035 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:43.475158 master-0 kubenswrapper[4155]: I0216 16:58:43.475104 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:43.475158 master-0 kubenswrapper[4155]: I0216 16:58:43.475124 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:43.475243 master-0 kubenswrapper[4155]: I0216 16:58:43.475193 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:43.480720 master-0 kubenswrapper[4155]: E0216 16:58:43.480671 4155 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Feb 
16 16:58:43.611672 master-0 kubenswrapper[4155]: I0216 16:58:43.611615 4155 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 16:58:44.573748 master-0 kubenswrapper[4155]: I0216 16:58:44.573700 4155 csr.go:257] certificate signing request csr-2fdpd is issued Feb 16 16:58:44.614221 master-0 kubenswrapper[4155]: I0216 16:58:44.614168 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:44.632680 master-0 kubenswrapper[4155]: I0216 16:58:44.632636 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:44.690803 master-0 kubenswrapper[4155]: I0216 16:58:44.690754 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:44.959998 master-0 kubenswrapper[4155]: I0216 16:58:44.959875 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:44.959998 master-0 kubenswrapper[4155]: E0216 16:58:44.959951 4155 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 16 16:58:44.981992 master-0 kubenswrapper[4155]: I0216 16:58:44.981893 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:45.000195 master-0 kubenswrapper[4155]: I0216 16:58:45.000118 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:45.057286 master-0 kubenswrapper[4155]: I0216 16:58:45.057238 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:45.317083 master-0 kubenswrapper[4155]: I0216 16:58:45.316961 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:45.317083 master-0 kubenswrapper[4155]: E0216 16:58:45.317007 4155 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 16 16:58:45.416388 master-0 kubenswrapper[4155]: I0216 16:58:45.416337 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:45.437111 master-0 kubenswrapper[4155]: I0216 16:58:45.436902 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:45.496815 master-0 kubenswrapper[4155]: I0216 16:58:45.496762 4155 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 16:58:45.505972 master-0 kubenswrapper[4155]: I0216 16:58:45.505826 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:45.575796 master-0 kubenswrapper[4155]: I0216 16:58:45.575456 4155 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 13:22:10.638764512 +0000 UTC Feb 16 16:58:45.575796 master-0 kubenswrapper[4155]: I0216 16:58:45.575503 4155 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h23m25.063265097s for next certificate rotation Feb 16 16:58:45.781962 master-0 kubenswrapper[4155]: I0216 16:58:45.781891 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:45.781962 master-0 
kubenswrapper[4155]: E0216 16:58:45.781964 4155 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 16 16:58:46.332451 master-0 kubenswrapper[4155]: I0216 16:58:46.332394 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:46.348230 master-0 kubenswrapper[4155]: I0216 16:58:46.348127 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:46.403633 master-0 kubenswrapper[4155]: I0216 16:58:46.403568 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:46.678554 master-0 kubenswrapper[4155]: I0216 16:58:46.678401 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:46.678554 master-0 kubenswrapper[4155]: E0216 16:58:46.678441 4155 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Feb 16 16:58:46.731104 master-0 kubenswrapper[4155]: E0216 16:58:46.730983 4155 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 16:58:49.780825 master-0 kubenswrapper[4155]: I0216 16:58:49.780722 4155 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 16:58:49.781971 master-0 kubenswrapper[4155]: I0216 16:58:49.781540 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:49.781971 master-0 kubenswrapper[4155]: I0216 16:58:49.781559 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:49.781971 master-0 kubenswrapper[4155]: I0216 16:58:49.781569 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:49.781971 master-0 kubenswrapper[4155]: I0216 16:58:49.781806 4155 scope.go:117] "RemoveContainer" containerID="6bb85739a7a836abfdb346023915e77abb0b10b023f88b2b7e7c9536a35657a8" Feb 16 16:58:49.781971 master-0 kubenswrapper[4155]: E0216 16:58:49.781942 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(b3322fd3717f4aec0d8f54ec7862c07e)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="b3322fd3717f4aec0d8f54ec7862c07e" Feb 16 16:58:50.159028 master-0 kubenswrapper[4155]: I0216 16:58:50.158837 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:50.175175 master-0 kubenswrapper[4155]: I0216 16:58:50.175096 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:50.232463 master-0 kubenswrapper[4155]: I0216 16:58:50.232347 4155 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Feb 16 16:58:50.247013 master-0 kubenswrapper[4155]: E0216 16:58:50.246947 4155 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Feb 16 16:58:50.481064 master-0 kubenswrapper[4155]: I0216 16:58:50.480892 4155 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Feb 16 16:58:50.482555 master-0 kubenswrapper[4155]: I0216 16:58:50.482508 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 16:58:50.482654 master-0 kubenswrapper[4155]: I0216 16:58:50.482586 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 16:58:50.482654 master-0 kubenswrapper[4155]: I0216 16:58:50.482604 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 16:58:50.482732 master-0 kubenswrapper[4155]: I0216 16:58:50.482674 4155 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 16:58:50.491186 master-0 kubenswrapper[4155]: I0216 16:58:50.491158 4155 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 16:58:50.491292 master-0 kubenswrapper[4155]: E0216 16:58:50.491198 4155 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Feb 16 16:58:50.504293 master-0 kubenswrapper[4155]: E0216 16:58:50.504261 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:50.604546 master-0 kubenswrapper[4155]: E0216 16:58:50.604478 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:50.704715 master-0 kubenswrapper[4155]: E0216 16:58:50.704633 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:50.805491 master-0 kubenswrapper[4155]: E0216 16:58:50.805438 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:50.876710 master-0 kubenswrapper[4155]: I0216 16:58:50.876656 4155 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 16:58:50.885072 master-0 kubenswrapper[4155]: I0216 16:58:50.885004 4155 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 16:58:50.906411 master-0 kubenswrapper[4155]: E0216 16:58:50.906376 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:51.007618 master-0 kubenswrapper[4155]: E0216 16:58:51.007542 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:51.108091 master-0 kubenswrapper[4155]: E0216 16:58:51.107854 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:51.208528 master-0 kubenswrapper[4155]: E0216 16:58:51.208399 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:51.309353 master-0 kubenswrapper[4155]: E0216 16:58:51.309280 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:51.409797 master-0 kubenswrapper[4155]: E0216 16:58:51.409676 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:51.509901 master-0 kubenswrapper[4155]: E0216 16:58:51.509862 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
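Note the pattern after "Successfully registered node": registration succeeded against the API server, yet kubelet_node_status.go keeps logging "Error getting the current node from lister" every ~100ms. A lister answers from the informer's local cache, which has not synced yet; the errors stop once "Caches populated for *v1.Node" appears at 16:58:54 below. A self-contained sketch of that cache behavior, with an assumed kubeconfig path:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // illustrative
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := informers.NewSharedInformerFactory(cs, 0)
	nodes := factory.Core().V1().Nodes()
	lister := nodes.Lister()

	// Before the watch cache syncs, the lister only sees an empty local store,
	// so this Get fails with "not found" even though the node exists in etcd.
	if _, err := lister.Get("master-0"); err != nil {
		fmt.Println("before sync:", err)
	}

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, nodes.Informer().HasSynced) {
		panic("node informer cache never synced")
	}

	node, err := lister.Get("master-0")
	if err != nil {
		panic(err)
	}
	fmt.Println("after sync:", node.Name)
}
```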
Feb 16 16:58:51.610448 master-0 kubenswrapper[4155]: E0216 16:58:51.610362 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:51.711474 master-0 kubenswrapper[4155]: E0216 16:58:51.711281 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:51.811492 master-0 kubenswrapper[4155]: E0216 16:58:51.811395 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:51.912669 master-0 kubenswrapper[4155]: E0216 16:58:51.912561 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.012975 master-0 kubenswrapper[4155]: E0216 16:58:52.012875 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.113093 master-0 kubenswrapper[4155]: E0216 16:58:52.112988 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.213982 master-0 kubenswrapper[4155]: E0216 16:58:52.213880 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.315187 master-0 kubenswrapper[4155]: E0216 16:58:52.315018 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.415975 master-0 kubenswrapper[4155]: E0216 16:58:52.415835 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.517055 master-0 kubenswrapper[4155]: E0216 16:58:52.516912 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.617659 master-0 kubenswrapper[4155]: E0216 16:58:52.617469 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.718400 master-0 kubenswrapper[4155]: E0216 16:58:52.718285 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.819058 master-0 kubenswrapper[4155]: E0216 16:58:52.818963 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:52.919395 master-0 kubenswrapper[4155]: E0216 16:58:52.919290 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:53.020227 master-0 kubenswrapper[4155]: E0216 16:58:53.020137 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:53.120878 master-0 kubenswrapper[4155]: E0216 16:58:53.120790 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:53.222144 master-0 kubenswrapper[4155]: E0216 16:58:53.221957 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:53.322966 master-0 kubenswrapper[4155]: E0216 16:58:53.322843 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:53.423669 master-0 kubenswrapper[4155]: E0216 16:58:53.423586 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" 
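Note on the repeating "Error getting the current node from lister" entries: they are expected during bootstrap. The Node object was created successfully at 16:58:50.491 ("Successfully registered node"), but the kubelet reads it back through its local informer cache, which has not completed its initial list/watch yet; the errors recur on a short poll interval and stop as soon as "Caches populated for *v1.Node" appears at 16:58:54.173 below. A minimal client-go sketch of that wait-for-cache pattern (hypothetical bootstrap code, not the kubelet's own source):

package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Hypothetical in-cluster bootstrap and a shared informer factory.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)

	nodeInformer := factory.Core().V1().Nodes().Informer()
	nodeLister := factory.Core().V1().Nodes().Lister()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Until this returns, lister reads can fail with "not found" even though
	// the object already exists in the API server -- exactly the window
	// visible in the log above.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("node informer cache never synced")
	}

	if _, err := nodeLister.Get("master-0"); err != nil {
		panic(err) // no longer a cache miss once the sync completed
	}
}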
Feb 16 16:58:53.523864 master-0 kubenswrapper[4155]: E0216 16:58:53.523756 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:53.625047 master-0 kubenswrapper[4155]: E0216 16:58:53.624934 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:53.725971 master-0 kubenswrapper[4155]: E0216 16:58:53.725770 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:53.826229 master-0 kubenswrapper[4155]: E0216 16:58:53.825912 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:53.926528 master-0 kubenswrapper[4155]: E0216 16:58:53.926410 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:54.026903 master-0 kubenswrapper[4155]: E0216 16:58:54.026828 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:54.127613 master-0 kubenswrapper[4155]: E0216 16:58:54.127426 4155 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 16:58:54.173729 master-0 kubenswrapper[4155]: I0216 16:58:54.173650 4155 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 16:58:54.879262 master-0 kubenswrapper[4155]: I0216 16:58:54.879208 4155 apiserver.go:52] "Watching apiserver" Feb 16 16:58:54.886827 master-0 kubenswrapper[4155]: I0216 16:58:54.886752 4155 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 16:58:54.887118 master-0 kubenswrapper[4155]: I0216 16:58:54.887060 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-thhq2","openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l","openshift-network-operator/network-operator-6fcf4c966-6bmf9"] Feb 16 16:58:54.887670 master-0 kubenswrapper[4155]: I0216 16:58:54.887566 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:54.887796 master-0 kubenswrapper[4155]: I0216 16:58:54.887574 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:54.887796 master-0 kubenswrapper[4155]: I0216 16:58:54.887750 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:54.890419 master-0 kubenswrapper[4155]: I0216 16:58:54.890330 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 16:58:54.890419 master-0 kubenswrapper[4155]: I0216 16:58:54.890378 4155 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Feb 16 16:58:54.890842 master-0 kubenswrapper[4155]: I0216 16:58:54.890683 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 16:58:54.891788 master-0 kubenswrapper[4155]: I0216 16:58:54.891614 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 16:58:54.892122 master-0 kubenswrapper[4155]: I0216 16:58:54.892051 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 16:58:54.893013 master-0 kubenswrapper[4155]: I0216 16:58:54.892809 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 16:58:54.893013 master-0 kubenswrapper[4155]: I0216 16:58:54.892866 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Feb 16 16:58:54.893013 master-0 kubenswrapper[4155]: I0216 16:58:54.892957 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 16:58:54.893432 master-0 kubenswrapper[4155]: I0216 16:58:54.893041 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Feb 16 16:58:54.893432 master-0 kubenswrapper[4155]: I0216 16:58:54.893080 4155 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 16:58:54.893432 master-0 kubenswrapper[4155]: I0216 16:58:54.893140 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Feb 16 16:58:54.906117 master-0 kubenswrapper[4155]: I0216 16:58:54.906052 4155 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 16:58:54.909012 master-0 kubenswrapper[4155]: I0216 16:58:54.908903 4155 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 16:58:54.968821 master-0 kubenswrapper[4155]: I0216 16:58:54.968707 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:54.968821 master-0 kubenswrapper[4155]: I0216 16:58:54.968777 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-resolv-conf\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:54.968821 master-0 
kubenswrapper[4155]: I0216 16:58:54.968801 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:54.968821 master-0 kubenswrapper[4155]: I0216 16:58:54.968819 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-ca-bundle\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:54.969244 master-0 kubenswrapper[4155]: I0216 16:58:54.968845 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/568b22df-b454-4d74-bc21-6c84daf17c8c-service-ca\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:54.969244 master-0 kubenswrapper[4155]: I0216 16:58:54.968898 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:54.969244 master-0 kubenswrapper[4155]: I0216 16:58:54.969069 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:54.969244 master-0 kubenswrapper[4155]: I0216 16:58:54.969165 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:54.969244 master-0 kubenswrapper[4155]: I0216 16:58:54.969194 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/568b22df-b454-4d74-bc21-6c84daf17c8c-kube-api-access\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:54.969244 master-0 kubenswrapper[4155]: I0216 16:58:54.969221 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-var-run-resolv-conf\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " 
pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:54.969244 master-0 kubenswrapper[4155]: I0216 16:58:54.969249 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-sno-bootstrap-files\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:54.969862 master-0 kubenswrapper[4155]: I0216 16:58:54.969272 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx9f7\" (UniqueName: \"kubernetes.io/projected/f8589094-f18e-4070-a550-b2da6f8acfc0-kube-api-access-nx9f7\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:54.969862 master-0 kubenswrapper[4155]: I0216 16:58:54.969302 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.070126 master-0 kubenswrapper[4155]: I0216 16:58:55.070043 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-var-run-resolv-conf\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.070126 master-0 kubenswrapper[4155]: I0216 16:58:55.070096 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-sno-bootstrap-files\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.070126 master-0 kubenswrapper[4155]: I0216 16:58:55.070112 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-var-run-resolv-conf\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.070126 master-0 kubenswrapper[4155]: I0216 16:58:55.070118 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx9f7\" (UniqueName: \"kubernetes.io/projected/f8589094-f18e-4070-a550-b2da6f8acfc0-kube-api-access-nx9f7\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.070515 master-0 kubenswrapper[4155]: I0216 16:58:55.070249 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-sno-bootstrap-files\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " 
pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.070515 master-0 kubenswrapper[4155]: I0216 16:58:55.070380 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.070515 master-0 kubenswrapper[4155]: I0216 16:58:55.070414 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.070515 master-0 kubenswrapper[4155]: I0216 16:58:55.070434 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-ca-bundle\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.070515 master-0 kubenswrapper[4155]: I0216 16:58:55.070451 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-resolv-conf\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.070515 master-0 kubenswrapper[4155]: I0216 16:58:55.070468 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:55.070515 master-0 kubenswrapper[4155]: I0216 16:58:55.070499 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.070515 master-0 kubenswrapper[4155]: I0216 16:58:55.070505 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-ca-bundle\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.070888 master-0 kubenswrapper[4155]: I0216 16:58:55.070538 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/568b22df-b454-4d74-bc21-6c84daf17c8c-service-ca\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 
16:58:55.070888 master-0 kubenswrapper[4155]: E0216 16:58:55.070544 4155 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 16:58:55.070888 master-0 kubenswrapper[4155]: I0216 16:58:55.070582 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-resolv-conf\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.070888 master-0 kubenswrapper[4155]: I0216 16:58:55.070614 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:55.070888 master-0 kubenswrapper[4155]: E0216 16:58:55.070639 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 16:58:55.57060863 +0000 UTC m=+39.909662134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 16:58:55.070888 master-0 kubenswrapper[4155]: I0216 16:58:55.070668 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.070888 master-0 kubenswrapper[4155]: I0216 16:58:55.070549 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:55.070888 master-0 kubenswrapper[4155]: I0216 16:58:55.070703 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:55.070888 master-0 kubenswrapper[4155]: I0216 16:58:55.070729 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/568b22df-b454-4d74-bc21-6c84daf17c8c-kube-api-access\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.070888 master-0 kubenswrapper[4155]: I0216 
16:58:55.070758 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.071596 master-0 kubenswrapper[4155]: I0216 16:58:55.071543 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/568b22df-b454-4d74-bc21-6c84daf17c8c-service-ca\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.071692 master-0 kubenswrapper[4155]: I0216 16:58:55.071618 4155 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 16:58:55.080358 master-0 kubenswrapper[4155]: I0216 16:58:55.080293 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:55.088314 master-0 kubenswrapper[4155]: I0216 16:58:55.088253 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:55.089074 master-0 kubenswrapper[4155]: I0216 16:58:55.089027 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx9f7\" (UniqueName: \"kubernetes.io/projected/f8589094-f18e-4070-a550-b2da6f8acfc0-kube-api-access-nx9f7\") pod \"assisted-installer-controller-thhq2\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.089387 master-0 kubenswrapper[4155]: I0216 16:58:55.089339 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/568b22df-b454-4d74-bc21-6c84daf17c8c-kube-api-access\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.230096 master-0 kubenswrapper[4155]: I0216 16:58:55.229904 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:58:55.236181 master-0 kubenswrapper[4155]: I0216 16:58:55.236153 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 16:58:55.574803 master-0 kubenswrapper[4155]: I0216 16:58:55.574719 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:55.575075 master-0 kubenswrapper[4155]: E0216 16:58:55.574906 4155 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 16:58:55.575075 master-0 kubenswrapper[4155]: E0216 16:58:55.575016 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 16:58:56.574997043 +0000 UTC m=+40.914050547 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 16:58:55.933280 master-0 kubenswrapper[4155]: I0216 16:58:55.933093 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerStarted","Data":"75a3e91157092df61ab323caf67fd50fd02c9c52e83eb981207e27d0552a17af"} Feb 16 16:58:55.934606 master-0 kubenswrapper[4155]: I0216 16:58:55.934413 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-thhq2" event={"ID":"f8589094-f18e-4070-a550-b2da6f8acfc0","Type":"ContainerStarted","Data":"032b64d679f57601688a8c909d1648c2b2ff07b1d0ed9eae1ac157ec69dbfe35"} Feb 16 16:58:56.225673 master-0 kubenswrapper[4155]: I0216 16:58:56.225553 4155 csr.go:261] certificate signing request csr-znlml is approved, waiting to be issued Feb 16 16:58:56.232289 master-0 kubenswrapper[4155]: I0216 16:58:56.232236 4155 csr.go:257] certificate signing request csr-znlml is issued Feb 16 16:58:56.586825 master-0 kubenswrapper[4155]: I0216 16:58:56.586763 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:56.587037 master-0 kubenswrapper[4155]: E0216 16:58:56.586965 4155 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 16:58:56.587084 master-0 kubenswrapper[4155]: E0216 16:58:56.587050 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 16:58:58.587024109 +0000 UTC m=+42.926077633 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 16:58:57.234623 master-0 kubenswrapper[4155]: I0216 16:58:57.234576 4155 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 13:44:01.766104979 +0000 UTC Feb 16 16:58:57.234623 master-0 kubenswrapper[4155]: I0216 16:58:57.234609 4155 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h45m4.531498642s for next certificate rotation Feb 16 16:58:58.234865 master-0 kubenswrapper[4155]: I0216 16:58:58.234789 4155 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 13:32:25.585414202 +0000 UTC Feb 16 16:58:58.234865 master-0 kubenswrapper[4155]: I0216 16:58:58.234836 4155 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h33m27.350582764s for next certificate rotation Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: E0216 16:58:58.544692 4155 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e,Command:[/bin/bash -c #!/bin/bash Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: set -o allexport Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: source /etc/kubernetes/apiserver-url.env Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: else Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: exit 1 Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: fi Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.32,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67d1623cf33e4a5ecaa5ec7f1dae3af4e0e0478489b3a628de2062dca1473c7e,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c38d58b62290b59d0182b50ce3cfd87fbb7729f3ce6fc06ffa46d9805c7dd78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:15c5e645edf257a08c061ad9ae7dab4293104a042b8396181d76dd28f396cebe,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:67d61d27aa46f8c1f49f2f691ebeec6a8465c1506c83e876415fcf6be19c2d77,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae9731fa23b96fe0a08e198d4cab6bb4e4b81a006a45b1e68948ffcac4e0bf9c,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:7862391b6b069f985d3ba652ff80f29fedce94493a013b8e464e2d7bde964da4,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a913cef121c9a6c3ddc57b01fc807bb042e5a903489c05f99e6e2da9e6ec0b98,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zt8mt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-6fcf4c966-6bmf9_openshift-network-operator(4549ea98-7379-49e1-8452-5efb643137ca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 16:58:58.544762 master-0 kubenswrapper[4155]: > logger="UnhandledError" Feb 16 16:58:58.545988 master-0 kubenswrapper[4155]: E0216 16:58:58.545914 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" podUID="4549ea98-7379-49e1-8452-5efb643137ca" Feb 16 16:58:58.603258 master-0 kubenswrapper[4155]: I0216 16:58:58.603192 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:58:58.603455 master-0 kubenswrapper[4155]: E0216 16:58:58.603317 4155 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 16:58:58.603455 master-0 kubenswrapper[4155]: E0216 16:58:58.603365 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 16:59:02.603352039 +0000 UTC m=+46.942405543 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 16:58:59.245398 master-0 kubenswrapper[4155]: I0216 16:58:59.245313 4155 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 16:59:00.946995 master-0 kubenswrapper[4155]: I0216 16:59:00.946695 4155 generic.go:334] "Generic (PLEG): container finished" podID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerID="a029a9519b0af6df58434184bb4dd337dec578276ce41db33a7f4964a78b38d1" exitCode=0 Feb 16 16:59:00.947663 master-0 kubenswrapper[4155]: I0216 16:59:00.946784 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-thhq2" event={"ID":"f8589094-f18e-4070-a550-b2da6f8acfc0","Type":"ContainerDied","Data":"a029a9519b0af6df58434184bb4dd337dec578276ce41db33a7f4964a78b38d1"} Feb 16 16:59:00.949417 master-0 kubenswrapper[4155]: I0216 16:59:00.949374 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerStarted","Data":"01bf42c6c3bf4f293fd2294a37aff703b4c469002ae6a87f7c50eefa7c6ae11b"} Feb 16 16:59:00.969540 master-0 kubenswrapper[4155]: I0216 16:59:00.969413 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" podStartSLOduration=7.676787618 podStartE2EDuration="10.969395169s" podCreationTimestamp="2026-02-16 16:58:50 +0000 UTC" firstStartedPulling="2026-02-16 16:58:55.250821794 +0000 UTC m=+39.589875298" lastFinishedPulling="2026-02-16 16:58:58.543429335 +0000 UTC m=+42.882482849" observedRunningTime="2026-02-16 16:59:00.969210554 +0000 UTC m=+45.308264128" watchObservedRunningTime="2026-02-16 16:59:00.969395169 +0000 UTC m=+45.308448673" Feb 16 16:59:01.979362 master-0 kubenswrapper[4155]: I0216 16:59:01.979288 4155 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:59:02.131522 master-0 kubenswrapper[4155]: I0216 16:59:02.131369 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-var-run-resolv-conf\") pod \"f8589094-f18e-4070-a550-b2da6f8acfc0\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " Feb 16 16:59:02.131522 master-0 kubenswrapper[4155]: I0216 16:59:02.131475 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-sno-bootstrap-files\") pod \"f8589094-f18e-4070-a550-b2da6f8acfc0\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " Feb 16 16:59:02.131962 master-0 kubenswrapper[4155]: I0216 16:59:02.131547 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx9f7\" (UniqueName: \"kubernetes.io/projected/f8589094-f18e-4070-a550-b2da6f8acfc0-kube-api-access-nx9f7\") pod \"f8589094-f18e-4070-a550-b2da6f8acfc0\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " Feb 16 16:59:02.131962 master-0 kubenswrapper[4155]: I0216 16:59:02.131593 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-resolv-conf\") pod \"f8589094-f18e-4070-a550-b2da6f8acfc0\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " Feb 16 16:59:02.131962 master-0 kubenswrapper[4155]: I0216 16:59:02.131642 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-ca-bundle\") pod \"f8589094-f18e-4070-a550-b2da6f8acfc0\" (UID: \"f8589094-f18e-4070-a550-b2da6f8acfc0\") " Feb 16 16:59:02.131962 master-0 kubenswrapper[4155]: I0216 16:59:02.131589 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "f8589094-f18e-4070-a550-b2da6f8acfc0" (UID: "f8589094-f18e-4070-a550-b2da6f8acfc0"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:02.131962 master-0 kubenswrapper[4155]: I0216 16:59:02.131629 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "f8589094-f18e-4070-a550-b2da6f8acfc0" (UID: "f8589094-f18e-4070-a550-b2da6f8acfc0"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:02.131962 master-0 kubenswrapper[4155]: I0216 16:59:02.131771 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "f8589094-f18e-4070-a550-b2da6f8acfc0" (UID: "f8589094-f18e-4070-a550-b2da6f8acfc0"). InnerVolumeSpecName "host-ca-bundle". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:02.131962 master-0 kubenswrapper[4155]: I0216 16:59:02.131682 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "f8589094-f18e-4070-a550-b2da6f8acfc0" (UID: "f8589094-f18e-4070-a550-b2da6f8acfc0"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:02.136821 master-0 kubenswrapper[4155]: I0216 16:59:02.136687 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8589094-f18e-4070-a550-b2da6f8acfc0-kube-api-access-nx9f7" (OuterVolumeSpecName: "kube-api-access-nx9f7") pod "f8589094-f18e-4070-a550-b2da6f8acfc0" (UID: "f8589094-f18e-4070-a550-b2da6f8acfc0"). InnerVolumeSpecName "kube-api-access-nx9f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:02.231979 master-0 kubenswrapper[4155]: I0216 16:59:02.231845 4155 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:02.231979 master-0 kubenswrapper[4155]: I0216 16:59:02.231872 4155 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:02.231979 master-0 kubenswrapper[4155]: I0216 16:59:02.231882 4155 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx9f7\" (UniqueName: \"kubernetes.io/projected/f8589094-f18e-4070-a550-b2da6f8acfc0-kube-api-access-nx9f7\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:02.231979 master-0 kubenswrapper[4155]: I0216 16:59:02.231891 4155 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:02.231979 master-0 kubenswrapper[4155]: I0216 16:59:02.231899 4155 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/f8589094-f18e-4070-a550-b2da6f8acfc0-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:02.492820 master-0 kubenswrapper[4155]: I0216 16:59:02.492690 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-qvf8n"] Feb 16 16:59:02.493306 master-0 kubenswrapper[4155]: E0216 16:59:02.492835 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerName="assisted-installer-controller" Feb 16 16:59:02.493306 master-0 kubenswrapper[4155]: I0216 16:59:02.492850 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerName="assisted-installer-controller" Feb 16 16:59:02.493306 master-0 kubenswrapper[4155]: I0216 16:59:02.492873 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerName="assisted-installer-controller" Feb 16 16:59:02.493306 master-0 kubenswrapper[4155]: I0216 16:59:02.493091 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-qvf8n" Feb 16 16:59:02.634775 master-0 kubenswrapper[4155]: I0216 16:59:02.634660 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x46sk\" (UniqueName: \"kubernetes.io/projected/10280d4e-9a32-4fea-aea0-211e7c9f0502-kube-api-access-x46sk\") pod \"mtu-prober-qvf8n\" (UID: \"10280d4e-9a32-4fea-aea0-211e7c9f0502\") " pod="openshift-network-operator/mtu-prober-qvf8n" Feb 16 16:59:02.634775 master-0 kubenswrapper[4155]: I0216 16:59:02.634787 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:59:02.635127 master-0 kubenswrapper[4155]: E0216 16:59:02.634956 4155 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 16:59:02.635127 master-0 kubenswrapper[4155]: E0216 16:59:02.635030 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 16:59:10.635005974 +0000 UTC m=+54.974059518 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 16:59:02.735612 master-0 kubenswrapper[4155]: I0216 16:59:02.735454 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x46sk\" (UniqueName: \"kubernetes.io/projected/10280d4e-9a32-4fea-aea0-211e7c9f0502-kube-api-access-x46sk\") pod \"mtu-prober-qvf8n\" (UID: \"10280d4e-9a32-4fea-aea0-211e7c9f0502\") " pod="openshift-network-operator/mtu-prober-qvf8n" Feb 16 16:59:02.753296 master-0 kubenswrapper[4155]: I0216 16:59:02.753039 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x46sk\" (UniqueName: \"kubernetes.io/projected/10280d4e-9a32-4fea-aea0-211e7c9f0502-kube-api-access-x46sk\") pod \"mtu-prober-qvf8n\" (UID: \"10280d4e-9a32-4fea-aea0-211e7c9f0502\") " pod="openshift-network-operator/mtu-prober-qvf8n" Feb 16 16:59:02.801396 master-0 kubenswrapper[4155]: I0216 16:59:02.801224 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Feb 16 16:59:02.801819 master-0 kubenswrapper[4155]: I0216 16:59:02.801530 4155 scope.go:117] "RemoveContainer" containerID="6bb85739a7a836abfdb346023915e77abb0b10b023f88b2b7e7c9536a35657a8" Feb 16 16:59:02.807691 master-0 kubenswrapper[4155]: I0216 16:59:02.807465 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-qvf8n" Feb 16 16:59:02.823869 master-0 kubenswrapper[4155]: W0216 16:59:02.823814 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10280d4e_9a32_4fea_aea0_211e7c9f0502.slice/crio-78f7ca346bca4984ddbbaf801650ea12f9c20b44ed1343037c4daed41481b056 WatchSource:0}: Error finding container 78f7ca346bca4984ddbbaf801650ea12f9c20b44ed1343037c4daed41481b056: Status 404 returned error can't find the container with id 78f7ca346bca4984ddbbaf801650ea12f9c20b44ed1343037c4daed41481b056 Feb 16 16:59:02.955641 master-0 kubenswrapper[4155]: I0216 16:59:02.955596 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-qvf8n" event={"ID":"10280d4e-9a32-4fea-aea0-211e7c9f0502","Type":"ContainerStarted","Data":"78f7ca346bca4984ddbbaf801650ea12f9c20b44ed1343037c4daed41481b056"} Feb 16 16:59:02.957091 master-0 kubenswrapper[4155]: I0216 16:59:02.957050 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-thhq2" event={"ID":"f8589094-f18e-4070-a550-b2da6f8acfc0","Type":"ContainerDied","Data":"032b64d679f57601688a8c909d1648c2b2ff07b1d0ed9eae1ac157ec69dbfe35"} Feb 16 16:59:02.957091 master-0 kubenswrapper[4155]: I0216 16:59:02.957074 4155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="032b64d679f57601688a8c909d1648c2b2ff07b1d0ed9eae1ac157ec69dbfe35" Feb 16 16:59:02.957177 master-0 kubenswrapper[4155]: I0216 16:59:02.957112 4155 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 16:59:03.961199 master-0 kubenswrapper[4155]: I0216 16:59:03.961167 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 16:59:03.962266 master-0 kubenswrapper[4155]: I0216 16:59:03.962230 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"d85f4bae9120dd5571ac4aef5b4bc508cd0c2e61ac41e2e016d2fca33cf2c0df"} Feb 16 16:59:03.966083 master-0 kubenswrapper[4155]: I0216 16:59:03.966035 4155 generic.go:334] "Generic (PLEG): container finished" podID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerID="500d24f874646514d290aa65da48da18a395647cf9847d120c566c759fe02946" exitCode=0 Feb 16 16:59:03.966083 master-0 kubenswrapper[4155]: I0216 16:59:03.966081 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-qvf8n" event={"ID":"10280d4e-9a32-4fea-aea0-211e7c9f0502","Type":"ContainerDied","Data":"500d24f874646514d290aa65da48da18a395647cf9847d120c566c759fe02946"} Feb 16 16:59:03.990571 master-0 kubenswrapper[4155]: I0216 16:59:03.990481 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.990447351 podStartE2EDuration="1.990447351s" podCreationTimestamp="2026-02-16 16:59:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 16:59:03.977776793 +0000 UTC m=+48.316830337" watchObservedRunningTime="2026-02-16 16:59:03.990447351 +0000 UTC 
m=+48.329500895" Feb 16 16:59:04.991873 master-0 kubenswrapper[4155]: I0216 16:59:04.991817 4155 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-qvf8n" Feb 16 16:59:05.154284 master-0 kubenswrapper[4155]: I0216 16:59:05.154181 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x46sk\" (UniqueName: \"kubernetes.io/projected/10280d4e-9a32-4fea-aea0-211e7c9f0502-kube-api-access-x46sk\") pod \"10280d4e-9a32-4fea-aea0-211e7c9f0502\" (UID: \"10280d4e-9a32-4fea-aea0-211e7c9f0502\") " Feb 16 16:59:05.159613 master-0 kubenswrapper[4155]: I0216 16:59:05.159531 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10280d4e-9a32-4fea-aea0-211e7c9f0502-kube-api-access-x46sk" (OuterVolumeSpecName: "kube-api-access-x46sk") pod "10280d4e-9a32-4fea-aea0-211e7c9f0502" (UID: "10280d4e-9a32-4fea-aea0-211e7c9f0502"). InnerVolumeSpecName "kube-api-access-x46sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:05.255424 master-0 kubenswrapper[4155]: I0216 16:59:05.255336 4155 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x46sk\" (UniqueName: \"kubernetes.io/projected/10280d4e-9a32-4fea-aea0-211e7c9f0502-kube-api-access-x46sk\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:05.973175 master-0 kubenswrapper[4155]: I0216 16:59:05.973037 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-qvf8n" event={"ID":"10280d4e-9a32-4fea-aea0-211e7c9f0502","Type":"ContainerDied","Data":"78f7ca346bca4984ddbbaf801650ea12f9c20b44ed1343037c4daed41481b056"} Feb 16 16:59:05.973175 master-0 kubenswrapper[4155]: I0216 16:59:05.973098 4155 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78f7ca346bca4984ddbbaf801650ea12f9c20b44ed1343037c4daed41481b056" Feb 16 16:59:05.973175 master-0 kubenswrapper[4155]: I0216 16:59:05.973131 4155 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-qvf8n" Feb 16 16:59:07.485217 master-0 kubenswrapper[4155]: I0216 16:59:07.485158 4155 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-qvf8n"] Feb 16 16:59:07.491364 master-0 kubenswrapper[4155]: I0216 16:59:07.491338 4155 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-qvf8n"] Feb 16 16:59:08.783786 master-0 kubenswrapper[4155]: I0216 16:59:08.783718 4155 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" path="/var/lib/kubelet/pods/10280d4e-9a32-4fea-aea0-211e7c9f0502/volumes" Feb 16 16:59:10.693581 master-0 kubenswrapper[4155]: I0216 16:59:10.693477 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:59:10.694275 master-0 kubenswrapper[4155]: E0216 16:59:10.693604 4155 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 16:59:10.694275 master-0 kubenswrapper[4155]: E0216 16:59:10.693660 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 16:59:26.693643344 +0000 UTC m=+71.032696838 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 16:59:12.355099 master-0 kubenswrapper[4155]: I0216 16:59:12.354759 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6r7wj"] Feb 16 16:59:12.356037 master-0 kubenswrapper[4155]: E0216 16:59:12.355123 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerName="prober" Feb 16 16:59:12.356037 master-0 kubenswrapper[4155]: I0216 16:59:12.355137 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerName="prober" Feb 16 16:59:12.356037 master-0 kubenswrapper[4155]: I0216 16:59:12.355163 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerName="prober" Feb 16 16:59:12.356037 master-0 kubenswrapper[4155]: I0216 16:59:12.355347 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.357708 master-0 kubenswrapper[4155]: I0216 16:59:12.357652 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 16:59:12.357963 master-0 kubenswrapper[4155]: I0216 16:59:12.357868 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 16:59:12.357963 master-0 kubenswrapper[4155]: I0216 16:59:12.357664 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 16:59:12.358795 master-0 kubenswrapper[4155]: I0216 16:59:12.358757 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 16:59:12.506678 master-0 kubenswrapper[4155]: I0216 16:59:12.506601 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.506678 master-0 kubenswrapper[4155]: I0216 16:59:12.506689 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.506732 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.506826 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.506900 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.506989 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.507021 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " 
pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.507051 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.507079 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.507114 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.507142 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507170 master-0 kubenswrapper[4155]: I0216 16:59:12.507179 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507888 master-0 kubenswrapper[4155]: I0216 16:59:12.507235 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507888 master-0 kubenswrapper[4155]: I0216 16:59:12.507271 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507888 master-0 kubenswrapper[4155]: I0216 16:59:12.507306 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507888 master-0 kubenswrapper[4155]: I0216 16:59:12.507469 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.507888 master-0 kubenswrapper[4155]: I0216 16:59:12.507536 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.558873 master-0 kubenswrapper[4155]: I0216 16:59:12.558820 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rjdlk"] Feb 16 16:59:12.559343 master-0 kubenswrapper[4155]: I0216 16:59:12.559311 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.561436 master-0 kubenswrapper[4155]: I0216 16:59:12.561410 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 16:59:12.561528 master-0 kubenswrapper[4155]: I0216 16:59:12.561450 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 16:59:12.608401 master-0 kubenswrapper[4155]: I0216 16:59:12.608283 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608401 master-0 kubenswrapper[4155]: I0216 16:59:12.608335 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608401 master-0 kubenswrapper[4155]: I0216 16:59:12.608362 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608401 master-0 kubenswrapper[4155]: I0216 16:59:12.608383 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608401 master-0 kubenswrapper[4155]: I0216 16:59:12.608403 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608771 master-0 kubenswrapper[4155]: I0216 16:59:12.608601 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod 
\"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608771 master-0 kubenswrapper[4155]: I0216 16:59:12.608629 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608771 master-0 kubenswrapper[4155]: I0216 16:59:12.608702 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608771 master-0 kubenswrapper[4155]: I0216 16:59:12.608660 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608771 master-0 kubenswrapper[4155]: I0216 16:59:12.608740 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.608771 master-0 kubenswrapper[4155]: I0216 16:59:12.608765 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.608760 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.608792 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.608811 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.608845 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 
16:59:12.608875 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.608882 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.608943 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.608976 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.608998 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.609020 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609037 master-0 kubenswrapper[4155]: I0216 16:59:12.609050 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609494 master-0 kubenswrapper[4155]: I0216 16:59:12.609066 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609494 master-0 kubenswrapper[4155]: I0216 16:59:12.609081 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609494 master-0 kubenswrapper[4155]: I0216 16:59:12.609099 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609494 master-0 kubenswrapper[4155]: I0216 16:59:12.609159 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609494 master-0 kubenswrapper[4155]: I0216 16:59:12.609195 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609494 master-0 kubenswrapper[4155]: I0216 16:59:12.609207 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609494 master-0 kubenswrapper[4155]: I0216 16:59:12.609237 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609494 master-0 kubenswrapper[4155]: I0216 16:59:12.609251 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609494 master-0 kubenswrapper[4155]: I0216 16:59:12.609261 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609897 master-0 kubenswrapper[4155]: I0216 16:59:12.609872 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.609975 master-0 kubenswrapper[4155]: I0216 16:59:12.609958 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.636060 master-0 kubenswrapper[4155]: I0216 16:59:12.636000 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " 
pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.674968 master-0 kubenswrapper[4155]: I0216 16:59:12.674829 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6r7wj" Feb 16 16:59:12.692120 master-0 kubenswrapper[4155]: W0216 16:59:12.692054 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43f65f23_4ddd_471a_9cb3_b0945382d83c.slice/crio-4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810 WatchSource:0}: Error finding container 4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810: Status 404 returned error can't find the container with id 4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810 Feb 16 16:59:12.709856 master-0 kubenswrapper[4155]: I0216 16:59:12.709805 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.709856 master-0 kubenswrapper[4155]: I0216 16:59:12.709854 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.710035 master-0 kubenswrapper[4155]: I0216 16:59:12.709882 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.710035 master-0 kubenswrapper[4155]: I0216 16:59:12.709950 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.710035 master-0 kubenswrapper[4155]: I0216 16:59:12.709987 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.710035 master-0 kubenswrapper[4155]: I0216 16:59:12.710012 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.710179 master-0 kubenswrapper[4155]: I0216 16:59:12.710088 4155 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.710179 master-0 kubenswrapper[4155]: I0216 16:59:12.710149 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.811408 master-0 kubenswrapper[4155]: I0216 16:59:12.811321 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.811630 master-0 kubenswrapper[4155]: I0216 16:59:12.811417 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.811630 master-0 kubenswrapper[4155]: I0216 16:59:12.811473 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.811727 master-0 kubenswrapper[4155]: I0216 16:59:12.811656 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.811727 master-0 kubenswrapper[4155]: I0216 16:59:12.811673 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.811727 master-0 kubenswrapper[4155]: I0216 16:59:12.811692 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.811878 master-0 kubenswrapper[4155]: I0216 16:59:12.811849 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.811957 master-0 kubenswrapper[4155]: I0216 16:59:12.811851 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.811997 master-0 kubenswrapper[4155]: I0216 16:59:12.811969 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.812030 master-0 kubenswrapper[4155]: I0216 16:59:12.812018 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.812133 master-0 kubenswrapper[4155]: I0216 16:59:12.812097 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.812169 master-0 kubenswrapper[4155]: I0216 16:59:12.812086 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.813498 master-0 kubenswrapper[4155]: I0216 16:59:12.813416 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.813498 master-0 kubenswrapper[4155]: I0216 16:59:12.813446 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.814284 master-0 kubenswrapper[4155]: I0216 16:59:12.814212 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " 
pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.833689 master-0 kubenswrapper[4155]: I0216 16:59:12.833652 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.870468 master-0 kubenswrapper[4155]: I0216 16:59:12.870342 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 16:59:12.881260 master-0 kubenswrapper[4155]: W0216 16:59:12.881183 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab5760f1_b2e0_4138_9383_e4827154ac50.slice/crio-d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e WatchSource:0}: Error finding container d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e: Status 404 returned error can't find the container with id d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e Feb 16 16:59:12.990120 master-0 kubenswrapper[4155]: I0216 16:59:12.990054 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e"} Feb 16 16:59:12.991536 master-0 kubenswrapper[4155]: I0216 16:59:12.991473 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r7wj" event={"ID":"43f65f23-4ddd-471a-9cb3-b0945382d83c","Type":"ContainerStarted","Data":"4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810"} Feb 16 16:59:13.348548 master-0 kubenswrapper[4155]: I0216 16:59:13.348438 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-279g6"] Feb 16 16:59:13.349030 master-0 kubenswrapper[4155]: I0216 16:59:13.348981 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:13.349126 master-0 kubenswrapper[4155]: E0216 16:59:13.349086 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:13.518484 master-0 kubenswrapper[4155]: I0216 16:59:13.518407 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:13.518904 master-0 kubenswrapper[4155]: I0216 16:59:13.518514 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:13.619646 master-0 kubenswrapper[4155]: I0216 16:59:13.619544 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:13.619646 master-0 kubenswrapper[4155]: I0216 16:59:13.619607 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:13.619831 master-0 kubenswrapper[4155]: E0216 16:59:13.619704 4155 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:13.619831 master-0 kubenswrapper[4155]: E0216 16:59:13.619751 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:14.119736056 +0000 UTC m=+58.458789560 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:13.636400 master-0 kubenswrapper[4155]: I0216 16:59:13.636349 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:14.123246 master-0 kubenswrapper[4155]: I0216 16:59:14.123189 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:14.123602 master-0 kubenswrapper[4155]: E0216 16:59:14.123339 4155 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:14.123602 master-0 kubenswrapper[4155]: E0216 16:59:14.123404 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:15.123387359 +0000 UTC m=+59.462440863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:14.780465 master-0 kubenswrapper[4155]: I0216 16:59:14.780409 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:14.780979 master-0 kubenswrapper[4155]: E0216 16:59:14.780544 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:15.132288 master-0 kubenswrapper[4155]: I0216 16:59:15.132218 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:15.132524 master-0 kubenswrapper[4155]: E0216 16:59:15.132432 4155 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:15.132572 master-0 kubenswrapper[4155]: E0216 16:59:15.132528 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:17.132509403 +0000 UTC m=+61.471562907 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:16.000006 master-0 kubenswrapper[4155]: I0216 16:59:15.999902 4155 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="f47270eadf232a1b51b70eb1069033d1ee831e9e2a83cf22e20d3b2db1ceb184" exitCode=0 Feb 16 16:59:16.000481 master-0 kubenswrapper[4155]: I0216 16:59:15.999993 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"f47270eadf232a1b51b70eb1069033d1ee831e9e2a83cf22e20d3b2db1ceb184"} Feb 16 16:59:16.780713 master-0 kubenswrapper[4155]: I0216 16:59:16.780640 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:16.780981 master-0 kubenswrapper[4155]: E0216 16:59:16.780932 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:17.149274 master-0 kubenswrapper[4155]: I0216 16:59:17.149090 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:17.149274 master-0 kubenswrapper[4155]: E0216 16:59:17.149243 4155 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:17.149874 master-0 kubenswrapper[4155]: E0216 16:59:17.149329 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:21.149310425 +0000 UTC m=+65.488363929 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:18.780089 master-0 kubenswrapper[4155]: I0216 16:59:18.780011 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:18.781021 master-0 kubenswrapper[4155]: E0216 16:59:18.780150 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:20.779826 master-0 kubenswrapper[4155]: I0216 16:59:20.779769 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:20.780661 master-0 kubenswrapper[4155]: E0216 16:59:20.779900 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:21.183538 master-0 kubenswrapper[4155]: I0216 16:59:21.183392 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:21.183751 master-0 kubenswrapper[4155]: E0216 16:59:21.183551 4155 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:21.183751 master-0 kubenswrapper[4155]: E0216 16:59:21.183625 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:29.183608095 +0000 UTC m=+73.522661599 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:22.780455 master-0 kubenswrapper[4155]: I0216 16:59:22.780387 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:22.781053 master-0 kubenswrapper[4155]: E0216 16:59:22.780551 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:24.745508 master-0 kubenswrapper[4155]: I0216 16:59:24.745447 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"] Feb 16 16:59:24.751493 master-0 kubenswrapper[4155]: I0216 16:59:24.745946 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.751493 master-0 kubenswrapper[4155]: I0216 16:59:24.749057 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 16:59:24.751493 master-0 kubenswrapper[4155]: I0216 16:59:24.749091 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 16:59:24.751493 master-0 kubenswrapper[4155]: I0216 16:59:24.749057 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 16:59:24.751493 master-0 kubenswrapper[4155]: I0216 16:59:24.749265 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 16:59:24.751493 master-0 kubenswrapper[4155]: I0216 16:59:24.749334 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 16:59:24.785078 master-0 kubenswrapper[4155]: I0216 16:59:24.784629 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:24.785203 master-0 kubenswrapper[4155]: E0216 16:59:24.785149 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:24.808687 master-0 kubenswrapper[4155]: I0216 16:59:24.808572 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.808687 master-0 kubenswrapper[4155]: I0216 16:59:24.808613 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.808687 master-0 kubenswrapper[4155]: I0216 16:59:24.808688 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.808687 master-0 kubenswrapper[4155]: I0216 16:59:24.808707 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.909523 master-0 kubenswrapper[4155]: I0216 16:59:24.909360 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.909523 master-0 kubenswrapper[4155]: I0216 16:59:24.909449 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.909523 master-0 kubenswrapper[4155]: I0216 16:59:24.909481 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.909523 master-0 kubenswrapper[4155]: I0216 16:59:24.909508 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.910562 master-0 kubenswrapper[4155]: I0216 16:59:24.910518 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.911087 master-0 kubenswrapper[4155]: I0216 16:59:24.911044 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.916868 master-0 kubenswrapper[4155]: I0216 16:59:24.916824 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.934777 master-0 kubenswrapper[4155]: I0216 16:59:24.934741 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:24.967493 master-0 kubenswrapper[4155]: I0216 16:59:24.966608 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xsclm"] Feb 16 16:59:24.967493 master-0 kubenswrapper[4155]: I0216 16:59:24.967203 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:24.969183 master-0 kubenswrapper[4155]: I0216 16:59:24.969142 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 16:59:24.971254 master-0 kubenswrapper[4155]: I0216 16:59:24.971220 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 16:59:25.009998 master-0 kubenswrapper[4155]: I0216 16:59:25.009945 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovn-node-metrics-cert\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010187 master-0 kubenswrapper[4155]: I0216 16:59:25.010013 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-openvswitch\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010187 master-0 kubenswrapper[4155]: I0216 16:59:25.010139 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-systemd\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010293 master-0 kubenswrapper[4155]: I0216 16:59:25.010201 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-ovn\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010293 master-0 kubenswrapper[4155]: I0216 16:59:25.010239 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-ovn-kubernetes\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010293 master-0 kubenswrapper[4155]: I0216 16:59:25.010276 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-netd\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010293 master-0 kubenswrapper[4155]: I0216 16:59:25.010307 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmf2v\" (UniqueName: 
\"kubernetes.io/projected/a9031b9a-ff74-4c8e-8733-2f36900bd05d-kube-api-access-nmf2v\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010569 master-0 kubenswrapper[4155]: I0216 16:59:25.010362 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-slash\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010569 master-0 kubenswrapper[4155]: I0216 16:59:25.010467 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-netns\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010569 master-0 kubenswrapper[4155]: I0216 16:59:25.010519 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-script-lib\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010569 master-0 kubenswrapper[4155]: I0216 16:59:25.010545 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-log-socket\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010569 master-0 kubenswrapper[4155]: I0216 16:59:25.010560 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-bin\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010902 master-0 kubenswrapper[4155]: I0216 16:59:25.010589 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-config\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010902 master-0 kubenswrapper[4155]: I0216 16:59:25.010654 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-kubelet\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010902 master-0 kubenswrapper[4155]: I0216 16:59:25.010683 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-systemd-units\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010902 master-0 
kubenswrapper[4155]: I0216 16:59:25.010705 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010902 master-0 kubenswrapper[4155]: I0216 16:59:25.010782 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-var-lib-openvswitch\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010902 master-0 kubenswrapper[4155]: I0216 16:59:25.010815 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-etc-openvswitch\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010902 master-0 kubenswrapper[4155]: I0216 16:59:25.010841 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-node-log\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.010902 master-0 kubenswrapper[4155]: I0216 16:59:25.010861 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-env-overrides\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.029196 master-0 kubenswrapper[4155]: I0216 16:59:25.029104 4155 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="6f850c8263f7a5fffe361664a6b474015b2a97155111509d5a8154875803d4f3" exitCode=0 Feb 16 16:59:25.029196 master-0 kubenswrapper[4155]: I0216 16:59:25.029158 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"6f850c8263f7a5fffe361664a6b474015b2a97155111509d5a8154875803d4f3"} Feb 16 16:59:25.031202 master-0 kubenswrapper[4155]: I0216 16:59:25.031156 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r7wj" event={"ID":"43f65f23-4ddd-471a-9cb3-b0945382d83c","Type":"ContainerStarted","Data":"5d094f2876f5545b7c63fc8765883d9a87f0c59f12737ba412250f81627afa8d"} Feb 16 16:59:25.096327 master-0 kubenswrapper[4155]: I0216 16:59:25.096249 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 16:59:25.109108 master-0 kubenswrapper[4155]: W0216 16:59:25.109033 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab80e0fb_09dd_4c93_b235_1487024105d2.slice/crio-d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519 WatchSource:0}: Error finding container d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519: Status 404 returned error can't find the container with id d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519 Feb 16 16:59:25.111646 master-0 kubenswrapper[4155]: I0216 16:59:25.111560 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-kubelet\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.111646 master-0 kubenswrapper[4155]: I0216 16:59:25.111627 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-systemd-units\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.111646 master-0 kubenswrapper[4155]: I0216 16:59:25.111662 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112060 master-0 kubenswrapper[4155]: I0216 16:59:25.111715 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-var-lib-openvswitch\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112060 master-0 kubenswrapper[4155]: I0216 16:59:25.111750 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-etc-openvswitch\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112060 master-0 kubenswrapper[4155]: I0216 16:59:25.111782 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-node-log\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112060 master-0 kubenswrapper[4155]: I0216 16:59:25.111853 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-node-log\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112060 master-0 kubenswrapper[4155]: I0216 16:59:25.111893 4155 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-systemd-units\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112468 master-0 kubenswrapper[4155]: I0216 16:59:25.112097 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112468 master-0 kubenswrapper[4155]: I0216 16:59:25.112181 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-env-overrides\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112468 master-0 kubenswrapper[4155]: I0216 16:59:25.112290 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-etc-openvswitch\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112468 master-0 kubenswrapper[4155]: I0216 16:59:25.112327 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-var-lib-openvswitch\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112468 master-0 kubenswrapper[4155]: I0216 16:59:25.112363 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-kubelet\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112711 master-0 kubenswrapper[4155]: I0216 16:59:25.112652 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovn-node-metrics-cert\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112799 master-0 kubenswrapper[4155]: I0216 16:59:25.112746 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-openvswitch\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112840 master-0 kubenswrapper[4155]: I0216 16:59:25.112808 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-systemd\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112877 master-0 kubenswrapper[4155]: I0216 16:59:25.112848 
4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-ovn\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.112913 master-0 kubenswrapper[4155]: I0216 16:59:25.112891 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-ovn-kubernetes\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113024 master-0 kubenswrapper[4155]: I0216 16:59:25.112971 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-netd\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113140 master-0 kubenswrapper[4155]: I0216 16:59:25.113081 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-ovn-kubernetes\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113186 master-0 kubenswrapper[4155]: I0216 16:59:25.113157 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmf2v\" (UniqueName: \"kubernetes.io/projected/a9031b9a-ff74-4c8e-8733-2f36900bd05d-kube-api-access-nmf2v\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113265 master-0 kubenswrapper[4155]: I0216 16:59:25.113228 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-ovn\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113265 master-0 kubenswrapper[4155]: I0216 16:59:25.113233 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-slash\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113335 master-0 kubenswrapper[4155]: I0216 16:59:25.113126 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-openvswitch\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113335 master-0 kubenswrapper[4155]: I0216 16:59:25.113276 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-netns\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113335 master-0 kubenswrapper[4155]: I0216 16:59:25.113303 
4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-script-lib\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113445 master-0 kubenswrapper[4155]: I0216 16:59:25.113347 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-log-socket\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113445 master-0 kubenswrapper[4155]: I0216 16:59:25.113365 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-bin\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113445 master-0 kubenswrapper[4155]: I0216 16:59:25.113364 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-env-overrides\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113554 master-0 kubenswrapper[4155]: I0216 16:59:25.113509 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-systemd\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113650 master-0 kubenswrapper[4155]: I0216 16:59:25.113537 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-netns\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113700 master-0 kubenswrapper[4155]: I0216 16:59:25.113651 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-bin\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113700 master-0 kubenswrapper[4155]: I0216 16:59:25.113664 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-config\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113777 master-0 kubenswrapper[4155]: I0216 16:59:25.113573 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-netd\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113777 master-0 kubenswrapper[4155]: I0216 16:59:25.113744 4155 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-log-socket\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.113991 master-0 kubenswrapper[4155]: I0216 16:59:25.113958 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-slash\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.114910 master-0 kubenswrapper[4155]: I0216 16:59:25.114874 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-config\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.115035 master-0 kubenswrapper[4155]: I0216 16:59:25.114985 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-script-lib\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.117511 master-0 kubenswrapper[4155]: I0216 16:59:25.117112 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovn-node-metrics-cert\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.145621 master-0 kubenswrapper[4155]: I0216 16:59:25.145555 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmf2v\" (UniqueName: \"kubernetes.io/projected/a9031b9a-ff74-4c8e-8733-2f36900bd05d-kube-api-access-nmf2v\") pod \"ovnkube-node-xsclm\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.334967 master-0 kubenswrapper[4155]: I0216 16:59:25.334891 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:25.346943 master-0 kubenswrapper[4155]: W0216 16:59:25.346856 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9031b9a_ff74_4c8e_8733_2f36900bd05d.slice/crio-2d9ba94a2cb1dda6a9a82a06da06539c2aab4b9caa3c779ad9edc4023f449a1c WatchSource:0}: Error finding container 2d9ba94a2cb1dda6a9a82a06da06539c2aab4b9caa3c779ad9edc4023f449a1c: Status 404 returned error can't find the container with id 2d9ba94a2cb1dda6a9a82a06da06539c2aab4b9caa3c779ad9edc4023f449a1c Feb 16 16:59:26.037494 master-0 kubenswrapper[4155]: I0216 16:59:26.037327 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"6f66af8b0562664573bf8d9a4bb0da731f2d18edeb2c73c463d4bf0acaedcb60"} Feb 16 16:59:26.037494 master-0 kubenswrapper[4155]: I0216 16:59:26.037386 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519"} Feb 16 16:59:26.039450 master-0 kubenswrapper[4155]: I0216 16:59:26.039057 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerStarted","Data":"2d9ba94a2cb1dda6a9a82a06da06539c2aab4b9caa3c779ad9edc4023f449a1c"} Feb 16 16:59:26.735178 master-0 kubenswrapper[4155]: I0216 16:59:26.735056 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 16:59:26.735357 master-0 kubenswrapper[4155]: E0216 16:59:26.735271 4155 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 16:59:26.735403 master-0 kubenswrapper[4155]: E0216 16:59:26.735391 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.735360298 +0000 UTC m=+103.074413852 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 16:59:26.780036 master-0 kubenswrapper[4155]: I0216 16:59:26.779988 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:26.780822 master-0 kubenswrapper[4155]: E0216 16:59:26.780748 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:27.045555 master-0 kubenswrapper[4155]: I0216 16:59:27.044146 4155 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="9b508704ca913b3676949d448345a8f778d17c4d3d7c7156e1db34b5da7a8c96" exitCode=0 Feb 16 16:59:27.045555 master-0 kubenswrapper[4155]: I0216 16:59:27.044198 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"9b508704ca913b3676949d448345a8f778d17c4d3d7c7156e1db34b5da7a8c96"} Feb 16 16:59:27.062429 master-0 kubenswrapper[4155]: I0216 16:59:27.062363 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6r7wj" podStartSLOduration=3.101662723 podStartE2EDuration="15.062347595s" podCreationTimestamp="2026-02-16 16:59:12 +0000 UTC" firstStartedPulling="2026-02-16 16:59:12.696171699 +0000 UTC m=+57.035225243" lastFinishedPulling="2026-02-16 16:59:24.656856601 +0000 UTC m=+68.995910115" observedRunningTime="2026-02-16 16:59:25.058637126 +0000 UTC m=+69.397690630" watchObservedRunningTime="2026-02-16 16:59:27.062347595 +0000 UTC m=+71.401401099" Feb 16 16:59:27.939030 master-0 kubenswrapper[4155]: I0216 16:59:27.938975 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-vwvwx"] Feb 16 16:59:27.939322 master-0 kubenswrapper[4155]: I0216 16:59:27.939288 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:27.939430 master-0 kubenswrapper[4155]: E0216 16:59:27.939368 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:28.044833 master-0 kubenswrapper[4155]: I0216 16:59:28.044783 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:28.146248 master-0 kubenswrapper[4155]: I0216 16:59:28.146163 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:28.294787 master-0 kubenswrapper[4155]: E0216 16:59:28.294718 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:28.294787 master-0 kubenswrapper[4155]: E0216 16:59:28.294763 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:28.294787 master-0 kubenswrapper[4155]: E0216 16:59:28.294778 4155 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:28.295108 master-0 kubenswrapper[4155]: E0216 16:59:28.294849 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:28.794829262 +0000 UTC m=+73.133882766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:28.780687 master-0 kubenswrapper[4155]: I0216 16:59:28.780615 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:28.780934 master-0 kubenswrapper[4155]: E0216 16:59:28.780755 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:28.852054 master-0 kubenswrapper[4155]: I0216 16:59:28.851976 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:28.852272 master-0 kubenswrapper[4155]: E0216 16:59:28.852177 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:28.852272 master-0 kubenswrapper[4155]: E0216 16:59:28.852210 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:28.852272 master-0 kubenswrapper[4155]: E0216 16:59:28.852223 4155 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:28.852459 master-0 kubenswrapper[4155]: E0216 16:59:28.852279 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:29.852263972 +0000 UTC m=+74.191317476 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:29.051289 master-0 kubenswrapper[4155]: I0216 16:59:29.051193 4155 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="07ee05b11ab243298aba0652acab149107fdee4d056b25a8d70e009ebf722842" exitCode=0 Feb 16 16:59:29.051289 master-0 kubenswrapper[4155]: I0216 16:59:29.051271 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"07ee05b11ab243298aba0652acab149107fdee4d056b25a8d70e009ebf722842"} Feb 16 16:59:29.255598 master-0 kubenswrapper[4155]: I0216 16:59:29.255423 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:29.256307 master-0 kubenswrapper[4155]: E0216 16:59:29.255589 4155 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:29.256307 master-0 kubenswrapper[4155]: E0216 16:59:29.255703 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:45.255681052 +0000 UTC m=+89.594734606 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:29.780173 master-0 kubenswrapper[4155]: I0216 16:59:29.780130 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:29.780375 master-0 kubenswrapper[4155]: E0216 16:59:29.780240 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:29.861015 master-0 kubenswrapper[4155]: I0216 16:59:29.860950 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:29.861211 master-0 kubenswrapper[4155]: E0216 16:59:29.861143 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:29.861211 master-0 kubenswrapper[4155]: E0216 16:59:29.861175 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:29.861211 master-0 kubenswrapper[4155]: E0216 16:59:29.861189 4155 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:29.861297 master-0 kubenswrapper[4155]: E0216 16:59:29.861250 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:31.861235545 +0000 UTC m=+76.200289039 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:30.780850 master-0 kubenswrapper[4155]: I0216 16:59:30.780806 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:30.781370 master-0 kubenswrapper[4155]: E0216 16:59:30.780995 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:31.780290 master-0 kubenswrapper[4155]: I0216 16:59:31.780222 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:31.780577 master-0 kubenswrapper[4155]: E0216 16:59:31.780502 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:31.835155 master-0 kubenswrapper[4155]: I0216 16:59:31.835092 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-hhcpr"] Feb 16 16:59:31.835807 master-0 kubenswrapper[4155]: I0216 16:59:31.835676 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.840268 master-0 kubenswrapper[4155]: I0216 16:59:31.840218 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 16:59:31.840386 master-0 kubenswrapper[4155]: I0216 16:59:31.840346 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 16:59:31.840441 master-0 kubenswrapper[4155]: I0216 16:59:31.840399 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 16:59:31.840581 master-0 kubenswrapper[4155]: I0216 16:59:31.840544 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 16:59:31.840581 master-0 kubenswrapper[4155]: I0216 16:59:31.840576 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 16:59:31.880625 master-0 kubenswrapper[4155]: I0216 16:59:31.879945 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.880625 master-0 kubenswrapper[4155]: I0216 16:59:31.880045 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.880625 master-0 kubenswrapper[4155]: I0216 16:59:31.880134 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:31.880625 master-0 kubenswrapper[4155]: I0216 16:59:31.880179 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.880625 master-0 kubenswrapper[4155]: I0216 16:59:31.880231 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: 
\"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.880625 master-0 kubenswrapper[4155]: E0216 16:59:31.880319 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:31.880625 master-0 kubenswrapper[4155]: E0216 16:59:31.880352 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:31.880625 master-0 kubenswrapper[4155]: E0216 16:59:31.880368 4155 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:31.880625 master-0 kubenswrapper[4155]: E0216 16:59:31.880419 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:35.880400715 +0000 UTC m=+80.219454229 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:31.996543 master-0 kubenswrapper[4155]: I0216 16:59:31.986948 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.996543 master-0 kubenswrapper[4155]: I0216 16:59:31.987183 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.996543 master-0 kubenswrapper[4155]: I0216 16:59:31.987209 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.996543 master-0 kubenswrapper[4155]: I0216 16:59:31.987243 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: 
\"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.996543 master-0 kubenswrapper[4155]: I0216 16:59:31.988318 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.996543 master-0 kubenswrapper[4155]: E0216 16:59:31.988429 4155 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found Feb 16 16:59:31.996543 master-0 kubenswrapper[4155]: I0216 16:59:31.988444 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:31.996543 master-0 kubenswrapper[4155]: E0216 16:59:31.988503 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:32.488485781 +0000 UTC m=+76.827539285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : secret "network-node-identity-cert" not found Feb 16 16:59:32.008357 master-0 kubenswrapper[4155]: I0216 16:59:32.008324 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:32.490830 master-0 kubenswrapper[4155]: I0216 16:59:32.490731 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:32.495976 master-0 kubenswrapper[4155]: I0216 16:59:32.495894 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:32.754722 master-0 kubenswrapper[4155]: I0216 16:59:32.754672 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 16:59:32.769340 master-0 kubenswrapper[4155]: W0216 16:59:32.769270 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39387549_c636_4bd4_b463_f6a93810f277.slice/crio-399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2 WatchSource:0}: Error finding container 399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2: Status 404 returned error can't find the container with id 399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2 Feb 16 16:59:32.780610 master-0 kubenswrapper[4155]: I0216 16:59:32.780554 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:32.780805 master-0 kubenswrapper[4155]: E0216 16:59:32.780758 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:33.061137 master-0 kubenswrapper[4155]: I0216 16:59:33.061037 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2"} Feb 16 16:59:33.780530 master-0 kubenswrapper[4155]: I0216 16:59:33.780478 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:33.780761 master-0 kubenswrapper[4155]: E0216 16:59:33.780603 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:34.780506 master-0 kubenswrapper[4155]: I0216 16:59:34.780452 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:34.781088 master-0 kubenswrapper[4155]: E0216 16:59:34.780611 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:35.780343 master-0 kubenswrapper[4155]: I0216 16:59:35.780277 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:35.780343 master-0 kubenswrapper[4155]: E0216 16:59:35.780386 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:35.922795 master-0 kubenswrapper[4155]: I0216 16:59:35.922740 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:35.922993 master-0 kubenswrapper[4155]: E0216 16:59:35.922947 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 16:59:35.922993 master-0 kubenswrapper[4155]: E0216 16:59:35.922977 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 16:59:35.922993 master-0 kubenswrapper[4155]: E0216 16:59:35.922991 4155 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:35.923088 master-0 kubenswrapper[4155]: E0216 16:59:35.923043 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:43.923026409 +0000 UTC m=+88.262079993 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 16:59:36.780899 master-0 kubenswrapper[4155]: I0216 16:59:36.780848 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:36.781512 master-0 kubenswrapper[4155]: E0216 16:59:36.781466 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:37.780022 master-0 kubenswrapper[4155]: I0216 16:59:37.779833 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:37.780022 master-0 kubenswrapper[4155]: E0216 16:59:37.779973 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:38.781036 master-0 kubenswrapper[4155]: I0216 16:59:38.780979 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:38.782653 master-0 kubenswrapper[4155]: E0216 16:59:38.782142 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:39.780887 master-0 kubenswrapper[4155]: I0216 16:59:39.780737 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:39.781610 master-0 kubenswrapper[4155]: E0216 16:59:39.780909 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:39.794574 master-0 kubenswrapper[4155]: W0216 16:59:39.794157 4155 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Feb 16 16:59:39.794778 master-0 kubenswrapper[4155]: I0216 16:59:39.794727 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 16:59:40.785567 master-0 kubenswrapper[4155]: I0216 16:59:40.780104 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:40.785567 master-0 kubenswrapper[4155]: E0216 16:59:40.780389 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:40.800354 master-0 kubenswrapper[4155]: I0216 16:59:40.800303 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 16:59:41.780871 master-0 kubenswrapper[4155]: I0216 16:59:41.780494 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:41.781159 master-0 kubenswrapper[4155]: E0216 16:59:41.780984 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:42.084815 master-0 kubenswrapper[4155]: I0216 16:59:42.084762 4155 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="96c8b16be41a61f78ae9a0d158764cfb3f1dc1be9541f6dde4356d45ed489d8c" exitCode=0 Feb 16 16:59:42.085252 master-0 kubenswrapper[4155]: I0216 16:59:42.084818 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"96c8b16be41a61f78ae9a0d158764cfb3f1dc1be9541f6dde4356d45ed489d8c"} Feb 16 16:59:42.087288 master-0 kubenswrapper[4155]: I0216 16:59:42.087256 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"7c3069013a087b4b128510ad9f826bdcec64055b56ce1f6796106b46734c14be"} Feb 16 16:59:42.088682 master-0 kubenswrapper[4155]: I0216 16:59:42.088634 4155 generic.go:334] "Generic (PLEG): container finished" podID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerID="62efda6c94abe113ef01874ef37214190ec0fd357e274d80bba2a8516c7609ca" exitCode=0 Feb 16 16:59:42.088846 master-0 kubenswrapper[4155]: I0216 16:59:42.088684 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"62efda6c94abe113ef01874ef37214190ec0fd357e274d80bba2a8516c7609ca"} Feb 16 16:59:42.097862 master-0 kubenswrapper[4155]: I0216 16:59:42.097799 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=3.097780004 podStartE2EDuration="3.097780004s" podCreationTimestamp="2026-02-16 16:59:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 16:59:42.096814268 +0000 UTC m=+86.435867762" watchObservedRunningTime="2026-02-16 16:59:42.097780004 +0000 UTC m=+86.436833548" Feb 16 16:59:42.130359 master-0 kubenswrapper[4155]: I0216 16:59:42.129742 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=2.129717275 podStartE2EDuration="2.129717275s" podCreationTimestamp="2026-02-16 16:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 16:59:42.109559305 +0000 UTC m=+86.448612829" watchObservedRunningTime="2026-02-16 16:59:42.129717275 +0000 UTC m=+86.468770779" Feb 16 16:59:42.164311 master-0 kubenswrapper[4155]: I0216 16:59:42.164259 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" podStartSLOduration=2.18278337 podStartE2EDuration="18.164239416s" podCreationTimestamp="2026-02-16 16:59:24 +0000 UTC" firstStartedPulling="2026-02-16 16:59:25.325984007 +0000 UTC m=+69.665037511" lastFinishedPulling="2026-02-16 16:59:41.307440053 +0000 UTC m=+85.646493557" observedRunningTime="2026-02-16 16:59:42.163350712 +0000 UTC m=+86.502404236" watchObservedRunningTime="2026-02-16 16:59:42.164239416 +0000 UTC m=+86.503292910" Feb 16 16:59:42.780107 master-0 kubenswrapper[4155]: I0216 16:59:42.780039 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:42.780452 master-0 kubenswrapper[4155]: E0216 16:59:42.780211 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:43.094622 master-0 kubenswrapper[4155]: I0216 16:59:43.094530 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerStarted","Data":"435716921b22f7e7070cefae2389c285bb38da6738e12db82db01bcb3c16943e"} Feb 16 16:59:43.094622 master-0 kubenswrapper[4155]: I0216 16:59:43.094567 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerStarted","Data":"6519a2b1bfa2be2c3969581f7167ddcb8c02686a73ac3685fab901e8a973c0da"} Feb 16 16:59:43.094622 master-0 kubenswrapper[4155]: I0216 16:59:43.094577 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerStarted","Data":"d6950ee4a855681c18c68598eb1bca4b7155cbe3da65c7781f9da4e38aa625d5"} Feb 16 16:59:43.094622 master-0 kubenswrapper[4155]: I0216 16:59:43.094586 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerStarted","Data":"f22eec932dfdb2c55de9706746fdccf1c65673567b9f1ea5f699b1b74e8fd5f2"} Feb 16 16:59:43.094622 master-0 kubenswrapper[4155]: I0216 16:59:43.094594 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerStarted","Data":"a370ae306accbab57b742d151b5600175343b0ef21bdbcf2ffe1f3b3de1537e0"} Feb 16 16:59:43.094622 master-0 kubenswrapper[4155]: I0216 16:59:43.094603 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerStarted","Data":"86bcafee52e09620e6a731bc465b87382fd516731f7d1784621005dcedea1aab"} Feb 16 
Feb 16 16:59:43.097546 master-0 kubenswrapper[4155]: I0216 16:59:43.097502 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"b5e6e0c200ef6468da128fab1a901d498e73068beb07a54310f215479193099d"}
Feb 16 16:59:43.779870 master-0 kubenswrapper[4155]: I0216 16:59:43.779826 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 16:59:43.780100 master-0 kubenswrapper[4155]: E0216 16:59:43.780067 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66"
Feb 16 16:59:43.912219 master-0 kubenswrapper[4155]: I0216 16:59:43.911085 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Feb 16 16:59:43.992322 master-0 kubenswrapper[4155]: I0216 16:59:43.992278 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 16:59:43.992520 master-0 kubenswrapper[4155]: E0216 16:59:43.992405 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 16:59:43.992520 master-0 kubenswrapper[4155]: E0216 16:59:43.992426 4155 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 16:59:43.992520 master-0 kubenswrapper[4155]: E0216 16:59:43.992437 4155 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 16:59:43.992520 master-0 kubenswrapper[4155]: E0216 16:59:43.992478 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:59.992466108 +0000 UTC m=+104.331519612 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 16:59:44.105206 master-0 kubenswrapper[4155]: I0216 16:59:44.105144 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"71373993bd8fa85e34385967dc668cef9cf33a45809ff033e291394c3abdeb57"}
Feb 16 16:59:44.119086 master-0 kubenswrapper[4155]: I0216 16:59:44.119024 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=1.119006559 podStartE2EDuration="1.119006559s" podCreationTimestamp="2026-02-16 16:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 16:59:44.118477425 +0000 UTC m=+88.457530949" watchObservedRunningTime="2026-02-16 16:59:44.119006559 +0000 UTC m=+88.458060053"
Feb 16 16:59:44.780528 master-0 kubenswrapper[4155]: I0216 16:59:44.780460 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 16:59:44.780700 master-0 kubenswrapper[4155]: E0216 16:59:44.780657 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03"
pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:45.115727 master-0 kubenswrapper[4155]: I0216 16:59:45.115655 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerStarted","Data":"7a92b9a90193dd381d456652171aca7b927db619862cb3bcbfa996317518e329"} Feb 16 16:59:45.119226 master-0 kubenswrapper[4155]: I0216 16:59:45.119107 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"0003ee69c56b0c73d7d4526fa1f5d5fb937628023fcef99de3436e9f297fc1a8"} Feb 16 16:59:45.119226 master-0 kubenswrapper[4155]: I0216 16:59:45.119157 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"e2e9f120a9e16219c47ddb40ab80ffcfe27430f9f99e0080976b18f917b8870a"} Feb 16 16:59:45.137523 master-0 kubenswrapper[4155]: I0216 16:59:45.137420 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" podStartSLOduration=4.792702779 podStartE2EDuration="33.137399309s" podCreationTimestamp="2026-02-16 16:59:12 +0000 UTC" firstStartedPulling="2026-02-16 16:59:12.883310477 +0000 UTC m=+57.222363981" lastFinishedPulling="2026-02-16 16:59:41.228007007 +0000 UTC m=+85.567060511" observedRunningTime="2026-02-16 16:59:44.145550013 +0000 UTC m=+88.484603537" watchObservedRunningTime="2026-02-16 16:59:45.137399309 +0000 UTC m=+89.476452803" Feb 16 16:59:45.303537 master-0 kubenswrapper[4155]: I0216 16:59:45.303454 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:45.303761 master-0 kubenswrapper[4155]: E0216 16:59:45.303630 4155 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:45.303761 master-0 kubenswrapper[4155]: E0216 16:59:45.303703 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:17.303680043 +0000 UTC m=+121.642733577 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 16:59:45.780680 master-0 kubenswrapper[4155]: I0216 16:59:45.780598 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:45.780953 master-0 kubenswrapper[4155]: E0216 16:59:45.780748 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:46.780958 master-0 kubenswrapper[4155]: I0216 16:59:46.780780 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:46.782583 master-0 kubenswrapper[4155]: E0216 16:59:46.782507 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:47.780829 master-0 kubenswrapper[4155]: I0216 16:59:47.780305 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:47.781503 master-0 kubenswrapper[4155]: E0216 16:59:47.781299 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:48.139622 master-0 kubenswrapper[4155]: I0216 16:59:48.139436 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerStarted","Data":"5cdb01628307bb81bc91843f33cba2f692be4e1dcf12cd49046f19effb134c45"} Feb 16 16:59:48.140114 master-0 kubenswrapper[4155]: I0216 16:59:48.139863 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:48.140114 master-0 kubenswrapper[4155]: I0216 16:59:48.140004 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:48.168356 master-0 kubenswrapper[4155]: I0216 16:59:48.168223 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:48.192019 master-0 kubenswrapper[4155]: I0216 16:59:48.191719 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" podStartSLOduration=8.206453386 podStartE2EDuration="24.191695585s" podCreationTimestamp="2026-02-16 16:59:24 +0000 UTC" firstStartedPulling="2026-02-16 16:59:25.34920762 +0000 UTC m=+69.688261144" lastFinishedPulling="2026-02-16 16:59:41.334449839 +0000 UTC m=+85.673503343" observedRunningTime="2026-02-16 16:59:48.189770952 +0000 UTC m=+92.528824526" watchObservedRunningTime="2026-02-16 16:59:48.191695585 +0000 UTC m=+92.530749119" Feb 16 16:59:48.192330 master-0 kubenswrapper[4155]: I0216 16:59:48.192064 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-hhcpr" podStartSLOduration=5.908408949 podStartE2EDuration="17.192055084s" podCreationTimestamp="2026-02-16 16:59:31 +0000 UTC" firstStartedPulling="2026-02-16 16:59:32.771229365 +0000 UTC m=+77.110282869" lastFinishedPulling="2026-02-16 16:59:44.05487546 +0000 UTC m=+88.393929004" observedRunningTime="2026-02-16 16:59:45.137970005 +0000 UTC m=+89.477023539" watchObservedRunningTime="2026-02-16 16:59:48.192055084 +0000 UTC m=+92.531108618" Feb 16 16:59:48.781017 master-0 kubenswrapper[4155]: I0216 16:59:48.780967 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:48.781316 master-0 kubenswrapper[4155]: E0216 16:59:48.781110 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:49.142973 master-0 kubenswrapper[4155]: I0216 16:59:49.142806 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:49.173430 master-0 kubenswrapper[4155]: I0216 16:59:49.173374 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:49.608644 master-0 kubenswrapper[4155]: I0216 16:59:49.608529 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-vwvwx"] Feb 16 16:59:49.608644 master-0 kubenswrapper[4155]: I0216 16:59:49.608644 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:49.609153 master-0 kubenswrapper[4155]: E0216 16:59:49.608709 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:49.612468 master-0 kubenswrapper[4155]: I0216 16:59:49.612407 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-279g6"] Feb 16 16:59:49.612468 master-0 kubenswrapper[4155]: I0216 16:59:49.612479 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:49.612792 master-0 kubenswrapper[4155]: E0216 16:59:49.612549 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:51.681366 master-0 kubenswrapper[4155]: I0216 16:59:51.681313 4155 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xsclm"] Feb 16 16:59:51.780130 master-0 kubenswrapper[4155]: I0216 16:59:51.780097 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:51.780236 master-0 kubenswrapper[4155]: I0216 16:59:51.780169 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:51.780283 master-0 kubenswrapper[4155]: E0216 16:59:51.780229 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:51.780389 master-0 kubenswrapper[4155]: E0216 16:59:51.780359 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 16:59:52.151838 master-0 kubenswrapper[4155]: I0216 16:59:52.151426 4155 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovn-controller" containerID="cri-o://86bcafee52e09620e6a731bc465b87382fd516731f7d1784621005dcedea1aab" gracePeriod=30 Feb 16 16:59:52.151838 master-0 kubenswrapper[4155]: I0216 16:59:52.151691 4155 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kube-rbac-proxy-node" containerID="cri-o://f22eec932dfdb2c55de9706746fdccf1c65673567b9f1ea5f699b1b74e8fd5f2" gracePeriod=30 Feb 16 16:59:52.151838 master-0 kubenswrapper[4155]: I0216 16:59:52.151701 4155 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovn-acl-logging" containerID="cri-o://a370ae306accbab57b742d151b5600175343b0ef21bdbcf2ffe1f3b3de1537e0" gracePeriod=30 Feb 16 16:59:52.151838 master-0 kubenswrapper[4155]: I0216 16:59:52.151455 4155 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="nbdb" containerID="cri-o://435716921b22f7e7070cefae2389c285bb38da6738e12db82db01bcb3c16943e" gracePeriod=30 Feb 16 16:59:52.151838 master-0 kubenswrapper[4155]: I0216 16:59:52.151649 4155 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="northd" containerID="cri-o://6519a2b1bfa2be2c3969581f7167ddcb8c02686a73ac3685fab901e8a973c0da" gracePeriod=30 Feb 16 16:59:52.151838 master-0 kubenswrapper[4155]: I0216 16:59:52.151618 4155 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="sbdb" containerID="cri-o://7a92b9a90193dd381d456652171aca7b927db619862cb3bcbfa996317518e329" gracePeriod=30 Feb 16 16:59:52.152580 master-0 kubenswrapper[4155]: I0216 16:59:52.151890 4155 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://d6950ee4a855681c18c68598eb1bca4b7155cbe3da65c7781f9da4e38aa625d5" gracePeriod=30 Feb 16 16:59:52.170294 master-0 kubenswrapper[4155]: I0216 16:59:52.170157 4155 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovnkube-controller" probeResult="failure" output="" Feb 16 16:59:52.175625 master-0 
Feb 16 16:59:53.156625 master-0 kubenswrapper[4155]: I0216 16:59:53.156547 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/kube-rbac-proxy-ovn-metrics/0.log"
Feb 16 16:59:53.157278 master-0 kubenswrapper[4155]: I0216 16:59:53.156954 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/kube-rbac-proxy-node/0.log"
Feb 16 16:59:53.157360 master-0 kubenswrapper[4155]: I0216 16:59:53.157327 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovn-acl-logging/0.log"
Feb 16 16:59:53.157785 master-0 kubenswrapper[4155]: I0216 16:59:53.157729 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovn-controller/0.log"
Feb 16 16:59:53.158067 master-0 kubenswrapper[4155]: I0216 16:59:53.158047 4155 generic.go:334] "Generic (PLEG): container finished" podID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerID="7a92b9a90193dd381d456652171aca7b927db619862cb3bcbfa996317518e329" exitCode=0
Feb 16 16:59:53.158127 master-0 kubenswrapper[4155]: I0216 16:59:53.158068 4155 generic.go:334] "Generic (PLEG): container finished" podID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerID="435716921b22f7e7070cefae2389c285bb38da6738e12db82db01bcb3c16943e" exitCode=0
Feb 16 16:59:53.158127 master-0 kubenswrapper[4155]: I0216 16:59:53.158076 4155 generic.go:334] "Generic (PLEG): container finished" podID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerID="6519a2b1bfa2be2c3969581f7167ddcb8c02686a73ac3685fab901e8a973c0da" exitCode=0
Feb 16 16:59:53.158127 master-0 kubenswrapper[4155]: I0216 16:59:53.158082 4155 generic.go:334] "Generic (PLEG): container finished" podID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerID="d6950ee4a855681c18c68598eb1bca4b7155cbe3da65c7781f9da4e38aa625d5" exitCode=143
Feb 16 16:59:53.158127 master-0 kubenswrapper[4155]: I0216 16:59:53.158089 4155 generic.go:334] "Generic (PLEG): container finished" podID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerID="f22eec932dfdb2c55de9706746fdccf1c65673567b9f1ea5f699b1b74e8fd5f2" exitCode=143
Feb 16 16:59:53.158127 master-0 kubenswrapper[4155]: I0216 16:59:53.158095 4155 generic.go:334] "Generic (PLEG): container finished" podID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerID="a370ae306accbab57b742d151b5600175343b0ef21bdbcf2ffe1f3b3de1537e0" exitCode=143
Feb 16 16:59:53.158127 master-0 kubenswrapper[4155]: I0216 16:59:53.158101 4155 generic.go:334] "Generic (PLEG): container finished" podID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerID="86bcafee52e09620e6a731bc465b87382fd516731f7d1784621005dcedea1aab" exitCode=143
Feb 16 16:59:53.158127 master-0 kubenswrapper[4155]: I0216 16:59:53.158119 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"7a92b9a90193dd381d456652171aca7b927db619862cb3bcbfa996317518e329"}
Feb 16 16:59:53.158301 master-0 kubenswrapper[4155]: I0216 16:59:53.158146 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"435716921b22f7e7070cefae2389c285bb38da6738e12db82db01bcb3c16943e"}
Feb 16 16:59:53.158301 master-0 kubenswrapper[4155]: I0216 16:59:53.158160 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"6519a2b1bfa2be2c3969581f7167ddcb8c02686a73ac3685fab901e8a973c0da"}
Feb 16 16:59:53.158301 master-0 kubenswrapper[4155]: I0216 16:59:53.158170 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"d6950ee4a855681c18c68598eb1bca4b7155cbe3da65c7781f9da4e38aa625d5"}
Feb 16 16:59:53.158301 master-0 kubenswrapper[4155]: I0216 16:59:53.158179 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"f22eec932dfdb2c55de9706746fdccf1c65673567b9f1ea5f699b1b74e8fd5f2"}
Feb 16 16:59:53.158301 master-0 kubenswrapper[4155]: I0216 16:59:53.158188 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"a370ae306accbab57b742d151b5600175343b0ef21bdbcf2ffe1f3b3de1537e0"}
Feb 16 16:59:53.158301 master-0 kubenswrapper[4155]: I0216 16:59:53.158196 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"86bcafee52e09620e6a731bc465b87382fd516731f7d1784621005dcedea1aab"}
Feb 16 16:59:53.780476 master-0 kubenswrapper[4155]: I0216 16:59:53.779768 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 16:59:53.780476 master-0 kubenswrapper[4155]: E0216 16:59:53.779877 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03"
Feb 16 16:59:53.780476 master-0 kubenswrapper[4155]: I0216 16:59:53.780106 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 16:59:53.780476 master-0 kubenswrapper[4155]: E0216 16:59:53.780161 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66"
pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 16:59:55.167378 master-0 kubenswrapper[4155]: I0216 16:59:55.167325 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovnkube-controller/0.log" Feb 16 16:59:55.169703 master-0 kubenswrapper[4155]: I0216 16:59:55.169668 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/kube-rbac-proxy-ovn-metrics/0.log" Feb 16 16:59:55.170782 master-0 kubenswrapper[4155]: I0216 16:59:55.170726 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/kube-rbac-proxy-node/0.log" Feb 16 16:59:55.171362 master-0 kubenswrapper[4155]: I0216 16:59:55.171329 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovn-acl-logging/0.log" Feb 16 16:59:55.172235 master-0 kubenswrapper[4155]: I0216 16:59:55.172196 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovn-controller/0.log" Feb 16 16:59:55.172717 master-0 kubenswrapper[4155]: I0216 16:59:55.172672 4155 generic.go:334] "Generic (PLEG): container finished" podID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerID="5cdb01628307bb81bc91843f33cba2f692be4e1dcf12cd49046f19effb134c45" exitCode=1 Feb 16 16:59:55.172775 master-0 kubenswrapper[4155]: I0216 16:59:55.172718 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"5cdb01628307bb81bc91843f33cba2f692be4e1dcf12cd49046f19effb134c45"} Feb 16 16:59:55.258218 master-0 kubenswrapper[4155]: I0216 16:59:55.258155 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovnkube-controller/0.log" Feb 16 16:59:55.260128 master-0 kubenswrapper[4155]: I0216 16:59:55.260094 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/kube-rbac-proxy-ovn-metrics/0.log" Feb 16 16:59:55.260742 master-0 kubenswrapper[4155]: I0216 16:59:55.260707 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/kube-rbac-proxy-node/0.log" Feb 16 16:59:55.261329 master-0 kubenswrapper[4155]: I0216 16:59:55.261300 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovn-acl-logging/0.log" Feb 16 16:59:55.261979 master-0 kubenswrapper[4155]: I0216 16:59:55.261947 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovn-controller/0.log" Feb 16 16:59:55.262498 master-0 kubenswrapper[4155]: I0216 16:59:55.262435 4155 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:55.388037 master-0 kubenswrapper[4155]: I0216 16:59:55.387969 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-openvswitch\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.388037 master-0 kubenswrapper[4155]: I0216 16:59:55.388038 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovn-node-metrics-cert\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388059 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-netd\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388061 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388081 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-var-lib-openvswitch\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388111 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-slash\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388128 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-kubelet\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388149 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-config\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388170 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-node-log\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388188 4155 
Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388209 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-ovn-kubernetes\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") "
Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388230 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-var-lib-cni-networks-ovn-kubernetes\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") "
Feb 16 16:59:55.388285 master-0 kubenswrapper[4155]: I0216 16:59:55.388262 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 16:59:55.388575 master-0 kubenswrapper[4155]: I0216 16:59:55.388309 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 16:59:55.388575 master-0 kubenswrapper[4155]: I0216 16:59:55.388362 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-slash" (OuterVolumeSpecName: "host-slash") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 16:59:55.388575 master-0 kubenswrapper[4155]: I0216 16:59:55.388418 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 16:59:55.388575 master-0 kubenswrapper[4155]: I0216 16:59:55.388449 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.388575 master-0 kubenswrapper[4155]: I0216 16:59:55.388471 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.388575 master-0 kubenswrapper[4155]: I0216 16:59:55.388502 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-node-log" (OuterVolumeSpecName: "node-log") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.388786 master-0 kubenswrapper[4155]: I0216 16:59:55.388625 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.388786 master-0 kubenswrapper[4155]: I0216 16:59:55.388732 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55.388848 master-0 kubenswrapper[4155]: I0216 16:59:55.388825 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-systemd\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.389009 master-0 kubenswrapper[4155]: I0216 16:59:55.388902 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-log-socket\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.389009 master-0 kubenswrapper[4155]: I0216 16:59:55.388981 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-netns\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.389075 master-0 kubenswrapper[4155]: I0216 16:59:55.389029 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmf2v\" (UniqueName: \"kubernetes.io/projected/a9031b9a-ff74-4c8e-8733-2f36900bd05d-kube-api-access-nmf2v\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.389075 master-0 kubenswrapper[4155]: I0216 16:59:55.389061 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-script-lib\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.389138 master-0 kubenswrapper[4155]: I0216 16:59:55.389025 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.389420 master-0 kubenswrapper[4155]: I0216 16:59:55.389176 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-bin\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.389420 master-0 kubenswrapper[4155]: I0216 16:59:55.389296 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.389519 master-0 kubenswrapper[4155]: I0216 16:59:55.389411 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-etc-openvswitch\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.389519 master-0 kubenswrapper[4155]: I0216 16:59:55.389457 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-env-overrides\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.389519 master-0 kubenswrapper[4155]: I0216 16:59:55.389494 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-systemd-units\") pod \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\" (UID: \"a9031b9a-ff74-4c8e-8733-2f36900bd05d\") " Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389680 4155 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389723 4155 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389743 4155 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389763 4155 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-slash\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389782 4155 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-kubelet\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389801 4155 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389598 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389729 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.388961 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-log-socket" (OuterVolumeSpecName: "log-socket") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389819 4155 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-node-log\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389881 4155 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389903 4155 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389943 4155 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389963 4155 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-run-netns\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389969 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55.389996 master-0 kubenswrapper[4155]: I0216 16:59:55.389982 4155 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.390394 master-0 kubenswrapper[4155]: I0216 16:59:55.390248 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 16:59:55.394242 master-0 kubenswrapper[4155]: I0216 16:59:55.394188 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9031b9a-ff74-4c8e-8733-2f36900bd05d-kube-api-access-nmf2v" (OuterVolumeSpecName: "kube-api-access-nmf2v") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "kube-api-access-nmf2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 16:59:55.394545 master-0 kubenswrapper[4155]: I0216 16:59:55.394503 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 16:59:55.398473 master-0 kubenswrapper[4155]: I0216 16:59:55.398385 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "a9031b9a-ff74-4c8e-8733-2f36900bd05d" (UID: "a9031b9a-ff74-4c8e-8733-2f36900bd05d"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 16:59:55.418163 master-0 kubenswrapper[4155]: I0216 16:59:55.418074 4155 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Feb 16 16:59:55.419197 master-0 kubenswrapper[4155]: I0216 16:59:55.419037 4155 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Feb 16 16:59:55.491193 master-0 kubenswrapper[4155]: I0216 16:59:55.491125 4155 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-run-systemd\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.491193 master-0 kubenswrapper[4155]: I0216 16:59:55.491163 4155 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-log-socket\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.491193 master-0 kubenswrapper[4155]: I0216 16:59:55.491173 4155 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.491193 master-0 kubenswrapper[4155]: I0216 16:59:55.491185 4155 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmf2v\" (UniqueName: \"kubernetes.io/projected/a9031b9a-ff74-4c8e-8733-2f36900bd05d-kube-api-access-nmf2v\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.491193 master-0 kubenswrapper[4155]: I0216 16:59:55.491193 4155 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.491193 master-0 kubenswrapper[4155]: I0216 16:59:55.491201 4155 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a9031b9a-ff74-4c8e-8733-2f36900bd05d-env-overrides\") on node \"master-0\" DevicePath \"\"" Feb 16 
16:59:55.491193 master-0 kubenswrapper[4155]: I0216 16:59:55.491210 4155 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a9031b9a-ff74-4c8e-8733-2f36900bd05d-systemd-units\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.491193 master-0 kubenswrapper[4155]: I0216 16:59:55.491218 4155 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a9031b9a-ff74-4c8e-8733-2f36900bd05d-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 16:59:55.780387 master-0 kubenswrapper[4155]: I0216 16:59:55.780227 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 16:59:55.780660 master-0 kubenswrapper[4155]: I0216 16:59:55.780414 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 16:59:55.783069 master-0 kubenswrapper[4155]: I0216 16:59:55.782663 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 16:59:55.783069 master-0 kubenswrapper[4155]: I0216 16:59:55.782945 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 16:59:55.784193 master-0 kubenswrapper[4155]: I0216 16:59:55.784161 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 16:59:56.179258 master-0 kubenswrapper[4155]: I0216 16:59:56.179134 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovnkube-controller/0.log" Feb 16 16:59:56.181812 master-0 kubenswrapper[4155]: I0216 16:59:56.181767 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/kube-rbac-proxy-ovn-metrics/0.log" Feb 16 16:59:56.182395 master-0 kubenswrapper[4155]: I0216 16:59:56.182362 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/kube-rbac-proxy-node/0.log" Feb 16 16:59:56.183162 master-0 kubenswrapper[4155]: I0216 16:59:56.183124 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovn-acl-logging/0.log" Feb 16 16:59:56.183778 master-0 kubenswrapper[4155]: I0216 16:59:56.183738 4155 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-xsclm_a9031b9a-ff74-4c8e-8733-2f36900bd05d/ovn-controller/0.log" Feb 16 16:59:56.184217 master-0 kubenswrapper[4155]: I0216 16:59:56.184180 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" event={"ID":"a9031b9a-ff74-4c8e-8733-2f36900bd05d","Type":"ContainerDied","Data":"2d9ba94a2cb1dda6a9a82a06da06539c2aab4b9caa3c779ad9edc4023f449a1c"} Feb 16 16:59:56.184284 master-0 kubenswrapper[4155]: I0216 16:59:56.184224 4155 scope.go:117] "RemoveContainer" containerID="5cdb01628307bb81bc91843f33cba2f692be4e1dcf12cd49046f19effb134c45" Feb 16 16:59:56.184472 master-0 kubenswrapper[4155]: I0216 16:59:56.184426 4155 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-xsclm" Feb 16 16:59:56.203332 master-0 kubenswrapper[4155]: I0216 16:59:56.203236 4155 scope.go:117] "RemoveContainer" containerID="7a92b9a90193dd381d456652171aca7b927db619862cb3bcbfa996317518e329" Feb 16 16:59:56.213456 master-0 kubenswrapper[4155]: I0216 16:59:56.213260 4155 scope.go:117] "RemoveContainer" containerID="435716921b22f7e7070cefae2389c285bb38da6738e12db82db01bcb3c16943e" Feb 16 16:59:56.223888 master-0 kubenswrapper[4155]: I0216 16:59:56.223858 4155 scope.go:117] "RemoveContainer" containerID="6519a2b1bfa2be2c3969581f7167ddcb8c02686a73ac3685fab901e8a973c0da" Feb 16 16:59:56.231945 master-0 kubenswrapper[4155]: I0216 16:59:56.231896 4155 scope.go:117] "RemoveContainer" containerID="d6950ee4a855681c18c68598eb1bca4b7155cbe3da65c7781f9da4e38aa625d5" Feb 16 16:59:56.241694 master-0 kubenswrapper[4155]: I0216 16:59:56.241513 4155 scope.go:117] "RemoveContainer" containerID="f22eec932dfdb2c55de9706746fdccf1c65673567b9f1ea5f699b1b74e8fd5f2" Feb 16 16:59:56.251360 master-0 kubenswrapper[4155]: I0216 16:59:56.251280 4155 scope.go:117] "RemoveContainer" containerID="a370ae306accbab57b742d151b5600175343b0ef21bdbcf2ffe1f3b3de1537e0" Feb 16 16:59:56.260742 master-0 kubenswrapper[4155]: I0216 16:59:56.260666 4155 scope.go:117] "RemoveContainer" containerID="86bcafee52e09620e6a731bc465b87382fd516731f7d1784621005dcedea1aab" Feb 16 16:59:56.269192 master-0 kubenswrapper[4155]: I0216 16:59:56.269163 4155 scope.go:117] "RemoveContainer" containerID="62efda6c94abe113ef01874ef37214190ec0fd357e274d80bba2a8516c7609ca" Feb 16 16:59:56.367538 master-0 kubenswrapper[4155]: I0216 16:59:56.367483 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 16 16:59:56.689839 master-0 kubenswrapper[4155]: I0216 16:59:56.689665 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"] Feb 16 16:59:56.691506 master-0 kubenswrapper[4155]: E0216 16:59:56.690448 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="northd" Feb 16 16:59:56.692451 master-0 kubenswrapper[4155]: I0216 16:59:56.692368 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="northd" Feb 16 16:59:56.692451 master-0 kubenswrapper[4155]: E0216 16:59:56.692429 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kubecfg-setup" Feb 16 16:59:56.692451 master-0 kubenswrapper[4155]: I0216 16:59:56.692440 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kubecfg-setup" Feb 16 16:59:56.692451 master-0 kubenswrapper[4155]: E0216 16:59:56.692452 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovn-acl-logging" Feb 16 16:59:56.692666 master-0 kubenswrapper[4155]: I0216 16:59:56.692461 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovn-acl-logging" Feb 16 16:59:56.692666 master-0 kubenswrapper[4155]: E0216 16:59:56.692497 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kube-rbac-proxy-node" Feb 16 16:59:56.692666 master-0 kubenswrapper[4155]: I0216 16:59:56.692508 
4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kube-rbac-proxy-node" Feb 16 16:59:56.693332 master-0 kubenswrapper[4155]: E0216 16:59:56.693297 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovn-controller" Feb 16 16:59:56.693391 master-0 kubenswrapper[4155]: I0216 16:59:56.693322 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovn-controller" Feb 16 16:59:56.693539 master-0 kubenswrapper[4155]: E0216 16:59:56.693510 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="nbdb" Feb 16 16:59:56.693539 master-0 kubenswrapper[4155]: I0216 16:59:56.693528 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="nbdb" Feb 16 16:59:56.693539 master-0 kubenswrapper[4155]: E0216 16:59:56.693539 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="sbdb" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: I0216 16:59:56.693548 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="sbdb" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: E0216 16:59:56.693557 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovnkube-controller" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: I0216 16:59:56.693566 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovnkube-controller" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: E0216 16:59:56.693575 4155 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: I0216 16:59:56.693584 4155 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: I0216 16:59:56.693639 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovnkube-controller" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: I0216 16:59:56.693652 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: I0216 16:59:56.693660 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="nbdb" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: I0216 16:59:56.693674 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="northd" Feb 16 16:59:56.693674 master-0 kubenswrapper[4155]: I0216 16:59:56.693683 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="sbdb" Feb 16 16:59:56.694137 master-0 kubenswrapper[4155]: I0216 16:59:56.693692 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="kube-rbac-proxy-node" Feb 16 16:59:56.694137 master-0 kubenswrapper[4155]: 
I0216 16:59:56.693701 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovn-controller" Feb 16 16:59:56.694137 master-0 kubenswrapper[4155]: I0216 16:59:56.693710 4155 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" containerName="ovn-acl-logging" Feb 16 16:59:56.694249 master-0 kubenswrapper[4155]: I0216 16:59:56.694193 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"] Feb 16 16:59:56.694394 master-0 kubenswrapper[4155]: I0216 16:59:56.694365 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"] Feb 16 16:59:56.694394 master-0 kubenswrapper[4155]: I0216 16:59:56.694373 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.694520 master-0 kubenswrapper[4155]: I0216 16:59:56.694496 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:56.694874 master-0 kubenswrapper[4155]: I0216 16:59:56.694845 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-nhxlp"] Feb 16 16:59:56.695116 master-0 kubenswrapper[4155]: I0216 16:59:56.695045 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:56.695232 master-0 kubenswrapper[4155]: I0216 16:59:56.695202 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"] Feb 16 16:59:56.695359 master-0 kubenswrapper[4155]: I0216 16:59:56.695335 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"] Feb 16 16:59:56.695415 master-0 kubenswrapper[4155]: I0216 16:59:56.695378 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 16:59:56.695553 master-0 kubenswrapper[4155]: I0216 16:59:56.695527 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:56.699267 master-0 kubenswrapper[4155]: I0216 16:59:56.697441 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"] Feb 16 16:59:56.699267 master-0 kubenswrapper[4155]: I0216 16:59:56.698645 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 16:59:56.699267 master-0 kubenswrapper[4155]: I0216 16:59:56.698789 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 16:59:56.702627 master-0 kubenswrapper[4155]: I0216 16:59:56.699515 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.702627 master-0 kubenswrapper[4155]: I0216 16:59:56.701501 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:56.702627 master-0 kubenswrapper[4155]: I0216 16:59:56.702345 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 16:59:56.704508 master-0 kubenswrapper[4155]: I0216 16:59:56.704242 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 16:59:56.704699 master-0 kubenswrapper[4155]: I0216 16:59:56.704554 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 16:59:56.704822 master-0 kubenswrapper[4155]: I0216 16:59:56.704784 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 16:59:56.704822 master-0 kubenswrapper[4155]: I0216 16:59:56.704796 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 16:59:56.704822 master-0 kubenswrapper[4155]: I0216 16:59:56.704815 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 16:59:56.705193 master-0 kubenswrapper[4155]: I0216 16:59:56.704894 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-lf4cb"] Feb 16 16:59:56.705193 master-0 kubenswrapper[4155]: I0216 16:59:56.704991 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 16:59:56.705782 master-0 kubenswrapper[4155]: I0216 16:59:56.705271 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 16:59:56.705782 master-0 kubenswrapper[4155]: I0216 16:59:56.705334 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.705782 master-0 kubenswrapper[4155]: I0216 16:59:56.705433 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:56.705782 master-0 kubenswrapper[4155]: I0216 16:59:56.705645 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"] Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706053 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706154 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706415 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"] Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706422 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706457 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706832 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"] Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706493 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706526 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706559 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706586 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.706747 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.707124 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.707218 master-0 kubenswrapper[4155]: I0216 16:59:56.707134 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.708147 master-0 kubenswrapper[4155]: I0216 16:59:56.707545 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"] Feb 16 16:59:56.708351 master-0 kubenswrapper[4155]: I0216 16:59:56.708261 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 16:59:56.708817 master-0 kubenswrapper[4155]: I0216 16:59:56.708515 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:56.708817 master-0 kubenswrapper[4155]: I0216 16:59:56.708644 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"] Feb 16 16:59:56.709380 master-0 kubenswrapper[4155]: I0216 16:59:56.709226 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"] Feb 16 16:59:56.709380 master-0 kubenswrapper[4155]: I0216 16:59:56.709313 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 16:59:56.710281 master-0 kubenswrapper[4155]: I0216 16:59:56.709752 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:56.710281 master-0 kubenswrapper[4155]: I0216 16:59:56.710156 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"] Feb 16 16:59:56.710814 master-0 kubenswrapper[4155]: I0216 16:59:56.710633 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"] Feb 16 16:59:56.711134 master-0 kubenswrapper[4155]: I0216 16:59:56.711007 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.712481 master-0 kubenswrapper[4155]: I0216 16:59:56.711693 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 16:59:56.712481 master-0 kubenswrapper[4155]: I0216 16:59:56.712149 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"] Feb 16 16:59:56.712481 master-0 kubenswrapper[4155]: I0216 16:59:56.712347 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 16:59:56.712481 master-0 kubenswrapper[4155]: I0216 16:59:56.712416 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"] Feb 16 16:59:56.713449 master-0 kubenswrapper[4155]: I0216 16:59:56.712933 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"] Feb 16 16:59:56.713449 master-0 kubenswrapper[4155]: I0216 16:59:56.713241 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:56.713449 master-0 kubenswrapper[4155]: I0216 16:59:56.713322 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 16 16:59:56.713449 master-0 kubenswrapper[4155]: I0216 16:59:56.713415 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"] Feb 16 16:59:56.713994 master-0 kubenswrapper[4155]: I0216 16:59:56.713521 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 16:59:56.713994 master-0 kubenswrapper[4155]: I0216 16:59:56.713761 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:56.713994 master-0 kubenswrapper[4155]: I0216 16:59:56.713777 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:56.714229 master-0 kubenswrapper[4155]: I0216 16:59:56.714197 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 16:59:56.715065 master-0 kubenswrapper[4155]: I0216 16:59:56.714560 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 16:59:56.715065 master-0 kubenswrapper[4155]: I0216 16:59:56.714587 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 16:59:56.715065 master-0 kubenswrapper[4155]: I0216 16:59:56.714672 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 16:59:56.715065 master-0 kubenswrapper[4155]: I0216 16:59:56.714724 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 16:59:56.715774 master-0 kubenswrapper[4155]: I0216 16:59:56.715730 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 16:59:56.716862 master-0 kubenswrapper[4155]: I0216 16:59:56.716273 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.716862 master-0 kubenswrapper[4155]: I0216 16:59:56.716639 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.719470 master-0 kubenswrapper[4155]: I0216 16:59:56.717015 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.719470 master-0 kubenswrapper[4155]: I0216 16:59:56.717862 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 16:59:56.719470 master-0 kubenswrapper[4155]: I0216 16:59:56.718015 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 16:59:56.719470 master-0 kubenswrapper[4155]: I0216 16:59:56.718444 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 16:59:56.720684 master-0 kubenswrapper[4155]: I0216 16:59:56.719537 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 16:59:56.720793 master-0 kubenswrapper[4155]: I0216 16:59:56.720776 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 16:59:56.721122 master-0 kubenswrapper[4155]: I0216 16:59:56.721002 4155 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 16:59:56.721122 master-0 kubenswrapper[4155]: I0216 16:59:56.721030 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 16:59:56.721122 master-0 kubenswrapper[4155]: I0216 16:59:56.721057 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 16:59:56.721122 master-0 kubenswrapper[4155]: I0216 16:59:56.721064 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 16:59:56.721376 master-0 kubenswrapper[4155]: I0216 16:59:56.721166 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 16:59:56.721376 master-0 kubenswrapper[4155]: I0216 16:59:56.721190 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 16:59:56.721376 master-0 kubenswrapper[4155]: I0216 16:59:56.721202 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 16:59:56.721376 master-0 kubenswrapper[4155]: I0216 16:59:56.721278 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.721376 master-0 kubenswrapper[4155]: I0216 16:59:56.721279 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 16:59:56.721376 master-0 kubenswrapper[4155]: I0216 16:59:56.721312 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.721376 master-0 kubenswrapper[4155]: I0216 16:59:56.721281 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 16:59:56.721754 master-0 kubenswrapper[4155]: I0216 16:59:56.721426 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.721754 master-0 kubenswrapper[4155]: I0216 16:59:56.721453 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 16:59:56.721754 master-0 kubenswrapper[4155]: I0216 16:59:56.721382 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 16:59:56.721986 master-0 kubenswrapper[4155]: I0216 16:59:56.721909 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 16:59:56.722417 master-0 kubenswrapper[4155]: I0216 16:59:56.722078 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 16:59:56.722534 master-0 kubenswrapper[4155]: I0216 16:59:56.722430 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 16:59:56.722534 master-0 kubenswrapper[4155]: I0216 16:59:56.722480 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 16:59:56.724604 master-0 
kubenswrapper[4155]: I0216 16:59:56.724184 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.725373 master-0 kubenswrapper[4155]: I0216 16:59:56.725146 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 16:59:56.725373 master-0 kubenswrapper[4155]: I0216 16:59:56.725212 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 16:59:56.725824 master-0 kubenswrapper[4155]: I0216 16:59:56.725420 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 16:59:56.725916 master-0 kubenswrapper[4155]: I0216 16:59:56.725823 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 16:59:56.727801 master-0 kubenswrapper[4155]: I0216 16:59:56.727480 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.728491 master-0 kubenswrapper[4155]: I0216 16:59:56.728095 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 16 16:59:56.728491 master-0 kubenswrapper[4155]: I0216 16:59:56.728161 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 16:59:56.728491 master-0 kubenswrapper[4155]: I0216 16:59:56.728172 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 16:59:56.728491 master-0 kubenswrapper[4155]: I0216 16:59:56.728353 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 16 16:59:56.728491 master-0 kubenswrapper[4155]: I0216 16:59:56.728427 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 16:59:56.730154 master-0 kubenswrapper[4155]: I0216 16:59:56.730127 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 16:59:56.730213 master-0 kubenswrapper[4155]: I0216 16:59:56.730179 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.730388 master-0 kubenswrapper[4155]: I0216 16:59:56.730366 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 16 16:59:56.730432 master-0 kubenswrapper[4155]: I0216 16:59:56.730393 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 16:59:56.730965 master-0 kubenswrapper[4155]: I0216 16:59:56.730910 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 16:59:56.731781 master-0 kubenswrapper[4155]: I0216 16:59:56.731747 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 16 16:59:56.736751 master-0 
kubenswrapper[4155]: I0216 16:59:56.736630 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 16:59:56.738974 master-0 kubenswrapper[4155]: I0216 16:59:56.738797 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 16:59:56.740000 master-0 kubenswrapper[4155]: I0216 16:59:56.739427 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 16:59:56.816585 master-0 kubenswrapper[4155]: I0216 16:59:56.816541 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 16:59:56.816722 master-0 kubenswrapper[4155]: I0216 16:59:56.816582 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.816722 master-0 kubenswrapper[4155]: I0216 16:59:56.816612 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.816722 master-0 kubenswrapper[4155]: I0216 16:59:56.816629 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:56.816722 master-0 kubenswrapper[4155]: I0216 16:59:56.816649 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.816722 master-0 kubenswrapper[4155]: I0216 16:59:56.816688 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:56.816852 master-0 kubenswrapper[4155]: I0216 16:59:56.816756 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7w67\" (UniqueName: 
\"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:56.816852 master-0 kubenswrapper[4155]: I0216 16:59:56.816819 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:56.816852 master-0 kubenswrapper[4155]: I0216 16:59:56.816844 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:56.816952 master-0 kubenswrapper[4155]: I0216 16:59:56.816866 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.816952 master-0 kubenswrapper[4155]: I0216 16:59:56.816886 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:56.816952 master-0 kubenswrapper[4155]: I0216 16:59:56.816906 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:56.816952 master-0 kubenswrapper[4155]: I0216 16:59:56.816942 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:56.817048 master-0 kubenswrapper[4155]: I0216 16:59:56.816981 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:56.817048 master-0 
kubenswrapper[4155]: I0216 16:59:56.816999 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:56.817048 master-0 kubenswrapper[4155]: I0216 16:59:56.817015 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:56.817048 master-0 kubenswrapper[4155]: I0216 16:59:56.817031 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:56.817140 master-0 kubenswrapper[4155]: I0216 16:59:56.817050 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:56.817140 master-0 kubenswrapper[4155]: I0216 16:59:56.817082 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.817140 master-0 kubenswrapper[4155]: I0216 16:59:56.817116 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:56.817222 master-0 kubenswrapper[4155]: I0216 16:59:56.817144 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 16:59:56.817222 master-0 kubenswrapper[4155]: I0216 16:59:56.817173 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") 
pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.817222 master-0 kubenswrapper[4155]: I0216 16:59:56.817210 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:56.817296 master-0 kubenswrapper[4155]: I0216 16:59:56.817243 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.817296 master-0 kubenswrapper[4155]: I0216 16:59:56.817263 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 16:59:56.817348 master-0 kubenswrapper[4155]: I0216 16:59:56.817281 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:56.817348 master-0 kubenswrapper[4155]: I0216 16:59:56.817318 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.817406 master-0 kubenswrapper[4155]: I0216 16:59:56.817351 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:56.817406 master-0 kubenswrapper[4155]: I0216 16:59:56.817385 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 16:59:56.817459 master-0 kubenswrapper[4155]: I0216 
16:59:56.817424 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 16:59:56.817486 master-0 kubenswrapper[4155]: I0216 16:59:56.817455 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.817486 master-0 kubenswrapper[4155]: I0216 16:59:56.817479 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.817608 master-0 kubenswrapper[4155]: I0216 16:59:56.817502 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:56.817608 master-0 kubenswrapper[4155]: I0216 16:59:56.817529 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.817608 master-0 kubenswrapper[4155]: I0216 16:59:56.817546 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.817608 master-0 kubenswrapper[4155]: I0216 16:59:56.817568 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 16:59:56.817608 master-0 kubenswrapper[4155]: I0216 16:59:56.817593 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod 
\"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:56.817730 master-0 kubenswrapper[4155]: I0216 16:59:56.817609 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.817730 master-0 kubenswrapper[4155]: I0216 16:59:56.817653 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:56.817730 master-0 kubenswrapper[4155]: I0216 16:59:56.817670 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.817730 master-0 kubenswrapper[4155]: I0216 16:59:56.817700 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:56.817830 master-0 kubenswrapper[4155]: I0216 16:59:56.817749 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:56.817830 master-0 kubenswrapper[4155]: I0216 16:59:56.817777 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.817830 master-0 kubenswrapper[4155]: I0216 16:59:56.817801 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:56.817907 
master-0 kubenswrapper[4155]: I0216 16:59:56.817827 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.817907 master-0 kubenswrapper[4155]: I0216 16:59:56.817892 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 16:59:56.817980 master-0 kubenswrapper[4155]: I0216 16:59:56.817938 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.817980 master-0 kubenswrapper[4155]: I0216 16:59:56.817969 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:56.818030 master-0 kubenswrapper[4155]: I0216 16:59:56.817995 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.818030 master-0 kubenswrapper[4155]: I0216 16:59:56.818020 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.818084 master-0 kubenswrapper[4155]: I0216 16:59:56.818046 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.818084 master-0 kubenswrapper[4155]: I0216 16:59:56.818072 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: 
\"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.818134 master-0 kubenswrapper[4155]: I0216 16:59:56.818098 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:56.818345 master-0 kubenswrapper[4155]: I0216 16:59:56.818314 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:56.818382 master-0 kubenswrapper[4155]: I0216 16:59:56.818356 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:56.818409 master-0 kubenswrapper[4155]: I0216 16:59:56.818387 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:56.818435 master-0 kubenswrapper[4155]: I0216 16:59:56.818412 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:56.818461 master-0 kubenswrapper[4155]: I0216 16:59:56.818439 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.818489 master-0 kubenswrapper[4155]: I0216 16:59:56.818467 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:56.818518 master-0 kubenswrapper[4155]: I0216 16:59:56.818497 4155 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:56.818578 master-0 kubenswrapper[4155]: I0216 16:59:56.818544 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:56.818611 master-0 kubenswrapper[4155]: I0216 16:59:56.818584 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:56.818640 master-0 kubenswrapper[4155]: I0216 16:59:56.818611 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.919967 master-0 kubenswrapper[4155]: I0216 16:59:56.919882 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.919967 master-0 kubenswrapper[4155]: I0216 16:59:56.919954 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:56.920187 master-0 kubenswrapper[4155]: I0216 16:59:56.920160 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 16:59:56.920224 master-0 kubenswrapper[4155]: I0216 16:59:56.920205 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.920417 master-0 kubenswrapper[4155]: I0216 16:59:56.920354 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:56.920417 master-0 kubenswrapper[4155]: I0216 16:59:56.920389 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.920417 master-0 kubenswrapper[4155]: I0216 16:59:56.920406 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 16:59:56.920599 master-0 kubenswrapper[4155]: I0216 16:59:56.920427 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.920599 master-0 kubenswrapper[4155]: I0216 16:59:56.920444 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:56.920599 master-0 kubenswrapper[4155]: I0216 16:59:56.920462 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:56.920708 master-0 kubenswrapper[4155]: I0216 16:59:56.920650 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.920876 master-0 kubenswrapper[4155]: E0216 16:59:56.920835 4155 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 16:59:56.923598 master-0 kubenswrapper[4155]: E0216 16:59:56.923134 4155 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.423093045 +0000 UTC m=+101.762146599 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 16:59:56.923598 master-0 kubenswrapper[4155]: E0216 16:59:56.923332 4155 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 16:59:56.923598 master-0 kubenswrapper[4155]: E0216 16:59:56.923382 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.423362712 +0000 UTC m=+101.762416216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 16:59:56.923598 master-0 kubenswrapper[4155]: I0216 16:59:56.923570 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 16:59:56.923911 master-0 kubenswrapper[4155]: I0216 16:59:56.923675 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 16:59:56.923969 master-0 kubenswrapper[4155]: I0216 16:59:56.923886 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.924085 master-0 kubenswrapper[4155]: I0216 16:59:56.924028 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.925951 master-0 kubenswrapper[4155]: I0216 16:59:56.925374 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:56.925951 master-0 kubenswrapper[4155]: I0216 16:59:56.925491 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.925951 master-0 kubenswrapper[4155]: I0216 16:59:56.925569 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.925951 master-0 kubenswrapper[4155]: I0216 16:59:56.925630 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.925951 master-0 kubenswrapper[4155]: I0216 16:59:56.925640 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:56.925951 master-0 kubenswrapper[4155]: I0216 16:59:56.925699 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.925951 master-0 kubenswrapper[4155]: I0216 16:59:56.925759 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:56.925951 master-0 kubenswrapper[4155]: I0216 16:59:56.925822 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 16:59:56.925951 master-0 kubenswrapper[4155]: I0216 16:59:56.925885 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: 
\"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:56.926345 master-0 kubenswrapper[4155]: I0216 16:59:56.925968 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.926345 master-0 kubenswrapper[4155]: I0216 16:59:56.926030 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:56.926345 master-0 kubenswrapper[4155]: I0216 16:59:56.926092 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.926345 master-0 kubenswrapper[4155]: I0216 16:59:56.926150 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:56.926345 master-0 kubenswrapper[4155]: I0216 16:59:56.926199 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.926527 master-0 kubenswrapper[4155]: I0216 16:59:56.926344 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 16:59:56.926527 master-0 kubenswrapper[4155]: I0216 16:59:56.926405 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.926527 master-0 kubenswrapper[4155]: I0216 16:59:56.926464 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:56.926527 master-0 kubenswrapper[4155]: I0216 16:59:56.926521 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.926688 master-0 kubenswrapper[4155]: I0216 16:59:56.926581 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.926688 master-0 kubenswrapper[4155]: I0216 16:59:56.926639 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.926766 master-0 kubenswrapper[4155]: I0216 16:59:56.926705 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:56.926813 master-0 kubenswrapper[4155]: I0216 16:59:56.926764 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:56.926850 master-0 kubenswrapper[4155]: I0216 16:59:56.926825 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:56.927248 master-0 kubenswrapper[4155]: I0216 16:59:56.926874 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.927248 master-0 kubenswrapper[4155]: I0216 
16:59:56.927017 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.927248 master-0 kubenswrapper[4155]: I0216 16:59:56.927020 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:56.927248 master-0 kubenswrapper[4155]: I0216 16:59:56.927090 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:56.927248 master-0 kubenswrapper[4155]: I0216 16:59:56.927158 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:56.927248 master-0 kubenswrapper[4155]: I0216 16:59:56.927233 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:56.927593 master-0 kubenswrapper[4155]: I0216 16:59:56.927297 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:56.927593 master-0 kubenswrapper[4155]: I0216 16:59:56.927357 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:56.927593 master-0 kubenswrapper[4155]: I0216 16:59:56.927419 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:56.927593 master-0 kubenswrapper[4155]: I0216 16:59:56.927477 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.927962 master-0 kubenswrapper[4155]: I0216 16:59:56.927934 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:56.928356 master-0 kubenswrapper[4155]: I0216 16:59:56.928272 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.928356 master-0 kubenswrapper[4155]: I0216 16:59:56.928325 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:56.928356 master-0 kubenswrapper[4155]: I0216 16:59:56.928286 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:56.928356 master-0 kubenswrapper[4155]: E0216 16:59:56.928350 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 16:59:56.928487 master-0 kubenswrapper[4155]: E0216 16:59:56.928406 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.428385379 +0000 UTC m=+101.767438883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "performance-addon-operator-webhook-cert" not found Feb 16 16:59:56.928487 master-0 kubenswrapper[4155]: E0216 16:59:56.928421 4155 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 16:59:56.928487 master-0 kubenswrapper[4155]: E0216 16:59:56.928464 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.428450781 +0000 UTC m=+101.767504285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 16:59:56.929257 master-0 kubenswrapper[4155]: I0216 16:59:56.928670 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.929257 master-0 kubenswrapper[4155]: E0216 16:59:56.928758 4155 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 16:59:56.929257 master-0 kubenswrapper[4155]: E0216 16:59:56.928799 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.42878235 +0000 UTC m=+101.767835854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 16:59:56.929257 master-0 kubenswrapper[4155]: I0216 16:59:56.928881 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.929257 master-0 kubenswrapper[4155]: E0216 16:59:56.928979 4155 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 16:59:56.929257 master-0 kubenswrapper[4155]: E0216 16:59:56.929191 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. 
No retries permitted until 2026-02-16 16:59:57.429003366 +0000 UTC m=+101.768056870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 16:59:56.929257 master-0 kubenswrapper[4155]: I0216 16:59:56.929207 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:56.930457 master-0 kubenswrapper[4155]: I0216 16:59:56.929390 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 16:59:56.930457 master-0 kubenswrapper[4155]: I0216 16:59:56.930053 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.930457 master-0 kubenswrapper[4155]: E0216 16:59:56.930176 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 16:59:56.930607 master-0 kubenswrapper[4155]: I0216 16:59:56.930546 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:56.930649 master-0 kubenswrapper[4155]: I0216 16:59:56.930623 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:56.930649 master-0 kubenswrapper[4155]: E0216 16:59:56.930642 4155 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 16:59:56.930904 master-0 kubenswrapper[4155]: I0216 16:59:56.930756 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.930904 master-0 kubenswrapper[4155]: E0216 
16:59:56.930828 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.430798755 +0000 UTC m=+101.769852269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "node-tuning-operator-tls" not found Feb 16 16:59:56.930904 master-0 kubenswrapper[4155]: E0216 16:59:56.930854 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.430843246 +0000 UTC m=+101.769896750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 16:59:56.931480 master-0 kubenswrapper[4155]: I0216 16:59:56.931445 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:56.932004 master-0 kubenswrapper[4155]: I0216 16:59:56.931531 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.932004 master-0 kubenswrapper[4155]: E0216 16:59:56.931603 4155 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 16:59:56.932106 master-0 kubenswrapper[4155]: E0216 16:59:56.932039 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 16:59:57.432004328 +0000 UTC m=+101.771057832 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 16:59:56.934483 master-0 kubenswrapper[4155]: I0216 16:59:56.931659 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:56.934483 master-0 kubenswrapper[4155]: I0216 16:59:56.934471 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:56.934589 master-0 kubenswrapper[4155]: I0216 16:59:56.934441 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.934633 master-0 kubenswrapper[4155]: I0216 16:59:56.934598 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:56.934757 master-0 kubenswrapper[4155]: I0216 16:59:56.934704 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:56.934757 master-0 kubenswrapper[4155]: I0216 16:59:56.934741 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:56.934838 master-0 kubenswrapper[4155]: I0216 16:59:56.934775 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:56.934838 master-0 kubenswrapper[4155]: I0216 16:59:56.934794 4155 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:56.934838 master-0 kubenswrapper[4155]: I0216 16:59:56.934812 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.934838 master-0 kubenswrapper[4155]: I0216 16:59:56.934828 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:56.934998 master-0 kubenswrapper[4155]: I0216 16:59:56.934848 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:56.935272 master-0 kubenswrapper[4155]: I0216 16:59:56.935193 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:56.935272 master-0 kubenswrapper[4155]: I0216 16:59:56.935231 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:56.935362 master-0 kubenswrapper[4155]: I0216 16:59:56.935300 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.935707 master-0 kubenswrapper[4155]: I0216 16:59:56.935595 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:56.935707 master-0 kubenswrapper[4155]: I0216 16:59:56.935651 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:56.935707 master-0 kubenswrapper[4155]: I0216 16:59:56.935679 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:56.935707 master-0 kubenswrapper[4155]: I0216 16:59:56.935700 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:56.936013 master-0 kubenswrapper[4155]: I0216 16:59:56.935717 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:56.936013 master-0 kubenswrapper[4155]: I0216 16:59:56.935736 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:56.936013 master-0 kubenswrapper[4155]: I0216 16:59:56.935756 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:56.936013 master-0 kubenswrapper[4155]: I0216 16:59:56.935798 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:56.936013 master-0 kubenswrapper[4155]: I0216 16:59:56.935796 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:56.936582 master-0 kubenswrapper[4155]: I0216 16:59:56.936190 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:56.936582 master-0 kubenswrapper[4155]: I0216 16:59:56.936332 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:56.937110 master-0 kubenswrapper[4155]: I0216 16:59:56.937007 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:56.938059 master-0 kubenswrapper[4155]: I0216 16:59:56.937396 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:56.938511 master-0 kubenswrapper[4155]: I0216 16:59:56.938284 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:56.938511 master-0 kubenswrapper[4155]: I0216 16:59:56.938444 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:56.939159 master-0 kubenswrapper[4155]: I0216 16:59:56.938913 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:56.939379 master-0 kubenswrapper[4155]: I0216 16:59:56.939286 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:56.939642 master-0 kubenswrapper[4155]: I0216 16:59:56.939603 4155 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:56.942376 master-0 kubenswrapper[4155]: I0216 16:59:56.942264 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:57.394463 master-0 kubenswrapper[4155]: I0216 16:59:57.394401 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-flr86"] Feb 16 16:59:57.396663 master-0 kubenswrapper[4155]: I0216 16:59:57.396593 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.399222 master-0 kubenswrapper[4155]: I0216 16:59:57.399173 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 16:59:57.426611 master-0 kubenswrapper[4155]: I0216 16:59:57.426524 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 16:59:57.442350 master-0 kubenswrapper[4155]: I0216 16:59:57.442094 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:57.442350 master-0 kubenswrapper[4155]: I0216 16:59:57.442340 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442350 master-0 kubenswrapper[4155]: I0216 16:59:57.442359 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442378 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442396 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442415 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442439 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442454 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442470 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442491 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442512 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442530 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442550 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 
master-0 kubenswrapper[4155]: I0216 16:59:57.442573 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442589 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442630 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442671 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442689 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 16:59:57.442886 master-0 kubenswrapper[4155]: I0216 16:59:57.442704 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442718 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442734 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442749 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442781 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442812 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442839 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442861 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442897 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442954 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: I0216 16:59:57.442976 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: E0216 16:59:57.443116 4155 secret.go:189] Couldn't get secret 
openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: E0216 16:59:57.443154 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.443139934 +0000 UTC m=+102.782193438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: E0216 16:59:57.443245 4155 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: E0216 16:59:57.443457 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.443449173 +0000 UTC m=+102.782502677 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 16:59:57.444825 master-0 kubenswrapper[4155]: E0216 16:59:57.443598 4155 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443617 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.443611387 +0000 UTC m=+102.782664891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443657 4155 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443674 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.443668009 +0000 UTC m=+102.782721513 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443731 4155 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443749 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.443743981 +0000 UTC m=+102.782797485 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443781 4155 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443799 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.443794772 +0000 UTC m=+102.782848276 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: I0216 16:59:57.443820 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-czzz2"] Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443836 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443857 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.443850474 +0000 UTC m=+102.782903988 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "node-tuning-operator-tls" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443907 4155 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.443957 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.443948606 +0000 UTC m=+102.783002130 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.444008 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: E0216 16:59:57.444031 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 16:59:58.444023779 +0000 UTC m=+102.783077293 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "performance-addon-operator-webhook-cert" not found Feb 16 16:59:57.446240 master-0 kubenswrapper[4155]: I0216 16:59:57.444334 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:57.447356 master-0 kubenswrapper[4155]: I0216 16:59:57.446373 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 16:59:57.544032 master-0 kubenswrapper[4155]: I0216 16:59:57.543898 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544032 master-0 kubenswrapper[4155]: I0216 16:59:57.544000 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544032 master-0 kubenswrapper[4155]: I0216 16:59:57.544036 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544304 master-0 kubenswrapper[4155]: I0216 16:59:57.544072 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544304 master-0 kubenswrapper[4155]: I0216 16:59:57.544161 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544304 master-0 kubenswrapper[4155]: I0216 16:59:57.544204 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544304 master-0 kubenswrapper[4155]: I0216 16:59:57.544272 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:57.544304 master-0 kubenswrapper[4155]: I0216 16:59:57.544281 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544426 master-0 kubenswrapper[4155]: I0216 16:59:57.544373 
4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544456 master-0 kubenswrapper[4155]: I0216 16:59:57.544421 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544456 master-0 kubenswrapper[4155]: I0216 16:59:57.544444 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544509 master-0 kubenswrapper[4155]: I0216 16:59:57.544480 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544509 master-0 kubenswrapper[4155]: I0216 16:59:57.544499 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544558 master-0 kubenswrapper[4155]: I0216 16:59:57.544522 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544558 master-0 kubenswrapper[4155]: I0216 16:59:57.544533 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544607 master-0 kubenswrapper[4155]: I0216 16:59:57.544581 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544607 master-0 kubenswrapper[4155]: I0216 16:59:57.544580 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544781 master-0 kubenswrapper[4155]: I0216 16:59:57.544749 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544824 master-0 kubenswrapper[4155]: I0216 16:59:57.544791 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544824 master-0 kubenswrapper[4155]: I0216 16:59:57.544797 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544906 master-0 kubenswrapper[4155]: I0216 16:59:57.544813 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:57.544906 master-0 kubenswrapper[4155]: I0216 16:59:57.544891 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.544906 master-0 kubenswrapper[4155]: I0216 16:59:57.544891 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545027 master-0 kubenswrapper[4155]: I0216 16:59:57.544944 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545027 master-0 kubenswrapper[4155]: I0216 16:59:57.544951 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545027 master-0 kubenswrapper[4155]: I0216 16:59:57.544961 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: 
\"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545027 master-0 kubenswrapper[4155]: I0216 16:59:57.544983 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545027 master-0 kubenswrapper[4155]: I0216 16:59:57.545020 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545147 master-0 kubenswrapper[4155]: I0216 16:59:57.545040 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545187 master-0 kubenswrapper[4155]: I0216 16:59:57.545108 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545187 master-0 kubenswrapper[4155]: I0216 16:59:57.545174 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545257 master-0 kubenswrapper[4155]: I0216 16:59:57.545210 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545257 master-0 kubenswrapper[4155]: I0216 16:59:57.545188 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545257 master-0 kubenswrapper[4155]: I0216 16:59:57.545226 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545257 master-0 kubenswrapper[4155]: I0216 16:59:57.545253 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545387 master-0 kubenswrapper[4155]: I0216 16:59:57.545280 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:57.545387 master-0 kubenswrapper[4155]: I0216 16:59:57.545317 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545387 master-0 kubenswrapper[4155]: I0216 16:59:57.545355 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545387 master-0 kubenswrapper[4155]: I0216 16:59:57.545368 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545518 master-0 kubenswrapper[4155]: I0216 16:59:57.545430 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.545597 master-0 kubenswrapper[4155]: I0216 16:59:57.545573 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.547195 master-0 kubenswrapper[4155]: I0216 16:59:57.547157 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:57.646232 master-0 kubenswrapper[4155]: I0216 16:59:57.646038 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:57.646398 master-0 kubenswrapper[4155]: I0216 16:59:57.646257 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: 
\"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:57.646441 master-0 kubenswrapper[4155]: I0216 16:59:57.646411 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:57.646475 master-0 kubenswrapper[4155]: I0216 16:59:57.646442 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:57.647200 master-0 kubenswrapper[4155]: I0216 16:59:57.647170 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:58.218969 master-0 kubenswrapper[4155]: I0216 16:59:58.211738 4155 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xsclm"] Feb 16 16:59:58.218969 master-0 kubenswrapper[4155]: I0216 16:59:58.216664 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:58.218969 master-0 kubenswrapper[4155]: I0216 16:59:58.218122 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 16:59:58.219434 master-0 kubenswrapper[4155]: I0216 16:59:58.219128 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.220684 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.221406 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dptnc\" (UniqueName: 
\"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.222367 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.222905 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.223709 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.226722 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.227796 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.230607 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.230757 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:58.234617 master-0 
kubenswrapper[4155]: I0216 16:59:58.230765 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.231041 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.232107 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.232112 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 16:59:58.234617 master-0 kubenswrapper[4155]: I0216 16:59:58.232160 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:58.236045 master-0 kubenswrapper[4155]: I0216 16:59:58.235428 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:58.236863 master-0 kubenswrapper[4155]: I0216 16:59:58.236801 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:58.236863 master-0 kubenswrapper[4155]: I0216 16:59:58.236842 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:58.238599 master-0 
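
[Annotation] The run of kube-api-access-* mounts above (continuing just below) are the projected service-account volumes every pod gets: each combines a bound token from the TokenRequest API, the cluster CA bundle, and the pod's namespace into a single mount at the conventional in-container path. A sketch of the resulting layout — the path and file names are the standard ones; the assembly code itself is illustrative:

    package main

    import "fmt"

    // The standard projected service-account volume surfaces three files
    // under this well-known path inside the container.
    const mountPath = "/var/run/secrets/kubernetes.io/serviceaccount"

    func main() {
        sources := map[string]string{
            "token":     "bound ServiceAccount token (TokenRequest, pod-scoped, auto-rotated)",
            "ca.crt":    "cluster CA bundle (from the kube-root-ca.crt ConfigMap)",
            "namespace": "pod namespace (downward API)",
        }
        for name, origin := range sources {
            fmt.Printf("%s/%s <- %s\n", mountPath, name, origin)
        }
    }
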
kubenswrapper[4155]: I0216 16:59:58.238516 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:58.248104 master-0 kubenswrapper[4155]: I0216 16:59:58.248038 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:58.263952 master-0 kubenswrapper[4155]: I0216 16:59:58.263878 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:58.270658 master-0 kubenswrapper[4155]: I0216 16:59:58.270618 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:58.277353 master-0 kubenswrapper[4155]: I0216 16:59:58.277321 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:58.283538 master-0 kubenswrapper[4155]: I0216 16:59:58.283406 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:58.308328 master-0 kubenswrapper[4155]: I0216 16:59:58.308260 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:58.314945 master-0 kubenswrapper[4155]: I0216 16:59:58.314581 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:58.333296 master-0 kubenswrapper[4155]: I0216 16:59:58.333251 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:58.337277 master-0 kubenswrapper[4155]: I0216 16:59:58.337235 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 16:59:58.344563 master-0 kubenswrapper[4155]: I0216 16:59:58.344504 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:58.350187 master-0 kubenswrapper[4155]: I0216 16:59:58.349807 4155 util.go:30] "No sandbox for pod can be found. 
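
[Annotation] "SyncLoop ADD" delivers new pods from the API server, and the repeated "No sandbox for pod can be found. Need to start a new one" lines are the container runtime manager deciding each pod needs a fresh sandbox (the CRI RunPodSandbox call) before any containers can start; note the matching "SyncLoop DELETE" above for the replaced ovnkube-node-xsclm. A toy version of that per-sync decision, with invented types and example container names:

    package main

    import "fmt"

    // Invented types: a stripped-down version of the "does this pod need
    // a new sandbox?" decision the kubelet makes on every pod sync.
    type podStatus struct {
        sandboxReady      bool
        runningContainers []string
    }

    type actions struct {
        createSandbox     bool
        containersToStart []string
    }

    func computeActions(status podStatus, want []string) actions {
        if !status.sandboxReady {
            // No usable sandbox: create one, then start everything.
            return actions{createSandbox: true, containersToStart: want}
        }
        running := map[string]bool{}
        for _, c := range status.runningContainers {
            running[c] = true
        }
        var start []string
        for _, c := range want {
            if !running[c] {
                start = append(start, c)
            }
        }
        return actions{containersToStart: start}
    }

    func main() {
        a := computeActions(podStatus{}, []string{"ovnkube-controller", "ovn-acl-logging"})
        fmt.Printf("createSandbox=%v start=%v\n", a.createSandbox, a.containersToStart)
    }
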
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: I0216 16:59:58.456462 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: I0216 16:59:58.456567 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: I0216 16:59:58.456601 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: I0216 16:59:58.456632 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: I0216 16:59:58.456661 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: I0216 16:59:58.456685 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: I0216 16:59:58.456732 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: I0216 16:59:58.456755 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: I0216 16:59:58.456782 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: E0216 16:59:58.456905 4155 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: E0216 16:59:58.456977 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:00.456959619 +0000 UTC m=+104.796013123 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: E0216 16:59:58.457034 4155 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: E0216 16:59:58.457061 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:00:00.457052822 +0000 UTC m=+104.796106326 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: E0216 16:59:58.457106 4155 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 16:59:58.460608 master-0 kubenswrapper[4155]: E0216 16:59:58.457130 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:00:00.457121914 +0000 UTC m=+104.796175418 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457176 4155 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457199 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:00.457191796 +0000 UTC m=+104.796245300 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457242 4155 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457268 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:00.457260058 +0000 UTC m=+104.796313572 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457314 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457340 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:00.457332379 +0000 UTC m=+104.796385883 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "node-tuning-operator-tls" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457385 4155 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457409 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:00:00.457401511 +0000 UTC m=+104.796455015 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457469 4155 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457491 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:00.457484224 +0000 UTC m=+104.796537728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457535 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 16:59:58.462662 master-0 kubenswrapper[4155]: E0216 16:59:58.457558 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:00.457551015 +0000 UTC m=+104.796604519 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "performance-addon-operator-webhook-cert" not found Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: E0216 16:59:58.465119 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa" Netns:"/var/run/netns/bba907b7-1ee2-456e-adb0-906b6dc4cffd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: E0216 16:59:58.465204 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa" Netns:"/var/run/netns/bba907b7-1ee2-456e-adb0-906b6dc4cffd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580" Path:"" ERRORED: error configuring pod 
[openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: > pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: E0216 16:59:58.465228 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa" Netns:"/var/run/netns/bba907b7-1ee2-456e-adb0-906b6dc4cffd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.465442 master-0 kubenswrapper[4155]: > pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 16:59:58.466614 master-0 kubenswrapper[4155]: E0216 16:59:58.465291 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator(d020c902-2adb-4919-8dd9-0c2109830580)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator(d020c902-2adb-4919-8dd9-0c2109830580)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa\\\" Netns:\\\"/var/run/netns/bba907b7-1ee2-456e-adb0-906b6dc4cffd\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=8002491fda0e247fdf2662be7a07345fe5a59d610a6631e93ba98e8dd873d8fa;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: E0216 16:59:58.470930 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e" Netns:"/var/run/netns/e5a96e5f-0b0f-4350-9658-36fcc994f6a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40" Path:"" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to 
network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: E0216 16:59:58.471004 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e" Netns:"/var/run/netns/e5a96e5f-0b0f-4350-9658-36fcc994f6a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40" Path:"" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: > pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: E0216 16:59:58.471025 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e" 
Netns:"/var/run/netns/e5a96e5f-0b0f-4350-9658-36fcc994f6a3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40" Path:"" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.473480 master-0 kubenswrapper[4155]: > pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 16:59:58.473882 master-0 kubenswrapper[4155]: E0216 16:59:58.471084 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator(e69d8c51-e2a6-4f61-9c26-072784f6cf40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator(e69d8c51-e2a6-4f61-9c26-072784f6cf40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e\\\" Netns:\\\"/var/run/netns/e5a96e5f-0b0f-4350-9658-36fcc994f6a3\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=67e1bae23319139d7024954eb4999d70d8015fe9f4aa752c730c783c7a2e4f0e;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 16:59:58.473882 master-0 kubenswrapper[4155]: E0216 16:59:58.472777 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.473882 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe" Netns:"/var/run/netns/1aa4ba52-cd68-418d-a567-84c6be05a263" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Path:"" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.473882 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.473882 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.473882 master-0 kubenswrapper[4155]: E0216 16:59:58.472811 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.473882 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe" Netns:"/var/run/netns/1aa4ba52-cd68-418d-a567-84c6be05a263" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Path:"" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.473882 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.474175 master-0 kubenswrapper[4155]: > pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:58.474175 master-0 kubenswrapper[4155]: E0216 16:59:58.472830 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.474175 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe" Netns:"/var/run/netns/1aa4ba52-cd68-418d-a567-84c6be05a263" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Path:"" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.474175 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.474175 master-0 kubenswrapper[4155]: > pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 16:59:58.474175 master-0 kubenswrapper[4155]: E0216 16:59:58.472883 4155 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"authentication-operator-755d954778-lf4cb_openshift-authentication-operator(9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"authentication-operator-755d954778-lf4cb_openshift-authentication-operator(9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe\\\" Netns:\\\"/var/run/netns/1aa4ba52-cd68-418d-a567-84c6be05a263\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=7d93cb4502ffa248022429ed843e52a6dda3fa68352e9ee52370209f6d1530fe;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 16:59:58.483988 master-0 kubenswrapper[4155]: E0216 16:59:58.483897 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.483988 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e" Netns:"/var/run/netns/7437ea00-14a5-4896-a312-ed4facd15119" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4" Path:"" ERRORED: error configuring pod 
[openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.483988 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.483988 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.484405 master-0 kubenswrapper[4155]: E0216 16:59:58.484007 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.484405 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e" Netns:"/var/run/netns/7437ea00-14a5-4896-a312-ed4facd15119" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4" Path:"" ERRORED: error configuring pod [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.484405 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.484405 master-0 kubenswrapper[4155]: > pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:58.484405 master-0 kubenswrapper[4155]: E0216 16:59:58.484036 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.484405 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e" Netns:"/var/run/netns/7437ea00-14a5-4896-a312-ed4facd15119" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4" Path:"" ERRORED: error configuring pod [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.484405 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.484405 master-0 kubenswrapper[4155]: > pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 16:59:58.484711 master-0 kubenswrapper[4155]: E0216 16:59:58.484095 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator(29402454-a920-471e-895e-764235d16eb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator(29402454-a920-471e-895e-764235d16eb4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e\\\" Netns:\\\"/var/run/netns/7437ea00-14a5-4896-a312-ed4facd15119\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=a04f7d073c17a4c39225017f51732020490208057d3e3110b685e0b4d5077a0e;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 16:59:58.498823 master-0 kubenswrapper[4155]: E0216 16:59:58.498748 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.498823 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38" Netns:"/var/run/netns/db21bb61-7a5d-45f3-b573-dc8187d9d155" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b" Path:"" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.498823 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.498823 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.498823 master-0 kubenswrapper[4155]: E0216 16:59:58.498814 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.498823 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38" Netns:"/var/run/netns/db21bb61-7a5d-45f3-b573-dc8187d9d155" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b" Path:"" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.498823 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.498823 master-0 kubenswrapper[4155]: > pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:58.499077 master-0 kubenswrapper[4155]: E0216 16:59:58.498832 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.499077 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38" Netns:"/var/run/netns/db21bb61-7a5d-45f3-b573-dc8187d9d155" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b" Path:"" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.499077 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.499077 master-0 kubenswrapper[4155]: > pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 16:59:58.499194 master-0 kubenswrapper[4155]: E0216 16:59:58.499122 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator(6b3e071c-1c62-489b-91c1-aef0d197f40b)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator(6b3e071c-1c62-489b-91c1-aef0d197f40b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38\\\" Netns:\\\"/var/run/netns/db21bb61-7a5d-45f3-b573-dc8187d9d155\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=afc0ebee0ce67d7db3b2e28e159fc34ee079fae8c754db466db5ad5cc84c5f38;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 16:59:58.511083 master-0 kubenswrapper[4155]: I0216 16:59:58.510544 4155 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-xsclm"] Feb 16 16:59:58.512134 master-0 kubenswrapper[4155]: E0216 16:59:58.512075 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.512134 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40): error adding pod openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40" Netns:"/var/run/netns/d4891200-80fc-498e-b0a2-148d1fb5cc8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28" Path:"" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: 
[openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.512134 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.512134 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.512265 master-0 kubenswrapper[4155]: E0216 16:59:58.512161 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.512265 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40): error adding pod openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40" Netns:"/var/run/netns/d4891200-80fc-498e-b0a2-148d1fb5cc8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28" Path:"" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.512265 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.512265 master-0 kubenswrapper[4155]: > pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:58.512265 master-0 kubenswrapper[4155]: E0216 16:59:58.512198 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.512265 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40): error adding pod openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40" Netns:"/var/run/netns/d4891200-80fc-498e-b0a2-148d1fb5cc8b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28" Path:"" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.512265 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.512265 master-0 kubenswrapper[4155]: > pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 16:59:58.512519 master-0 kubenswrapper[4155]: E0216 16:59:58.512256 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator(737fcc7d-d850-4352-9f17-383c85d5bc28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator(737fcc7d-d850-4352-9f17-383c85d5bc28)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40): error adding pod openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40\\\" Netns:\\\"/var/run/netns/d4891200-80fc-498e-b0a2-148d1fb5cc8b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=9ad6258e251e3dbbaadec002e826387d150df6d5da093f60a971bd84fcd05e40;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 16:59:58.519748 master-0 kubenswrapper[4155]: I0216 16:59:58.519704 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:58.520195 master-0 kubenswrapper[4155]: I0216 16:59:58.520166 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:58.522675 master-0 kubenswrapper[4155]: I0216 16:59:58.522005 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:58.532159 master-0 kubenswrapper[4155]: I0216 16:59:58.532101 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 16:59:58.533459 master-0 kubenswrapper[4155]: E0216 16:59:58.533379 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.533459 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442" Netns:"/var/run/netns/8996bbae-c1b9-4d59-a8e8-15086fa1b425" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.533459 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.533459 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.533587 master-0 kubenswrapper[4155]: E0216 16:59:58.533493 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.533587 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442" Netns:"/var/run/netns/8996bbae-c1b9-4d59-a8e8-15086fa1b425" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: 
[openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.533587 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.533587 master-0 kubenswrapper[4155]: > pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:58.533587 master-0 kubenswrapper[4155]: E0216 16:59:58.533524 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.533587 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442" Netns:"/var/run/netns/8996bbae-c1b9-4d59-a8e8-15086fa1b425" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.533587 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.533587 master-0 kubenswrapper[4155]: > pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 16:59:58.533784 master-0 kubenswrapper[4155]: E0216 16:59:58.533605 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator(eaf7edff-0a89-4ac0-b9dd-511e098b5434)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator(eaf7edff-0a89-4ac0-b9dd-511e098b5434)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442\\\" Netns:\\\"/var/run/netns/8996bbae-c1b9-4d59-a8e8-15086fa1b425\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=2bc3babcbb6f126937892bc8065ace4c78159ec11b8ed57951ba23b509746442;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 16:59:58.535514 master-0 kubenswrapper[4155]: E0216 16:59:58.535464 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.535514 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778" Netns:"/var/run/netns/b2be8708-dd7f-47c4-b615-ab54fdd9a6af" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: 
[openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.535514 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.535514 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.535630 master-0 kubenswrapper[4155]: E0216 16:59:58.535546 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.535630 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778" Netns:"/var/run/netns/b2be8708-dd7f-47c4-b615-ab54fdd9a6af" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.535630 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.535630 master-0 kubenswrapper[4155]: > pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 16:59:58.535630 master-0 kubenswrapper[4155]: E0216 16:59:58.535571 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.535630 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778): error adding pod 
openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778" Netns:"/var/run/netns/b2be8708-dd7f-47c4-b615-ab54fdd9a6af" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.535630 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.535630 master-0 kubenswrapper[4155]: > pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 16:59:58.535839 master-0 kubenswrapper[4155]: E0216 16:59:58.535640 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator(970d4376-f299-412c-a8ee-90aa980c689e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator(970d4376-f299-412c-a8ee-90aa980c689e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778\\\" Netns:\\\"/var/run/netns/b2be8708-dd7f-47c4-b615-ab54fdd9a6af\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=fc1ea8e89ae816a7e9621632d7327d3181ca587da55bd33150d371e0426f5778;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": 
failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 16:59:58.545201 master-0 kubenswrapper[4155]: E0216 16:59:58.545070 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.545201 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0" Netns:"/var/run/netns/2180ed14-522f-46d7-8ba0-4349a862d134" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605" Path:"" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.545201 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.545201 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.545446 master-0 kubenswrapper[4155]: E0216 16:59:58.545232 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.545446 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0" Netns:"/var/run/netns/2180ed14-522f-46d7-8ba0-4349a862d134" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605" Path:"" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.545446 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.545446 master-0 kubenswrapper[4155]: > pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:58.545446 master-0 kubenswrapper[4155]: E0216 16:59:58.545262 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.545446 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0" Netns:"/var/run/netns/2180ed14-522f-46d7-8ba0-4349a862d134" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605" Path:"" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.545446 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.545446 master-0 kubenswrapper[4155]: > pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 16:59:58.545741 
master-0 kubenswrapper[4155]: E0216 16:59:58.545344 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator(4e51bba5-0ebe-4e55-a588-38b71548c605)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator(4e51bba5-0ebe-4e55-a588-38b71548c605)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0\\\" Netns:\\\"/var/run/netns/2180ed14-522f-46d7-8ba0-4349a862d134\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ed265d6536a68d81dda61816fa2822ca400dbf17dc0da85b56379df1fdf318f0;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 16:59:58.548667 master-0 kubenswrapper[4155]: E0216 16:59:58.548616 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.548667 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338" Netns:"/var/run/netns/98988e83-c517-4c97-a50c-bd966596e7a4" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Path:"" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.548667 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.548667 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.548905 master-0 kubenswrapper[4155]: E0216 16:59:58.548700 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.548905 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338" Netns:"/var/run/netns/98988e83-c517-4c97-a50c-bd966596e7a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Path:"" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.548905 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.548905 master-0 kubenswrapper[4155]: > pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:58.548905 master-0 
kubenswrapper[4155]: E0216 16:59:58.548723 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.548905 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338" Netns:"/var/run/netns/98988e83-c517-4c97-a50c-bd966596e7a4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Path:"" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.548905 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.548905 master-0 kubenswrapper[4155]: > pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 16:59:58.549181 master-0 kubenswrapper[4155]: E0216 16:59:58.548796 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator(edbaac23-11f0-4bc7-a7ce-b593c774c0fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator(edbaac23-11f0-4bc7-a7ce-b593c774c0fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338\\\" Netns:\\\"/var/run/netns/98988e83-c517-4c97-a50c-bd966596e7a4\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=a2ec0a2826dc82749d055f95ddfb454465724d6d16beaba46a9182d655ce3338;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 16:59:58.613033 master-0 kubenswrapper[4155]: I0216 16:59:58.612963 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 16:59:58.657697 master-0 kubenswrapper[4155]: I0216 16:59:58.657585 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: E0216 16:59:58.710654 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab" Netns:"/var/run/netns/04d428b0-1e00-46ad-8b1a-2b3539456e9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: 
': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: E0216 16:59:58.710733 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab" Netns:"/var/run/netns/04d428b0-1e00-46ad-8b1a-2b3539456e9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: > pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: E0216 16:59:58.710763 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab" 
Netns:"/var/run/netns/04d428b0-1e00-46ad-8b1a-2b3539456e9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.710961 master-0 kubenswrapper[4155]: > pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 16:59:58.711405 master-0 kubenswrapper[4155]: E0216 16:59:58.710825 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator(442600dc-09b2-4fee-9f89-777296b2ee40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator(442600dc-09b2-4fee-9f89-777296b2ee40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab\\\" Netns:\\\"/var/run/netns/04d428b0-1e00-46ad-8b1a-2b3539456e9e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=22c631e15f175386fc7bc2946d5bbdc67e8b6c0891aab2689fbf461e266b07ab;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 16:59:58.715446 master-0 kubenswrapper[4155]: E0216 16:59:58.715395 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 16:59:58.715446 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e" Netns:"/var/run/netns/1101c864-1f7c-4508-ad48-be8e4697efa6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f" Path:"" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 16:59:58.715446 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 16:59:58.715446 master-0 kubenswrapper[4155]: > Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: E0216 16:59:58.715468 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI 
Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: E0216 16:59:58.715468 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e" Netns:"/var/run/netns/1101c864-1f7c-4508-ad48-be8e4697efa6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f" Path:"" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: > pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: E0216 16:59:58.715487 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e" Netns:"/var/run/netns/1101c864-1f7c-4508-ad48-be8e4697efa6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f" Path:"" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 16:59:58.715552 master-0 kubenswrapper[4155]: > pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 16:59:58.715731 master-0 kubenswrapper[4155]: E0216 16:59:58.715547 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator(8e623376-9e14-4341-9dcf-7a7c218b6f9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator(8e623376-9e14-4341-9dcf-7a7c218b6f9f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e\\\" Netns:\\\"/var/run/netns/1101c864-1f7c-4508-ad48-be8e4697efa6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=d44992ae35a95513783a9d6a17bd9ea03ad031e9ede191229d4c06ba4ed7a92e;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f"
Feb 16 16:59:58.760396 master-0 kubenswrapper[4155]: I0216 16:59:58.760326 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l"
Feb 16 16:59:58.760646 master-0 kubenswrapper[4155]: E0216 16:59:58.760601 4155 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Feb 16 16:59:58.760727 master-0 kubenswrapper[4155]: E0216 16:59:58.760707 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 17:01:02.760684982 +0000 UTC m=+167.099738546 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found
Feb 16 16:59:58.784680 master-0 kubenswrapper[4155]: I0216 16:59:58.784636 4155 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9031b9a-ff74-4c8e-8733-2f36900bd05d" path="/var/lib/kubelet/pods/a9031b9a-ff74-4c8e-8733-2f36900bd05d/volumes"
Feb 16 16:59:59.194043 master-0 kubenswrapper[4155]: I0216 16:59:59.193948 4155 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="6a76b7400b08797d8e5d6ecf8b5e5677ebdccdcb8c93451e24cae607d87b5dde" exitCode=0
Feb 16 16:59:59.194043 master-0 kubenswrapper[4155]: I0216 16:59:59.194022 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"6a76b7400b08797d8e5d6ecf8b5e5677ebdccdcb8c93451e24cae607d87b5dde"}
Feb 16 16:59:59.194043 master-0 kubenswrapper[4155]: I0216 16:59:59.194053 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"dadda19ba6587a75b418addc51b36c2e0a7c53f63977a60f6f393649c7b6d587"}
Feb 16 16:59:59.195884 master-0 kubenswrapper[4155]: I0216 16:59:59.195825 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-czzz2" event={"ID":"b3fa6ac1-781f-446c-b6b4-18bdb7723c23","Type":"ContainerStarted","Data":"a51ce5dfcf0ae0215dfb9bb56d30c910b8c1e31cb77a303efaded16db5c0b84f"}
Feb 16 17:00:00.077683 master-0 kubenswrapper[4155]: I0216 17:00:00.077290 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:00:00.081669 master-0 kubenswrapper[4155]: I0216 17:00:00.081603 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:00:00.200343 master-0 kubenswrapper[4155]: I0216 17:00:00.200256 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"bfff95a0d14f0841a22b2fd65881101b798827da455a93e9bb8b076c265fc42a"}
Feb 16 17:00:00.302277 master-0 kubenswrapper[4155]: I0216 17:00:00.302195 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:00:00.470318 master-0 kubenswrapper[4155]: E0216 17:00:00.470209 4155 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:00:00.470318 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-vwvwx_openshift-network-diagnostics_c303189e-adae-4fe2-8dd7-cc9b80f73e66_0(589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca): error adding pod openshift-network-diagnostics_network-check-target-vwvwx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca" Netns:"/var/run/netns/9f55b3f9-65e4-4808-8ab5-f901464b769d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-vwvwx;K8S_POD_INFRA_CONTAINER_ID=589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca;K8S_POD_UID=c303189e-adae-4fe2-8dd7-cc9b80f73e66" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-vwvwx] networking: [openshift-network-diagnostics/network-check-target-vwvwx/c303189e-adae-4fe2-8dd7-cc9b80f73e66:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:00.470318 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:00.470318 master-0 kubenswrapper[4155]: >
Feb 16 17:00:00.470459 master-0 kubenswrapper[4155]: E0216 17:00:00.470368 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:00:00.470459 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-vwvwx_openshift-network-diagnostics_c303189e-adae-4fe2-8dd7-cc9b80f73e66_0(589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca): error adding pod openshift-network-diagnostics_network-check-target-vwvwx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca" Netns:"/var/run/netns/9f55b3f9-65e4-4808-8ab5-f901464b769d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-vwvwx;K8S_POD_INFRA_CONTAINER_ID=589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca;K8S_POD_UID=c303189e-adae-4fe2-8dd7-cc9b80f73e66" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-vwvwx] networking: [openshift-network-diagnostics/network-check-target-vwvwx/c303189e-adae-4fe2-8dd7-cc9b80f73e66:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:00.470459 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:00.470459 master-0 kubenswrapper[4155]: > pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:00:00.470459 master-0 kubenswrapper[4155]: E0216 17:00:00.470393 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:00:00.470459 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-vwvwx_openshift-network-diagnostics_c303189e-adae-4fe2-8dd7-cc9b80f73e66_0(589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca): error adding pod openshift-network-diagnostics_network-check-target-vwvwx to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca" Netns:"/var/run/netns/9f55b3f9-65e4-4808-8ab5-f901464b769d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-vwvwx;K8S_POD_INFRA_CONTAINER_ID=589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca;K8S_POD_UID=c303189e-adae-4fe2-8dd7-cc9b80f73e66" Path:"" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-vwvwx] networking: [openshift-network-diagnostics/network-check-target-vwvwx/c303189e-adae-4fe2-8dd7-cc9b80f73e66:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:00.470459 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:00.470459 master-0 kubenswrapper[4155]: > pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:00:00.470652 master-0 kubenswrapper[4155]: E0216 17:00:00.470477 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"network-check-target-vwvwx_openshift-network-diagnostics(c303189e-adae-4fe2-8dd7-cc9b80f73e66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"network-check-target-vwvwx_openshift-network-diagnostics(c303189e-adae-4fe2-8dd7-cc9b80f73e66)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-vwvwx_openshift-network-diagnostics_c303189e-adae-4fe2-8dd7-cc9b80f73e66_0(589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca): error adding pod openshift-network-diagnostics_network-check-target-vwvwx to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca\\\" Netns:\\\"/var/run/netns/9f55b3f9-65e4-4808-8ab5-f901464b769d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-diagnostics;K8S_POD_NAME=network-check-target-vwvwx;K8S_POD_INFRA_CONTAINER_ID=589e794cc13ac7ae262170034bb7e668792bbef133dc2a18c91dd2dab5091aca;K8S_POD_UID=c303189e-adae-4fe2-8dd7-cc9b80f73e66\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-network-diagnostics/network-check-target-vwvwx] networking: [openshift-network-diagnostics/network-check-target-vwvwx/c303189e-adae-4fe2-8dd7-cc9b80f73e66:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66"
Feb 16 17:00:00.481727 master-0 kubenswrapper[4155]: I0216 17:00:00.481677 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"
Feb 16 17:00:00.481727 master-0 kubenswrapper[4155]: I0216 17:00:00.481728 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:00:00.481934 master-0 kubenswrapper[4155]: I0216 17:00:00.481752 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:00:00.481934 master-0 kubenswrapper[4155]: E0216 17:00:00.481881 4155 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
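Each MountVolume.SetUp failure here is just a missing Secret: the kubelet looks the object up and cannot materialize the volume until the owning operator creates it. A hedged client-go sketch of the same lookup, with the namespace and secret name copied from the entry above; the in-cluster config and the k8s.io/client-go dependency are assumptions of the example, not part of the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same lookup the kubelet performs before it can build a secret
	// volume; while this returns NotFound the mount keeps failing.
	_, err = cs.CoreV1().Secrets("openshift-multus").Get(context.TODO(),
		"multus-admission-controller-secret", metav1.GetOptions{})
	fmt.Println("lookup result:", err)
}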
"{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:00:04.481979348 +0000 UTC m=+108.821032852 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 17:00:00.482061 master-0 kubenswrapper[4155]: I0216 17:00:00.482030 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:00.482094 master-0 kubenswrapper[4155]: I0216 17:00:00.482072 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:00.482094 master-0 kubenswrapper[4155]: I0216 17:00:00.482093 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:00.482266 master-0 kubenswrapper[4155]: E0216 17:00:00.482172 4155 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 17:00:00.482370 master-0 kubenswrapper[4155]: E0216 17:00:00.482284 4155 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:00.482435 master-0 kubenswrapper[4155]: E0216 17:00:00.482415 4155 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:00.482480 master-0 kubenswrapper[4155]: I0216 17:00:00.482395 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:00.482533 master-0 kubenswrapper[4155]: E0216 17:00:00.482419 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:04.482388109 +0000 UTC m=+108.821441603 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 17:00:00.482579 master-0 kubenswrapper[4155]: E0216 17:00:00.482543 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:00:04.482513583 +0000 UTC m=+108.821567087 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:00.482579 master-0 kubenswrapper[4155]: E0216 17:00:00.482560 4155 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:00.482579 master-0 kubenswrapper[4155]: E0216 17:00:00.482558 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:00.482704 master-0 kubenswrapper[4155]: E0216 17:00:00.482564 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:04.482553624 +0000 UTC m=+108.821607128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 17:00:00.482704 master-0 kubenswrapper[4155]: E0216 17:00:00.482625 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 17:00:00.482704 master-0 kubenswrapper[4155]: I0216 17:00:00.482677 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:00.482704 master-0 kubenswrapper[4155]: E0216 17:00:00.482691 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:04.482683847 +0000 UTC m=+108.821737341 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "node-tuning-operator-tls" not found Feb 16 17:00:00.482855 master-0 kubenswrapper[4155]: E0216 17:00:00.482751 4155 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 17:00:00.482855 master-0 kubenswrapper[4155]: I0216 17:00:00.482783 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:00.482980 master-0 kubenswrapper[4155]: E0216 17:00:00.482792 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:04.48277928 +0000 UTC m=+108.821832794 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 17:00:00.482980 master-0 kubenswrapper[4155]: E0216 17:00:00.482951 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:04.482940414 +0000 UTC m=+108.821993918 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 17:00:00.482980 master-0 kubenswrapper[4155]: E0216 17:00:00.482882 4155 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 17:00:00.482980 master-0 kubenswrapper[4155]: E0216 17:00:00.482971 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:04.482960095 +0000 UTC m=+108.822013589 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:00.483109 master-0 kubenswrapper[4155]: E0216 17:00:00.483006 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:00:04.482997576 +0000 UTC m=+108.822051080 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 17:00:01.045209 master-0 kubenswrapper[4155]: I0216 17:00:01.045124 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=5.045099793 podStartE2EDuration="5.045099793s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:00:01.04388433 +0000 UTC m=+105.382937844" watchObservedRunningTime="2026-02-16 17:00:01.045099793 +0000 UTC m=+105.384153297" Feb 16 17:00:01.207991 master-0 kubenswrapper[4155]: I0216 17:00:01.207870 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"6964245d58587f66b762b5ac2d9e1b1dc13364bf0c4f27c746f3f696d56a4d52"} Feb 16 17:00:01.207991 master-0 kubenswrapper[4155]: I0216 17:00:01.207941 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"80a684ef556f1e87f3c9e02305940474602ddfe3de8f6beeb708e0f676fea206"} Feb 16 17:00:01.207991 master-0 kubenswrapper[4155]: I0216 17:00:01.207957 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"75939ba6ce47e33cbb255166206afbbb5bb2eddc8618e626a18427d506fc7a2f"} Feb 16 17:00:01.207991 master-0 kubenswrapper[4155]: I0216 17:00:01.207971 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"21bc05ac92fb28745962add720939690ac3c68281bef41a2c339dfc844b33eb9"} Feb 16 17:00:01.207991 master-0 kubenswrapper[4155]: I0216 17:00:01.207985 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"9c906689512d1a1264797a823e480178e96aca8c88376bbe95cad584cee2c02c"} Feb 16 17:00:04.220907 master-0 kubenswrapper[4155]: I0216 17:00:04.220830 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" 
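The 4s retries above will fail again and come back at 8s (the 17:00:04 entries below). To make the progression visible across a whole capture, a short Go scanner over a saved journal excerpt; the file name is an assumption of the example:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Scan a saved journal excerpt (e.g. `journalctl -u kubelet > kubelet.log`)
// and pull out each volume's retry delay, so the 4s -> 8s -> ... doubling
// is easy to see at a glance.
func main() {
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	re := regexp.MustCompile(`durationBeforeRetry (\S+)\). Error: MountVolume.SetUp failed for volume "([^"]+)"`)
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%-40s retry in %s\n", m[2], m[1])
		}
	}
}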
event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"e9b033b48182246ed491c211e63d13c81386e7e5e19d72d1dd3822fc6dd2d4e4"} Feb 16 17:00:04.526785 master-0 kubenswrapper[4155]: I0216 17:00:04.526656 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:04.527115 master-0 kubenswrapper[4155]: E0216 17:00:04.526912 4155 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 17:00:04.527115 master-0 kubenswrapper[4155]: E0216 17:00:04.527034 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.527012348 +0000 UTC m=+116.866065852 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 17:00:04.527115 master-0 kubenswrapper[4155]: I0216 17:00:04.527035 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:04.527432 master-0 kubenswrapper[4155]: I0216 17:00:04.527136 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:04.527432 master-0 kubenswrapper[4155]: E0216 17:00:04.527198 4155 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:04.527432 master-0 kubenswrapper[4155]: I0216 17:00:04.527216 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:04.527432 master-0 kubenswrapper[4155]: E0216 17:00:04.527249 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.527232144 +0000 UTC m=+116.866285728 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 17:00:04.527432 master-0 kubenswrapper[4155]: I0216 17:00:04.527289 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:04.527432 master-0 kubenswrapper[4155]: I0216 17:00:04.527404 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527467 4155 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527543 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527559 4155 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: I0216 17:00:04.527482 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527614 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527654 4155 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527582 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.527548512 +0000 UTC m=+116.866602056 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527728 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.527702197 +0000 UTC m=+116.866755741 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: I0216 17:00:04.527768 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: I0216 17:00:04.527821 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527868 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.527849691 +0000 UTC m=+116.866903295 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527889 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.527881531 +0000 UTC m=+116.866935035 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "node-tuning-operator-tls" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527899 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.527894702 +0000 UTC m=+116.866948206 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 17:00:04.527991 master-0 kubenswrapper[4155]: E0216 17:00:04.527913 4155 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 17:00:04.528859 master-0 kubenswrapper[4155]: E0216 17:00:04.528046 4155 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 17:00:04.528859 master-0 kubenswrapper[4155]: E0216 17:00:04.528050 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.528022455 +0000 UTC m=+116.867076009 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 17:00:04.528859 master-0 kubenswrapper[4155]: E0216 17:00:04.528123 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:00:12.528108238 +0000 UTC m=+116.867161862 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 17:00:06.231809 master-0 kubenswrapper[4155]: I0216 17:00:06.231402 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-czzz2" event={"ID":"b3fa6ac1-781f-446c-b6b4-18bdb7723c23","Type":"ContainerStarted","Data":"1dd5d8988b37bdb2482d281fd59a39049b27b81843f30e0726690490865aefa6"} Feb 16 17:00:06.236134 master-0 kubenswrapper[4155]: I0216 17:00:06.236067 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"9f614b14cbff08be0e14be8cba5e89de122b81583a34321af46bbe62e5a802b3"} Feb 16 17:00:06.236392 master-0 kubenswrapper[4155]: I0216 17:00:06.236353 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:06.246952 master-0 kubenswrapper[4155]: I0216 17:00:06.246873 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/iptables-alerter-czzz2" podStartSLOduration=6.049636715 podStartE2EDuration="10.246854625s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="2026-02-16 16:59:58.675480648 +0000 UTC m=+103.014534152" lastFinishedPulling="2026-02-16 17:00:02.872698558 +0000 UTC m=+107.211752062" observedRunningTime="2026-02-16 17:00:06.246385742 +0000 UTC m=+110.585439246" watchObservedRunningTime="2026-02-16 17:00:06.246854625 +0000 UTC m=+110.585908129" Feb 16 17:00:06.257312 master-0 kubenswrapper[4155]: I0216 17:00:06.257262 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:06.312800 master-0 kubenswrapper[4155]: I0216 17:00:06.312717 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" podStartSLOduration=10.31269548 podStartE2EDuration="10.31269548s" podCreationTimestamp="2026-02-16 16:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:00:06.312074843 +0000 UTC m=+110.651128357" watchObservedRunningTime="2026-02-16 17:00:06.31269548 +0000 UTC m=+110.651748984" Feb 16 17:00:06.492910 master-0 kubenswrapper[4155]: I0216 17:00:06.492871 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"] Feb 16 17:00:06.495093 master-0 kubenswrapper[4155]: I0216 17:00:06.495027 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-lf4cb"] Feb 16 17:00:06.495261 master-0 kubenswrapper[4155]: I0216 17:00:06.495234 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:06.495812 master-0 kubenswrapper[4155]: I0216 17:00:06.495782 4155 util.go:30] "No sandbox for pod can be found. 
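The pod_startup_latency_tracker entries above are internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window. Reconstructing the iptables-alerter-czzz2 numbers in Go:

package main

import (
	"fmt"
	"time"
)

// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling -
// firstStartedPulling), using the fields logged above.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	first, _ := time.Parse(layout, "2026-02-16 16:59:58.675480648 +0000 UTC")
	last, _ := time.Parse(layout, "2026-02-16 17:00:02.872698558 +0000 UTC")
	e2e := 10246854625 * time.Nanosecond // podStartE2EDuration="10.246854625s"
	fmt.Println(e2e - last.Sub(first))   // 6.049636715s
}

The pull window is 4.19721791s, and 10.246854625s minus that is exactly the logged podStartSLOduration of 6.049636715s.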
Feb 16 17:00:06.495812 master-0 kubenswrapper[4155]: I0216 17:00:06.495782 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:00:06.497342 master-0 kubenswrapper[4155]: I0216 17:00:06.497292 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"]
Feb 16 17:00:06.497996 master-0 kubenswrapper[4155]: I0216 17:00:06.497911 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:00:06.498583 master-0 kubenswrapper[4155]: I0216 17:00:06.498558 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:00:06.501055 master-0 kubenswrapper[4155]: I0216 17:00:06.500997 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"]
Feb 16 17:00:06.504002 master-0 kubenswrapper[4155]: I0216 17:00:06.503279 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"]
Feb 16 17:00:06.504002 master-0 kubenswrapper[4155]: I0216 17:00:06.503450 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:00:06.504002 master-0 kubenswrapper[4155]: I0216 17:00:06.503825 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:00:06.508980 master-0 kubenswrapper[4155]: I0216 17:00:06.508911 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"]
Feb 16 17:00:06.511916 master-0 kubenswrapper[4155]: I0216 17:00:06.511867 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"]
Feb 16 17:00:06.512074 master-0 kubenswrapper[4155]: I0216 17:00:06.512050 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:00:06.512520 master-0 kubenswrapper[4155]: I0216 17:00:06.512495 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:00:06.513202 master-0 kubenswrapper[4155]: I0216 17:00:06.513101 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"]
Feb 16 17:00:06.516566 master-0 kubenswrapper[4155]: I0216 17:00:06.516518 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"]
Feb 16 17:00:06.516725 master-0 kubenswrapper[4155]: I0216 17:00:06.516701 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:00:06.517179 master-0 kubenswrapper[4155]: I0216 17:00:06.517155 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:00:06.517460 master-0 kubenswrapper[4155]: I0216 17:00:06.517432 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"]
Feb 16 17:00:06.521541 master-0 kubenswrapper[4155]: I0216 17:00:06.521472 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-nhxlp"]
Feb 16 17:00:06.523559 master-0 kubenswrapper[4155]: I0216 17:00:06.523520 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"]
Feb 16 17:00:06.523668 master-0 kubenswrapper[4155]: I0216 17:00:06.523638 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:00:06.524109 master-0 kubenswrapper[4155]: I0216 17:00:06.524076 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:00:06.529427 master-0 kubenswrapper[4155]: I0216 17:00:06.529382 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"]
Feb 16 17:00:06.529589 master-0 kubenswrapper[4155]: I0216 17:00:06.529563 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:00:06.529979 master-0 kubenswrapper[4155]: I0216 17:00:06.529956 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:00:06.533732 master-0 kubenswrapper[4155]: I0216 17:00:06.533698 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"]
Feb 16 17:00:06.533911 master-0 kubenswrapper[4155]: I0216 17:00:06.533822 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:00:06.534155 master-0 kubenswrapper[4155]: I0216 17:00:06.534139 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:00:06.536868 master-0 kubenswrapper[4155]: I0216 17:00:06.536847 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"]
Feb 16 17:00:06.536966 master-0 kubenswrapper[4155]: I0216 17:00:06.536878 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"]
Feb 16 17:00:06.536966 master-0 kubenswrapper[4155]: I0216 17:00:06.536889 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"]
Feb 16 17:00:06.536966 master-0 kubenswrapper[4155]: I0216 17:00:06.536961 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:00:06.537237 master-0 kubenswrapper[4155]: I0216 17:00:06.537223 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:00:06.539015 master-0 kubenswrapper[4155]: I0216 17:00:06.538979 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"]
Feb 16 17:00:06.539015 master-0 kubenswrapper[4155]: I0216 17:00:06.539008 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"]
Feb 16 17:00:06.539136 master-0 kubenswrapper[4155]: I0216 17:00:06.539056 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:00:06.539357 master-0 kubenswrapper[4155]: I0216 17:00:06.539316 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:00:06.539427 master-0 kubenswrapper[4155]: I0216 17:00:06.539379 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:00:06.539827 master-0 kubenswrapper[4155]: I0216 17:00:06.539803 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:00:06.541819 master-0 kubenswrapper[4155]: I0216 17:00:06.541788 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"]
Feb 16 17:00:06.541901 master-0 kubenswrapper[4155]: I0216 17:00:06.541881 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:00:06.542163 master-0 kubenswrapper[4155]: I0216 17:00:06.542141 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: E0216 17:00:06.705154 4155 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad" Netns:"/var/run/netns/371a6a52-a392-48ff-afcb-28e21dfce00e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b" Path:"" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: >
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: E0216 17:00:06.705213 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad" Netns:"/var/run/netns/371a6a52-a392-48ff-afcb-28e21dfce00e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b" Path:"" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: > pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: E0216 17:00:06.705232 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad" Netns:"/var/run/netns/371a6a52-a392-48ff-afcb-28e21dfce00e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b" Path:"" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.705874 master-0 kubenswrapper[4155]: > pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:00:06.706224 master-0 kubenswrapper[4155]: E0216 17:00:06.705290 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator(6b3e071c-1c62-489b-91c1-aef0d197f40b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator(6b3e071c-1c62-489b-91c1-aef0d197f40b)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator_6b3e071c-1c62-489b-91c1-aef0d197f40b_0(cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad): error adding pod openshift-etcd-operator_etcd-operator-67bf55ccdd-cppj8 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400:
'ContainerID:\\\"cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad\\\" Netns:\\\"/var/run/netns/371a6a52-a392-48ff-afcb-28e21dfce00e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd-operator;K8S_POD_NAME=etcd-operator-67bf55ccdd-cppj8;K8S_POD_INFRA_CONTAINER_ID=cf4885ff2a46b0e82f57eb221e8ee2f0502195c547f676d307cdeafa72434cad;K8S_POD_UID=6b3e071c-1c62-489b-91c1-aef0d197f40b\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8] networking: [openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8/6b3e071c-1c62-489b-91c1-aef0d197f40b:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:00:06.713173 master-0 kubenswrapper[4155]: E0216 17:00:06.713131 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 17:00:06.713173 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1" Netns:"/var/run/netns/007141fd-0270-4856-90fb-c390f7db5784" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.713173 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.713173 master-0 
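The block above is one complete failure unit, and every block that follows repeats it for a different operator pod. The chain is: the kubelet asks the CRI runtime to create a pod sandbox (RunPodSandbox), the runtime invokes the multus-shim CNI plugin for the ADD, and the shim forwards the request to the ovn-kubernetes CNI server as an HTTP POST carried over a unix socket; that final dial is what fails with connection refused, because nothing is listening yet on /var/run/ovn-kubernetes/cni/ovn-cni-server.sock (the doubled slash in the logged path is only cosmetic). The Go sketch below reproduces just that last hop so the socket can be probed from the node. It is an illustration of the transport visible in the log, not OpenShift tooling; the dummy URL mirrors the Post "http://dummy/" in the error text.

package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"strings"
)

// Socket path copied from the log records above (cleaned of the doubled
// slash; the two spellings are path-equivalent).
const sockPath = "/var/run/ovn-kubernetes/cni/ovn-cni-server.sock"

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// The URL's host is ignored and the connection is routed over the
			// unix socket instead, which is why the kubelet error records show
			// Post "http://dummy/".
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", sockPath)
			},
		},
	}
	resp, err := client.Post("http://dummy/", "application/json", strings.NewReader("{}"))
	if err != nil {
		// While the CNI server is not yet serving, this reports the same
		// "connect: connection refused" seen in the journal.
		fmt.Println("CNI server unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("CNI server answered:", resp.Status)
}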
Feb 16 17:00:06.713173 master-0 kubenswrapper[4155]: E0216 17:00:06.713131 4155 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:00:06.713173 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1" Netns:"/var/run/netns/007141fd-0270-4856-90fb-c390f7db5784" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.713173 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.713173 master-0 kubenswrapper[4155]: >
Feb 16 17:00:06.713407 master-0 kubenswrapper[4155]: E0216 17:00:06.713200 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:00:06.713407 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1" Netns:"/var/run/netns/007141fd-0270-4856-90fb-c390f7db5784" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.713407 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.713407 master-0 kubenswrapper[4155]: > pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:00:06.713407 master-0 kubenswrapper[4155]: E0216 17:00:06.713218 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:00:06.713407 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1" Netns:"/var/run/netns/007141fd-0270-4856-90fb-c390f7db5784" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.713407 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.713407 master-0 kubenswrapper[4155]: > pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:00:06.713650 master-0 kubenswrapper[4155]: E0216 17:00:06.713283 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator(970d4376-f299-412c-a8ee-90aa980c689e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator(970d4376-f299-412c-a8ee-90aa980c689e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_csi-snapshot-controller-operator-7b87b97578-q55rf_openshift-cluster-storage-operator_970d4376-f299-412c-a8ee-90aa980c689e_0(8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1): error adding pod openshift-cluster-storage-operator_csi-snapshot-controller-operator-7b87b97578-q55rf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1\\\" Netns:\\\"/var/run/netns/007141fd-0270-4856-90fb-c390f7db5784\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=csi-snapshot-controller-operator-7b87b97578-q55rf;K8S_POD_INFRA_CONTAINER_ID=8ddf4a9531ac41c6684beb8f4d64d69696fbf31533d5f98d300795b0e57d08b1;K8S_POD_UID=970d4376-f299-412c-a8ee-90aa980c689e\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf] networking: [openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf/970d4376-f299-412c-a8ee-90aa980c689e:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e"
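Every failure carries the same StdinData payload: the multus delegate configuration that the runtime handed to the shim on stdin, identical across all of these entries. Decoded, it is easier to read; the sketch below unmarshals the blob (copied verbatim from the records above) into an ad-hoc struct. The struct is illustrative, not multus's own configuration type.

package main

import (
	"encoding/json"
	"fmt"
)

// multusConf mirrors the JSON keys of the StdinData blob exactly as they
// appear in the log; it is a stand-in for readability only.
type multusConf struct {
	BinDir             string `json:"binDir"`
	ClusterNetwork     string `json:"clusterNetwork"`
	CNIVersion         string `json:"cniVersion"`
	DaemonSocketDir    string `json:"daemonSocketDir"`
	GlobalNamespaces   string `json:"globalNamespaces"`
	LogLevel           string `json:"logLevel"`
	LogToStderr        bool   `json:"logToStderr"`
	Name               string `json:"name"`
	NamespaceIsolation bool   `json:"namespaceIsolation"`
	Type               string `json:"type"`
}

// stdinData is copied verbatim from the log records above.
const stdinData = `{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}`

func main() {
	var c multusConf
	if err := json.Unmarshal([]byte(stdinData), &c); err != nil {
		panic(err)
	}
	// clusterNetwork points at the ovn-kubernetes delegate config, which is
	// why every ADD ends up at the ovn-cni-server socket.
	fmt.Printf("%+v\n", c)
}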
Feb 16 17:00:06.714671 master-0 kubenswrapper[4155]: E0216 17:00:06.714645 4155 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:00:06.714671 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56" Netns:"/var/run/netns/85ba8f39-f59d-4583-bfce-719724e7acd9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Path:"" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.714671 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.714671 master-0 kubenswrapper[4155]: >
Feb 16 17:00:06.714839 master-0 kubenswrapper[4155]: E0216 17:00:06.714679 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:00:06.714839 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56" Netns:"/var/run/netns/85ba8f39-f59d-4583-bfce-719724e7acd9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Path:"" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.714839 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.714839 master-0 kubenswrapper[4155]: > pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:00:06.714839 master-0 kubenswrapper[4155]: E0216 17:00:06.714696 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:00:06.714839 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56" Netns:"/var/run/netns/85ba8f39-f59d-4583-bfce-719724e7acd9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Path:"" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.714839 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.714839 master-0 kubenswrapper[4155]: > pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:00:06.715136 master-0 kubenswrapper[4155]: E0216 17:00:06.714732 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator(edbaac23-11f0-4bc7-a7ce-b593c774c0fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator(edbaac23-11f0-4bc7-a7ce-b593c774c0fa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator_edbaac23-11f0-4bc7-a7ce-b593c774c0fa_0(4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56): error adding pod openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56\\\" Netns:\\\"/var/run/netns/85ba8f39-f59d-4583-bfce-719724e7acd9\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager-operator;K8S_POD_NAME=openshift-controller-manager-operator-5f5f84757d-ktmm9;K8S_POD_INFRA_CONTAINER_ID=4b541c6e94783fcdd1b741781407080d49e083833643fd64a1a2a38b4e947f56;K8S_POD_UID=edbaac23-11f0-4bc7-a7ce-b593c774c0fa\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9] networking: [openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9/edbaac23-11f0-4bc7-a7ce-b593c774c0fa:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa"
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: E0216 17:00:06.718534 4155 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1" Netns:"/var/run/netns/88258e9d-54a2-45ed-9119-b752c64f5183" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Path:"" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: >
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: E0216 17:00:06.718562 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1" Netns:"/var/run/netns/88258e9d-54a2-45ed-9119-b752c64f5183" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Path:"" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: > pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: E0216 17:00:06.718579 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1" Netns:"/var/run/netns/88258e9d-54a2-45ed-9119-b752c64f5183" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Path:"" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.718605 master-0 kubenswrapper[4155]: > pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:00:06.718993 master-0 kubenswrapper[4155]: E0216 17:00:06.718623 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"authentication-operator-755d954778-lf4cb_openshift-authentication-operator(9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"authentication-operator-755d954778-lf4cb_openshift-authentication-operator(9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_authentication-operator-755d954778-lf4cb_openshift-authentication-operator_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41_0(368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1): error adding pod openshift-authentication-operator_authentication-operator-755d954778-lf4cb to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1\\\" Netns:\\\"/var/run/netns/88258e9d-54a2-45ed-9119-b752c64f5183\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication-operator;K8S_POD_NAME=authentication-operator-755d954778-lf4cb;K8S_POD_INFRA_CONTAINER_ID=368ea21b12ee629f43d4f26e25177485b2a06fdf19b09ce692f98e61e2248fe1;K8S_POD_UID=9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication-operator/authentication-operator-755d954778-lf4cb] networking: [openshift-authentication-operator/authentication-operator-755d954778-lf4cb/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41"
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: E0216 17:00:06.722229 4155 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70" Netns:"/var/run/netns/90530ca2-cb1b-4f6f-a2af-5abfa7b5fa14" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40" Path:"" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: >
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: E0216 17:00:06.722287 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70" Netns:"/var/run/netns/90530ca2-cb1b-4f6f-a2af-5abfa7b5fa14" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40" Path:"" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: > pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: E0216 17:00:06.722305 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70" Netns:"/var/run/netns/90530ca2-cb1b-4f6f-a2af-5abfa7b5fa14" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40" Path:"" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.722721 master-0 kubenswrapper[4155]: > pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:00:06.723060 master-0 kubenswrapper[4155]: E0216 17:00:06.722384 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator(e69d8c51-e2a6-4f61-9c26-072784f6cf40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator(e69d8c51-e2a6-4f61-9c26-072784f6cf40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-config-operator-7c6bdb986f-v8dr8_openshift-config-operator_e69d8c51-e2a6-4f61-9c26-072784f6cf40_0(7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70): error adding pod openshift-config-operator_openshift-config-operator-7c6bdb986f-v8dr8 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70\\\" Netns:\\\"/var/run/netns/90530ca2-cb1b-4f6f-a2af-5abfa7b5fa14\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-config-operator;K8S_POD_NAME=openshift-config-operator-7c6bdb986f-v8dr8;K8S_POD_INFRA_CONTAINER_ID=7eec65195d41d7e3206665d096fd164b76a541bd88b21c37d7009e437faabd70;K8S_POD_UID=e69d8c51-e2a6-4f61-9c26-072784f6cf40\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8] networking: [openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8/e69d8c51-e2a6-4f61-9c26-072784f6cf40:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40"
Feb 16 17:00:06.723441 master-0 kubenswrapper[4155]: E0216 17:00:06.723392 4155 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:00:06.723441 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226" Netns:"/var/run/netns/33bed5d2-c006-44b4-aae8-4a42c70e8ed6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605" Path:"" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.723441 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.723441 master-0 kubenswrapper[4155]: >
Feb 16 17:00:06.723441 master-0 kubenswrapper[4155]: E0216 17:00:06.723433 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:00:06.723441 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226" Netns:"/var/run/netns/33bed5d2-c006-44b4-aae8-4a42c70e8ed6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605" Path:"" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.723658 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.723658 master-0 kubenswrapper[4155]: > pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:00:06.723658 master-0 kubenswrapper[4155]: E0216 17:00:06.723448 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:00:06.723658 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226" Netns:"/var/run/netns/33bed5d2-c006-44b4-aae8-4a42c70e8ed6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605" Path:"" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.723658 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.723658 master-0 kubenswrapper[4155]: > pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:00:06.723658 master-0 kubenswrapper[4155]: E0216 17:00:06.723485 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator(4e51bba5-0ebe-4e55-a588-38b71548c605)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator(4e51bba5-0ebe-4e55-a588-38b71548c605)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-olm-operator-55b69c6c48-7chjv_openshift-cluster-olm-operator_4e51bba5-0ebe-4e55-a588-38b71548c605_0(ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226): error adding pod openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226\\\" Netns:\\\"/var/run/netns/33bed5d2-c006-44b4-aae8-4a42c70e8ed6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-olm-operator;K8S_POD_NAME=cluster-olm-operator-55b69c6c48-7chjv;K8S_POD_INFRA_CONTAINER_ID=ad4325dc845443734ef1d0bb819bdf66fbb17bbb066f554008e5407b4160e226;K8S_POD_UID=4e51bba5-0ebe-4e55-a588-38b71548c605\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv] networking: [openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv/4e51bba5-0ebe-4e55-a588-38b71548c605:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605"
Feb 16 17:00:06.744607 master-0 kubenswrapper[4155]: E0216 17:00:06.744566 4155 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:00:06.744607 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638" Netns:"/var/run/netns/6461b4ce-caea-4e10-82eb-0b4a70194f9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.744607 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.744607 master-0 kubenswrapper[4155]: >
Feb 16 17:00:06.744752 master-0 kubenswrapper[4155]: E0216 17:00:06.744640 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:00:06.744752 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638" Netns:"/var/run/netns/6461b4ce-caea-4e10-82eb-0b4a70194f9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.744752 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.744752 master-0 kubenswrapper[4155]: > pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:00:06.744752 master-0 kubenswrapper[4155]: E0216 17:00:06.744664 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:00:06.744752 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638" Netns:"/var/run/netns/6461b4ce-caea-4e10-82eb-0b4a70194f9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.744752 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.744752 master-0 kubenswrapper[4155]: > pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:00:06.744954 master-0 kubenswrapper[4155]: E0216 17:00:06.744724 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator(d020c902-2adb-4919-8dd9-0c2109830580)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator(d020c902-2adb-4919-8dd9-0c2109830580)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator_d020c902-2adb-4919-8dd9-0c2109830580_0(4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638): error adding pod openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638\\\" Netns:\\\"/var/run/netns/6461b4ce-caea-4e10-82eb-0b4a70194f9e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver-operator;K8S_POD_NAME=kube-apiserver-operator-54984b6678-gp8gv;K8S_POD_INFRA_CONTAINER_ID=4615c8ef7e8d45dff5fb8683f827a8ccd883c12edeee795decfd5334214d9638;K8S_POD_UID=d020c902-2adb-4919-8dd9-0c2109830580\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv] networking: [openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv/d020c902-2adb-4919-8dd9-0c2109830580:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580"
Feb 16 17:00:06.754046 master-0 kubenswrapper[4155]: E0216 17:00:06.754001 4155 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:00:06.754046 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976): error adding pod openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976" Netns:"/var/run/netns/6977e5fd-a8f4-422b-be90-ee0cf848be7b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28" Path:"" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.754046 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.754046 master-0 kubenswrapper[4155]: >
Feb 16 17:00:06.754209 master-0 kubenswrapper[4155]: E0216 17:00:06.754060 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:00:06.754209 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976): error adding pod openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976" Netns:"/var/run/netns/6977e5fd-a8f4-422b-be90-ee0cf848be7b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28" Path:"" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused
Feb 16 17:00:06.754209 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:00:06.754209 master-0 kubenswrapper[4155]: > pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:00:06.754209 master-0 kubenswrapper[4155]: E0216 17:00:06.754084 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:00:06.754209 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976): error adding pod
openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976" Netns:"/var/run/netns/6977e5fd-a8f4-422b-be90-ee0cf848be7b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28" Path:"" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.754209 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.754209 master-0 kubenswrapper[4155]: > pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:00:06.754452 master-0 kubenswrapper[4155]: E0216 17:00:06.754145 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator(737fcc7d-d850-4352-9f17-383c85d5bc28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator(737fcc7d-d850-4352-9f17-383c85d5bc28)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator_737fcc7d-d850-4352-9f17-383c85d5bc28_0(2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976): error adding pod openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976\\\" Netns:\\\"/var/run/netns/6977e5fd-a8f4-422b-be90-ee0cf848be7b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-apiserver-operator;K8S_POD_NAME=openshift-apiserver-operator-6d4655d9cf-qhn9v;K8S_POD_INFRA_CONTAINER_ID=2a887b298c4b0c719a72078eb6cab947f5884748175be8dbca3d99c8845c9976;K8S_POD_UID=737fcc7d-d850-4352-9f17-383c85d5bc28\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v] networking: [openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v/737fcc7d-d850-4352-9f17-383c85d5bc28:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: 
connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:00:06.760360 master-0 kubenswrapper[4155]: E0216 17:00:06.760313 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 17:00:06.760360 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc" Netns:"/var/run/netns/6a61aea1-9c74-430c-8e64-70650d12eb3e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f" Path:"" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.760360 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.760360 master-0 kubenswrapper[4155]: > Feb 16 17:00:06.760554 master-0 kubenswrapper[4155]: E0216 17:00:06.760377 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 17:00:06.760554 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed 
(add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc" Netns:"/var/run/netns/6a61aea1-9c74-430c-8e64-70650d12eb3e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f" Path:"" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.760554 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.760554 master-0 kubenswrapper[4155]: > pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:06.760554 master-0 kubenswrapper[4155]: E0216 17:00:06.760396 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 17:00:06.760554 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc" Netns:"/var/run/netns/6a61aea1-9c74-430c-8e64-70650d12eb3e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f" Path:"" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.760554 master-0 kubenswrapper[4155]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.760554 master-0 kubenswrapper[4155]: > pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:06.760904 master-0 kubenswrapper[4155]: E0216 17:00:06.760531 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator(8e623376-9e14-4341-9dcf-7a7c218b6f9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator(8e623376-9e14-4341-9dcf-7a7c218b6f9f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator_8e623376-9e14-4341-9dcf-7a7c218b6f9f_0(b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc): error adding pod openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc\\\" Netns:\\\"/var/run/netns/6a61aea1-9c74-430c-8e64-70650d12eb3e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-storage-version-migrator-operator;K8S_POD_NAME=kube-storage-version-migrator-operator-cd5474998-829l6;K8S_POD_INFRA_CONTAINER_ID=b26c62dd7c9775dfb3bff84a67792f06cb6aa0de9748ad910ecb14ca5b786bfc;K8S_POD_UID=8e623376-9e14-4341-9dcf-7a7c218b6f9f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6] networking: [openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6/8e623376-9e14-4341-9dcf-7a7c218b6f9f:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:00:06.773201 master-0 kubenswrapper[4155]: E0216 17:00:06.773153 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 17:00:06.773201 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5" Netns:"/var/run/netns/8f141194-0e04-4abe-834a-0112db25606b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.773201 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.773201 master-0 kubenswrapper[4155]: > Feb 16 17:00:06.773351 master-0 kubenswrapper[4155]: E0216 17:00:06.773202 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 17:00:06.773351 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5" Netns:"/var/run/netns/8f141194-0e04-4abe-834a-0112db25606b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 
17:00:06.773351 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.773351 master-0 kubenswrapper[4155]: > pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:06.773351 master-0 kubenswrapper[4155]: E0216 17:00:06.773220 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 17:00:06.773351 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5" Netns:"/var/run/netns/8f141194-0e04-4abe-834a-0112db25606b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.773351 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.773351 master-0 kubenswrapper[4155]: > pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:06.773621 master-0 kubenswrapper[4155]: E0216 17:00:06.773271 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator(442600dc-09b2-4fee-9f89-777296b2ee40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator(442600dc-09b2-4fee-9f89-777296b2ee40)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator_442600dc-09b2-4fee-9f89-777296b2ee40_0(ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5): error adding pod openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5\\\" Netns:\\\"/var/run/netns/8f141194-0e04-4abe-834a-0112db25606b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager-operator;K8S_POD_NAME=kube-controller-manager-operator-78ff47c7c5-txr5k;K8S_POD_INFRA_CONTAINER_ID=ae8301f683e0c2e7dd3e78dd17a0392299d3335bf1d839dd7b9e85e118c390a5;K8S_POD_UID=442600dc-09b2-4fee-9f89-777296b2ee40\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k] networking: [openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k/442600dc-09b2-4fee-9f89-777296b2ee40:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:00:06.775991 master-0 kubenswrapper[4155]: E0216 17:00:06.775953 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 17:00:06.775991 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16" Netns:"/var/run/netns/70c72911-5472-4bbc-b159-33b358522f9f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network 
"ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.775991 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.775991 master-0 kubenswrapper[4155]: > Feb 16 17:00:06.776110 master-0 kubenswrapper[4155]: E0216 17:00:06.775997 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 17:00:06.776110 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16" Netns:"/var/run/netns/70c72911-5472-4bbc-b159-33b358522f9f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.776110 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.776110 master-0 kubenswrapper[4155]: > pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:06.776110 master-0 kubenswrapper[4155]: E0216 17:00:06.776013 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 17:00:06.776110 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed 
with status 400: 'ContainerID:"477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16" Netns:"/var/run/netns/70c72911-5472-4bbc-b159-33b358522f9f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.776110 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.776110 master-0 kubenswrapper[4155]: > pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:06.776294 master-0 kubenswrapper[4155]: E0216 17:00:06.776057 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator(eaf7edff-0a89-4ac0-b9dd-511e098b5434)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator(eaf7edff-0a89-4ac0-b9dd-511e098b5434)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator_eaf7edff-0a89-4ac0-b9dd-511e098b5434_0(477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16): error adding pod openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16\\\" Netns:\\\"/var/run/netns/70c72911-5472-4bbc-b159-33b358522f9f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler-operator;K8S_POD_NAME=openshift-kube-scheduler-operator-7485d55966-sgmpf;K8S_POD_INFRA_CONTAINER_ID=477c6eacf6146dc85fde02438a7cce135e356333a1586cb5b9f379a3547e4c16;K8S_POD_UID=eaf7edff-0a89-4ac0-b9dd-511e098b5434\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf] networking: [openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf/eaf7edff-0a89-4ac0-b9dd-511e098b5434:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:00:06.785751 master-0 kubenswrapper[4155]: E0216 17:00:06.785710 4155 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 17:00:06.785751 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d" Netns:"/var/run/netns/e946613c-70bd-438a-9797-f316c2074daa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4" Path:"" ERRORED: error configuring pod [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.785751 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.785751 master-0 kubenswrapper[4155]: > Feb 16 17:00:06.785863 master-0 kubenswrapper[4155]: E0216 17:00:06.785767 4155 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 17:00:06.785863 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d" Netns:"/var/run/netns/e946613c-70bd-438a-9797-f316c2074daa" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4" Path:"" ERRORED: error configuring pod [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.785863 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.785863 master-0 kubenswrapper[4155]: > pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:00:06.785863 master-0 kubenswrapper[4155]: E0216 17:00:06.785784 4155 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 17:00:06.785863 master-0 kubenswrapper[4155]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d" Netns:"/var/run/netns/e946613c-70bd-438a-9797-f316c2074daa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4" Path:"" ERRORED: error configuring pod [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network "ovn-kubernetes": failed to send CNI request: Post "http://dummy/": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused Feb 16 17:00:06.785863 master-0 kubenswrapper[4155]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:00:06.785863 master-0 kubenswrapper[4155]: > pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:00:06.786152 master-0 kubenswrapper[4155]: E0216 17:00:06.785831 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator(29402454-a920-471e-895e-764235d16eb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator(29402454-a920-471e-895e-764235d16eb4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator_29402454-a920-471e-895e-764235d16eb4_0(51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d): error adding pod openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d\\\" Netns:\\\"/var/run/netns/e946613c-70bd-438a-9797-f316c2074daa\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-service-ca-operator;K8S_POD_NAME=service-ca-operator-5dc4688546-pl7r5;K8S_POD_INFRA_CONTAINER_ID=51d0d34b40769f5c7489b29138aa1c253ca1e1f168963a53692bef6bd78eef3d;K8S_POD_UID=29402454-a920-471e-895e-764235d16eb4\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5] networking: [openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5/29402454-a920-471e-895e-764235d16eb4:ovn-kubernetes]: error adding container to network \\\"ovn-kubernetes\\\": failed to send CNI request: Post \\\"http://dummy/\\\": dial unix /var/run/ovn-kubernetes/cni//ovn-cni-server.sock: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:00:07.239731 master-0 kubenswrapper[4155]: I0216 17:00:07.239601 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:07.239731 master-0 kubenswrapper[4155]: I0216 17:00:07.239653 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:07.257384 master-0 kubenswrapper[4155]: I0216 17:00:07.257326 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:11.780219 master-0 kubenswrapper[4155]: I0216 17:00:11.779825 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:00:11.780945 master-0 kubenswrapper[4155]: I0216 17:00:11.780617 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:00:11.940952 master-0 kubenswrapper[4155]: I0216 17:00:11.940422 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-vwvwx"] Feb 16 17:00:11.947393 master-0 kubenswrapper[4155]: W0216 17:00:11.947354 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc303189e_adae_4fe2_8dd7_cc9b80f73e66.slice/crio-f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107 WatchSource:0}: Error finding container f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107: Status 404 returned error can't find the container with id f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107 Feb 16 17:00:12.257276 master-0 kubenswrapper[4155]: I0216 17:00:12.257175 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerStarted","Data":"ade1880aca33a2a6fecd8de7a6fb9caa6cf30a4d0a9280f0ea929a2643dc290b"} Feb 16 17:00:12.257276 master-0 kubenswrapper[4155]: I0216 17:00:12.257262 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerStarted","Data":"f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107"} Feb 16 17:00:12.257663 master-0 kubenswrapper[4155]: I0216 17:00:12.257449 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:00:12.273387 master-0 kubenswrapper[4155]: I0216 17:00:12.273297 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-target-vwvwx" podStartSLOduration=45.273278713 podStartE2EDuration="45.273278713s" podCreationTimestamp="2026-02-16 16:59:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:00:12.271716311 +0000 UTC m=+116.610769815" watchObservedRunningTime="2026-02-16 17:00:12.273278713 +0000 UTC m=+116.612332217" Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: I0216 17:00:12.617647 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: E0216 17:00:12.617838 4155 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: I0216 17:00:12.617884 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: E0216 17:00:12.617959 
4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.617908571 +0000 UTC m=+132.956962075 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: I0216 17:00:12.618011 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: I0216 17:00:12.618038 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: I0216 17:00:12.618093 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: I0216 17:00:12.618138 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: I0216 17:00:12.618157 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: I0216 17:00:12.618180 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: E0216 17:00:12.618192 4155 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:12.618712 
master-0 kubenswrapper[4155]: E0216 17:00:12.618271 4155 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: E0216 17:00:12.618274 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.61825651 +0000 UTC m=+132.957310014 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: E0216 17:00:12.618334 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.618320912 +0000 UTC m=+132.957374466 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 17:00:12.618712 master-0 kubenswrapper[4155]: I0216 17:00:12.618353 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.618270 4155 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.618313 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.618460 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.618430445 +0000 UTC m=+132.957483949 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.618690 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.618676301 +0000 UTC m=+132.957729805 (durationBeforeRetry 16s). 
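The MountVolume.SetUp failures in this stretch are secondary: each missing secret (marketplace-operator-metrics, metrics-tls, package-server-manager-serving-cert, and so on) is normally published by an operator or by service-ca, and those pods were themselves stuck behind the CNI outage above, so the kubelet keeps retrying mounts for secrets nobody has created yet. A hedged client-go sketch for checking whether one of these secrets has appeared; the kubeconfig path is an assumption, and the namespace/name pair is taken from the first failure in this section:

    package main

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption; adjust for the environment.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Namespace and name taken from the first failure in this log section.
        _, err = cs.CoreV1().Secrets("openshift-marketplace").Get(
            context.TODO(), "marketplace-operator-metrics", metav1.GetOptions{})
        switch {
        case apierrors.IsNotFound(err):
            fmt.Println("still missing; the owning operator has not created it yet")
        case err != nil:
            fmt.Println("lookup failed:", err)
        default:
            fmt.Println("secret exists; the next mount retry should succeed")
        }
    }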
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "node-tuning-operator-tls" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.618773 4155 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.618808 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.618799855 +0000 UTC m=+132.957853359 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.618908 4155 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.618954 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.618945829 +0000 UTC m=+132.957999333 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.618984 4155 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.619020 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.619008681 +0000 UTC m=+132.958062325 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.619067 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:12.619506 master-0 kubenswrapper[4155]: E0216 17:00:12.619113 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.619102173 +0000 UTC m=+132.958155767 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:17.368145 master-0 kubenswrapper[4155]: I0216 17:00:17.368074 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:17.372068 master-0 kubenswrapper[4155]: I0216 17:00:17.371981 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 17:00:17.379762 master-0 kubenswrapper[4155]: E0216 17:00:17.379643 4155 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 17:00:17.379989 master-0 kubenswrapper[4155]: E0216 17:00:17.379796 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:21.379767298 +0000 UTC m=+185.718820832 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : secret "metrics-daemon-secret" not found Feb 16 17:00:17.780436 master-0 kubenswrapper[4155]: I0216 17:00:17.780315 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:17.780436 master-0 kubenswrapper[4155]: I0216 17:00:17.780335 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:17.780436 master-0 kubenswrapper[4155]: I0216 17:00:17.780402 4155 util.go:30] "No sandbox for pod can be found. 
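Every "secret ... not found" above is a dependency-ordering gap: the pod spec references a secret its operator has not created yet, and the kubelet simply retries the mount until it appears. A hedged triage sketch with client-go, checking the metrics-daemon-secret reference from the entry above; the kubeconfig path is illustrative.

```go
// Check whether the secret a failing mount references exists yet.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace and name taken from the failing mount in the log above.
	_, err = cs.CoreV1().Secrets("openshift-multus").Get(context.TODO(), "metrics-daemon-secret", metav1.GetOptions{})
	if err != nil {
		fmt.Println("still missing:", err) // kubelet will keep retrying on its backoff schedule
		return
	}
	fmt.Println("secret exists; the next mount retry should succeed")
}
```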
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:00:17.781042 master-0 kubenswrapper[4155]: I0216 17:00:17.780480 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:00:17.781042 master-0 kubenswrapper[4155]: I0216 17:00:17.780793 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:00:17.781042 master-0 kubenswrapper[4155]: I0216 17:00:17.780801 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:17.781042 master-0 kubenswrapper[4155]: I0216 17:00:17.780968 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:17.781564 master-0 kubenswrapper[4155]: I0216 17:00:17.781496 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:17.781564 master-0 kubenswrapper[4155]: I0216 17:00:17.781522 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:00:17.781799 master-0 kubenswrapper[4155]: I0216 17:00:17.781511 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:17.782243 master-0 kubenswrapper[4155]: I0216 17:00:17.782186 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:00:17.783524 master-0 kubenswrapper[4155]: I0216 17:00:17.782464 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:00:18.174136 master-0 kubenswrapper[4155]: I0216 17:00:18.173385 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"] Feb 16 17:00:18.175083 master-0 kubenswrapper[4155]: I0216 17:00:18.174456 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"] Feb 16 17:00:18.181390 master-0 kubenswrapper[4155]: W0216 17:00:18.181081 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedbaac23_11f0_4bc7_a7ce_b593c774c0fa.slice/crio-1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5 WatchSource:0}: Error finding container 1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5: Status 404 returned error can't find the container with id 1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5 Feb 16 17:00:18.183431 master-0 kubenswrapper[4155]: W0216 17:00:18.183163 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e623376_9e14_4341_9dcf_7a7c218b6f9f.slice/crio-11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0 WatchSource:0}: Error finding container 11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0: Status 404 returned error can't find the container with id 11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0 Feb 16 17:00:18.230769 master-0 kubenswrapper[4155]: I0216 17:00:18.230733 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"] Feb 16 17:00:18.238459 master-0 kubenswrapper[4155]: I0216 17:00:18.238259 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"] Feb 16 17:00:18.241333 master-0 kubenswrapper[4155]: I0216 17:00:18.241168 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-lf4cb"] Feb 16 17:00:18.246022 master-0 kubenswrapper[4155]: I0216 17:00:18.245692 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"] Feb 16 17:00:18.247952 master-0 kubenswrapper[4155]: W0216 17:00:18.247879 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode69d8c51_e2a6_4f61_9c26_072784f6cf40.slice/crio-95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290 WatchSource:0}: Error finding container 95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290: Status 404 returned error can't find the container with id 95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290 Feb 16 17:00:18.248885 master-0 kubenswrapper[4155]: W0216 17:00:18.248853 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aa57eb4_c511_4ab8_a5d7_385e1ed9ee41.slice/crio-c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990 WatchSource:0}: Error finding container c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990: Status 404 returned error can't find the container with id 
c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990 Feb 16 17:00:18.249977 master-0 kubenswrapper[4155]: W0216 17:00:18.249948 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd020c902_2adb_4919_8dd9_0c2109830580.slice/crio-0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc WatchSource:0}: Error finding container 0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc: Status 404 returned error can't find the container with id 0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc Feb 16 17:00:18.276550 master-0 kubenswrapper[4155]: I0216 17:00:18.276492 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerStarted","Data":"1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5"} Feb 16 17:00:18.278639 master-0 kubenswrapper[4155]: I0216 17:00:18.278563 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerStarted","Data":"74ced4b4e3fdce2aecbb38a4d03ec1a93853cd8aa3de1fd3350c1e935e0a300f"} Feb 16 17:00:18.279980 master-0 kubenswrapper[4155]: I0216 17:00:18.279878 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerStarted","Data":"11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0"} Feb 16 17:00:18.281539 master-0 kubenswrapper[4155]: I0216 17:00:18.281487 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290"} Feb 16 17:00:18.282463 master-0 kubenswrapper[4155]: I0216 17:00:18.282403 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerStarted","Data":"0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc"} Feb 16 17:00:18.283355 master-0 kubenswrapper[4155]: I0216 17:00:18.283295 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerStarted","Data":"c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990"} Feb 16 17:00:18.780743 master-0 kubenswrapper[4155]: I0216 17:00:18.780668 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:00:18.780743 master-0 kubenswrapper[4155]: I0216 17:00:18.780702 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:18.781613 master-0 kubenswrapper[4155]: I0216 17:00:18.781240 4155 util.go:30] "No sandbox for pod can be found. 
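The "SyncLoop (PLEG)" entries record the kubelet reacting to pod lifecycle events produced by relisting the container runtime; ContainerStarted and (later) ContainerDied are the event types visible here, each carrying a container or sandbox ID. A toy model of that dispatch, with local stand-in types rather than the kubelet's own:

```go
// Toy model of PLEG events driving the kubelet sync loop.
package main

import "fmt"

type PodLifecycleEventType string

const (
	ContainerStarted PodLifecycleEventType = "ContainerStarted"
	ContainerDied    PodLifecycleEventType = "ContainerDied"
)

type PodLifecycleEvent struct {
	PodID string // pod UID, e.g. "edbaac23-..."
	Type  PodLifecycleEventType
	Data  string // container or sandbox ID
}

// syncLoopIteration mirrors the dispatch seen in the log: each event
// wakes the loop and triggers a sync of the affected pod.
func syncLoopIteration(events <-chan PodLifecycleEvent) {
	for e := range events {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.PodID, e.Type, e.Data)
		// real kubelet: hand off to the pod worker to reconcile actual vs desired state
	}
}

func main() {
	ch := make(chan PodLifecycleEvent, 1)
	ch <- PodLifecycleEvent{PodID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa", Type: ContainerStarted, Data: "1a6fc16..."}
	close(ch)
	syncLoopIteration(ch)
}
```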
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:00:18.781613 master-0 kubenswrapper[4155]: I0216 17:00:18.781402 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:19.288766 master-0 kubenswrapper[4155]: I0216 17:00:19.287800 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerStarted","Data":"e310e36fd740b75515307293e697ecd768c9c8241ff939db071d778913f35a7a"} Feb 16 17:00:19.320974 master-0 kubenswrapper[4155]: I0216 17:00:19.309320 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"] Feb 16 17:00:19.339688 master-0 kubenswrapper[4155]: W0216 17:00:19.339630 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e51bba5_0ebe_4e55_a588_38b71548c605.slice/crio-27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81 WatchSource:0}: Error finding container 27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81: Status 404 returned error can't find the container with id 27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81 Feb 16 17:00:19.382803 master-0 kubenswrapper[4155]: I0216 17:00:19.382747 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"] Feb 16 17:00:19.398483 master-0 kubenswrapper[4155]: I0216 17:00:19.398393 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podStartSLOduration=89.398371131 podStartE2EDuration="1m29.398371131s" podCreationTimestamp="2026-02-16 16:58:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:00:19.396809768 +0000 UTC m=+123.735863282" watchObservedRunningTime="2026-02-16 17:00:19.398371131 +0000 UTC m=+123.737424635" Feb 16 17:00:19.780133 master-0 kubenswrapper[4155]: I0216 17:00:19.780088 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:19.780596 master-0 kubenswrapper[4155]: I0216 17:00:19.780574 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:19.991197 master-0 kubenswrapper[4155]: I0216 17:00:19.991155 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"] Feb 16 17:00:20.293544 master-0 kubenswrapper[4155]: I0216 17:00:20.293178 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81"} Feb 16 17:00:20.295348 master-0 kubenswrapper[4155]: I0216 17:00:20.295115 4155 generic.go:334] "Generic (PLEG): container finished" podID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerID="ff9b3b2992b50e55900986e351d7a1b84719ad88820b81ad374c423bd1f1a2a8" exitCode=0 Feb 16 17:00:20.295348 master-0 kubenswrapper[4155]: I0216 17:00:20.295175 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerDied","Data":"ff9b3b2992b50e55900986e351d7a1b84719ad88820b81ad374c423bd1f1a2a8"} Feb 16 17:00:20.296506 master-0 kubenswrapper[4155]: I0216 17:00:20.296189 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"84fbcf4f8c4afda2e79a3be2b73e332fc9f2d8ce27d17523a1712d76d3ce752e"} Feb 16 17:00:20.731616 master-0 kubenswrapper[4155]: W0216 17:00:20.731519 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaf7edff_0a89_4ac0_b9dd_511e098b5434.slice/crio-de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c WatchSource:0}: Error finding container de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c: Status 404 returned error can't find the container with id de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c Feb 16 17:00:20.780362 master-0 kubenswrapper[4155]: I0216 17:00:20.780290 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:00:20.780544 master-0 kubenswrapper[4155]: I0216 17:00:20.780372 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:00:20.780820 master-0 kubenswrapper[4155]: I0216 17:00:20.780771 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:00:20.780820 master-0 kubenswrapper[4155]: I0216 17:00:20.780789 4155 util.go:30] "No sandbox for pod can be found. 
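The repeated "No sandbox for pod can be found. Need to start a new one" lines mark the start of pod setup: no usable sandbox exists for the pod, so one must be created before any containers can run (which is why each message is soon followed by a ContainerStarted event whose Data is the new sandbox ID). A simplified model of that decision, using local types; the real check lives in the kubelet's pod sync path:

```go
// Simplified "does this pod need a new sandbox?" decision.
package main

import "fmt"

type SandboxStatus struct {
	ID    string
	Ready bool
}

// needsNewSandbox reports whether pod setup should create a sandbox:
// either none exists yet, or none of the known ones is ready.
func needsNewSandbox(statuses []SandboxStatus) bool {
	for _, s := range statuses {
		if s.Ready {
			return false
		}
	}
	return true
}

func main() {
	if needsNewSandbox(nil) { // a freshly scheduled pod has no sandboxes at all
		fmt.Println("No sandbox for pod can be found. Need to start a new one")
	}
}
```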
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:00:21.300501 master-0 kubenswrapper[4155]: I0216 17:00:21.300438 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerStarted","Data":"de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c"} Feb 16 17:00:21.780221 master-0 kubenswrapper[4155]: I0216 17:00:21.780143 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:21.780528 master-0 kubenswrapper[4155]: I0216 17:00:21.780505 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:23.996617 master-0 kubenswrapper[4155]: I0216 17:00:23.996576 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"] Feb 16 17:00:24.039541 master-0 kubenswrapper[4155]: I0216 17:00:24.039467 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"] Feb 16 17:00:24.461236 master-0 kubenswrapper[4155]: W0216 17:00:24.460277 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod970d4376_f299_412c_a8ee_90aa980c689e.slice/crio-99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d WatchSource:0}: Error finding container 99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d: Status 404 returned error can't find the container with id 99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d Feb 16 17:00:24.650211 master-0 kubenswrapper[4155]: I0216 17:00:24.649820 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"] Feb 16 17:00:25.319712 master-0 kubenswrapper[4155]: I0216 17:00:25.317598 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"bacf9b29c15cf47bbdc9ebe2fcba6bca3cfdec92ee8864175a5412cbbe3c9659"} Feb 16 17:00:25.320810 master-0 kubenswrapper[4155]: I0216 17:00:25.320105 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerStarted","Data":"8399cb1f8a954f603085247154bb48084a1a6283fe2b99aa8facab4cb78f381d"} Feb 16 17:00:25.326949 master-0 kubenswrapper[4155]: I0216 17:00:25.324348 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"410ef06e22d76a946b4285857693ab64a631161fcff4dd55a6b1f8d6e54ed325"} Feb 16 17:00:25.326949 master-0 kubenswrapper[4155]: I0216 17:00:25.326615 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" 
event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"a650093628feaa4193c1b7c57ea685e55d5af706446f54a283f32836e6d703a9"} Feb 16 17:00:25.326949 master-0 kubenswrapper[4155]: I0216 17:00:25.326728 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:25.330937 master-0 kubenswrapper[4155]: I0216 17:00:25.327861 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerStarted","Data":"4c0337d0eb1672f3cea8f23ec0619b33320b69137b0d2e9bc66e94fe15bbe412"} Feb 16 17:00:25.334939 master-0 kubenswrapper[4155]: I0216 17:00:25.332475 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" event={"ID":"970d4376-f299-412c-a8ee-90aa980c689e","Type":"ContainerStarted","Data":"99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d"} Feb 16 17:00:25.334939 master-0 kubenswrapper[4155]: I0216 17:00:25.334538 4155 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="16a0cd95be2918fe98e0a8ede15fe5203c9e491ca6e96550b8c7ea95ff6081d2" exitCode=0 Feb 16 17:00:25.334939 master-0 kubenswrapper[4155]: I0216 17:00:25.334589 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"16a0cd95be2918fe98e0a8ede15fe5203c9e491ca6e96550b8c7ea95ff6081d2"} Feb 16 17:00:25.340133 master-0 kubenswrapper[4155]: I0216 17:00:25.338616 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerStarted","Data":"e8c4ffcf7c4ece8cb912757e2c966b100c9bb74e9a2ec208a540c26e8e9187ce"} Feb 16 17:00:25.343938 master-0 kubenswrapper[4155]: I0216 17:00:25.340381 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerStarted","Data":"8fa44b1ac9949e31fd12e8a885f114d1074a93f335ef9c428586ae9835e14643"} Feb 16 17:00:25.343938 master-0 kubenswrapper[4155]: I0216 17:00:25.342970 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerStarted","Data":"58c88a445d8c10824c3855b7412ae17cbbff466b8394e38c4224ab694839c37d"} Feb 16 17:00:25.352842 master-0 kubenswrapper[4155]: I0216 17:00:25.352412 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerStarted","Data":"f31ff62ede3b23583193a8479095d460885c4665f91f714a80c48601aa1a71ad"} Feb 16 17:00:25.353955 master-0 kubenswrapper[4155]: I0216 17:00:25.353769 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podStartSLOduration=88.688807131 podStartE2EDuration="1m34.353757184s" 
podCreationTimestamp="2026-02-16 16:58:51 +0000 UTC" firstStartedPulling="2026-02-16 17:00:18.18590968 +0000 UTC m=+122.524963174" lastFinishedPulling="2026-02-16 17:00:23.850859723 +0000 UTC m=+128.189913227" observedRunningTime="2026-02-16 17:00:25.353160188 +0000 UTC m=+129.692213692" watchObservedRunningTime="2026-02-16 17:00:25.353757184 +0000 UTC m=+129.692810688" Feb 16 17:00:25.471005 master-0 kubenswrapper[4155]: I0216 17:00:25.470395 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podStartSLOduration=88.411129967 podStartE2EDuration="1m33.470375104s" podCreationTimestamp="2026-02-16 16:58:52 +0000 UTC" firstStartedPulling="2026-02-16 17:00:19.401080965 +0000 UTC m=+123.740134469" lastFinishedPulling="2026-02-16 17:00:24.460326102 +0000 UTC m=+128.799379606" observedRunningTime="2026-02-16 17:00:25.39650673 +0000 UTC m=+129.735560234" watchObservedRunningTime="2026-02-16 17:00:25.470375104 +0000 UTC m=+129.809428608" Feb 16 17:00:25.477716 master-0 kubenswrapper[4155]: I0216 17:00:25.476275 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podStartSLOduration=88.275115829 podStartE2EDuration="1m33.476263045s" podCreationTimestamp="2026-02-16 16:58:52 +0000 UTC" firstStartedPulling="2026-02-16 17:00:18.183589316 +0000 UTC m=+122.522642820" lastFinishedPulling="2026-02-16 17:00:23.384736532 +0000 UTC m=+127.723790036" observedRunningTime="2026-02-16 17:00:25.464788992 +0000 UTC m=+129.803842496" watchObservedRunningTime="2026-02-16 17:00:25.476263045 +0000 UTC m=+129.815316549" Feb 16 17:00:25.548956 master-0 kubenswrapper[4155]: I0216 17:00:25.548815 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podStartSLOduration=87.300430151 podStartE2EDuration="1m33.548798863s" podCreationTimestamp="2026-02-16 16:58:52 +0000 UTC" firstStartedPulling="2026-02-16 17:00:18.249818802 +0000 UTC m=+122.588872306" lastFinishedPulling="2026-02-16 17:00:24.498187474 +0000 UTC m=+128.837241018" observedRunningTime="2026-02-16 17:00:25.506028827 +0000 UTC m=+129.845082341" watchObservedRunningTime="2026-02-16 17:00:25.548798863 +0000 UTC m=+129.887852367" Feb 16 17:00:25.549222 master-0 kubenswrapper[4155]: I0216 17:00:25.549186 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podStartSLOduration=89.766100125 podStartE2EDuration="1m33.549181453s" podCreationTimestamp="2026-02-16 16:58:52 +0000 UTC" firstStartedPulling="2026-02-16 17:00:20.733802216 +0000 UTC m=+125.072855720" lastFinishedPulling="2026-02-16 17:00:24.516883544 +0000 UTC m=+128.855937048" observedRunningTime="2026-02-16 17:00:25.548262508 +0000 UTC m=+129.887316012" watchObservedRunningTime="2026-02-16 17:00:25.549181453 +0000 UTC m=+129.888234957" Feb 16 17:00:25.601029 master-0 kubenswrapper[4155]: I0216 17:00:25.600212 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podStartSLOduration=88.390677742 podStartE2EDuration="1m34.600190794s" podCreationTimestamp="2026-02-16 16:58:51 +0000 UTC" firstStartedPulling="2026-02-16 17:00:18.251224531 +0000 UTC m=+122.590278035" lastFinishedPulling="2026-02-16 17:00:24.460737573 
+0000 UTC m=+128.799791087" observedRunningTime="2026-02-16 17:00:25.599347471 +0000 UTC m=+129.938400995" watchObservedRunningTime="2026-02-16 17:00:25.600190794 +0000 UTC m=+129.939244298" Feb 16 17:00:25.601029 master-0 kubenswrapper[4155]: I0216 17:00:25.600459 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podStartSLOduration=89.384483333 podStartE2EDuration="1m35.600454011s" podCreationTimestamp="2026-02-16 16:58:50 +0000 UTC" firstStartedPulling="2026-02-16 17:00:18.244364304 +0000 UTC m=+122.583417808" lastFinishedPulling="2026-02-16 17:00:24.460334992 +0000 UTC m=+128.799388486" observedRunningTime="2026-02-16 17:00:25.58244842 +0000 UTC m=+129.921501934" watchObservedRunningTime="2026-02-16 17:00:25.600454011 +0000 UTC m=+129.939507525" Feb 16 17:00:26.959407 master-0 kubenswrapper[4155]: I0216 17:00:26.958439 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-5zd2r"] Feb 16 17:00:26.959407 master-0 kubenswrapper[4155]: I0216 17:00:26.958846 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:26.962849 master-0 kubenswrapper[4155]: I0216 17:00:26.962549 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:00:26.965527 master-0 kubenswrapper[4155]: I0216 17:00:26.963778 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:00:26.965527 master-0 kubenswrapper[4155]: I0216 17:00:26.963792 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:00:26.965527 master-0 kubenswrapper[4155]: I0216 17:00:26.963892 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:00:26.965527 master-0 kubenswrapper[4155]: I0216 17:00:26.963913 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:00:26.965527 master-0 kubenswrapper[4155]: I0216 17:00:26.964049 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:00:26.987312 master-0 kubenswrapper[4155]: I0216 17:00:26.987264 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-5zd2r"] Feb 16 17:00:27.095163 master-0 kubenswrapper[4155]: I0216 17:00:27.094859 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.095390 master-0 kubenswrapper[4155]: I0216 17:00:27.095366 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.095497 master-0 
kubenswrapper[4155]: I0216 17:00:27.095484 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.095752 master-0 kubenswrapper[4155]: I0216 17:00:27.095681 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwztm\" (UniqueName: \"kubernetes.io/projected/decdca06-6892-49d3-8ed4-10e29d7c5de8-kube-api-access-lwztm\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.095916 master-0 kubenswrapper[4155]: I0216 17:00:27.095874 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.197330 master-0 kubenswrapper[4155]: I0216 17:00:27.197253 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.197330 master-0 kubenswrapper[4155]: I0216 17:00:27.197327 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.197607 master-0 kubenswrapper[4155]: E0216 17:00:27.197418 4155 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:27.197607 master-0 kubenswrapper[4155]: E0216 17:00:27.197456 4155 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:27.197607 master-0 kubenswrapper[4155]: E0216 17:00:27.197486 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:27.697464739 +0000 UTC m=+132.036518243 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : configmap "client-ca" not found Feb 16 17:00:27.197607 master-0 kubenswrapper[4155]: E0216 17:00:27.197532 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. 
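In the pod_startup_latency_tracker entries above, podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling); pods that pulled nothing report the two as equal, with zero-valued pull timestamps. Reproducing the migrator-operator numbers from the monotonic m=+ offsets:

```go
// podStartSLOduration = E2E duration minus time spent pulling images.
package main

import "fmt"

func main() {
	const (
		e2e                 = 94.353757184  // 17:00:25.353757184 observed running − 16:58:51 creation, seconds
		firstStartedPulling = 122.524963174 // m=+ offset from the log, seconds
		lastFinishedPulling = 128.189913227 // m=+ offset from the log, seconds
	)
	pull := lastFinishedPulling - firstStartedPulling
	slo := e2e - pull
	fmt.Printf("image pull window:   %.9fs\n", pull) // ~5.664950053s
	fmt.Printf("podStartSLOduration: %.9fs\n", slo)  // 88.688807131s, matching the log
}
```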
No retries permitted until 2026-02-16 17:00:27.69751236 +0000 UTC m=+132.036565864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : secret "serving-cert" not found Feb 16 17:00:27.197607 master-0 kubenswrapper[4155]: I0216 17:00:27.197579 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.197986 master-0 kubenswrapper[4155]: I0216 17:00:27.197619 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwztm\" (UniqueName: \"kubernetes.io/projected/decdca06-6892-49d3-8ed4-10e29d7c5de8-kube-api-access-lwztm\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.197986 master-0 kubenswrapper[4155]: I0216 17:00:27.197703 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.197986 master-0 kubenswrapper[4155]: E0216 17:00:27.197802 4155 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 16 17:00:27.197986 master-0 kubenswrapper[4155]: E0216 17:00:27.197837 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:27.697828659 +0000 UTC m=+132.036882163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : configmap "config" not found Feb 16 17:00:27.197986 master-0 kubenswrapper[4155]: E0216 17:00:27.197877 4155 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 16 17:00:27.197986 master-0 kubenswrapper[4155]: E0216 17:00:27.197900 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:27.69789354 +0000 UTC m=+132.036947044 (durationBeforeRetry 500ms). 
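The controller-manager pod above references configmaps ("client-ca", "config", and "openshift-global-ca" behind the "proxy-ca-bundles" volume) that have not been published yet, so every mount attempt fails and re-queues on the 500ms backoff. A hedged sketch of waiting for one of them to appear, assuming an in-cluster client:

```go
// Poll until the configmap a pending mount references is published.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, up to 5 minutes, for the configmap the mount needs.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ConfigMaps("openshift-controller-manager").Get(ctx, "client-ca", metav1.GetOptions{})
			if err != nil {
				return false, nil // not there yet; keep polling
			}
			return true, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("client-ca published; kubelet's next mount retry should succeed")
}
```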
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : configmap "openshift-global-ca" not found Feb 16 17:00:27.281074 master-0 kubenswrapper[4155]: I0216 17:00:27.281025 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwztm\" (UniqueName: \"kubernetes.io/projected/decdca06-6892-49d3-8ed4-10e29d7c5de8-kube-api-access-lwztm\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.288826 master-0 kubenswrapper[4155]: I0216 17:00:27.287152 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"] Feb 16 17:00:27.288826 master-0 kubenswrapper[4155]: I0216 17:00:27.287864 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:00:27.290224 master-0 kubenswrapper[4155]: I0216 17:00:27.290204 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 17:00:27.290821 master-0 kubenswrapper[4155]: I0216 17:00:27.290781 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 17:00:27.303083 master-0 kubenswrapper[4155]: I0216 17:00:27.303031 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"] Feb 16 17:00:27.400039 master-0 kubenswrapper[4155]: I0216 17:00:27.399983 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:00:27.501361 master-0 kubenswrapper[4155]: I0216 17:00:27.501304 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:00:27.523006 master-0 kubenswrapper[4155]: I0216 17:00:27.521017 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:00:27.610227 master-0 kubenswrapper[4155]: I0216 17:00:27.610077 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:00:27.703031 master-0 kubenswrapper[4155]: I0216 17:00:27.702981 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.703229 master-0 kubenswrapper[4155]: E0216 17:00:27.703177 4155 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Feb 16 17:00:27.703366 master-0 kubenswrapper[4155]: E0216 17:00:27.703338 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.703304352 +0000 UTC m=+133.042357896 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : configmap "openshift-global-ca" not found Feb 16 17:00:27.703438 master-0 kubenswrapper[4155]: I0216 17:00:27.703404 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.703539 master-0 kubenswrapper[4155]: I0216 17:00:27.703507 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.703614 master-0 kubenswrapper[4155]: I0216 17:00:27.703576 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:27.703614 master-0 kubenswrapper[4155]: E0216 17:00:27.703583 4155 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:27.703697 master-0 kubenswrapper[4155]: E0216 17:00:27.703664 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.703646951 +0000 UTC m=+133.042700465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : secret "serving-cert" not found Feb 16 17:00:27.703745 master-0 kubenswrapper[4155]: E0216 17:00:27.703711 4155 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:27.703745 master-0 kubenswrapper[4155]: E0216 17:00:27.703737 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.703728573 +0000 UTC m=+133.042782087 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : configmap "client-ca" not found Feb 16 17:00:27.703822 master-0 kubenswrapper[4155]: E0216 17:00:27.703772 4155 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Feb 16 17:00:27.703860 master-0 kubenswrapper[4155]: E0216 17:00:27.703831 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.703812715 +0000 UTC m=+133.042866269 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : configmap "config" not found Feb 16 17:00:27.846581 master-0 kubenswrapper[4155]: I0216 17:00:27.846527 4155 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-5zd2r"] Feb 16 17:00:27.846781 master-0 kubenswrapper[4155]: E0216 17:00:27.846748 4155 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" podUID="decdca06-6892-49d3-8ed4-10e29d7c5de8" Feb 16 17:00:27.867779 master-0 kubenswrapper[4155]: I0216 17:00:27.864850 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4"] Feb 16 17:00:27.867779 master-0 kubenswrapper[4155]: I0216 17:00:27.865340 4155 util.go:30] "No sandbox for pod can be found. 
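The "SyncLoop DELETE" followed by "Error syncing pod, skipping ... context canceled" above shows the pod object being deleted while its volume setup was still pending: the pod worker's context is cancelled and the in-flight mount operations abort, leaving the unmounted volumes listed in the error. A minimal model of that cancellation, with local stand-ins rather than kubelet code:

```go
// Model of in-flight mount work being aborted by context cancellation.
package main

import (
	"context"
	"fmt"
	"time"
)

// mountVolumes pretends to mount each volume, honouring cancellation the
// way the kubelet's operation executor does.
func mountVolumes(ctx context.Context, volumes []string) error {
	for _, v := range volumes {
		select {
		case <-ctx.Done():
			return fmt.Errorf("unmounted volumes=%v: %w", volumes, ctx.Err())
		case <-time.After(50 * time.Millisecond): // stand-in for a mount attempt
			_ = v
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { // pod deleted while volumes are still pending
		time.Sleep(75 * time.Millisecond)
		cancel()
	}()
	err := mountVolumes(ctx, []string{"client-ca", "config", "proxy-ca-bundles", "serving-cert"})
	fmt.Println("Error syncing pod, skipping:", err)
}
```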
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:27.868321 master-0 kubenswrapper[4155]: I0216 17:00:27.868271 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:00:27.869138 master-0 kubenswrapper[4155]: I0216 17:00:27.868596 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:00:27.869138 master-0 kubenswrapper[4155]: I0216 17:00:27.868768 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:00:27.869138 master-0 kubenswrapper[4155]: I0216 17:00:27.868803 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:00:27.869415 master-0 kubenswrapper[4155]: I0216 17:00:27.869376 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:00:27.882854 master-0 kubenswrapper[4155]: I0216 17:00:27.882771 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4"] Feb 16 17:00:28.008760 master-0 kubenswrapper[4155]: I0216 17:00:28.008717 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcsw6\" (UniqueName: \"kubernetes.io/projected/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-kube-api-access-qcsw6\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.009215 master-0 kubenswrapper[4155]: I0216 17:00:28.008802 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.009215 master-0 kubenswrapper[4155]: I0216 17:00:28.008934 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-config\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.009215 master-0 kubenswrapper[4155]: I0216 17:00:28.009066 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.112079 master-0 kubenswrapper[4155]: I0216 17:00:28.110308 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcsw6\" (UniqueName: \"kubernetes.io/projected/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-kube-api-access-qcsw6\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: 
\"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.112079 master-0 kubenswrapper[4155]: I0216 17:00:28.111102 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.112079 master-0 kubenswrapper[4155]: E0216 17:00:28.111260 4155 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:28.112079 master-0 kubenswrapper[4155]: E0216 17:00:28.111393 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.611378389 +0000 UTC m=+132.950431893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : secret "serving-cert" not found Feb 16 17:00:28.112079 master-0 kubenswrapper[4155]: I0216 17:00:28.111526 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-config\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.112079 master-0 kubenswrapper[4155]: I0216 17:00:28.111624 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.112079 master-0 kubenswrapper[4155]: E0216 17:00:28.111708 4155 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:28.112079 master-0 kubenswrapper[4155]: E0216 17:00:28.111741 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:28.611733729 +0000 UTC m=+132.950787233 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:00:28.112889 master-0 kubenswrapper[4155]: I0216 17:00:28.112827 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-config\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.133475 master-0 kubenswrapper[4155]: I0216 17:00:28.133366 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcsw6\" (UniqueName: \"kubernetes.io/projected/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-kube-api-access-qcsw6\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.254696 master-0 kubenswrapper[4155]: I0216 17:00:28.254630 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-cp9rb"] Feb 16 17:00:28.255158 master-0 kubenswrapper[4155]: I0216 17:00:28.255130 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.256894 master-0 kubenswrapper[4155]: I0216 17:00:28.256806 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 17:00:28.257156 master-0 kubenswrapper[4155]: I0216 17:00:28.256959 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 17:00:28.257244 master-0 kubenswrapper[4155]: I0216 17:00:28.257178 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 17:00:28.257244 master-0 kubenswrapper[4155]: I0216 17:00:28.257212 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 17:00:28.270606 master-0 kubenswrapper[4155]: I0216 17:00:28.270554 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-cp9rb"] Feb 16 17:00:28.285406 master-0 kubenswrapper[4155]: I0216 17:00:28.285354 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:28.376087 master-0 kubenswrapper[4155]: I0216 17:00:28.376043 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:28.382126 master-0 kubenswrapper[4155]: I0216 17:00:28.382086 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:28.415674 master-0 kubenswrapper[4155]: I0216 17:00:28.415576 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.415752 master-0 kubenswrapper[4155]: I0216 17:00:28.415676 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.415752 master-0 kubenswrapper[4155]: I0216 17:00:28.415709 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.517101 master-0 kubenswrapper[4155]: I0216 17:00:28.517035 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwztm\" (UniqueName: \"kubernetes.io/projected/decdca06-6892-49d3-8ed4-10e29d7c5de8-kube-api-access-lwztm\") pod \"decdca06-6892-49d3-8ed4-10e29d7c5de8\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " Feb 16 17:00:28.517331 master-0 kubenswrapper[4155]: I0216 17:00:28.517278 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.517525 master-0 kubenswrapper[4155]: I0216 17:00:28.517500 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.517719 master-0 kubenswrapper[4155]: I0216 17:00:28.517540 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.518521 master-0 kubenswrapper[4155]: I0216 17:00:28.518497 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.521117 master-0 kubenswrapper[4155]: I0216 17:00:28.521076 4155 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/decdca06-6892-49d3-8ed4-10e29d7c5de8-kube-api-access-lwztm" (OuterVolumeSpecName: "kube-api-access-lwztm") pod "decdca06-6892-49d3-8ed4-10e29d7c5de8" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8"). InnerVolumeSpecName "kube-api-access-lwztm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:28.521647 master-0 kubenswrapper[4155]: I0216 17:00:28.521609 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.534733 master-0 kubenswrapper[4155]: I0216 17:00:28.534665 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.581110 master-0 kubenswrapper[4155]: I0216 17:00:28.581037 4155 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:28.618373 master-0 kubenswrapper[4155]: I0216 17:00:28.618318 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.618373 master-0 kubenswrapper[4155]: I0216 17:00:28.618363 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:28.618615 master-0 kubenswrapper[4155]: E0216 17:00:28.618504 4155 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:28.618615 master-0 kubenswrapper[4155]: I0216 17:00:28.618564 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:28.618681 master-0 kubenswrapper[4155]: I0216 17:00:28.618644 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:28.618681 master-0 kubenswrapper[4155]: E0216 17:00:28.618651 4155 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:28.618681 master-0 kubenswrapper[4155]: I0216 
17:00:28.618675 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:28.618762 master-0 kubenswrapper[4155]: E0216 17:00:28.618726 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:29.618706913 +0000 UTC m=+133.957760417 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : secret "serving-cert" not found Feb 16 17:00:28.618762 master-0 kubenswrapper[4155]: E0216 17:00:28.618745 4155 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:28.618826 master-0 kubenswrapper[4155]: I0216 17:00:28.618796 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.618807 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:29.618791596 +0000 UTC m=+133.957845090 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.618971 4155 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.618863 4155 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.618896 4155 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.618996 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:00.618988791 +0000 UTC m=+164.958042295 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: I0216 17:00:28.619118 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: I0216 17:00:28.619169 4155 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwztm\" (UniqueName: \"kubernetes.io/projected/decdca06-6892-49d3-8ed4-10e29d7c5de8-kube-api-access-lwztm\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.619193 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:00.619184166 +0000 UTC m=+164.958237760 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.619208 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:00.619199657 +0000 UTC m=+164.958253261 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.619223 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:00.619215407 +0000 UTC m=+164.958269001 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.619233 4155 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 17:00:28.619328 master-0 kubenswrapper[4155]: E0216 17:00:28.619293 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:01:00.619277949 +0000 UTC m=+164.958331463 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 17:00:28.630524 master-0 kubenswrapper[4155]: I0216 17:00:28.630473 4155 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:28.720416 master-0 kubenswrapper[4155]: I0216 17:00:28.720167 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:28.720416 master-0 kubenswrapper[4155]: I0216 17:00:28.720324 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: E0216 17:00:28.720428 4155 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: E0216 17:00:28.720506 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:01:00.720482929 +0000 UTC m=+165.059536443 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: I0216 17:00:28.720628 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: I0216 17:00:28.720670 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: I0216 17:00:28.720721 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: I0216 17:00:28.720791 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: I0216 17:00:28.721734 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: E0216 17:00:28.721769 4155 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: E0216 17:00:28.721835 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:01:00.721812985 +0000 UTC m=+165.060866529 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: E0216 17:00:28.721842 4155 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: E0216 17:00:28.721881 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: E0216 17:00:28.721894 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:30.721878417 +0000 UTC m=+135.060931981 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : secret "serving-cert" not found Feb 16 17:00:28.722266 master-0 kubenswrapper[4155]: E0216 17:00:28.721915 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:00.721903187 +0000 UTC m=+165.060956701 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:28.723133 master-0 kubenswrapper[4155]: E0216 17:00:28.722362 4155 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 17:00:28.723133 master-0 kubenswrapper[4155]: E0216 17:00:28.722409 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:01:00.722394511 +0000 UTC m=+165.061448055 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "node-tuning-operator-tls" not found Feb 16 17:00:28.723133 master-0 kubenswrapper[4155]: I0216 17:00:28.722589 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:28.723133 master-0 kubenswrapper[4155]: I0216 17:00:28.722652 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:28.723133 master-0 kubenswrapper[4155]: E0216 17:00:28.722703 4155 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:28.723133 master-0 kubenswrapper[4155]: E0216 17:00:28.722762 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca podName:decdca06-6892-49d3-8ed4-10e29d7c5de8 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:30.72274725 +0000 UTC m=+135.061800774 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca") pod "controller-manager-dc99ff586-5zd2r" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8") : configmap "client-ca" not found Feb 16 17:00:28.723477 master-0 kubenswrapper[4155]: I0216 17:00:28.723450 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config\") pod \"controller-manager-dc99ff586-5zd2r\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:28.823459 master-0 kubenswrapper[4155]: I0216 17:00:28.823401 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config\") pod \"decdca06-6892-49d3-8ed4-10e29d7c5de8\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " Feb 16 17:00:28.823619 master-0 kubenswrapper[4155]: I0216 17:00:28.823487 4155 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles\") pod \"decdca06-6892-49d3-8ed4-10e29d7c5de8\" (UID: \"decdca06-6892-49d3-8ed4-10e29d7c5de8\") " Feb 16 17:00:28.823904 master-0 kubenswrapper[4155]: I0216 17:00:28.823871 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config" (OuterVolumeSpecName: "config") pod "decdca06-6892-49d3-8ed4-10e29d7c5de8" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:28.824092 master-0 kubenswrapper[4155]: I0216 17:00:28.824047 4155 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "decdca06-6892-49d3-8ed4-10e29d7c5de8" (UID: "decdca06-6892-49d3-8ed4-10e29d7c5de8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:28.925245 master-0 kubenswrapper[4155]: I0216 17:00:28.925191 4155 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:28.925245 master-0 kubenswrapper[4155]: I0216 17:00:28.925228 4155 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:29.382661 master-0 kubenswrapper[4155]: I0216 17:00:29.382327 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" event={"ID":"970d4376-f299-412c-a8ee-90aa980c689e","Type":"ContainerStarted","Data":"f0cff197ff851b70b3d6e59d84a65158829bfc95e73a2af61cd24901eaa4cfe8"} Feb 16 17:00:29.385744 master-0 kubenswrapper[4155]: I0216 17:00:29.384220 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"a0dc239cad7cf5c0f46eaeb5867ad213f7711a1950bb1f960b003e867bacaff0"} Feb 16 17:00:29.385744 master-0 kubenswrapper[4155]: I0216 17:00:29.385441 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-dc99ff586-5zd2r" Feb 16 17:00:29.385744 master-0 kubenswrapper[4155]: I0216 17:00:29.385713 4155 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"435ef5863cc155441a05593945ab2775001a00c9f99d0e797a375813404c36ac"} Feb 16 17:00:29.401433 master-0 kubenswrapper[4155]: I0216 17:00:29.401366 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podStartSLOduration=93.709689231 podStartE2EDuration="1m38.401346714s" podCreationTimestamp="2026-02-16 16:58:51 +0000 UTC" firstStartedPulling="2026-02-16 17:00:24.462403059 +0000 UTC m=+128.801456573" lastFinishedPulling="2026-02-16 17:00:29.154060552 +0000 UTC m=+133.493114056" observedRunningTime="2026-02-16 17:00:29.400803329 +0000 UTC m=+133.739856833" watchObservedRunningTime="2026-02-16 17:00:29.401346714 +0000 UTC m=+133.740400218" Feb 16 17:00:29.414638 master-0 kubenswrapper[4155]: I0216 17:00:29.414595 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-676cd8b9b5-cp9rb"] Feb 16 17:00:29.462751 master-0 kubenswrapper[4155]: I0216 17:00:29.462680 4155 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podStartSLOduration=93.9784566 podStartE2EDuration="1m38.462659566s" podCreationTimestamp="2026-02-16 16:58:51 +0000 UTC" firstStartedPulling="2026-02-16 17:00:24.670119673 +0000 UTC m=+129.009173177" lastFinishedPulling="2026-02-16 17:00:29.154322599 +0000 UTC m=+133.493376143" observedRunningTime="2026-02-16 17:00:29.443772811 +0000 UTC m=+133.782826315" watchObservedRunningTime="2026-02-16 17:00:29.462659566 +0000 UTC m=+133.801713070" Feb 16 17:00:29.480813 master-0 kubenswrapper[4155]: I0216 17:00:29.480194 4155 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-5zd2r"] Feb 16 17:00:29.487403 master-0 kubenswrapper[4155]: I0216 17:00:29.487364 4155 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-dc99ff586-5zd2r"] Feb 16 17:00:29.498902 master-0 kubenswrapper[4155]: I0216 17:00:29.498851 4155 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-869cbbd595-47pjz"] Feb 16 17:00:29.500656 master-0 kubenswrapper[4155]: I0216 17:00:29.500625 4155 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.502913 master-0 kubenswrapper[4155]: I0216 17:00:29.502867 4155 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:00:29.502913 master-0 kubenswrapper[4155]: I0216 17:00:29.502900 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:00:29.503177 master-0 kubenswrapper[4155]: I0216 17:00:29.503149 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:00:29.505996 master-0 kubenswrapper[4155]: I0216 17:00:29.505895 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:00:29.506114 master-0 kubenswrapper[4155]: I0216 17:00:29.506015 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:00:29.516036 master-0 kubenswrapper[4155]: I0216 17:00:29.515971 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-869cbbd595-47pjz"] Feb 16 17:00:29.521321 master-0 kubenswrapper[4155]: I0216 17:00:29.521026 4155 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:00:29.532590 master-0 kubenswrapper[4155]: I0216 17:00:29.531247 4155 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"] Feb 16 17:00:29.559438 master-0 kubenswrapper[4155]: W0216 17:00:29.559397 4155 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62fc29f4_557f_4a75_8b78_6ca425c81b81.slice/crio-7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf WatchSource:0}: Error finding container 7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf: Status 404 returned error can't find the container with id 7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf Feb 16 17:00:29.635837 master-0 kubenswrapper[4155]: I0216 17:00:29.635696 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-proxy-ca-bundles\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.635837 master-0 kubenswrapper[4155]: I0216 17:00:29.635777 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.636088 master-0 kubenswrapper[4155]: I0216 17:00:29.635966 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" 
Feb 16 17:00:29.636088 master-0 kubenswrapper[4155]: I0216 17:00:29.636035 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-config\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.636088 master-0 kubenswrapper[4155]: I0216 17:00:29.636066 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47bnn\" (UniqueName: \"kubernetes.io/projected/01921947-c416-44b6-953d-75b935ad8977-kube-api-access-47bnn\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.636182 master-0 kubenswrapper[4155]: I0216 17:00:29.636165 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:29.636379 master-0 kubenswrapper[4155]: I0216 17:00:29.636233 4155 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.636379 master-0 kubenswrapper[4155]: I0216 17:00:29.636298 4155 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/decdca06-6892-49d3-8ed4-10e29d7c5de8-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:29.636379 master-0 kubenswrapper[4155]: I0216 17:00:29.636309 4155 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/decdca06-6892-49d3-8ed4-10e29d7c5de8-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:29.636483 master-0 kubenswrapper[4155]: E0216 17:00:29.636382 4155 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:29.636483 master-0 kubenswrapper[4155]: E0216 17:00:29.636424 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:31.636408714 +0000 UTC m=+135.975462218 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:00:29.636483 master-0 kubenswrapper[4155]: E0216 17:00:29.636434 4155 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:29.636575 master-0 kubenswrapper[4155]: E0216 17:00:29.636506 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:31.636480415 +0000 UTC m=+135.975533989 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : secret "serving-cert" not found Feb 16 17:00:29.737766 master-0 kubenswrapper[4155]: I0216 17:00:29.737717 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-proxy-ca-bundles\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.737943 master-0 kubenswrapper[4155]: I0216 17:00:29.737774 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.737943 master-0 kubenswrapper[4155]: I0216 17:00:29.737837 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-config\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.737943 master-0 kubenswrapper[4155]: I0216 17:00:29.737862 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47bnn\" (UniqueName: \"kubernetes.io/projected/01921947-c416-44b6-953d-75b935ad8977-kube-api-access-47bnn\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.738072 master-0 kubenswrapper[4155]: I0216 17:00:29.737999 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.738166 master-0 kubenswrapper[4155]: E0216 17:00:29.738124 4155 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:29.738214 master-0 kubenswrapper[4155]: E0216 17:00:29.738184 4155 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:30.238165768 +0000 UTC m=+134.577219272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : secret "serving-cert" not found Feb 16 17:00:29.739497 master-0 kubenswrapper[4155]: I0216 17:00:29.739455 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-proxy-ca-bundles\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.739719 master-0 kubenswrapper[4155]: E0216 17:00:29.739695 4155 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:29.739781 master-0 kubenswrapper[4155]: E0216 17:00:29.739735 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:30.239723901 +0000 UTC m=+134.578777405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : configmap "client-ca" not found Feb 16 17:00:29.739975 master-0 kubenswrapper[4155]: I0216 17:00:29.739908 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-config\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:29.762688 master-0 kubenswrapper[4155]: I0216 17:00:29.762633 4155 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47bnn\" (UniqueName: \"kubernetes.io/projected/01921947-c416-44b6-953d-75b935ad8977-kube-api-access-47bnn\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:30.243551 master-0 kubenswrapper[4155]: I0216 17:00:30.243268 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:30.243741 master-0 kubenswrapper[4155]: E0216 17:00:30.243534 4155 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:30.243741 master-0 kubenswrapper[4155]: I0216 17:00:30.243576 4155 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:30.243741 master-0 kubenswrapper[4155]: E0216 17:00:30.243663 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:31.243628231 +0000 UTC m=+135.582681775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : secret "serving-cert" not found Feb 16 17:00:30.243741 master-0 kubenswrapper[4155]: E0216 17:00:30.243691 4155 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:30.243741 master-0 kubenswrapper[4155]: E0216 17:00:30.243746 4155 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:31.243728104 +0000 UTC m=+135.582781668 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : configmap "client-ca" not found Feb 16 17:00:30.378833 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 17:00:30.410529 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 17:00:30.410794 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 17:00:30.412185 master-0 systemd[1]: kubelet.service: Consumed 10.551s CPU time. Feb 16 17:00:30.429757 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 17:00:30.594192 master-0 kubenswrapper[10003]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:00:30.594192 master-0 kubenswrapper[10003]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 17:00:30.594192 master-0 kubenswrapper[10003]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:00:30.594192 master-0 kubenswrapper[10003]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:00:30.594192 master-0 kubenswrapper[10003]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 16 17:00:30.594192 master-0 kubenswrapper[10003]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:00:30.594978 master-0 kubenswrapper[10003]: I0216 17:00:30.594223 10003 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 17:00:30.597211 master-0 kubenswrapper[10003]: W0216 17:00:30.597174 10003 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:00:30.597211 master-0 kubenswrapper[10003]: W0216 17:00:30.597194 10003 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:00:30.597211 master-0 kubenswrapper[10003]: W0216 17:00:30.597200 10003 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:00:30.597211 master-0 kubenswrapper[10003]: W0216 17:00:30.597206 10003 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:00:30.597211 master-0 kubenswrapper[10003]: W0216 17:00:30.597213 10003 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:00:30.597211 master-0 kubenswrapper[10003]: W0216 17:00:30.597219 10003 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:00:30.597211 master-0 kubenswrapper[10003]: W0216 17:00:30.597224 10003 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:00:30.597211 master-0 kubenswrapper[10003]: W0216 17:00:30.597230 10003 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597236 10003 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597242 10003 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597247 10003 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597259 10003 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597265 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597272 10003 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597278 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597285 10003 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597294 10003 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597304 10003 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597312 10003 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597320 10003 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597327 10003 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597333 10003 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597342 10003 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597349 10003 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597357 10003 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597366 10003 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:00:30.597798 master-0 kubenswrapper[10003]: W0216 17:00:30.597374 10003 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597383 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597389 10003 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597396 10003 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597403 10003 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597409 10003 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597415 10003 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597422 10003 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597428 10003 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597435 10003 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597443 10003 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597450 10003 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597456 10003 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597463 10003 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597470 10003 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597476 10003 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597482 10003 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597487 10003 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597493 10003 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597499 10003 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:00:30.599351 master-0 kubenswrapper[10003]: W0216 17:00:30.597504 10003 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597510 10003 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597518 10003 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597529 10003 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597542 10003 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597549 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597560 10003 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
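
Annotation: the feature_gate.go:351 and :353 lines are a different, lifecycle-driven warning. Those gates are recognized, but KMSv1 is deprecated and CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, and ValidatingAdmissionPolicy have gone GA, so an explicit override still applies while warning that the knob will disappear once the gate is removed. A sketch of that stage check (the stage constants are illustrative; the messages are copied from the log):

    package main

    import "log"

    type stage string

    const (
        alpha      stage = "ALPHA"
        beta       stage = "BETA"
        ga         stage = "GA"
        deprecated stage = "DEPRECATED"
    )

    var stages = map[string]stage{ // illustrative subset
        "KMSv1":                 deprecated,
        "CloudDualStackNodeIPs": ga,
        "NodeSwap":              beta,
    }

    // set accepts the override but warns when the gate is past Beta,
    // mirroring feature_gate.go:351 (deprecated) and :353 (GA).
    func set(name string, value bool) {
        switch stages[name] {
        case deprecated:
            log.Printf("W Setting deprecated feature gate %s=%v. It will be removed in a future release.", name, value)
        case ga:
            log.Printf("W Setting GA feature gate %s=%v. It will be removed in a future release.", name, value)
        }
    }

    func main() {
        set("KMSv1", true)
        set("CloudDualStackNodeIPs", true)
    }
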
Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597568 10003 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597575 10003 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597583 10003 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597589 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597596 10003 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597602 10003 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597608 10003 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597613 10003 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597619 10003 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597624 10003 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597630 10003 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597635 10003 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597640 10003 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:00:30.600492 master-0 kubenswrapper[10003]: W0216 17:00:30.597646 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: W0216 17:00:30.597651 10003 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: W0216 17:00:30.597657 10003 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: W0216 17:00:30.597663 10003 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: W0216 17:00:30.597669 10003 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: W0216 17:00:30.597675 10003 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597818 10003 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597830 10003 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597841 10003 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597848 10003 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597856 10003 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 
17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597862 10003 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597870 10003 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597878 10003 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597884 10003 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597891 10003 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597905 10003 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597911 10003 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597917 10003 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597953 10003 flags.go:64] FLAG: --cgroup-root="" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597959 10003 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597965 10003 flags.go:64] FLAG: --client-ca-file="" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597971 10003 flags.go:64] FLAG: --cloud-config="" Feb 16 17:00:30.601583 master-0 kubenswrapper[10003]: I0216 17:00:30.597977 10003 flags.go:64] FLAG: --cloud-provider="" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.597983 10003 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.597990 10003 flags.go:64] FLAG: --cluster-domain="" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.597996 10003 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598002 10003 flags.go:64] FLAG: --config-dir="" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598008 10003 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598014 10003 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598022 10003 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598028 10003 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598034 10003 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598042 10003 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598048 10003 flags.go:64] FLAG: --contention-profiling="false" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598055 10003 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598062 10003 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: 
I0216 17:00:30.598069 10003 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598076 10003 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598083 10003 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598090 10003 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598096 10003 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598102 10003 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598108 10003 flags.go:64] FLAG: --enable-server="true" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598114 10003 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598122 10003 flags.go:64] FLAG: --event-burst="100" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598128 10003 flags.go:64] FLAG: --event-qps="50" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598134 10003 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 17:00:30.602945 master-0 kubenswrapper[10003]: I0216 17:00:30.598143 10003 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598149 10003 flags.go:64] FLAG: --eviction-hard="" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598156 10003 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598162 10003 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598168 10003 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598174 10003 flags.go:64] FLAG: --eviction-soft="" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598180 10003 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598186 10003 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598192 10003 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598199 10003 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598205 10003 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598211 10003 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598217 10003 flags.go:64] FLAG: --feature-gates="" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598224 10003 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598230 10003 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598237 10003 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 17:00:30.604430 master-0 
kubenswrapper[10003]: I0216 17:00:30.598243 10003 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598249 10003 flags.go:64] FLAG: --healthz-port="10248" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598256 10003 flags.go:64] FLAG: --help="false" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598262 10003 flags.go:64] FLAG: --hostname-override="" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598268 10003 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598275 10003 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598281 10003 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598288 10003 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598294 10003 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 17:00:30.604430 master-0 kubenswrapper[10003]: I0216 17:00:30.598301 10003 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598307 10003 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598313 10003 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598319 10003 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598325 10003 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598332 10003 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598338 10003 flags.go:64] FLAG: --kube-reserved="" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598346 10003 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598352 10003 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598359 10003 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598365 10003 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598372 10003 flags.go:64] FLAG: --lock-file="" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598377 10003 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598383 10003 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598389 10003 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598398 10003 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598407 10003 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598413 10003 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 17:00:30.606243 master-0 
kubenswrapper[10003]: I0216 17:00:30.598419 10003 flags.go:64] FLAG: --logging-format="text" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598425 10003 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598431 10003 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598437 10003 flags.go:64] FLAG: --manifest-url="" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598443 10003 flags.go:64] FLAG: --manifest-url-header="" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598455 10003 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598461 10003 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 17:00:30.606243 master-0 kubenswrapper[10003]: I0216 17:00:30.598468 10003 flags.go:64] FLAG: --max-pods="110" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598474 10003 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598480 10003 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598486 10003 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598492 10003 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598498 10003 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598504 10003 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598512 10003 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598525 10003 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598530 10003 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598536 10003 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598542 10003 flags.go:64] FLAG: --pod-cidr="" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598549 10003 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598557 10003 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598562 10003 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598570 10003 flags.go:64] FLAG: --pods-per-core="0" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598577 10003 flags.go:64] FLAG: --port="10250" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598583 10003 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598589 10003 flags.go:64] FLAG: --provider-id="" Feb 16 
17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598595 10003 flags.go:64] FLAG: --qos-reserved="" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598601 10003 flags.go:64] FLAG: --read-only-port="10255" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598608 10003 flags.go:64] FLAG: --register-node="true" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598615 10003 flags.go:64] FLAG: --register-schedulable="true" Feb 16 17:00:30.607749 master-0 kubenswrapper[10003]: I0216 17:00:30.598625 10003 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598637 10003 flags.go:64] FLAG: --registry-burst="10" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598645 10003 flags.go:64] FLAG: --registry-qps="5" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598652 10003 flags.go:64] FLAG: --reserved-cpus="" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598660 10003 flags.go:64] FLAG: --reserved-memory="" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598669 10003 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598677 10003 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598685 10003 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598693 10003 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598700 10003 flags.go:64] FLAG: --runonce="false" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598708 10003 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598716 10003 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598723 10003 flags.go:64] FLAG: --seccomp-default="false" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598731 10003 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598738 10003 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598745 10003 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598751 10003 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598757 10003 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598763 10003 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598769 10003 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598776 10003 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598783 10003 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598789 10003 flags.go:64] FLAG: --sync-frequency="1m0s" 
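
Annotation: the flags.go:64] FLAG: inventory, which continues through --volume-stats-agg-period just below, is emitted at startup at this verbosity (--v="2") and echoes the effective value of every registered flag, defaults included; only a handful here deviate from defaults (--config, --container-runtime-endpoint, --node-ip, --node-labels, --register-with-taints, --system-reserved, among others). The kubelet walks its pflag set to produce this; the same echo with the standard library looks like:

    package main

    import (
        "flag"
        "fmt"
    )

    func main() {
        // Illustrative flags; the kubelet registers far more of these.
        flag.String("config", "/etc/kubernetes/kubelet.conf", "path to the kubelet config file")
        flag.Int("max-pods", 110, "maximum number of pods")
        flag.Parse()

        // Walk every registered flag and echo its effective value,
        // matching the `FLAG: --name="value"` shape in the log.
        flag.VisitAll(func(f *flag.Flag) {
            fmt.Printf("FLAG: --%s=%q\n", f.Name, f.Value.String())
        })
    }
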
Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598795 10003 flags.go:64] FLAG: --system-cgroups="" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598803 10003 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 16 17:00:30.609000 master-0 kubenswrapper[10003]: I0216 17:00:30.598814 10003 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598820 10003 flags.go:64] FLAG: --tls-cert-file="" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598826 10003 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598833 10003 flags.go:64] FLAG: --tls-min-version="" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598839 10003 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598845 10003 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598851 10003 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598859 10003 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598865 10003 flags.go:64] FLAG: --v="2" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598873 10003 flags.go:64] FLAG: --version="false" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598881 10003 flags.go:64] FLAG: --vmodule="" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598888 10003 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: I0216 17:00:30.598894 10003 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599045 10003 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599053 10003 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599059 10003 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599065 10003 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599071 10003 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599076 10003 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599082 10003 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599088 10003 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599093 10003 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599098 10003 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:00:30.610025 master-0 kubenswrapper[10003]: W0216 17:00:30.599104 10003 
feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599110 10003 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599115 10003 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599121 10003 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599126 10003 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599131 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599136 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599142 10003 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599150 10003 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599155 10003 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599160 10003 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599165 10003 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599170 10003 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599176 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599181 10003 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599186 10003 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599193 10003 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599198 10003 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599204 10003 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599209 10003 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:00:30.610776 master-0 kubenswrapper[10003]: W0216 17:00:30.599214 10003 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599219 10003 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599226 10003 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599233 10003 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599240 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599245 10003 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599251 10003 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599256 10003 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599261 10003 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599267 10003 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599272 10003 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599277 10003 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599282 10003 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599287 10003 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599294 10003 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
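
Annotation: every deprecated-flag notice at the top of this restart points at the same remedy — move the value into the file named by --config, here /etc/kubernetes/kubelet.conf, which holds a KubeletConfiguration. A sketch of consuming such a file; the field names follow the KubeletConfiguration schema, but the struct is a hand-rolled subset, and the real file is YAML (JSON is used here only to keep the sketch stdlib-only):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // kubeletConfig is a hand-rolled subset of KubeletConfiguration,
    // just enough to show where the deprecated flag values move to.
    type kubeletConfig struct {
        FeatureGates       map[string]bool   `json:"featureGates"`
        SystemReserved     map[string]string `json:"systemReserved"`
        RegisterWithTaints []struct {
            Key    string `json:"key"`
            Effect string `json:"effect"`
        } `json:"registerWithTaints"`
    }

    func main() {
        // Stand-in for the contents of /etc/kubernetes/kubelet.conf.
        raw := []byte(`{
          "featureGates": {"KMSv1": true},
          "systemReserved": {"cpu": "500m", "memory": "1Gi", "ephemeral-storage": "1Gi"},
          "registerWithTaints": [{"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}]
        }`)

        var cfg kubeletConfig
        if err := json.Unmarshal(raw, &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("systemReserved=%v taints=%v\n", cfg.SystemReserved, cfg.RegisterWithTaints)
    }
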
Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599300 10003 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599305 10003 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599311 10003 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599316 10003 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599321 10003 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:00:30.611759 master-0 kubenswrapper[10003]: W0216 17:00:30.599328 10003 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599333 10003 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599338 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599343 10003 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599350 10003 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599356 10003 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599362 10003 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599367 10003 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599374 10003 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599379 10003 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599385 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599390 10003 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599395 10003 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599402 10003 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599409 10003 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599417 10003 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599423 10003 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599428 10003 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599435 10003 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:00:30.612606 master-0 kubenswrapper[10003]: W0216 17:00:30.599440 10003 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.599446 10003 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.599451 10003 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: I0216 17:00:30.599460 10003 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: I0216 17:00:30.609768 10003 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: I0216 17:00:30.609823 10003 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610021 10003 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610036 10003 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610048 10003 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
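
Annotation: each pass over the gate inputs ends in an I-level feature_gate.go:386 summary like the one above. After the unrecognized names are dropped, what remains is the upstream gate set with defaults plus the explicit overrides (KMSv1=true, ValidatingAdmissionPolicy=true, ...), printed key-sorted; the identical map then reappears after the later passes because the same inputs are re-applied per configuration source. Reproducing the summary shape from a plain map:

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // render reproduces the "feature gates: {map[k:v ...]}" summary shape;
    // keys are sorted explicitly to match fmt's sorted map printing.
    func render(gates map[string]bool) string {
        keys := make([]string, 0, len(gates))
        for k := range gates {
            keys = append(keys, k)
        }
        sort.Strings(keys)
        parts := make([]string, len(keys))
        for i, k := range keys {
            parts[i] = fmt.Sprintf("%s:%v", k, gates[k])
        }
        return "feature gates: {map[" + strings.Join(parts, " ") + "]}"
    }

    func main() {
        fmt.Println(render(map[string]bool{"KMSv1": true, "NodeSwap": false}))
    }
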
Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610062 10003 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610074 10003 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610083 10003 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610091 10003 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610099 10003 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610107 10003 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:00:30.613346 master-0 kubenswrapper[10003]: W0216 17:00:30.610115 10003 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610123 10003 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610132 10003 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610141 10003 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610149 10003 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610156 10003 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610165 10003 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610174 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610184 10003 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610198 10003 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610217 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610229 10003 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610239 10003 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610250 10003 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610263 10003 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610278 10003 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610295 10003 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610305 10003 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610313 10003 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:00:30.613828 master-0 kubenswrapper[10003]: W0216 17:00:30.610323 10003 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610333 10003 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610342 10003 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610353 10003 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610366 10003 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610376 10003 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610385 10003 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610393 10003 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610402 10003 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610410 10003 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610417 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610425 10003 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610433 10003 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610441 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610449 10003 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610458 10003 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610466 10003 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610474 10003 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610482 10003 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:00:30.614426 master-0 kubenswrapper[10003]: W0216 17:00:30.610490 10003 feature_gate.go:330] unrecognized 
feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610498 10003 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610508 10003 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610517 10003 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610525 10003 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610533 10003 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610541 10003 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610549 10003 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610558 10003 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610567 10003 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610576 10003 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610584 10003 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610592 10003 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610600 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610608 10003 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610616 10003 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610626 10003 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610634 10003 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610642 10003 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610650 10003 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:00:30.615061 master-0 kubenswrapper[10003]: W0216 17:00:30.610658 10003 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.610665 10003 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.610673 10003 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.610681 10003 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 
17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.610688 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: I0216 17:00:30.610703 10003 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611071 10003 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611097 10003 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611108 10003 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611127 10003 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611149 10003 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611165 10003 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611176 10003 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611186 10003 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611197 10003 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:00:30.615680 master-0 kubenswrapper[10003]: W0216 17:00:30.611208 10003 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611218 10003 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611228 10003 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611238 10003 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611250 10003 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611260 10003 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611271 10003 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611281 10003 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611291 10003 feature_gate.go:330] unrecognized feature gate: 
PinnedImages Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611299 10003 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611307 10003 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611315 10003 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611323 10003 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611332 10003 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611343 10003 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611354 10003 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611364 10003 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611373 10003 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611381 10003 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:00:30.616281 master-0 kubenswrapper[10003]: W0216 17:00:30.611390 10003 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611397 10003 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611406 10003 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611413 10003 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611421 10003 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611428 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611436 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611444 10003 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611452 10003 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611459 10003 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611470 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611480 10003 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611500 10003 feature_gate.go:330] unrecognized 
feature gate: VSphereMultiNetworks Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611515 10003 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611525 10003 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611535 10003 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611546 10003 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611557 10003 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611568 10003 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611578 10003 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:00:30.617141 master-0 kubenswrapper[10003]: W0216 17:00:30.611589 10003 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611603 10003 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611614 10003 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611623 10003 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611631 10003 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611639 10003 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611647 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611655 10003 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611666 10003 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611675 10003 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611686 10003 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611695 10003 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611703 10003 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611711 10003 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611719 10003 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611728 10003 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611736 10003 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611744 10003 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611751 10003 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:00:30.618167 master-0 kubenswrapper[10003]: W0216 17:00:30.611761 10003 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: W0216 17:00:30.611769 10003 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: W0216 17:00:30.611777 10003 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: W0216 17:00:30.611784 10003 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: W0216 17:00:30.611792 10003 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: I0216 17:00:30.611805 10003 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: I0216 17:00:30.612163 10003 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: I0216 17:00:30.615121 10003 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: I0216 17:00:30.615257 10003 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: I0216 17:00:30.615651 10003 server.go:997] "Starting client certificate rotation"
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: I0216 17:00:30.615668 10003 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: I0216 17:00:30.615900 10003 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 12:37:50.246134113 +0000 UTC
Feb 16 17:00:30.618969 master-0 kubenswrapper[10003]: I0216 17:00:30.616071 10003 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h37m19.630110167s for next certificate rotation
Feb 16 17:00:30.619458 master-0 kubenswrapper[10003]: I0216 17:00:30.616829 10003 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 17:00:30.619458 master-0 kubenswrapper[10003]: I0216 17:00:30.619252 10003 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 17:00:30.623224 master-0 kubenswrapper[10003]: I0216 17:00:30.623184 10003 log.go:25] "Validated CRI v1 runtime API"
Feb 16 17:00:30.626152 master-0 kubenswrapper[10003]: I0216 17:00:30.626105 10003 log.go:25] "Validated CRI v1 image API"
Feb 16 17:00:30.627154 master-0 kubenswrapper[10003]: I0216 17:00:30.627124 10003 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 17:00:30.634854 master-0 kubenswrapper[10003]: I0216 17:00:30.634777 10003 fs.go:135] Filesystem UUIDs: map[35a0b0cc-84b1-4374-a18a-0f49ad7a8333:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 16 17:00:30.635817 master-0 kubenswrapper[10003]: I0216 17:00:30.634837 10003 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc/userdata/shm major:0 minor:298 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5/userdata/shm major:0 minor:294 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81/userdata/shm major:0 minor:319 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810/userdata/shm major:0 minor:108 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4c0337d0eb1672f3cea8f23ec0619b33320b69137b0d2e9bc66e94fe15bbe412/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4c0337d0eb1672f3cea8f23ec0619b33320b69137b0d2e9bc66e94fe15bbe412/userdata/shm major:0 minor:338 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/74ced4b4e3fdce2aecbb38a4d03ec1a93853cd8aa3de1fd3350c1e935e0a300f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/74ced4b4e3fdce2aecbb38a4d03ec1a93853cd8aa3de1fd3350c1e935e0a300f/userdata/shm major:0 minor:297 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75a3e91157092df61ab323caf67fd50fd02c9c52e83eb981207e27d0552a17af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75a3e91157092df61ab323caf67fd50fd02c9c52e83eb981207e27d0552a17af/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf/userdata/shm major:0 minor:397 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/84fbcf4f8c4afda2e79a3be2b73e332fc9f2d8ce27d17523a1712d76d3ce752e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84fbcf4f8c4afda2e79a3be2b73e332fc9f2d8ce27d17523a1712d76d3ce752e/userdata/shm major:0 minor:324 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d/userdata/shm major:0 minor:337 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb/userdata/shm major:0 minor:54 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a51ce5dfcf0ae0215dfb9bb56d30c910b8c1e31cb77a303efaded16db5c0b84f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a51ce5dfcf0ae0215dfb9bb56d30c910b8c1e31cb77a303efaded16db5c0b84f/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/af61cb28f0ada5d7c4d2b6d4eb5d095894f589e163adb675a754c13d082c9ab9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/af61cb28f0ada5d7c4d2b6d4eb5d095894f589e163adb675a754c13d082c9ab9/userdata/shm major:0 minor:398 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bacf9b29c15cf47bbdc9ebe2fcba6bca3cfdec92ee8864175a5412cbbe3c9659/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bacf9b29c15cf47bbdc9ebe2fcba6bca3cfdec92ee8864175a5412cbbe3c9659/userdata/shm major:0 minor:340 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519/userdata/shm major:0 minor:147 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dadda19ba6587a75b418addc51b36c2e0a7c53f63977a60f6f393649c7b6d587/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dadda19ba6587a75b418addc51b36c2e0a7c53f63977a60f6f393649c7b6d587/userdata/shm major:0 minor:231 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c/userdata/shm major:0 minor:331 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/01921947-c416-44b6-953d-75b935ad8977/volumes/kubernetes.io~projected/kube-api-access-47bnn:{mountpoint:/var/lib/kubelet/pods/01921947-c416-44b6-953d-75b935ad8977/volumes/kubernetes.io~projected/kube-api-access-47bnn major:0 minor:427 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18e9a9d3-9b18-4c19-9558-f33c68101922/volumes/kubernetes.io~projected/kube-api-access-6bbcf:{mountpoint:/var/lib/kubelet/pods/18e9a9d3-9b18-4c19-9558-f33c68101922/volumes/kubernetes.io~projected/kube-api-access-6bbcf major:0 minor:191 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~projected/kube-api-access-r9bv7:{mountpoint:/var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~projected/kube-api-access-r9bv7 major:0 minor:195 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~secret/serving-cert major:0 minor:159 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x:{mountpoint:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~projected/kube-api-access major:0 minor:201 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~secret/serving-cert major:0 minor:185 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt major:0 minor:68 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~projected/kube-api-access-nqfds:{mountpoint:/var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~projected/kube-api-access-nqfds major:0 minor:395 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~secret/signing-key major:0 minor:365 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~projected/kube-api-access-2dxw9:{mountpoint:/var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~projected/kube-api-access-2dxw9 major:0 minor:205 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x:{mountpoint:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x major:0 minor:196 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/568b22df-b454-4d74-bc21-6c84daf17c8c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/568b22df-b454-4d74-bc21-6c84daf17c8c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:77 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd major:0 minor:194 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62fc29f4-557f-4a75-8b78-6ca425c81b81/volumes/kubernetes.io~projected/kube-api-access-bs597:{mountpoint:/var/lib/kubelet/pods/62fc29f4-557f-4a75-8b78-6ca425c81b81/volumes/kubernetes.io~projected/kube-api-access-bs597 major:0 minor:373 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~projected/kube-api-access-rjd5j:{mountpoint:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~projected/kube-api-access-rjd5j major:0 minor:188 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/etcd-client major:0 minor:184 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/serving-cert major:0 minor:160 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~projected/kube-api-access-5dpp2:{mountpoint:/var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~projected/kube-api-access-5dpp2 major:0 minor:190 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~secret/serving-cert major:0 minor:176 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74b2561b-933b-4c58-a63a-7a8c671d0ae9/volumes/kubernetes.io~projected/kube-api-access-kx9vc:{mountpoint:/var/lib/kubelet/pods/74b2561b-933b-4c58-a63a-7a8c671d0ae9/volumes/kubernetes.io~projected/kube-api-access-kx9vc major:0 
minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~projected/kube-api-access-xvwzr:{mountpoint:/var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~projected/kube-api-access-xvwzr major:0 minor:203 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~secret/serving-cert major:0 minor:154 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:198 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/kube-api-access-t24jh:{mountpoint:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/kube-api-access-t24jh major:0 minor:204 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/970d4376-f299-412c-a8ee-90aa980c689e/volumes/kubernetes.io~projected/kube-api-access-hqstc:{mountpoint:/var/lib/kubelet/pods/970d4376-f299-412c-a8ee-90aa980c689e/volumes/kubernetes.io~projected/kube-api-access-hqstc major:0 minor:189 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~projected/kube-api-access-f42cr:{mountpoint:/var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~projected/kube-api-access-f42cr major:0 minor:200 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~secret/serving-cert major:0 minor:151 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 major:0 minor:212 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:187 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm:{mountpoint:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes/kubernetes.io~projected/kube-api-access-xmk2b:{mountpoint:/var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes/kubernetes.io~projected/kube-api-access-xmk2b major:0 minor:202 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl major:0 minor:146 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:142 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5:{mountpoint:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg:{mountpoint:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg major:0 minor:211 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c303189e-adae-4fe2-8dd7-cc9b80f73e66/volumes/kubernetes.io~projected/kube-api-access-v2s8l:{mountpoint:/var/lib/kubelet/pods/c303189e-adae-4fe2-8dd7-cc9b80f73e66/volumes/kubernetes.io~projected/kube-api-access-v2s8l major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb/volumes/kubernetes.io~projected/kube-api-access-qcsw6:{mountpoint:/var/lib/kubelet/pods/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb/volumes/kubernetes.io~projected/kube-api-access-qcsw6 major:0 minor:374 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~projected/kube-api-access major:0 minor:207 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~secret/serving-cert major:0 minor:177 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9859457-f0d1-4754-a6c5-cf05d5abf447/volumes/kubernetes.io~projected/kube-api-access-t4gl5:{mountpoint:/var/lib/kubelet/pods/d9859457-f0d1-4754-a6c5-cf05d5abf447/volumes/kubernetes.io~projected/kube-api-access-t4gl5 major:0 minor:199 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67:{mountpoint:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 major:0 minor:208 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~projected/kube-api-access-xr8t6:{mountpoint:/var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~projected/kube-api-access-xr8t6 major:0 minor:192 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~secret/serving-cert major:0 minor:186 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~projected/kube-api-access major:0 minor:197 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~secret/serving-cert major:0 minor:155 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~projected/kube-api-access-dptnc:{mountpoint:/var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~projected/kube-api-access-dptnc major:0 minor:193 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~secret/serving-cert major:0 minor:180 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/299d5db622db39f92d58928871ced643987ca227c08e8caffed59ed33bee6dc4/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/ce81bbecbcc863a95ed0d10f27172967bc7bbe367409b4ecdf486fe896eb3d02/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/240fbd3480795448881e3185229bb5ea2f53a6772328ecb7637f0edd311280b0/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/a5d08108c9c597dc05436c7894a3410e90d4ed23214da40f1960d5a943f5b451/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/3b83e5b8c015be60f8807049c4a40c9efb971a4240b73f8d970c3262dea3fae9/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/36df33378cc0b59e98add0ea326634525c0feac09aa99b30d154f5aa997adb28/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/7759418e25dbe997a51aab5e46e101eabdafb1836c9e3752c1131d01ecca82c4/merged major:0 minor:123 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/29a2374e228420896b7f30eb355ef8c1cb5f374dd23a1a355e03c40a33e522b0/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/d733d2aa9e7b209a944a430010e0f22f2d487aa5a3561f6410522a2de89b2558/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/9c5ec6bd02e00979482116988d1b306ffae5e0ebc398bb3e78d6abc6e28f5d62/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/43fa2383e928d5696d56446d213c47bf85788cf4aba45a066c9e4009dfb4527f/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/2b4f28a6ed575a266c31138e7629277c9490bb458d8a976a4cb6850e54a55a41/merged major:0 minor:149 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/1d255e08b7c9baf9634ab59c50f063480d1a9fc4593debd7871eca18732b4738/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/c2ff895be99f92ccf878588023dd7c7541852816b81b5b175cb6b462033a1875/merged major:0 
minor:162 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/428278196b465f9736e75e77fa99c57e448b3415cbe04ee18157ed1473b6c610/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/27e8d9a9aa9a0b69e28767a9145087cae598a6ade3fa40026578198fe4e9b0de/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/44204d75a1494103ada9c6cb43832da1b72d3d4a24fc7fe71a116ed9fd5e0053/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/2d38b5784933ede16604ba6338d79bc805d0f4127afefae2688c1e78f8ed0342/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/adb7c441e93f719fa2a6e861f64c376bd9f1ea6f9f7c320efd7002b516c99bf0/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/ec4e929eaf7048ed063da2d34bbf9678ff3a29a20ee79486633d8b6ea7ea3606/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-213:{mountpoint:/var/lib/containers/storage/overlay/4441d044f3dca641184c1599aa07d2e375fd56e92208ca91a5a1766dc658a7f4/merged major:0 minor:213 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/84eb34ca24454ba6ab143b219301a344fa41bfc3e6933bbd3079d41939ce966f/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/d342ae8422eeb409e769043cd29a26a61c7f48664e1c32bbaf7cffed2a130d0e/merged major:0 minor:220 fsType:overlay blockSize:0} overlay_0-229:{mountpoint:/var/lib/containers/storage/overlay/db861271223daad05feed50aeb8d265d40680f9091c856c41485e8ce6c97e87f/merged major:0 minor:229 fsType:overlay blockSize:0} overlay_0-233:{mountpoint:/var/lib/containers/storage/overlay/ba662e2cc19a324502f74b7a07aa7e03c56d5d2ccef0323325f5c3668d7404fd/merged major:0 minor:233 fsType:overlay blockSize:0} overlay_0-235:{mountpoint:/var/lib/containers/storage/overlay/66c225a6916a9c8a947b2aa323b73fd82a26be7b38370dbee342855b2c1d8194/merged major:0 minor:235 fsType:overlay blockSize:0} overlay_0-240:{mountpoint:/var/lib/containers/storage/overlay/8f63b48314bb4bda0d7f27b15ddd2a76df73759a10dc4c8fc1f54be447f5e426/merged major:0 minor:240 fsType:overlay blockSize:0} overlay_0-245:{mountpoint:/var/lib/containers/storage/overlay/86bd6f1f647ad44ab51cf62d8fca17302a2594b45f63dd7c8c27345540dac403/merged major:0 minor:245 fsType:overlay blockSize:0} overlay_0-250:{mountpoint:/var/lib/containers/storage/overlay/abac6e0d1bb99825429d7282b13a5c2988af0913ef7cc2696c5df950e2a942ee/merged major:0 minor:250 fsType:overlay blockSize:0} overlay_0-255:{mountpoint:/var/lib/containers/storage/overlay/2b39a72e49d1da69ed2a99ff4ab8e9da393360ce8b42c0cfb7cdf17d15d58b3f/merged major:0 minor:255 fsType:overlay blockSize:0} overlay_0-260:{mountpoint:/var/lib/containers/storage/overlay/da70de85edeeaad62089327e90d51985a066903d1f5f280b6b643699410f7b79/merged major:0 minor:260 fsType:overlay blockSize:0} overlay_0-265:{mountpoint:/var/lib/containers/storage/overlay/a69049b3edf8a87c1686e831899d662ac7adebed0c52e8f63153f58fdbaa91c5/merged major:0 minor:265 fsType:overlay blockSize:0} overlay_0-270:{mountpoint:/var/lib/containers/storage/overlay/76ee37cb963ce0292e358b39316bcedd6b0b6521f691a5a3d9a795d4790f0be9/merged major:0 minor:270 fsType:overlay blockSize:0} 
overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/2cff0aacf676f37753cb56cc7eab4c57b131264f250182900ae88309ac58cde5/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/18c1f9eee7d37fb48d410e39e01faf9ccaa53efd6e3a5bcecbe4471eb380f910/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/8f37690ca7fa757339ce9d364bf14b458b1d51805557c8df990d3458e664987d/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/d0c40a2f0a4af382dda361085134ee98d5c558550c83b449c8e53246c46a972e/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/4b07bcf55655096c5047253d96dca3371494f580f5bfe25ee8718a2cb503bc31/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/9e6baea84ceeb7d0eef954d517afeb3ea512303ae5f2246ae292acf57939fa15/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/475b19fbf2528b81709e07cbf639a3aa6e7853c3f20346b52f552cdacfa25596/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/818c90e2e877c77e2d88df68aeb94819307b02584060ffcefd8dee6cb5c3915a/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/cd07d52935600dd6452e5a409db54885260e02b66813f07c412ed0bd6c2b7ac3/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/08b1469eb210c641a532196fdff472a586b1af1ecfb1a684a1a78d18ab89ff2e/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/f8b6378b399c05f470eda4fea51aa2abff2521cfec002fa2227a007c45c030bf/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/f0081dbfc14ac87641c84830d07932cba6bf4abed98fc9fe001ad21b51bf3632/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-333:{mountpoint:/var/lib/containers/storage/overlay/fe082c95fb3fb318eefa2ed56ccb274bd7771102a16fd1ef05713a7929916850/merged major:0 minor:333 fsType:overlay blockSize:0} overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/f015eff68f9fb578c4818f6ea865e6aee92861f7e20d01e9851bbbf127ac0563/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-342:{mountpoint:/var/lib/containers/storage/overlay/e37ed7d2de76aa90e2bab8f6d10b5e63df8efc10d72c646ae1529d79feedc254/merged major:0 minor:342 fsType:overlay blockSize:0} overlay_0-345:{mountpoint:/var/lib/containers/storage/overlay/0bd5d8db5ae3eaeee975924d021183d5322f01775ed60f23315d67888a4de726/merged major:0 minor:345 fsType:overlay blockSize:0} overlay_0-347:{mountpoint:/var/lib/containers/storage/overlay/e031b605975c923d2bf4be3a485ee514c9965b833d397a8259ff3a2613fae1bb/merged major:0 minor:347 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/8d2cc8c7bdef474e075127e354e0878538e0a26cb073cdd65c66469f07554577/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/cfe49923389ab9c3faffef71f263229db7d12022c66f5421b495136cc6f969c8/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/b09d2449c1a3c895c7305063a3df5a44eb57a1395a63a5d953c4a9523d95abf2/merged 
major:0 minor:353 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/e58c3d22c53e613aec3788da22de84a019ba082b9e5b0389c582dee14d1685c7/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/4db3f440cdac4dba8aa04880a9835907bb9be84c7eb0129ec73ee6bc7aef298f/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/54f43f0cf2134131d13f60486cf880c2e811c34f551c4511f9ff9908670b45d2/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-361:{mountpoint:/var/lib/containers/storage/overlay/febd1e0bf295b46ba2601f60b3f3b13119f5b8a64dfcce613a3c5aa9af3dd5d2/merged major:0 minor:361 fsType:overlay blockSize:0} overlay_0-369:{mountpoint:/var/lib/containers/storage/overlay/ea9dc292d37beae68b768200d6faec79dde73a80d15dba89badce35e25709288/merged major:0 minor:369 fsType:overlay blockSize:0} overlay_0-401:{mountpoint:/var/lib/containers/storage/overlay/ea7a00ebfeea9c74ae7d9a2b98a9762753c6599ed3893c8951d1c62c49c7a221/merged major:0 minor:401 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/c889b7393bb92eeb007d2e09ba0d72c9faaebc32e4f2ebd5bcadc02be491bf93/merged major:0 minor:403 fsType:overlay blockSize:0} overlay_0-405:{mountpoint:/var/lib/containers/storage/overlay/d5c7bb2bcba729348f2d91e6a2102d80a73ef6f26b723ad973477810e3fcb151/merged major:0 minor:405 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/6ad93cda0a4c36908d10ed13c5fd65365766c0bc33e5f483e6aab4ee1068cebd/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/309eb2cd2a0615b0e652f523b3dce331dbe9f1e8725743adfe0fc3649a5f0fb1/merged major:0 minor:409 fsType:overlay blockSize:0} overlay_0-419:{mountpoint:/var/lib/containers/storage/overlay/5586634a8cc493bb3eec2e9b60cdc01b1be4a015872718ab25ab5e60671eebb4/merged major:0 minor:419 fsType:overlay blockSize:0} overlay_0-421:{mountpoint:/var/lib/containers/storage/overlay/a07091bb63a7a51a48a43f3e7d2c2f1fea2c9004cf47abd4c40e2f494bdc0e0e/merged major:0 minor:421 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/e036e1579af4bf1f3b75c25ff983ccbe4e7d2e2e2e0dee660e06a3999b42a2c4/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/1a704c7ce5b7e7c23d2dcf60a9a4aa3a78233c063ce33db0f2bd4f81a8f4d6a7/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/b6579114c0b8bd65796c72e3b7a472be1e21e3856425a83313186fa9a77ab9ae/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/5892f8fd8563a7556060a1d1dc18b5a5050d99d83120cfb93e6048162f37e9b7/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/e09d1db4a47874e29f4f0b5e5bb565034881f034c7b3cf3a45c76e13eb16515f/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/a0a9765bac0fa4bee18dec3955765a829fa19c3fe212136ef13d36b9d31e5956/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/ea602b2ce4f19a4134a3e28d7d12ee6feb55d2e6fa7f7d8493aea527c49d2b64/merged major:0 minor:64 fsType:overlay blockSize:0} 
overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/56734c3dd736245d685600d2f315dcaf0c64a5aac28d64e2f6b176280d3b76e6/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/ed41644de5b186c89b9e2d0af21f9d40df1133d901a406a04c0c53044cabab26/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/7fbdb50c7e6683ea976ed49e8d06713ab3dec49be60b888c74dfee03dcbedc03/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/ae3f6996321420d811aac85edcd1dcbb17543fd35884df4fac988b2b72986313/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/e614a45cca8f764badc3b965c7fcabc141d5f1ad4d52797fd9d3013def7bd6ec/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/d5a8854703fe26d5c961318fa3ec1e1aa70e4e244fe6d1af7a5187013aed1c73/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/35d012802d7fbd7b4f0bd596b648020e4e37d52b6358e55a34fef7140ca50d96/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/34c4ff1d03a2c67584c836ebfec3ee99f0e1fc13a41354f3a97d87c9a1ef3e39/merged major:0 minor:97 fsType:overlay blockSize:0}]
Feb 16 17:00:30.679301 master-0 kubenswrapper[10003]: I0216 17:00:30.678410 10003 manager.go:217] Machine: {Timestamp:2026-02-16 17:00:30.676829954 +0000 UTC m=+0.192315665 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f BootID:4b8043a6-19a9-42c4-a3dd-d330b8dbba91 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-347 DeviceMajor:0 DeviceMinor:347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d9859457-f0d1-4754-a6c5-cf05d5abf447/volumes/kubernetes.io~projected/kube-api-access-t4gl5 DeviceMajor:0 DeviceMinor:199 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg DeviceMajor:0 DeviceMinor:211 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-250 DeviceMajor:0 DeviceMinor:250 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}
{Device:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 DeviceMajor:0 DeviceMinor:135 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-369 DeviceMajor:0 DeviceMinor:369 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf/userdata/shm DeviceMajor:0 DeviceMinor:397 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:161 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x DeviceMajor:0 DeviceMinor:196 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-213 DeviceMajor:0 DeviceMinor:213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:142 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-229 DeviceMajor:0 DeviceMinor:229 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-421 DeviceMajor:0 DeviceMinor:421 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~projected/kube-api-access-2dxw9 DeviceMajor:0 DeviceMinor:205 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-270 DeviceMajor:0 DeviceMinor:270 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/74ced4b4e3fdce2aecbb38a4d03ec1a93853cd8aa3de1fd3350c1e935e0a300f/userdata/shm DeviceMajor:0 DeviceMinor:297 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~projected/kube-api-access-nqfds DeviceMajor:0 DeviceMinor:395 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~projected/kube-api-access-rjd5j DeviceMajor:0 DeviceMinor:188 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes/kubernetes.io~projected/kube-api-access-xmk2b DeviceMajor:0 DeviceMinor:202 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:365 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:167 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/kube-api-access-t24jh DeviceMajor:0 DeviceMinor:204 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:207 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810/userdata/shm DeviceMajor:0 DeviceMinor:108 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519/userdata/shm DeviceMajor:0 DeviceMinor:147 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-240 DeviceMajor:0 DeviceMinor:240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/62fc29f4-557f-4a75-8b78-6ca425c81b81/volumes/kubernetes.io~projected/kube-api-access-bs597 DeviceMajor:0 DeviceMinor:373 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:201 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:209 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/01921947-c416-44b6-953d-75b935ad8977/volumes/kubernetes.io~projected/kube-api-access-47bnn DeviceMajor:0 DeviceMinor:427 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81/userdata/shm DeviceMajor:0 DeviceMinor:319 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x DeviceMajor:0 DeviceMinor:107 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:151 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:185 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~projected/kube-api-access-r9bv7 DeviceMajor:0 DeviceMinor:195 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-255 DeviceMajor:0 DeviceMinor:255 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:154 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:186 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt DeviceMajor:0 DeviceMinor:68 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd DeviceMajor:0 DeviceMinor:194 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:197 
Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:155 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc/userdata/shm DeviceMajor:0 DeviceMinor:298 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:176 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/970d4376-f299-412c-a8ee-90aa980c689e/volumes/kubernetes.io~projected/kube-api-access-hqstc DeviceMajor:0 DeviceMinor:189 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d/userdata/shm DeviceMajor:0 DeviceMinor:337 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-361 DeviceMajor:0 DeviceMinor:361 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb/volumes/kubernetes.io~projected/kube-api-access-qcsw6 DeviceMajor:0 DeviceMinor:374 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~projected/kube-api-access-f42cr DeviceMajor:0 DeviceMinor:200 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-342 DeviceMajor:0 DeviceMinor:342 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-405 DeviceMajor:0 DeviceMinor:405 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl DeviceMajor:0 DeviceMinor:146 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~projected/kube-api-access-5dpp2 DeviceMajor:0 DeviceMinor:190 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84fbcf4f8c4afda2e79a3be2b73e332fc9f2d8ce27d17523a1712d76d3ce752e/userdata/shm 
DeviceMajor:0 DeviceMinor:324 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bacf9b29c15cf47bbdc9ebe2fcba6bca3cfdec92ee8864175a5412cbbe3c9659/userdata/shm DeviceMajor:0 DeviceMinor:340 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/18e9a9d3-9b18-4c19-9558-f33c68101922/volumes/kubernetes.io~projected/kube-api-access-6bbcf DeviceMajor:0 DeviceMinor:191 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:184 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~projected/kube-api-access-dptnc DeviceMajor:0 DeviceMinor:193 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~projected/kube-api-access-xvwzr DeviceMajor:0 DeviceMinor:203 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-419 DeviceMajor:0 DeviceMinor:419 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-260 DeviceMajor:0 DeviceMinor:260 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c/userdata/shm DeviceMajor:0 DeviceMinor:331 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/568b22df-b454-4d74-bc21-6c84daf17c8c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:77 
Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 DeviceMajor:0 DeviceMinor:208 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 DeviceMajor:0 DeviceMinor:212 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-245 DeviceMajor:0 DeviceMinor:245 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4c0337d0eb1672f3cea8f23ec0619b33320b69137b0d2e9bc66e94fe15bbe412/userdata/shm DeviceMajor:0 DeviceMinor:338 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl DeviceMajor:0 DeviceMinor:166 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:160 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a51ce5dfcf0ae0215dfb9bb56d30c910b8c1e31cb77a303efaded16db5c0b84f/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75a3e91157092df61ab323caf67fd50fd02c9c52e83eb981207e27d0552a17af/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:180 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:187 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:198 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-233 DeviceMajor:0 DeviceMinor:233 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-265 DeviceMajor:0 DeviceMinor:265 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5/userdata/shm DeviceMajor:0 DeviceMinor:294 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dadda19ba6587a75b418addc51b36c2e0a7c53f63977a60f6f393649c7b6d587/userdata/shm DeviceMajor:0 DeviceMinor:231 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-235 DeviceMajor:0 DeviceMinor:235 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:159 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-401 DeviceMajor:0 DeviceMinor:401 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm DeviceMajor:0 DeviceMinor:127 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:177 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c303189e-adae-4fe2-8dd7-cc9b80f73e66/volumes/kubernetes.io~projected/kube-api-access-v2s8l DeviceMajor:0 DeviceMinor:243 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-333 DeviceMajor:0 DeviceMinor:333 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-345 DeviceMajor:0 DeviceMinor:345 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~projected/kube-api-access-xr8t6 DeviceMajor:0 DeviceMinor:192 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/74b2561b-933b-4c58-a63a-7a8c671d0ae9/volumes/kubernetes.io~projected/kube-api-access-kx9vc DeviceMajor:0 DeviceMinor:206 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/af61cb28f0ada5d7c4d2b6d4eb5d095894f589e163adb675a754c13d082c9ab9/userdata/shm DeviceMajor:0 DeviceMinor:398 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0cff847538436e1 MacAddress:be:68:a2:80:ca:17 Speed:10000 Mtu:8900} {Name:11ed7f8e3ea465f MacAddress:7a:82:99:f6:2e:7a Speed:10000 Mtu:8900} {Name:1a6fc168713ed89 MacAddress:7a:0e:4d:4c:4c:5b Speed:10000 Mtu:8900} {Name:27e2fd204d60ad6 MacAddress:da:3b:cb:b9:7e:bc Speed:10000 Mtu:8900} {Name:4c0337d0eb1672f MacAddress:1a:a7:5e:16:83:86 Speed:10000 Mtu:8900} {Name:74ced4b4e3fdce2 MacAddress:d6:ae:d8:0c:3a:1b Speed:10000 Mtu:8900} {Name:7f3624c603b0a3a MacAddress:a2:cb:54:f2:75:55 Speed:10000 Mtu:8900} {Name:84fbcf4f8c4afda MacAddress:26:a6:2c:f0:0b:ba Speed:10000 Mtu:8900} {Name:95380b516961f94 MacAddress:c6:89:0b:dd:6a:15 Speed:10000 Mtu:8900} {Name:99e6140d34fdb87 MacAddress:56:e1:ea:75:6d:e9 Speed:10000 Mtu:8900} {Name:af61cb28f0ada5d MacAddress:fe:2e:e5:e2:b6:0f Speed:10000 Mtu:8900} {Name:bacf9b29c15cf47 MacAddress:12:82:fc:2d:93:84 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:52:47:03:db:66:8a Speed:0 Mtu:8900} {Name:c5fa73884bbf6d8 MacAddress:82:62:d8:02:bd:8b Speed:10000 Mtu:8900} {Name:de6b829dcdd7c76 MacAddress:16:b2:d8:e5:f1:d2 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:b9:e2 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:4a:2e:ce Speed:-1 Mtu:9000} {Name:f4ce4120d8890f7 MacAddress:fe:10:91:7a:8d:25 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:6a:29:7e:a8:c6:78 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 
Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 17:00:30.679301 master-0 kubenswrapper[10003]: I0216 17:00:30.679262 10003 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
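The multi-line record ending at "InstanceID:None}" above is cAdvisor's MachineInfo dump: per-filesystem capacity and inode counts, the vda-vde virtio disk map, per-interface MAC/MTU data, and the CPU cache topology. For orientation, a minimal Go sketch (not cAdvisor's actual code; the mount paths are illustrative) of where Capacity/Inodes figures like these come from on Linux, namely statfs(2):

```go
// Sketch: derive per-filesystem Capacity/Inodes figures like the ones in the
// MachineInfo dump above. Assumes Linux and the golang.org/x/sys/unix module;
// the mount points are illustrative, not taken from this node.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	for _, mount := range []string{"/", "/run", "/var/lib/kubelet"} {
		var st unix.Statfs_t
		if err := unix.Statfs(mount, &st); err != nil {
			fmt.Printf("%s: statfs failed: %v\n", mount, err)
			continue
		}
		// Capacity is blocks * block size, i.e. bytes, which is why the dump
		// shows values like 214143315968 rather than df-style 1K blocks.
		capacity := st.Blocks * uint64(st.Bsize)
		fmt.Printf("{Device:%s Capacity:%d Inodes:%d HasInodes:%v}\n",
			mount, capacity, st.Files, st.Files > 0)
	}
}
```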
Feb 16 17:00:30.679957 master-0 kubenswrapper[10003]: I0216 17:00:30.679473 10003 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 17:00:30.679957 master-0 kubenswrapper[10003]: I0216 17:00:30.679832 10003 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 17:00:30.680093 master-0 kubenswrapper[10003]: I0216 17:00:30.680039 10003 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 17:00:30.680317 master-0 kubenswrapper[10003]: I0216 17:00:30.680081 10003 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 17:00:30.680393 master-0 kubenswrapper[10003]: I0216 17:00:30.680328 10003 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 17:00:30.680393 master-0 kubenswrapper[10003]: I0216 17:00:30.680341 10003 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 17:00:30.680393 master-0 kubenswrapper[10003]: I0216 17:00:30.680352 10003 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:00:30.680393 master-0 kubenswrapper[10003]: I0216 17:00:30.680378 10003 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:00:30.680563 master-0 kubenswrapper[10003]: I0216 17:00:30.680524 10003 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:00:30.680952 master-0 kubenswrapper[10003]: I0216 17:00:30.680901 10003 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 17:00:30.681034 master-0 kubenswrapper[10003]: I0216 17:00:30.681003 10003 kubelet.go:418] "Attempting to sync node 
with API server" Feb 16 17:00:30.681034 master-0 kubenswrapper[10003]: I0216 17:00:30.681026 10003 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 17:00:30.681143 master-0 kubenswrapper[10003]: I0216 17:00:30.681058 10003 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 17:00:30.681143 master-0 kubenswrapper[10003]: I0216 17:00:30.681073 10003 kubelet.go:324] "Adding apiserver pod source" Feb 16 17:00:30.681143 master-0 kubenswrapper[10003]: I0216 17:00:30.681095 10003 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 17:00:30.682583 master-0 kubenswrapper[10003]: I0216 17:00:30.682506 10003 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 17:00:30.683108 master-0 kubenswrapper[10003]: I0216 17:00:30.683066 10003 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 17:00:30.683739 master-0 kubenswrapper[10003]: I0216 17:00:30.683684 10003 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 17:00:30.684024 master-0 kubenswrapper[10003]: I0216 17:00:30.683991 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684032 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684046 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684061 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684074 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684089 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684107 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684120 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684135 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684148 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 17:00:30.684182 master-0 kubenswrapper[10003]: I0216 17:00:30.684187 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 17:00:30.684636 master-0 kubenswrapper[10003]: I0216 17:00:30.684212 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 17:00:30.684636 master-0 kubenswrapper[10003]: I0216 17:00:30.684283 10003 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 17:00:30.684918 master-0 kubenswrapper[10003]: I0216 17:00:30.684876 10003 server.go:1280] "Started kubelet" Feb 16 17:00:30.685460 master-0 kubenswrapper[10003]: I0216 17:00:30.685363 10003 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 17:00:30.685546 
master-0 kubenswrapper[10003]: I0216 17:00:30.685473 10003 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 16 17:00:30.687040 master-0 kubenswrapper[10003]: I0216 17:00:30.686937 10003 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 17:00:30.687637 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 16 17:00:30.693020 master-0 kubenswrapper[10003]: I0216 17:00:30.692957 10003 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 17:00:30.693551 master-0 kubenswrapper[10003]: I0216 17:00:30.693232 10003 server.go:449] "Adding debug handlers to kubelet server" Feb 16 17:00:30.693732 master-0 kubenswrapper[10003]: I0216 17:00:30.693677 10003 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:00:30.694168 master-0 kubenswrapper[10003]: I0216 17:00:30.694130 10003 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:00:30.695565 master-0 kubenswrapper[10003]: I0216 17:00:30.695521 10003 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 17:00:30.695628 master-0 kubenswrapper[10003]: I0216 17:00:30.695570 10003 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 17:00:30.696562 master-0 kubenswrapper[10003]: I0216 17:00:30.696477 10003 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 12:36:58.274063417 +0000 UTC Feb 16 17:00:30.696562 master-0 kubenswrapper[10003]: I0216 17:00:30.696554 10003 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h36m27.577514085s for next certificate rotation Feb 16 17:00:30.697392 master-0 kubenswrapper[10003]: I0216 17:00:30.697311 10003 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 17:00:30.697392 master-0 kubenswrapper[10003]: I0216 17:00:30.697338 10003 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 17:00:30.697538 master-0 kubenswrapper[10003]: I0216 17:00:30.697507 10003 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.699423 10003 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708171 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c303189e-adae-4fe2-8dd7-cc9b80f73e66" volumeName="kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708229 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708247 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708262 10003 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708273 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708285 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708297 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708309 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708323 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708353 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708365 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708377 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708432 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708446 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config" seLinuxMountContext="" Feb 16 17:00:30.714138 
master-0 kubenswrapper[10003]: I0216 17:00:30.708457 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708471 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708483 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708494 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708507 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708519 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708531 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="970d4376-f299-412c-a8ee-90aa980c689e" volumeName="kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708544 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708555 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc29f4-557f-4a75-8b78-6ca425c81b81" volumeName="kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708568 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708580 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708594 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708615 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708628 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708640 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="568b22df-b454-4d74-bc21-6c84daf17c8c" volumeName="kubernetes.io/projected/568b22df-b454-4d74-bc21-6c84daf17c8c-kube-api-access" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708653 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708667 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708678 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708692 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708705 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708718 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 
kubenswrapper[10003]: I0216 17:00:30.708718 10003 factory.go:153] Registering CRI-O factory Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708749 10003 factory.go:221] Registration of the crio container factory successfully Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708730 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="568b22df-b454-4d74-bc21-6c84daf17c8c" volumeName="kubernetes.io/configmap/568b22df-b454-4d74-bc21-6c84daf17c8c-service-ca" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708857 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708871 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708904 10003 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708916 10003 factory.go:55] Registering systemd factory Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708970 10003 factory.go:221] Registration of the systemd container factory successfully Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.708995 10003 factory.go:103] Registering Raw factory Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709011 10003 manager.go:1196] Started watching for new ooms in manager Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709171 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709194 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709207 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709231 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709257 10003 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="cff91b5b-3cbb-489a-94e7-9f279ae6cbbb" volumeName="kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709273 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709290 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01921947-c416-44b6-953d-75b935ad8977" volumeName="kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709307 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709321 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709338 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709355 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709370 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab6e5720-2c30-4962-9c67-89f1607d137f" volumeName="kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709384 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709399 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709419 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 
master-0 kubenswrapper[10003]: I0216 17:00:30.709437 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709454 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709470 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709487 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709505 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709521 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709536 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709552 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709567 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709582 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709599 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cff91b5b-3cbb-489a-94e7-9f279ae6cbbb" 
volumeName="kubernetes.io/projected/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-kube-api-access-qcsw6" seLinuxMountContext="" Feb 16 17:00:30.714138 master-0 kubenswrapper[10003]: I0216 17:00:30.709614 10003 manager.go:319] Starting recovery of all containers Feb 16 17:00:30.721166 master-0 kubenswrapper[10003]: I0216 17:00:30.709613 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert" seLinuxMountContext="" Feb 16 17:00:30.721331 master-0 kubenswrapper[10003]: I0216 17:00:30.721246 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets" seLinuxMountContext="" Feb 16 17:00:30.721374 master-0 kubenswrapper[10003]: I0216 17:00:30.721337 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9" seLinuxMountContext="" Feb 16 17:00:30.721457 master-0 kubenswrapper[10003]: I0216 17:00:30.721407 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config" seLinuxMountContext="" Feb 16 17:00:30.721523 master-0 kubenswrapper[10003]: I0216 17:00:30.721498 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl" seLinuxMountContext="" Feb 16 17:00:30.721594 master-0 kubenswrapper[10003]: I0216 17:00:30.721537 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls" seLinuxMountContext="" Feb 16 17:00:30.721660 master-0 kubenswrapper[10003]: I0216 17:00:30.721605 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client" seLinuxMountContext="" Feb 16 17:00:30.721713 master-0 kubenswrapper[10003]: I0216 17:00:30.721687 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle" seLinuxMountContext="" Feb 16 17:00:30.721783 master-0 kubenswrapper[10003]: I0216 17:00:30.721726 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config" seLinuxMountContext="" Feb 16 17:00:30.721823 master-0 kubenswrapper[10003]: I0216 17:00:30.721794 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01921947-c416-44b6-953d-75b935ad8977" volumeName="kubernetes.io/projected/01921947-c416-44b6-953d-75b935ad8977-kube-api-access-47bnn" 
seLinuxMountContext=""
Feb 16 17:00:30.721908 master-0 kubenswrapper[10003]: I0216 17:00:30.721879 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides" seLinuxMountContext=""
Feb 16 17:00:30.721986 master-0 kubenswrapper[10003]: I0216 17:00:30.721912 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token" seLinuxMountContext=""
Feb 16 17:00:30.722045 master-0 kubenswrapper[10003]: I0216 17:00:30.721983 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy" seLinuxMountContext=""
Feb 16 17:00:30.722115 master-0 kubenswrapper[10003]: I0216 17:00:30.722064 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config" seLinuxMountContext=""
Feb 16 17:00:30.722115 master-0 kubenswrapper[10003]: I0216 17:00:30.722093 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl" seLinuxMountContext=""
Feb 16 17:00:30.722216 master-0 kubenswrapper[10003]: I0216 17:00:30.722154 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x" seLinuxMountContext=""
Feb 16 17:00:30.722216 master-0 kubenswrapper[10003]: I0216 17:00:30.722180 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2" seLinuxMountContext=""
Feb 16 17:00:30.722277 master-0 kubenswrapper[10003]: I0216 17:00:30.722249 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j" seLinuxMountContext=""
Feb 16 17:00:30.722315 master-0 kubenswrapper[10003]: I0216 17:00:30.722285 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token" seLinuxMountContext=""
Feb 16 17:00:30.722373 master-0 kubenswrapper[10003]: I0216 17:00:30.722345 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config" seLinuxMountContext=""
Feb 16 17:00:30.722455 master-0 kubenswrapper[10003]: I0216 17:00:30.722384 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x" seLinuxMountContext=""
Feb 16 17:00:30.722489 master-0 kubenswrapper[10003]: I0216 17:00:30.722465 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds" seLinuxMountContext=""
Feb 16 17:00:30.722546 master-0 kubenswrapper[10003]: I0216 17:00:30.722520 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Feb 16 17:00:30.722582 master-0 kubenswrapper[10003]: I0216 17:00:30.722562 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm" seLinuxMountContext=""
Feb 16 17:00:30.722649 master-0 kubenswrapper[10003]: I0216 17:00:30.722624 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01921947-c416-44b6-953d-75b935ad8977" volumeName="kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-proxy-ca-bundles" seLinuxMountContext=""
Feb 16 17:00:30.722732 master-0 kubenswrapper[10003]: I0216 17:00:30.722664 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7" seLinuxMountContext=""
Feb 16 17:00:30.722763 master-0 kubenswrapper[10003]: I0216 17:00:30.722743 10003 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert" seLinuxMountContext=""
Feb 16 17:00:30.722793 master-0 kubenswrapper[10003]: I0216 17:00:30.722764 10003 reconstruct.go:97] "Volume reconstruction finished"
Feb 16 17:00:30.722842 master-0 kubenswrapper[10003]: I0216 17:00:30.722822 10003 reconciler.go:26] "Reconciler: start to sync state"
Feb 16 17:00:30.794136 master-0 kubenswrapper[10003]: E0216 17:00:30.794061 10003 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Feb 16 17:00:30.796063 master-0 kubenswrapper[10003]: I0216 17:00:30.795988 10003 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 16 17:00:30.797665 master-0 kubenswrapper[10003]: I0216 17:00:30.797625 10003 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 16 17:00:30.797732 master-0 kubenswrapper[10003]: I0216 17:00:30.797669 10003 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 16 17:00:30.797732 master-0 kubenswrapper[10003]: I0216 17:00:30.797693 10003 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 16 17:00:30.797794 master-0 kubenswrapper[10003]: E0216 17:00:30.797751 10003 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 16 17:00:30.799262 master-0 kubenswrapper[10003]: I0216 17:00:30.799227 10003 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 17:00:30.805465 master-0 kubenswrapper[10003]: I0216 17:00:30.805388 10003 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="b5e6e0c200ef6468da128fab1a901d498e73068beb07a54310f215479193099d" exitCode=0
Feb 16 17:00:30.805465 master-0 kubenswrapper[10003]: I0216 17:00:30.805444 10003 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="96c8b16be41a61f78ae9a0d158764cfb3f1dc1be9541f6dde4356d45ed489d8c" exitCode=0
Feb 16 17:00:30.805465 master-0 kubenswrapper[10003]: I0216 17:00:30.805453 10003 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="07ee05b11ab243298aba0652acab149107fdee4d056b25a8d70e009ebf722842" exitCode=0
Feb 16 17:00:30.805465 master-0 kubenswrapper[10003]: I0216 17:00:30.805462 10003 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="9b508704ca913b3676949d448345a8f778d17c4d3d7c7156e1db34b5da7a8c96" exitCode=0
Feb 16 17:00:30.805465 master-0 kubenswrapper[10003]: I0216 17:00:30.805471 10003 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="6f850c8263f7a5fffe361664a6b474015b2a97155111509d5a8154875803d4f3" exitCode=0
Feb 16 17:00:30.805649 master-0 kubenswrapper[10003]: I0216 17:00:30.805480 10003 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="f47270eadf232a1b51b70eb1069033d1ee831e9e2a83cf22e20d3b2db1ceb184" exitCode=0
Feb 16 17:00:30.806691 master-0 kubenswrapper[10003]: I0216 17:00:30.806660 10003 generic.go:334] "Generic (PLEG): container finished" podID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerID="500d24f874646514d290aa65da48da18a395647cf9847d120c566c759fe02946" exitCode=0
Feb 16 17:00:30.815127 master-0 kubenswrapper[10003]: I0216 17:00:30.815081 10003 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa" exitCode=0
Feb 16 17:00:30.816984 master-0 kubenswrapper[10003]: I0216 17:00:30.816905 10003 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e" exitCode=1
Feb 16 17:00:30.826318 master-0 kubenswrapper[10003]: I0216 17:00:30.826270 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log"
Feb 16 17:00:30.826649 master-0 kubenswrapper[10003]: I0216 17:00:30.826622 10003 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="6bb85739a7a836abfdb346023915e77abb0b10b023f88b2b7e7c9536a35657a8" exitCode=1
Feb 16 17:00:30.826721 master-0 kubenswrapper[10003]: I0216 17:00:30.826649 10003 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="aa5ecc6a98445fdbcf4dc0a764b1f3d8e109d603e5ddc36d010d08e31acfcc8f" exitCode=0
Feb 16 17:00:30.836276 master-0 kubenswrapper[10003]: I0216 17:00:30.836225 10003 generic.go:334] "Generic (PLEG): container finished" podID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerID="a029a9519b0af6df58434184bb4dd337dec578276ce41db33a7f4964a78b38d1" exitCode=0
Feb 16 17:00:30.844837 master-0 kubenswrapper[10003]: I0216 17:00:30.844715 10003 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="a0dc239cad7cf5c0f46eaeb5867ad213f7711a1950bb1f960b003e867bacaff0" exitCode=0
Feb 16 17:00:30.845040 master-0 kubenswrapper[10003]: I0216 17:00:30.845018 10003 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="16a0cd95be2918fe98e0a8ede15fe5203c9e491ca6e96550b8c7ea95ff6081d2" exitCode=0
Feb 16 17:00:30.855677 master-0 kubenswrapper[10003]: I0216 17:00:30.855631 10003 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="6a76b7400b08797d8e5d6ecf8b5e5677ebdccdcb8c93451e24cae607d87b5dde" exitCode=0
Feb 16 17:00:30.857999 master-0 kubenswrapper[10003]: I0216 17:00:30.857956 10003 generic.go:334] "Generic (PLEG): container finished" podID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerID="ff9b3b2992b50e55900986e351d7a1b84719ad88820b81ad374c423bd1f1a2a8" exitCode=0
Feb 16 17:00:30.897895 master-0 kubenswrapper[10003]: E0216 17:00:30.897837 10003 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 16 17:00:30.913893 master-0 kubenswrapper[10003]: I0216 17:00:30.913856 10003 manager.go:324] Recovery completed
Feb 16 17:00:30.950469 master-0 kubenswrapper[10003]: I0216 17:00:30.950305 10003 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 16 17:00:30.950469 master-0 kubenswrapper[10003]: I0216 17:00:30.950343 10003 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 16 17:00:30.950469 master-0 kubenswrapper[10003]: I0216 17:00:30.950375 10003 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 17:00:30.950738 master-0 kubenswrapper[10003]: I0216 17:00:30.950646 10003 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 16 17:00:30.950738 master-0 kubenswrapper[10003]: I0216 17:00:30.950657 10003 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 16 17:00:30.950738 master-0 kubenswrapper[10003]: I0216 17:00:30.950676 10003 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Feb 16 17:00:30.950738 master-0 kubenswrapper[10003]: I0216 17:00:30.950682 10003 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Feb 16 17:00:30.950738 master-0 kubenswrapper[10003]: I0216 17:00:30.950689 10003 policy_none.go:49] "None policy: Start"
Feb 16 17:00:30.953343 master-0 kubenswrapper[10003]: I0216 17:00:30.953303 10003 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 16 17:00:30.953395 master-0 kubenswrapper[10003]: I0216 17:00:30.953357 10003 state_mem.go:35] "Initializing new in-memory state store"
Feb 16 17:00:30.953641 master-0 kubenswrapper[10003]: I0216 17:00:30.953614 10003 state_mem.go:75] "Updated machine memory state"
Feb 16 17:00:30.953641 master-0 kubenswrapper[10003]: I0216 17:00:30.953634 10003 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Feb 16 17:00:30.967940 master-0 kubenswrapper[10003]: I0216 17:00:30.967890 10003 manager.go:334] "Starting Device Plugin manager"
Feb 16 17:00:30.968113 master-0 kubenswrapper[10003]: I0216 17:00:30.968075 10003 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 16 17:00:30.968113 master-0 kubenswrapper[10003]: I0216 17:00:30.968088 10003 server.go:79] "Starting device plugin registration server"
Feb 16 17:00:30.968484 master-0 kubenswrapper[10003]: I0216 17:00:30.968454 10003 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 16 17:00:30.968554 master-0 kubenswrapper[10003]: I0216 17:00:30.968473 10003 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 16 17:00:30.968593 master-0 kubenswrapper[10003]: I0216 17:00:30.968567 10003 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 16 17:00:30.968698 master-0 kubenswrapper[10003]: I0216 17:00:30.968679 10003 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 16 17:00:30.968698 master-0 kubenswrapper[10003]: I0216 17:00:30.968690 10003 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 16 17:00:31.069407 master-0 kubenswrapper[10003]: I0216 17:00:31.069326 10003 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:00:31.074989 master-0 kubenswrapper[10003]: I0216 17:00:31.074950 10003 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:00:31.074989 master-0 kubenswrapper[10003]: I0216 17:00:31.074992 10003 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:00:31.075109 master-0 kubenswrapper[10003]: I0216 17:00:31.075002 10003 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:00:31.075109 master-0 kubenswrapper[10003]: I0216 17:00:31.075021 10003 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:00:31.099007 master-0 kubenswrapper[10003]: I0216 17:00:31.098769 10003 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Feb 16 17:00:31.099348 master-0 kubenswrapper[10003]: I0216 17:00:31.099308 10003 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78f7ca346bca4984ddbbaf801650ea12f9c20b44ed1343037c4daed41481b056"
Feb 16 17:00:31.099440 master-0 kubenswrapper[10003]: I0216 17:00:31.099334 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098"}
Feb 16 17:00:31.099440 master-0 kubenswrapper[10003]: I0216 17:00:31.099408 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"9460ca0802075a8a6a10d7b3e6052c4d","Type":"ContainerStarted","Data":"040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88"}
Feb 16 17:00:31.099520 master-0 kubenswrapper[10003]: I0216 17:00:31.099462 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31"}
Feb 16 17:00:31.099520 master-0 kubenswrapper[10003]: I0216 17:00:31.099474 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b"}
Feb 16 17:00:31.099520 master-0 kubenswrapper[10003]: I0216 17:00:31.099483 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerDied","Data":"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa"}
Feb 16 17:00:31.099520 master-0 kubenswrapper[10003]: I0216 17:00:31.099495 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5d1e91e5a1fed5cf7076a92d2830d36f","Type":"ContainerStarted","Data":"a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e"}
Feb 16 17:00:31.099520 master-0 kubenswrapper[10003]: I0216 17:00:31.099504 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26"}
Feb 16 17:00:31.099520 master-0 kubenswrapper[10003]: I0216 17:00:31.099513 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f"}
Feb 16 17:00:31.099520 master-0 kubenswrapper[10003]: I0216 17:00:31.099522 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e"}
Feb 16 17:00:31.099939 master-0 kubenswrapper[10003]: I0216 17:00:31.099534 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177"}
Feb 16 17:00:31.099939 master-0 kubenswrapper[10003]: I0216 17:00:31.099545 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9"}
Feb 16 17:00:31.099939 master-0 kubenswrapper[10003]: I0216 17:00:31.099554 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841"}
Feb 16 17:00:31.099939 master-0 kubenswrapper[10003]: I0216 17:00:31.099563 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497"}
Feb 16 17:00:31.099939 master-0 kubenswrapper[10003]: I0216 17:00:31.099575 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"d85f4bae9120dd5571ac4aef5b4bc508cd0c2e61ac41e2e016d2fca33cf2c0df"}
Feb 16 17:00:31.099939 master-0 kubenswrapper[10003]: I0216 17:00:31.099586 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"6bb85739a7a836abfdb346023915e77abb0b10b023f88b2b7e7c9536a35657a8"}
Feb 16 17:00:31.099939 master-0 kubenswrapper[10003]: I0216 17:00:31.099609 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"aa5ecc6a98445fdbcf4dc0a764b1f3d8e109d603e5ddc36d010d08e31acfcc8f"}
Feb 16 17:00:31.099939 master-0 kubenswrapper[10003]: I0216 17:00:31.099623 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb"}
Feb 16 17:00:31.099939 master-0 kubenswrapper[10003]: I0216 17:00:31.099652 10003 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="032b64d679f57601688a8c909d1648c2b2ff07b1d0ed9eae1ac157ec69dbfe35"
Feb 16 17:00:31.169946 master-0 kubenswrapper[10003]: I0216 17:00:31.169874 10003 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 16 17:00:31.270990 master-0 kubenswrapper[10003]: I0216 17:00:31.270944 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:00:31.271305 master-0 kubenswrapper[10003]: I0216 17:00:31.271285 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.271389 master-0 kubenswrapper[10003]: I0216 17:00:31.271377 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.271491 master-0 kubenswrapper[10003]: I0216 17:00:31.271475 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 17:00:31.271582 master-0 kubenswrapper[10003]: I0216 17:00:31.271567 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:00:31.271692 master-0 kubenswrapper[10003]: I0216 17:00:31.271679 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.271765 master-0 kubenswrapper[10003]: I0216 17:00:31.271753 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.271828 master-0 kubenswrapper[10003]: I0216 17:00:31.271817 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.271899 master-0 kubenswrapper[10003]: I0216 17:00:31.271885 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.272026 master-0 kubenswrapper[10003]: I0216 17:00:31.272011 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.272114 master-0 kubenswrapper[10003]: I0216 17:00:31.272101 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 17:00:31.272188 master-0 kubenswrapper[10003]: I0216 17:00:31.272175 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 17:00:31.272258 master-0 kubenswrapper[10003]: I0216 17:00:31.272246 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.272326 master-0 kubenswrapper[10003]: I0216 17:00:31.272314 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.272398 master-0 kubenswrapper[10003]: I0216 17:00:31.272387 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.272474 master-0 kubenswrapper[10003]: I0216 17:00:31.272463 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.272548 master-0 kubenswrapper[10003]: I0216 17:00:31.272536 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 17:00:31.373733 master-0 kubenswrapper[10003]: I0216 17:00:31.373591 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.373733 master-0 kubenswrapper[10003]: I0216 17:00:31.373671 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 17:00:31.374004 master-0 kubenswrapper[10003]: I0216 17:00:31.373769 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.374004 master-0 kubenswrapper[10003]: I0216 17:00:31.373841 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:00:31.374004 master-0 kubenswrapper[10003]: I0216 17:00:31.373894 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:00:31.374004 master-0 kubenswrapper[10003]: I0216 17:00:31.373916 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 17:00:31.374135 master-0 kubenswrapper[10003]: I0216 17:00:31.374000 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374135 master-0 kubenswrapper[10003]: I0216 17:00:31.374047 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:00:31.374135 master-0 kubenswrapper[10003]: I0216 17:00:31.374081 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.374135 master-0 kubenswrapper[10003]: I0216 17:00:31.374097 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374135 master-0 kubenswrapper[10003]: I0216 17:00:31.374116 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.374261 master-0 kubenswrapper[10003]: I0216 17:00:31.374202 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.374261 master-0 kubenswrapper[10003]: I0216 17:00:31.374248 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.374318 master-0 kubenswrapper[10003]: I0216 17:00:31.374284 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374346 master-0 kubenswrapper[10003]: I0216 17:00:31.374316 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374377 master-0 kubenswrapper[10003]: I0216 17:00:31.374348 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374623 master-0 kubenswrapper[10003]: I0216 17:00:31.374401 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374623 master-0 kubenswrapper[10003]: I0216 17:00:31.374611 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:00:31.374685 master-0 kubenswrapper[10003]: I0216 17:00:31.374623 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374685 master-0 kubenswrapper[10003]: I0216 17:00:31.374602 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374685 master-0 kubenswrapper[10003]: I0216 17:00:31.374658 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374761 master-0 kubenswrapper[10003]: I0216 17:00:31.374684 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374761 master-0 kubenswrapper[10003]: I0216 17:00:31.374686 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374761 master-0 kubenswrapper[10003]: I0216 17:00:31.374716 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.374761 master-0 kubenswrapper[10003]: I0216 17:00:31.374746 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.374872 master-0 kubenswrapper[10003]: I0216 17:00:31.374782 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 17:00:31.374872 master-0 kubenswrapper[10003]: I0216 17:00:31.374783 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d1e91e5a1fed5cf7076a92d2830d36f-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5d1e91e5a1fed5cf7076a92d2830d36f\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:00:31.374872 master-0 kubenswrapper[10003]: I0216 17:00:31.374824 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 17:00:31.374872 master-0 kubenswrapper[10003]: I0216 17:00:31.374862 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 17:00:31.374872 master-0 kubenswrapper[10003]: I0216 17:00:31.374898 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.374872 master-0 kubenswrapper[10003]: I0216 17:00:31.374956 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0"
Feb 16 17:00:31.375133 master-0 kubenswrapper[10003]: I0216 17:00:31.374966 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 17:00:31.375133 master-0 kubenswrapper[10003]: I0216 17:00:31.375002 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:00:31.375133 master-0 kubenswrapper[10003]: I0216 17:00:31.375020 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/9460ca0802075a8a6a10d7b3e6052c4d-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"9460ca0802075a8a6a10d7b3e6052c4d\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 17:00:31.682186 master-0 kubenswrapper[10003]: I0216 17:00:31.682058 10003 apiserver.go:52] "Watching apiserver"
Feb 16 17:00:31.694318 master-0 kubenswrapper[10003]: I0216 17:00:31.694231 10003 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 17:00:31.695535 master-0 kubenswrapper[10003]: I0216 17:00:31.695478 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts","openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l","openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8","openshift-controller-manager/controller-manager-869cbbd595-47pjz","openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d","openshift-multus/multus-6r7wj","assisted-installer/assisted-installer-controller-thhq2","openshift-authentication-operator/authentication-operator-755d954778-lf4cb","openshift-network-operator/network-operator-6fcf4c966-6bmf9","openshift-multus/multus-additional-cni-plugins-rjdlk","openshift-network-operator/iptables-alerter-czzz2","openshift-etcd/etcd-master-0-master-0","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6","openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4","kube-system/bootstrap-kube-scheduler-master-0","openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k","openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2","openshift-network-diagnostics/network-check-target-vwvwx","kube-system/bootstrap-kube-controller-manager-master-0","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6","openshift-network-node-identity/network-node-identity-hhcpr","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv","openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf","openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5","openshift-ovn-kubernetes/ovnkube-node-flr86","openshift-service-ca/service-ca-676cd8b9b5-cp9rb","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-multus/multus-admission-controller-7c64d55f8-4jz2t","openshift-multus/network-metrics-daemon-279g6","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9","openshift-dns-operator/dns-operator-86b8869b79-nhxlp"]
Feb 16 17:00:31.696218 master-0 kubenswrapper[10003]: I0216 17:00:31.696169 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:00:31.696218 master-0 kubenswrapper[10003]: I0216 17:00:31.696175 10003 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2"
Feb 16 17:00:31.698116 master-0 kubenswrapper[10003]: I0216 17:00:31.698080 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:00:31.698224 master-0 kubenswrapper[10003]: I0216 17:00:31.698197 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:00:31.698301 master-0 kubenswrapper[10003]: I0216 17:00:31.698214 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:00:31.698301 master-0 kubenswrapper[10003]: I0216 17:00:31.698223 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:00:31.698904 master-0 kubenswrapper[10003]: I0216 17:00:31.698879 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:00:31.698989 master-0 kubenswrapper[10003]: I0216 17:00:31.698970 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:00:31.704863 master-0 kubenswrapper[10003]: I0216 17:00:31.704803 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 17:00:31.705327 master-0 kubenswrapper[10003]: I0216 17:00:31.705279 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.705458 master-0 kubenswrapper[10003]: I0216 17:00:31.705326 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.705458 master-0 kubenswrapper[10003]: I0216 17:00:31.705390 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.706108 master-0 kubenswrapper[10003]: I0216 17:00:31.705329 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 17:00:31.706108 master-0 kubenswrapper[10003]: I0216 17:00:31.705590 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 17:00:31.706108 master-0 kubenswrapper[10003]: I0216 17:00:31.705674 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.706108 master-0 kubenswrapper[10003]: I0216 17:00:31.705791 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.706108 master-0 kubenswrapper[10003]: I0216 17:00:31.705837 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.706108 master-0 kubenswrapper[10003]: I0216 17:00:31.705883 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l"
Feb 16 17:00:31.706108 master-0 kubenswrapper[10003]: I0216 17:00:31.705958 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 17:00:31.706108 master-0 kubenswrapper[10003]: I0216 17:00:31.706052 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.706772 master-0 kubenswrapper[10003]: I0216 17:00:31.706553 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 17:00:31.710562 master-0 kubenswrapper[10003]: I0216 17:00:31.710512 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.710651 master-0 kubenswrapper[10003]: I0216 17:00:31.710617 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.712047 master-0 kubenswrapper[10003]: I0216 17:00:31.712009 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 17:00:31.712226 master-0 kubenswrapper[10003]: I0216 17:00:31.712183 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 17:00:31.712283 master-0 kubenswrapper[10003]: I0216 17:00:31.712272 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 17:00:31.712321 master-0 kubenswrapper[10003]: I0216 17:00:31.712295 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.712376 master-0 kubenswrapper[10003]: I0216 17:00:31.712329 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.712376 master-0 kubenswrapper[10003]: I0216 17:00:31.712366 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 17:00:31.712463 master-0 kubenswrapper[10003]: I0216 17:00:31.712427 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 17:00:31.712531 master-0 kubenswrapper[10003]: I0216 17:00:31.712463 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 17:00:31.712531 master-0 kubenswrapper[10003]: I0216 17:00:31.712501 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.712649 master-0 kubenswrapper[10003]: I0216 17:00:31.712606 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:00:31.712699 master-0 kubenswrapper[10003]: I0216 17:00:31.712624 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 17:00:31.712736 master-0 kubenswrapper[10003]: I0216 17:00:31.712704 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 17:00:31.712776 master-0 kubenswrapper[10003]: I0216 17:00:31.712739 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"
Feb 16 17:00:31.713218 master-0 kubenswrapper[10003]: I0216 17:00:31.712967 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 17:00:31.713218 master-0 kubenswrapper[10003]: I0216 17:00:31.713104 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 17:00:31.713360 master-0 kubenswrapper[10003]: I0216 17:00:31.713242 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 17:00:31.713360 master-0 kubenswrapper[10003]: I0216 17:00:31.713307 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.713459 master-0 kubenswrapper[10003]: I0216 17:00:31.713403 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 17:00:31.713459 master-0 kubenswrapper[10003]: I0216 17:00:31.713449 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.713571 master-0 kubenswrapper[10003]: I0216 17:00:31.713496 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 17:00:31.713571 master-0 kubenswrapper[10003]: I0216 17:00:31.713548 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 16 17:00:31.713692 master-0 kubenswrapper[10003]: I0216 17:00:31.713666 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 16 17:00:31.713733 master-0 kubenswrapper[10003]: I0216 17:00:31.713693 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 17:00:31.713769 master-0 kubenswrapper[10003]: I0216 17:00:31.713742 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 17:00:31.713972 master-0 kubenswrapper[10003]: I0216 17:00:31.713904 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 17:00:31.715017 master-0 kubenswrapper[10003]: I0216 17:00:31.714978 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz"
Feb 16 17:00:31.715346 master-0 kubenswrapper[10003]: I0216 17:00:31.715308 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4"
Feb 16 17:00:31.720957 master-0 kubenswrapper[10003]: I0216 17:00:31.718835 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 17:00:31.720957 master-0 kubenswrapper[10003]: I0216 17:00:31.720109 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 17:00:31.720957 master-0 kubenswrapper[10003]: I0216 17:00:31.720526 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 17:00:31.720957 master-0 kubenswrapper[10003]: I0216 17:00:31.720729 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 17:00:31.720957 master-0 kubenswrapper[10003]: I0216 17:00:31.720888 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 17:00:31.721203 master-0 kubenswrapper[10003]: I0216 17:00:31.721100 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 17:00:31.721241 master-0 kubenswrapper[10003]: I0216 17:00:31.721181 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.721445 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.721559 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.721872 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.721877 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.721877 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.721971 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.722321 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.722332 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.722463 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.722505 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.722333 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.722459 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.722694 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 17:00:31.723804 master-0 kubenswrapper[10003]: I0216 17:00:31.722762 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 17:00:31.724494 master-0 kubenswrapper[10003]: I0216 17:00:31.724037 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.724899 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.725231 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.725346 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.725479 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.725501 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.725590 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.725769 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.725782 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.726114 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.726263 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.727395 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 16 17:00:31.727953 master-0 kubenswrapper[10003]: I0216 17:00:31.727758 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 17:00:31.728420 master-0 kubenswrapper[10003]: I0216 17:00:31.728007 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 16 17:00:31.728420 master-0 kubenswrapper[10003]: I0216 17:00:31.728208 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 17:00:31.728420 master-0 kubenswrapper[10003]: I0216 17:00:31.728359 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 17:00:31.728524 master-0 kubenswrapper[10003]: I0216 17:00:31.728451 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 16 17:00:31.732962 master-0 kubenswrapper[10003]: I0216 17:00:31.729513 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 17:00:31.866585 master-0 kubenswrapper[10003]: I0216 17:00:31.866520 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:00:31.866585 master-0 kubenswrapper[10003]: I0216 17:00:31.866563 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 17:00:31.866970 master-0 kubenswrapper[10003]: I0216 17:00:31.866954 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:00:31.867073 master-0 kubenswrapper[10003]: I0216 17:00:31.867053 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 17:00:31.867269 master-0 kubenswrapper[10003]: I0216 17:00:31.867216 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:00:31.867269 master-0 kubenswrapper[10003]: I0216 17:00:31.867267 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 17:00:31.867434 master-0 kubenswrapper[10003]: I0216 17:00:31.867357 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 17:00:31.867434 master-0 kubenswrapper[10003]: I0216 17:00:31.866576 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:00:31.867592 master-0 kubenswrapper[10003]: I0216 17:00:31.867477 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:00:31.867592 master-0 kubenswrapper[10003]: I0216 17:00:31.867514 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:00:31.867592 master-0 kubenswrapper[10003]: I0216 17:00:31.867534 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:00:31.867592 master-0 kubenswrapper[10003]: I0216 17:00:31.867555 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:00:31.867592 master-0 kubenswrapper[10003]: I0216 17:00:31.867572 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:00:31.868077 master-0 kubenswrapper[10003]: I0216 17:00:31.867629 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:00:31.868077 master-0 kubenswrapper[10003]: I0216 17:00:31.867664 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:00:31.868077 master-0 kubenswrapper[10003]: I0216 17:00:31.867694 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName:
\"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:31.868077 master-0 kubenswrapper[10003]: I0216 17:00:31.867638 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:00:31.868077 master-0 kubenswrapper[10003]: I0216 17:00:31.867944 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:31.868077 master-0 kubenswrapper[10003]: I0216 17:00:31.867965 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:00:31.868077 master-0 kubenswrapper[10003]: I0216 17:00:31.867973 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:00:31.868910 master-0 kubenswrapper[10003]: I0216 17:00:31.868092 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:31.868910 master-0 kubenswrapper[10003]: I0216 17:00:31.868038 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:31.868910 master-0 kubenswrapper[10003]: I0216 17:00:31.868147 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:31.868910 master-0 kubenswrapper[10003]: I0216 17:00:31.868177 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " 
pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:31.868910 master-0 kubenswrapper[10003]: I0216 17:00:31.868202 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869069 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869149 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869148 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869171 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869200 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869212 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869239 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869198 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869266 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869287 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869332 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869342 10003 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869354 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869366 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869409 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869409 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869424 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869410 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869461 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869420 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:00:31.869454 master-0 kubenswrapper[10003]: I0216 17:00:31.869497 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.869977 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.870082 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.870125 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.870994 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.871033 10003 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.871088 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.871114 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.871133 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.871152 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.871171 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.871190 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.871211 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:00:31.871442 master-0 
kubenswrapper[10003]: I0216 17:00:31.871221 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 17:00:31.871442 master-0 kubenswrapper[10003]: I0216 17:00:31.871316 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.871493 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.871226 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.871784 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.871834 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.871964 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.872127 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.872226 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: 
\"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.872622 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.872849 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.872858 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.872852 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 16 17:00:31.874287 master-0 kubenswrapper[10003]: I0216 17:00:31.872960 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 17:00:31.875041 master-0 kubenswrapper[10003]: I0216 17:00:31.874716 10003 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 17:00:31.878658 master-0 kubenswrapper[10003]: I0216 17:00:31.878614 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:00:31.879505 master-0 kubenswrapper[10003]: I0216 17:00:31.879464 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:31.885079 master-0 kubenswrapper[10003]: I0216 17:00:31.884971 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 17:00:31.905767 master-0 kubenswrapper[10003]: I0216 17:00:31.905734 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 17:00:31.925452 master-0 kubenswrapper[10003]: I0216 17:00:31.925396 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 17:00:31.945381 master-0 kubenswrapper[10003]: I0216 17:00:31.945250 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 17:00:31.965903 master-0 kubenswrapper[10003]: I0216 17:00:31.965843 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 17:00:31.973025 master-0 kubenswrapper[10003]: I0216 17:00:31.972975 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.973212 master-0 kubenswrapper[10003]: I0216 17:00:31.973042 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.973212 master-0 kubenswrapper[10003]: I0216 17:00:31.973072 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:31.973212 master-0 kubenswrapper[10003]: I0216 17:00:31.973096 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.973212 master-0 kubenswrapper[10003]: I0216 17:00:31.973123 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:31.973212 master-0 kubenswrapper[10003]: I0216 17:00:31.973164 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/568b22df-b454-4d74-bc21-6c84daf17c8c-kube-api-access\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:31.973212 master-0 kubenswrapper[10003]: I0216 17:00:31.973189 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:31.973365 master-0 kubenswrapper[10003]: I0216 17:00:31.973213 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.973365 master-0 kubenswrapper[10003]: I0216 17:00:31.973235 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:31.973365 master-0 kubenswrapper[10003]: I0216 17:00:31.973259 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") 
pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:31.973365 master-0 kubenswrapper[10003]: I0216 17:00:31.973281 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:31.973365 master-0 kubenswrapper[10003]: I0216 17:00:31.973303 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:31.973365 master-0 kubenswrapper[10003]: I0216 17:00:31.973327 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.973365 master-0 kubenswrapper[10003]: I0216 17:00:31.973345 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:31.973365 master-0 kubenswrapper[10003]: I0216 17:00:31.973366 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973779 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973821 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973830 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 
17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973846 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973872 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973903 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973950 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973974 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973991 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.973996 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974030 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974055 10003 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974063 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974080 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974318 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974329 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974372 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974400 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/568b22df-b454-4d74-bc21-6c84daf17c8c-service-ca\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974401 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974506 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: 
\"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974517 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974605 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974613 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/568b22df-b454-4d74-bc21-6c84daf17c8c-service-ca\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974635 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974660 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974716 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-config\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:31.974786 master-0 kubenswrapper[10003]: I0216 17:00:31.974746 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.974863 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.974968 10003 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975002 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975006 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975012 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-config\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975235 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975284 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975317 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975372 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975416 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975451 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975473 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975495 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975516 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975539 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975584 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975621 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975646 10003 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975667 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-config\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975688 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975694 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975712 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.975887 master-0 kubenswrapper[10003]: I0216 17:00:31.975897 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.975957 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.975983 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976009 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976032 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976057 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976100 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976135 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976160 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcsw6\" (UniqueName: \"kubernetes.io/projected/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-kube-api-access-qcsw6\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976184 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976205 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976234 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" 
(UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976257 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976278 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976499 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976490 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:00:31.976646 master-0 kubenswrapper[10003]: I0216 17:00:31.976528 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.976710 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.976747 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.976772 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " 
pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.976800 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.976795 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.976824 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.976858 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.976889 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.976953 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.977008 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.977040 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: 
I0216 17:00:31.977064 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.977090 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.977126 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.977123 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.977186 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:31.977184 master-0 kubenswrapper[10003]: I0216 17:00:31.977209 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977204 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: E0216 17:00:31.977298 10003 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: E0216 17:00:31.977389 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics 
podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.477370138 +0000 UTC m=+1.992855809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977457 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977518 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977546 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977591 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977619 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977649 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47bnn\" (UniqueName: \"kubernetes.io/projected/01921947-c416-44b6-953d-75b935ad8977-kube-api-access-47bnn\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977726 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " 
pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977759 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.977780 master-0 kubenswrapper[10003]: I0216 17:00:31.977787 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:00:31.978201 master-0 kubenswrapper[10003]: I0216 17:00:31.977976 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.978201 master-0 kubenswrapper[10003]: I0216 17:00:31.978001 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:31.978201 master-0 kubenswrapper[10003]: I0216 17:00:31.978051 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-proxy-ca-bundles\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:31.978201 master-0 kubenswrapper[10003]: I0216 17:00:31.978078 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:31.978201 master-0 kubenswrapper[10003]: I0216 17:00:31.978082 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.978201 master-0 kubenswrapper[10003]: I0216 17:00:31.978101 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: 
\"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:31.978201 master-0 kubenswrapper[10003]: I0216 17:00:31.978129 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: I0216 17:00:31.978204 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: I0216 17:00:31.978217 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: E0216 17:00:31.978224 10003 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: I0216 17:00:31.978250 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: I0216 17:00:31.978277 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: E0216 17:00:31.978316 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.478285123 +0000 UTC m=+1.993770834 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: I0216 17:00:31.978357 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: I0216 17:00:31.978403 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-proxy-ca-bundles\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: I0216 17:00:31.978412 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: I0216 17:00:31.978467 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.978535 master-0 kubenswrapper[10003]: I0216 17:00:31.978460 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:31.978891 master-0 kubenswrapper[10003]: I0216 17:00:31.978538 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:00:31.978891 master-0 kubenswrapper[10003]: I0216 17:00:31.978588 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.978891 master-0 kubenswrapper[10003]: I0216 17:00:31.978633 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: 
\"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:31.978891 master-0 kubenswrapper[10003]: I0216 17:00:31.978652 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:31.978891 master-0 kubenswrapper[10003]: I0216 17:00:31.978710 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.978891 master-0 kubenswrapper[10003]: I0216 17:00:31.978728 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:31.978891 master-0 kubenswrapper[10003]: I0216 17:00:31.978748 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:31.979139 master-0 kubenswrapper[10003]: I0216 17:00:31.978900 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.979139 master-0 kubenswrapper[10003]: I0216 17:00:31.978951 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.979139 master-0 kubenswrapper[10003]: I0216 17:00:31.978978 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.979139 master-0 kubenswrapper[10003]: I0216 17:00:31.979005 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: 
\"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.979139 master-0 kubenswrapper[10003]: I0216 17:00:31.979030 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:31.979425 master-0 kubenswrapper[10003]: I0216 17:00:31.979163 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:31.979475 master-0 kubenswrapper[10003]: I0216 17:00:31.979444 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.979475 master-0 kubenswrapper[10003]: I0216 17:00:31.979463 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:00:31.979700 master-0 kubenswrapper[10003]: I0216 17:00:31.979649 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.979782 master-0 kubenswrapper[10003]: I0216 17:00:31.979717 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:00:31.979818 master-0 kubenswrapper[10003]: I0216 17:00:31.979789 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:31.979886 master-0 kubenswrapper[10003]: I0216 17:00:31.979865 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " 
pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:31.979949 master-0 kubenswrapper[10003]: I0216 17:00:31.979908 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:31.980001 master-0 kubenswrapper[10003]: I0216 17:00:31.979975 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:31.980284 master-0 kubenswrapper[10003]: I0216 17:00:31.980006 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:31.980331 master-0 kubenswrapper[10003]: I0216 17:00:31.980291 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.980331 master-0 kubenswrapper[10003]: I0216 17:00:31.980309 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:31.980395 master-0 kubenswrapper[10003]: I0216 17:00:31.980249 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:31.980395 master-0 kubenswrapper[10003]: I0216 17:00:31.980370 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.980471 master-0 kubenswrapper[10003]: I0216 17:00:31.980459 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:31.980508 master-0 kubenswrapper[10003]: I0216 17:00:31.980486 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:00:31.980546 master-0 kubenswrapper[10003]: I0216 17:00:31.980523 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:31.980601 master-0 kubenswrapper[10003]: I0216 17:00:31.980575 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:31.980769 master-0 kubenswrapper[10003]: I0216 17:00:31.980741 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:31.980892 master-0 kubenswrapper[10003]: I0216 17:00:31.980547 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:31.980892 master-0 kubenswrapper[10003]: I0216 17:00:31.980822 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:00:31.985753 master-0 kubenswrapper[10003]: I0216 17:00:31.985673 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:00:32.005487 master-0 kubenswrapper[10003]: I0216 17:00:32.005439 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 17:00:32.025711 master-0 kubenswrapper[10003]: I0216 17:00:32.025649 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:00:32.028225 master-0 kubenswrapper[10003]: I0216 17:00:32.028167 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:00:32.045419 master-0 kubenswrapper[10003]: I0216 17:00:32.045365 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 17:00:32.065441 master-0 kubenswrapper[10003]: I0216 17:00:32.065387 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:00:32.081405 master-0 kubenswrapper[10003]: I0216 17:00:32.081318 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:32.081620 master-0 kubenswrapper[10003]: I0216 17:00:32.081449 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.081620 master-0 kubenswrapper[10003]: E0216 17:00:32.081514 10003 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:32.081620 master-0 kubenswrapper[10003]: I0216 17:00:32.081601 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.081730 master-0 kubenswrapper[10003]: I0216 17:00:32.081525 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.081730 master-0 kubenswrapper[10003]: E0216 17:00:32.081620 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.58158465 +0000 UTC m=+2.097070371 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:32.081730 master-0 kubenswrapper[10003]: I0216 17:00:32.081666 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.081730 master-0 kubenswrapper[10003]: I0216 17:00:32.081696 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:32.081909 master-0 kubenswrapper[10003]: I0216 17:00:32.081758 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.081909 master-0 kubenswrapper[10003]: I0216 17:00:32.081790 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.081909 master-0 kubenswrapper[10003]: I0216 17:00:32.081836 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.081909 master-0 kubenswrapper[10003]: E0216 17:00:32.081855 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:32.082233 master-0 kubenswrapper[10003]: I0216 17:00:32.081987 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.082233 master-0 kubenswrapper[10003]: I0216 17:00:32.082014 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.082233 master-0 kubenswrapper[10003]: I0216 17:00:32.082017 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.082233 master-0 
kubenswrapper[10003]: E0216 17:00:32.082006 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.58197108 +0000 UTC m=+2.097456801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : configmap "client-ca" not found Feb 16 17:00:32.082233 master-0 kubenswrapper[10003]: I0216 17:00:32.082129 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.082233 master-0 kubenswrapper[10003]: I0216 17:00:32.082188 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.082233 master-0 kubenswrapper[10003]: I0216 17:00:32.082219 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: I0216 17:00:32.082260 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: I0216 17:00:32.082295 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: I0216 17:00:32.082350 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: I0216 17:00:32.082362 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: I0216 17:00:32.082385 10003 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: I0216 17:00:32.082418 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: I0216 17:00:32.082461 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: I0216 17:00:32.082460 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: E0216 17:00:32.082492 10003 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: E0216 17:00:32.082562 10003 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: E0216 17:00:32.082577 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.582550576 +0000 UTC m=+2.098036297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : secret "serving-cert" not found Feb 16 17:00:32.082599 master-0 kubenswrapper[10003]: E0216 17:00:32.082611 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.582594707 +0000 UTC m=+2.098080388 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: E0216 17:00:32.082608 10003 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: E0216 17:00:32.082652 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.582639709 +0000 UTC m=+2.098125390 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: I0216 17:00:32.082720 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: I0216 17:00:32.082778 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: I0216 17:00:32.082835 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: I0216 17:00:32.082846 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: I0216 17:00:32.082884 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: E0216 17:00:32.082992 10003 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 17:00:32.083209 
master-0 kubenswrapper[10003]: I0216 17:00:32.082994 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: I0216 17:00:32.083014 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: E0216 17:00:32.083050 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.583029609 +0000 UTC m=+2.098515320 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "node-tuning-operator-tls" not found Feb 16 17:00:32.083209 master-0 kubenswrapper[10003]: I0216 17:00:32.083094 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083205 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083217 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083378 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083460 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083515 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083590 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083627 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083677 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083591 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083723 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-ssl-certs\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:32.083823 master-0 kubenswrapper[10003]: I0216 17:00:32.083810 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.083859 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.083867 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-ssl-certs\") pod 
\"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: E0216 17:00:32.083881 10003 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.083975 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: E0216 17:00:32.083998 10003 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.084021 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.084001 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.084046 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: E0216 17:00:32.084030 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.584003166 +0000 UTC m=+2.099488877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: E0216 17:00:32.084080 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.584063807 +0000 UTC m=+2.099549478 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.084111 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.084103 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.084158 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.084168 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.084301 master-0 kubenswrapper[10003]: I0216 17:00:32.084262 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.084828 master-0 kubenswrapper[10003]: I0216 17:00:32.084317 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.084828 master-0 kubenswrapper[10003]: I0216 17:00:32.084491 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.084828 master-0 kubenswrapper[10003]: I0216 17:00:32.084580 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.084828 master-0 kubenswrapper[10003]: I0216 17:00:32.084683 10003 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.084828 master-0 kubenswrapper[10003]: I0216 17:00:32.084695 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.084828 master-0 kubenswrapper[10003]: I0216 17:00:32.084764 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:32.084828 master-0 kubenswrapper[10003]: I0216 17:00:32.084796 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:32.085274 master-0 kubenswrapper[10003]: I0216 17:00:32.084859 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:32.085274 master-0 kubenswrapper[10003]: I0216 17:00:32.084979 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:32.085274 master-0 kubenswrapper[10003]: I0216 17:00:32.085065 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.085274 master-0 kubenswrapper[10003]: I0216 17:00:32.085117 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.085274 master-0 kubenswrapper[10003]: I0216 17:00:32.085166 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.085274 master-0 
kubenswrapper[10003]: I0216 17:00:32.085212 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.085559 master-0 kubenswrapper[10003]: I0216 17:00:32.085284 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:32.085559 master-0 kubenswrapper[10003]: I0216 17:00:32.085338 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:00:32.085559 master-0 kubenswrapper[10003]: I0216 17:00:32.085423 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:00:32.085559 master-0 kubenswrapper[10003]: I0216 17:00:32.085431 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.085559 master-0 kubenswrapper[10003]: I0216 17:00:32.085453 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:00:32.085559 master-0 kubenswrapper[10003]: I0216 17:00:32.085466 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.085559 master-0 kubenswrapper[10003]: I0216 17:00:32.085461 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.085559 master-0 kubenswrapper[10003]: I0216 17:00:32.085533 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-cvo-updatepayloads\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: E0216 17:00:32.085583 10003 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085620 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: E0216 17:00:32.085658 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.58563344 +0000 UTC m=+2.101119161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085692 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085720 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085741 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: E0216 17:00:32.085747 10003 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085795 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: E0216 17:00:32.085806 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.585791825 +0000 UTC m=+2.101277516 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085798 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: E0216 17:00:32.085850 10003 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085860 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085867 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085896 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.085971 master-0 kubenswrapper[10003]: I0216 17:00:32.085900 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.086806 master-0 kubenswrapper[10003]: E0216 17:00:32.085917 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.585890887 +0000 UTC m=+2.101376588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 17:00:32.086806 master-0 kubenswrapper[10003]: I0216 17:00:32.086056 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:32.086806 master-0 kubenswrapper[10003]: I0216 17:00:32.086120 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:32.086806 master-0 kubenswrapper[10003]: E0216 17:00:32.086166 10003 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 17:00:32.086806 master-0 kubenswrapper[10003]: E0216 17:00:32.086238 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.586216196 +0000 UTC m=+2.101701917 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : secret "metrics-daemon-secret" not found Feb 16 17:00:32.086806 master-0 kubenswrapper[10003]: E0216 17:00:32.086252 10003 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:32.086806 master-0 kubenswrapper[10003]: E0216 17:00:32.086309 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:32.586290048 +0000 UTC m=+2.101775759 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : secret "serving-cert" not found Feb 16 17:00:32.087393 master-0 kubenswrapper[10003]: I0216 17:00:32.087341 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-config\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:32.105430 master-0 kubenswrapper[10003]: I0216 17:00:32.105361 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:00:32.469548 master-0 kubenswrapper[10003]: E0216 17:00:32.469475 10003 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:00:32.470628 master-0 kubenswrapper[10003]: E0216 17:00:32.470503 10003 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:00:32.474552 master-0 kubenswrapper[10003]: E0216 17:00:32.474491 10003 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 17:00:32.474552 master-0 kubenswrapper[10003]: E0216 17:00:32.474525 10003 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:00:32.474784 master-0 kubenswrapper[10003]: E0216 17:00:32.474577 10003 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:00:32.492419 master-0 kubenswrapper[10003]: I0216 17:00:32.492331 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:32.492708 master-0 kubenswrapper[10003]: I0216 17:00:32.492441 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:32.492708 master-0 kubenswrapper[10003]: E0216 17:00:32.492613 10003 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 17:00:32.492708 master-0 kubenswrapper[10003]: E0216 17:00:32.492683 10003 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not 
found Feb 16 17:00:32.492708 master-0 kubenswrapper[10003]: E0216 17:00:32.492706 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.49268903 +0000 UTC m=+3.008174701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 17:00:32.493066 master-0 kubenswrapper[10003]: E0216 17:00:32.492756 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.492736821 +0000 UTC m=+3.008222542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 17:00:32.593840 master-0 kubenswrapper[10003]: I0216 17:00:32.593698 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:32.593840 master-0 kubenswrapper[10003]: I0216 17:00:32.593806 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:32.593840 master-0 kubenswrapper[10003]: I0216 17:00:32.593844 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:32.594258 master-0 kubenswrapper[10003]: I0216 17:00:32.593903 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:32.594258 master-0 kubenswrapper[10003]: I0216 17:00:32.594039 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod 
\"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:32.594258 master-0 kubenswrapper[10003]: I0216 17:00:32.594120 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:32.594453 master-0 kubenswrapper[10003]: I0216 17:00:32.594262 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:32.594453 master-0 kubenswrapper[10003]: I0216 17:00:32.594318 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:32.594453 master-0 kubenswrapper[10003]: I0216 17:00:32.594355 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:32.594453 master-0 kubenswrapper[10003]: I0216 17:00:32.594385 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:32.594453 master-0 kubenswrapper[10003]: I0216 17:00:32.594405 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:32.594453 master-0 kubenswrapper[10003]: I0216 17:00:32.594440 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:32.594792 master-0 kubenswrapper[10003]: I0216 17:00:32.594490 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: 
\"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:32.594792 master-0 kubenswrapper[10003]: E0216 17:00:32.594640 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:32.594792 master-0 kubenswrapper[10003]: E0216 17:00:32.594693 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.594676711 +0000 UTC m=+3.110162392 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : configmap "client-ca" not found Feb 16 17:00:32.594792 master-0 kubenswrapper[10003]: E0216 17:00:32.594753 10003 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:32.594792 master-0 kubenswrapper[10003]: E0216 17:00:32.594778 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.594770123 +0000 UTC m=+3.110255804 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : secret "serving-cert" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.594820 10003 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.594842 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.594835525 +0000 UTC m=+3.110321206 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.594882 10003 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.594904 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.594896997 +0000 UTC m=+3.110382678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.594977 10003 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.595006 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.594998589 +0000 UTC m=+3.110484270 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "node-tuning-operator-tls" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.595039 10003 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.595061 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.595054501 +0000 UTC m=+3.110540182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.595101 10003 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 17:00:32.595133 master-0 kubenswrapper[10003]: E0216 17:00:32.595122 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.595116053 +0000 UTC m=+3.110601734 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595162 10003 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595183 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.595176334 +0000 UTC m=+3.110662015 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595221 10003 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595242 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.595236216 +0000 UTC m=+3.110721897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595280 10003 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595301 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.595294837 +0000 UTC m=+3.110780518 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595336 10003 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595357 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.595350729 +0000 UTC m=+3.110836410 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : secret "metrics-daemon-secret" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595419 10003 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595443 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.595435851 +0000 UTC m=+3.110921542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : secret "serving-cert" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595485 10003 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:32.595891 master-0 kubenswrapper[10003]: E0216 17:00:32.595506 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:33.595499833 +0000 UTC m=+3.110985514 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : secret "performance-addon-operator-webhook-cert" not found Feb 16 17:00:32.609960 master-0 kubenswrapper[10003]: I0216 17:00:32.609521 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:00:32.696496 master-0 kubenswrapper[10003]: I0216 17:00:32.696443 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:32.698450 master-0 kubenswrapper[10003]: I0216 17:00:32.698418 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:00:32.706615 master-0 kubenswrapper[10003]: I0216 17:00:32.706568 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " 
pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:00:32.712233 master-0 kubenswrapper[10003]: I0216 17:00:32.712193 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:00:32.713116 master-0 kubenswrapper[10003]: I0216 17:00:32.713080 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:00:32.714080 master-0 kubenswrapper[10003]: I0216 17:00:32.713999 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:00:32.714080 master-0 kubenswrapper[10003]: I0216 17:00:32.714065 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:32.714836 master-0 kubenswrapper[10003]: I0216 17:00:32.714800 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:32.715059 master-0 kubenswrapper[10003]: I0216 17:00:32.715020 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:00:32.715188 master-0 kubenswrapper[10003]: I0216 17:00:32.715152 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:00:32.716246 master-0 kubenswrapper[10003]: I0216 17:00:32.716213 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod 
\"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:32.724197 master-0 kubenswrapper[10003]: I0216 17:00:32.724085 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:32.724659 master-0 kubenswrapper[10003]: I0216 17:00:32.724627 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:00:32.725477 master-0 kubenswrapper[10003]: I0216 17:00:32.725445 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:32.725979 master-0 kubenswrapper[10003]: I0216 17:00:32.725951 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:00:32.726460 master-0 kubenswrapper[10003]: I0216 17:00:32.726435 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:00:32.727492 master-0 kubenswrapper[10003]: I0216 17:00:32.727468 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:00:32.728238 master-0 kubenswrapper[10003]: I0216 17:00:32.728213 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:32.728902 master-0 kubenswrapper[10003]: I0216 17:00:32.728879 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47bnn\" (UniqueName: \"kubernetes.io/projected/01921947-c416-44b6-953d-75b935ad8977-kube-api-access-47bnn\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " 
pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:32.741057 master-0 kubenswrapper[10003]: I0216 17:00:32.741008 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:00:32.741256 master-0 kubenswrapper[10003]: I0216 17:00:32.741078 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/568b22df-b454-4d74-bc21-6c84daf17c8c-kube-api-access\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:32.741256 master-0 kubenswrapper[10003]: I0216 17:00:32.741116 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:32.741716 master-0 kubenswrapper[10003]: I0216 17:00:32.741614 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:32.744127 master-0 kubenswrapper[10003]: I0216 17:00:32.742527 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:00:32.744127 master-0 kubenswrapper[10003]: I0216 17:00:32.744097 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:00:32.753835 master-0 kubenswrapper[10003]: I0216 17:00:32.750735 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:00:32.756137 master-0 kubenswrapper[10003]: I0216 17:00:32.755541 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcsw6\" (UniqueName: 
\"kubernetes.io/projected/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-kube-api-access-qcsw6\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:32.768997 master-0 kubenswrapper[10003]: I0216 17:00:32.768948 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:32.771409 master-0 kubenswrapper[10003]: I0216 17:00:32.769747 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:00:32.771409 master-0 kubenswrapper[10003]: I0216 17:00:32.770350 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:32.774191 master-0 kubenswrapper[10003]: I0216 17:00:32.774124 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:32.777078 master-0 kubenswrapper[10003]: I0216 17:00:32.776334 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:00:32.789605 master-0 kubenswrapper[10003]: I0216 17:00:32.789550 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:32.800896 master-0 kubenswrapper[10003]: I0216 17:00:32.800782 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:00:32.805068 master-0 kubenswrapper[10003]: I0216 17:00:32.805036 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="decdca06-6892-49d3-8ed4-10e29d7c5de8" path="/var/lib/kubelet/pods/decdca06-6892-49d3-8ed4-10e29d7c5de8/volumes" 
Feb 16 17:00:32.824026 master-0 kubenswrapper[10003]: I0216 17:00:32.823973 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:32.844565 master-0 kubenswrapper[10003]: I0216 17:00:32.844525 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:00:32.848376 master-0 kubenswrapper[10003]: I0216 17:00:32.848350 10003 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 16 17:00:32.848457 master-0 kubenswrapper[10003]: I0216 17:00:32.848418 10003 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 17:00:32.902189 master-0 kubenswrapper[10003]: I0216 17:00:32.900622 10003 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:00:33.010501 master-0 kubenswrapper[10003]: I0216 17:00:33.010447 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"] Feb 16 17:00:33.010699 master-0 kubenswrapper[10003]: E0216 17:00:33.010619 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerName="assisted-installer-controller" Feb 16 17:00:33.010699 master-0 kubenswrapper[10003]: I0216 17:00:33.010634 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerName="assisted-installer-controller" Feb 16 17:00:33.010699 master-0 kubenswrapper[10003]: E0216 17:00:33.010650 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerName="prober" Feb 16 17:00:33.010699 master-0 kubenswrapper[10003]: I0216 17:00:33.010657 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerName="prober" Feb 16 17:00:33.010836 master-0 kubenswrapper[10003]: I0216 17:00:33.010732 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerName="assisted-installer-controller" Feb 16 17:00:33.010836 master-0 kubenswrapper[10003]: I0216 17:00:33.010756 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerName="prober" Feb 16 17:00:33.011136 master-0 kubenswrapper[10003]: I0216 17:00:33.011112 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:00:33.048863 master-0 kubenswrapper[10003]: I0216 17:00:33.048813 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"] Feb 16 17:00:33.118614 master-0 kubenswrapper[10003]: I0216 17:00:33.118553 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:00:33.219437 master-0 kubenswrapper[10003]: I0216 17:00:33.219386 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:00:33.244521 master-0 kubenswrapper[10003]: I0216 17:00:33.244468 10003 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 17:00:33.251379 master-0 kubenswrapper[10003]: I0216 17:00:33.251335 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:00:33.327018 master-0 kubenswrapper[10003]: I0216 17:00:33.323087 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:00:33.523906 master-0 kubenswrapper[10003]: I0216 17:00:33.522262 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:33.523906 master-0 kubenswrapper[10003]: I0216 17:00:33.522639 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:33.523906 master-0 kubenswrapper[10003]: E0216 17:00:33.522555 10003 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 17:00:33.523906 master-0 kubenswrapper[10003]: I0216 17:00:33.522587 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"] Feb 16 17:00:33.523906 master-0 kubenswrapper[10003]: E0216 17:00:33.522819 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.522791058 +0000 UTC m=+5.038276729 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 17:00:33.523906 master-0 kubenswrapper[10003]: E0216 17:00:33.522959 10003 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 17:00:33.523906 master-0 kubenswrapper[10003]: E0216 17:00:33.523049 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.523035134 +0000 UTC m=+5.038520805 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 17:00:33.524263 master-0 kubenswrapper[10003]: I0216 17:00:33.524018 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:00:33.531051 master-0 kubenswrapper[10003]: W0216 17:00:33.530831 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80d3b238_70c3_4e71_96a1_99405352033f.slice/crio-905e4fdbfe2147706f13434bc2e5b9a3cbf884a588d29ecfca730d73382fb68f WatchSource:0}: Error finding container 905e4fdbfe2147706f13434bc2e5b9a3cbf884a588d29ecfca730d73382fb68f: Status 404 returned error can't find the container with id 905e4fdbfe2147706f13434bc2e5b9a3cbf884a588d29ecfca730d73382fb68f Feb 16 17:00:33.623834 master-0 kubenswrapper[10003]: I0216 17:00:33.623523 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:33.624052 master-0 kubenswrapper[10003]: I0216 17:00:33.623836 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:33.624052 master-0 kubenswrapper[10003]: I0216 17:00:33.623915 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:33.624052 master-0 kubenswrapper[10003]: I0216 17:00:33.623969 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:33.624052 master-0 kubenswrapper[10003]: E0216 17:00:33.623999 10003 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:33.624176 master-0 kubenswrapper[10003]: E0216 17:00:33.624058 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.624042199 +0000 UTC m=+5.139527870 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:00:33.624176 master-0 kubenswrapper[10003]: E0216 17:00:33.624061 10003 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Feb 16 17:00:33.624176 master-0 kubenswrapper[10003]: E0216 17:00:33.624111 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert podName:568b22df-b454-4d74-bc21-6c84daf17c8c nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.62409596 +0000 UTC m=+5.139581721 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert") pod "cluster-version-operator-76959b6567-wnh7l" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c") : secret "cluster-version-operator-serving-cert" not found Feb 16 17:00:33.624263 master-0 kubenswrapper[10003]: E0216 17:00:33.624198 10003 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 17:00:33.624263 master-0 kubenswrapper[10003]: E0216 17:00:33.624224 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.624216034 +0000 UTC m=+5.139701815 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 17:00:33.624342 master-0 kubenswrapper[10003]: E0216 17:00:33.624267 10003 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:33.624342 master-0 kubenswrapper[10003]: E0216 17:00:33.624284 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.624279065 +0000 UTC m=+5.139764736 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : secret "metrics-tls" not found Feb 16 17:00:33.624342 master-0 kubenswrapper[10003]: I0216 17:00:33.624000 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:33.624342 master-0 kubenswrapper[10003]: I0216 17:00:33.624308 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:33.624342 master-0 kubenswrapper[10003]: I0216 17:00:33.624327 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:33.624342 master-0 kubenswrapper[10003]: I0216 17:00:33.624343 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:33.624549 master-0 kubenswrapper[10003]: I0216 17:00:33.624362 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:33.624549 master-0 kubenswrapper[10003]: I0216 17:00:33.624381 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:33.624549 master-0 kubenswrapper[10003]: I0216 17:00:33.624404 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:33.624549 master-0 kubenswrapper[10003]: I0216 17:00:33.624421 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod 
\"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:33.624549 master-0 kubenswrapper[10003]: E0216 17:00:33.624434 10003 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 17:00:33.624549 master-0 kubenswrapper[10003]: I0216 17:00:33.624441 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:33.624549 master-0 kubenswrapper[10003]: E0216 17:00:33.624470 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.62445359 +0000 UTC m=+5.139939261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : secret "metrics-daemon-secret" not found Feb 16 17:00:33.624549 master-0 kubenswrapper[10003]: E0216 17:00:33.624515 10003 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:33.624549 master-0 kubenswrapper[10003]: E0216 17:00:33.624533 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.624527912 +0000 UTC m=+5.140013583 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624565 10003 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624566 10003 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624585 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.624579273 +0000 UTC m=+5.140064934 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624598 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.624592664 +0000 UTC m=+5.140078335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : secret "serving-cert" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624614 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624636 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.624627425 +0000 UTC m=+5.140113096 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : configmap "client-ca" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624640 10003 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624659 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.624653845 +0000 UTC m=+5.140139516 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624676 10003 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:33.624846 master-0 kubenswrapper[10003]: E0216 17:00:33.624755 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:35.624745978 +0000 UTC m=+5.140231759 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : secret "serving-cert" not found Feb 16 17:00:33.627879 master-0 kubenswrapper[10003]: I0216 17:00:33.627854 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:33.627963 master-0 kubenswrapper[10003]: I0216 17:00:33.627901 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:33.697008 master-0 kubenswrapper[10003]: I0216 17:00:33.696890 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:00:33.714706 master-0 kubenswrapper[10003]: I0216 17:00:33.713851 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:00:33.799759 master-0 kubenswrapper[10003]: I0216 17:00:33.799681 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:00:33.882205 master-0 kubenswrapper[10003]: I0216 17:00:33.882068 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerStarted","Data":"905e4fdbfe2147706f13434bc2e5b9a3cbf884a588d29ecfca730d73382fb68f"} Feb 16 17:00:33.891481 master-0 kubenswrapper[10003]: I0216 17:00:33.891410 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:00:33.954653 master-0 kubenswrapper[10003]: I0216 17:00:33.953972 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"] Feb 16 17:00:34.261182 master-0 kubenswrapper[10003]: I0216 17:00:34.261145 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:34.263860 master-0 kubenswrapper[10003]: I0216 17:00:34.263825 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:34.890240 master-0 kubenswrapper[10003]: I0216 17:00:34.889902 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"b62943328f5fac54686a2ebf612b57c71fd7fbf45329dc96f7bdc742f3287d41"} Feb 16 17:00:34.890628 master-0 kubenswrapper[10003]: I0216 17:00:34.890253 10003 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"6802981ef2e5cdad643b58e0253f48a1465df01861501821550ee2ca659e7e88"} Feb 16 17:00:34.891129 master-0 kubenswrapper[10003]: I0216 17:00:34.891089 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" event={"ID":"5192fa49-d81c-47ce-b2ab-f90996cc0bd5","Type":"ContainerStarted","Data":"48fe704f6b9f25810dcd5004b13a7c413fb8fc4a4e972dfe51f7142aa16f0fee"} Feb 16 17:00:35.548329 master-0 kubenswrapper[10003]: I0216 17:00:35.548280 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:35.548329 master-0 kubenswrapper[10003]: I0216 17:00:35.548336 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:35.548552 master-0 kubenswrapper[10003]: E0216 17:00:35.548508 10003 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Feb 16 17:00:35.548644 master-0 kubenswrapper[10003]: E0216 17:00:35.548567 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.548549017 +0000 UTC m=+9.064034688 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : secret "image-registry-operator-tls" not found Feb 16 17:00:35.548771 master-0 kubenswrapper[10003]: E0216 17:00:35.548718 10003 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 17:00:35.548825 master-0 kubenswrapper[10003]: E0216 17:00:35.548810 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.548790284 +0000 UTC m=+9.064275945 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 17:00:35.649228 master-0 kubenswrapper[10003]: I0216 17:00:35.649164 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:35.649406 master-0 kubenswrapper[10003]: E0216 17:00:35.649310 10003 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:35.649406 master-0 kubenswrapper[10003]: E0216 17:00:35.649377 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.649355636 +0000 UTC m=+9.164841317 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:00:35.649511 master-0 kubenswrapper[10003]: I0216 17:00:35.649400 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:35.649511 master-0 kubenswrapper[10003]: I0216 17:00:35.649457 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:35.649626 master-0 kubenswrapper[10003]: E0216 17:00:35.649584 10003 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 17:00:35.649690 master-0 kubenswrapper[10003]: E0216 17:00:35.649675 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.649652504 +0000 UTC m=+9.165138225 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 17:00:35.649743 master-0 kubenswrapper[10003]: I0216 17:00:35.649723 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:35.649783 master-0 kubenswrapper[10003]: I0216 17:00:35.649763 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:35.649828 master-0 kubenswrapper[10003]: I0216 17:00:35.649796 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:35.649828 master-0 kubenswrapper[10003]: I0216 17:00:35.649821 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:35.649944 master-0 kubenswrapper[10003]: I0216 17:00:35.649857 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:35.649944 master-0 kubenswrapper[10003]: I0216 17:00:35.649905 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:35.650037 master-0 kubenswrapper[10003]: I0216 17:00:35.649948 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:35.650037 master-0 kubenswrapper[10003]: E0216 17:00:35.649971 10003 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret 
"metrics-daemon-secret" not found Feb 16 17:00:35.650037 master-0 kubenswrapper[10003]: I0216 17:00:35.649987 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:35.650037 master-0 kubenswrapper[10003]: E0216 17:00:35.650015 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.650002914 +0000 UTC m=+9.165488655 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : secret "metrics-daemon-secret" not found Feb 16 17:00:35.650037 master-0 kubenswrapper[10003]: E0216 17:00:35.650025 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:35.650219 master-0 kubenswrapper[10003]: E0216 17:00:35.650075 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.650060595 +0000 UTC m=+9.165546346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : configmap "client-ca" not found Feb 16 17:00:35.650219 master-0 kubenswrapper[10003]: E0216 17:00:35.650079 10003 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:35.650219 master-0 kubenswrapper[10003]: E0216 17:00:35.649977 10003 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Feb 16 17:00:35.650219 master-0 kubenswrapper[10003]: E0216 17:00:35.650090 10003 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:35.650219 master-0 kubenswrapper[10003]: E0216 17:00:35.650104 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.650096596 +0000 UTC m=+9.165582367 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : secret "serving-cert" not found Feb 16 17:00:35.650219 master-0 kubenswrapper[10003]: E0216 17:00:35.650116 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:00:39.650110617 +0000 UTC m=+9.165596398 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : secret "metrics-tls" not found Feb 16 17:00:35.650219 master-0 kubenswrapper[10003]: E0216 17:00:35.650130 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.650123037 +0000 UTC m=+9.165608808 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:35.650219 master-0 kubenswrapper[10003]: E0216 17:00:35.650151 10003 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 17:00:35.650219 master-0 kubenswrapper[10003]: E0216 17:00:35.650190 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:00:39.650179719 +0000 UTC m=+9.165665460 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 17:00:35.655616 master-0 kubenswrapper[10003]: I0216 17:00:35.655557 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"cluster-version-operator-76959b6567-wnh7l\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:35.655795 master-0 kubenswrapper[10003]: I0216 17:00:35.655748 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:35.658301 master-0 kubenswrapper[10003]: I0216 17:00:35.658271 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:35.896361 master-0 kubenswrapper[10003]: I0216 17:00:35.896231 10003 generic.go:334] "Generic (PLEG): container finished" podID="6b3e071c-1c62-489b-91c1-aef0d197f40b" containerID="410ef06e22d76a946b4285857693ab64a631161fcff4dd55a6b1f8d6e54ed325" exitCode=0 Feb 16 
17:00:35.896361 master-0 kubenswrapper[10003]: I0216 17:00:35.896324 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerDied","Data":"410ef06e22d76a946b4285857693ab64a631161fcff4dd55a6b1f8d6e54ed325"} Feb 16 17:00:35.897299 master-0 kubenswrapper[10003]: I0216 17:00:35.896732 10003 scope.go:117] "RemoveContainer" containerID="410ef06e22d76a946b4285857693ab64a631161fcff4dd55a6b1f8d6e54ed325" Feb 16 17:00:35.898963 master-0 kubenswrapper[10003]: I0216 17:00:35.898642 10003 generic.go:334] "Generic (PLEG): container finished" podID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerID="a650093628feaa4193c1b7c57ea685e55d5af706446f54a283f32836e6d703a9" exitCode=0 Feb 16 17:00:35.898963 master-0 kubenswrapper[10003]: I0216 17:00:35.898799 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerDied","Data":"a650093628feaa4193c1b7c57ea685e55d5af706446f54a283f32836e6d703a9"} Feb 16 17:00:35.899166 master-0 kubenswrapper[10003]: I0216 17:00:35.899024 10003 scope.go:117] "RemoveContainer" containerID="a650093628feaa4193c1b7c57ea685e55d5af706446f54a283f32836e6d703a9" Feb 16 17:00:35.899965 master-0 kubenswrapper[10003]: I0216 17:00:35.899261 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:00:35.907024 master-0 kubenswrapper[10003]: I0216 17:00:35.906995 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:00:36.829429 master-0 kubenswrapper[10003]: I0216 17:00:36.829091 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:00:36.860557 master-0 kubenswrapper[10003]: I0216 17:00:36.860521 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-86b8869b79-nhxlp"] Feb 16 17:00:36.872618 master-0 kubenswrapper[10003]: W0216 17:00:36.872568 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9859457_f0d1_4754_a6c5_cf05d5abf447.slice/crio-7bcf62830ed108bcbaff872a01506e1cfaba1ae290ee01528f3fca2ecf257682 WatchSource:0}: Error finding container 7bcf62830ed108bcbaff872a01506e1cfaba1ae290ee01528f3fca2ecf257682: Status 404 returned error can't find the container with id 7bcf62830ed108bcbaff872a01506e1cfaba1ae290ee01528f3fca2ecf257682 Feb 16 17:00:36.905036 master-0 kubenswrapper[10003]: I0216 17:00:36.904313 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"7bcf62830ed108bcbaff872a01506e1cfaba1ae290ee01528f3fca2ecf257682"} Feb 16 17:00:36.909875 master-0 kubenswrapper[10003]: I0216 17:00:36.909818 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"dae1a04576d4d712d2d5bb1de6d3e36f80a9ba9aa32a0acd1c2d40512ad5b174"} Feb 16 17:00:36.910234 master-0 kubenswrapper[10003]: I0216 17:00:36.910206 10003 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:36.941940 master-0 kubenswrapper[10003]: I0216 17:00:36.932819 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"d0f7e8be40545fa33b748eaa6f879efc2d956e86b6534dcac117b6e66db8cbc2"} Feb 16 17:00:36.941940 master-0 kubenswrapper[10003]: I0216 17:00:36.934751 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" event={"ID":"568b22df-b454-4d74-bc21-6c84daf17c8c","Type":"ContainerStarted","Data":"d4c4164857bca7a77dce556ef190218992857c42d5628a4f2140aa29651cbc3e"} Feb 16 17:00:36.941940 master-0 kubenswrapper[10003]: I0216 17:00:36.939226 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerStarted","Data":"7573bf948e4a5ccb81f3214838cf4ecabd14ac4f2c4a11558ad134016b1c1851"} Feb 16 17:00:36.983235 master-0 kubenswrapper[10003]: I0216 17:00:36.981144 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podStartSLOduration=2.304198115 podStartE2EDuration="4.9811205s" podCreationTimestamp="2026-02-16 17:00:32 +0000 UTC" firstStartedPulling="2026-02-16 17:00:33.53312279 +0000 UTC m=+3.048608461" lastFinishedPulling="2026-02-16 17:00:36.210045185 +0000 UTC m=+5.725530846" observedRunningTime="2026-02-16 17:00:36.980304628 +0000 UTC m=+6.495790299" watchObservedRunningTime="2026-02-16 17:00:36.9811205 +0000 UTC m=+6.496606171" Feb 16 17:00:37.284337 master-0 kubenswrapper[10003]: I0216 17:00:37.284224 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:00:37.379931 master-0 kubenswrapper[10003]: I0216 17:00:37.379839 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:00:37.944609 master-0 kubenswrapper[10003]: I0216 17:00:37.944537 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"925f178f46a1d5c4c22dbeed05e4d6e9975a60d252305dcd17064d2bc8dfab6e"} Feb 16 17:00:37.990711 master-0 kubenswrapper[10003]: I0216 17:00:37.990635 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:00:37.995432 master-0 kubenswrapper[10003]: I0216 17:00:37.995386 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:00:38.949181 master-0 kubenswrapper[10003]: I0216 17:00:38.949120 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv_4e51bba5-0ebe-4e55-a588-38b71548c605/cluster-olm-operator/0.log" Feb 16 17:00:38.949866 master-0 kubenswrapper[10003]: I0216 17:00:38.949660 10003 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" 
containerID="d0f7e8be40545fa33b748eaa6f879efc2d956e86b6534dcac117b6e66db8cbc2" exitCode=255 Feb 16 17:00:38.949866 master-0 kubenswrapper[10003]: I0216 17:00:38.949757 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"d0f7e8be40545fa33b748eaa6f879efc2d956e86b6534dcac117b6e66db8cbc2"} Feb 16 17:00:38.950203 master-0 kubenswrapper[10003]: I0216 17:00:38.950169 10003 scope.go:117] "RemoveContainer" containerID="d0f7e8be40545fa33b748eaa6f879efc2d956e86b6534dcac117b6e66db8cbc2" Feb 16 17:00:39.634954 master-0 kubenswrapper[10003]: I0216 17:00:39.634673 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:39.634954 master-0 kubenswrapper[10003]: I0216 17:00:39.634747 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:39.638065 master-0 kubenswrapper[10003]: E0216 17:00:39.636085 10003 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Feb 16 17:00:39.638065 master-0 kubenswrapper[10003]: E0216 17:00:39.636193 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.636164779 +0000 UTC m=+17.151650470 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : secret "marketplace-operator-metrics" not found Feb 16 17:00:39.641006 master-0 kubenswrapper[10003]: I0216 17:00:39.640310 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:39.737325 master-0 kubenswrapper[10003]: I0216 17:00:39.737222 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:39.737325 master-0 kubenswrapper[10003]: I0216 17:00:39.737298 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:39.737601 master-0 kubenswrapper[10003]: I0216 17:00:39.737348 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:39.737601 master-0 kubenswrapper[10003]: I0216 17:00:39.737385 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:39.737601 master-0 kubenswrapper[10003]: I0216 17:00:39.737408 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:39.737601 master-0 kubenswrapper[10003]: I0216 17:00:39.737433 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:39.737601 master-0 kubenswrapper[10003]: I0216 17:00:39.737476 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:39.737802 master-0 kubenswrapper[10003]: E0216 17:00:39.737640 10003 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Feb 16 17:00:39.737802 master-0 kubenswrapper[10003]: E0216 17:00:39.737710 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.737690498 +0000 UTC m=+17.253176169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : secret "multus-admission-controller-secret" not found Feb 16 17:00:39.737802 master-0 kubenswrapper[10003]: E0216 17:00:39.737793 10003 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:39.737950 master-0 kubenswrapper[10003]: E0216 17:00:39.737836 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.737823471 +0000 UTC m=+17.253309142 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : secret "cluster-monitoring-operator-tls" not found Feb 16 17:00:39.737950 master-0 kubenswrapper[10003]: E0216 17:00:39.737908 10003 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Feb 16 17:00:39.738063 master-0 kubenswrapper[10003]: E0216 17:00:39.737969 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.737951015 +0000 UTC m=+17.253436686 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : secret "metrics-daemon-secret" not found Feb 16 17:00:39.738112 master-0 kubenswrapper[10003]: E0216 17:00:39.738095 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:39.738153 master-0 kubenswrapper[10003]: E0216 17:00:39.738143 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:00:47.73812193 +0000 UTC m=+17.253607601 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : configmap "client-ca" not found Feb 16 17:00:39.738821 master-0 kubenswrapper[10003]: I0216 17:00:39.738272 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:39.738821 master-0 kubenswrapper[10003]: E0216 17:00:39.738459 10003 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Feb 16 17:00:39.738821 master-0 kubenswrapper[10003]: E0216 17:00:39.738522 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.73850402 +0000 UTC m=+17.253989691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : secret "package-server-manager-serving-cert" not found Feb 16 17:00:39.738821 master-0 kubenswrapper[10003]: E0216 17:00:39.738595 10003 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:39.738821 master-0 kubenswrapper[10003]: E0216 17:00:39.738645 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.738633083 +0000 UTC m=+17.254118754 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:00:39.738821 master-0 kubenswrapper[10003]: E0216 17:00:39.738714 10003 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Feb 16 17:00:39.738821 master-0 kubenswrapper[10003]: E0216 17:00:39.738745 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.738730816 +0000 UTC m=+17.254216487 (durationBeforeRetry 8s). 
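NOTE: The burst of "not found" failures above (webhook-certs, cluster-monitoring-operator-tls, metrics-certs, client-ca, package-server-manager-serving-cert, serving-cert; the last record's "Error:" detail continues below) all share one shape: each volume is a Secret or ConfigMap projection whose source object does not exist yet. A sketch of what such a volume looks like in the core/v1 API, using the names from the webhook-certs record; the explicit Optional flag is shown only for illustration, false being the default that makes a missing Secret block the mount:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        optional := false // the default: a missing Secret blocks MountVolume.SetUp
        vol := corev1.Volume{
            Name: "webhook-certs",
            VolumeSource: corev1.VolumeSource{
                Secret: &corev1.SecretVolumeSource{
                    SecretName: "multus-admission-controller-secret",
                    Optional:   &optional,
                },
            },
        }
        fmt.Printf("volume %q -> secret/%s (optional=%v)\n",
            vol.Name, vol.VolumeSource.Secret.SecretName, *vol.VolumeSource.Secret.Optional)
    }

Marking such a volume optional would let the pod start without the object at the cost of an empty mount; these operators instead rely on the kubelet's retry loop to pick the object up once it is published.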
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : secret "serving-cert" not found Feb 16 17:00:39.742346 master-0 kubenswrapper[10003]: I0216 17:00:39.742266 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:39.799542 master-0 kubenswrapper[10003]: I0216 17:00:39.799474 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:00:39.799542 master-0 kubenswrapper[10003]: I0216 17:00:39.799516 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:00:40.267139 master-0 kubenswrapper[10003]: I0216 17:00:40.266414 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:00:40.271310 master-0 kubenswrapper[10003]: I0216 17:00:40.271247 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:40.338604 master-0 kubenswrapper[10003]: I0216 17:00:40.338506 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:40.957057 master-0 kubenswrapper[10003]: I0216 17:00:40.956794 10003 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:00:40.957057 master-0 kubenswrapper[10003]: I0216 17:00:40.956836 10003 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:00:41.887581 master-0 kubenswrapper[10003]: I0216 17:00:41.887508 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:00:41.894810 master-0 kubenswrapper[10003]: I0216 17:00:41.894627 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:00:41.985229 master-0 kubenswrapper[10003]: I0216 17:00:41.985172 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:00:42.873888 master-0 kubenswrapper[10003]: I0216 17:00:42.872749 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:42.873888 master-0 kubenswrapper[10003]: I0216 17:00:42.873096 10003 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:00:42.873888 master-0 kubenswrapper[10003]: I0216 17:00:42.873119 10003 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:00:42.954681 master-0 kubenswrapper[10003]: I0216 17:00:42.954123 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:42.967611 master-0 kubenswrapper[10003]: I0216 17:00:42.967554 10003 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Feb 16 17:00:44.754693 master-0 kubenswrapper[10003]: I0216 17:00:44.754625 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:44.755200 master-0 kubenswrapper[10003]: I0216 17:00:44.754833 10003 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:00:44.777824 master-0 kubenswrapper[10003]: I0216 17:00:44.777755 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:00:45.934988 master-0 kubenswrapper[10003]: I0216 17:00:45.933706 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-74f47b695f-rbr8c"] Feb 16 17:00:45.934988 master-0 kubenswrapper[10003]: I0216 17:00:45.934533 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:45.936818 master-0 kubenswrapper[10003]: I0216 17:00:45.936770 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 17:00:45.937330 master-0 kubenswrapper[10003]: I0216 17:00:45.937281 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 17:00:45.938048 master-0 kubenswrapper[10003]: I0216 17:00:45.938011 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Feb 16 17:00:45.938131 master-0 kubenswrapper[10003]: I0216 17:00:45.938010 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 17:00:45.938987 master-0 kubenswrapper[10003]: I0216 17:00:45.938896 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Feb 16 17:00:45.939061 master-0 kubenswrapper[10003]: I0216 17:00:45.939038 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 17:00:45.939237 master-0 kubenswrapper[10003]: I0216 17:00:45.938944 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 17:00:45.943128 master-0 kubenswrapper[10003]: I0216 17:00:45.941383 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 17:00:45.943128 master-0 kubenswrapper[10003]: I0216 17:00:45.942161 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 17:00:45.951131 master-0 kubenswrapper[10003]: I0216 17:00:45.951078 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 17:00:45.960204 master-0 kubenswrapper[10003]: I0216 17:00:45.960157 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-74f47b695f-rbr8c"] Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028643 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-trusted-ca-bundle\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028689 10003 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028709 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-client\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028736 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-encryption-config\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028775 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-image-import-ca\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028793 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nbjc\" (UniqueName: \"kubernetes.io/projected/5d108e8b-620e-4523-a97d-3e4d2073f137-kube-api-access-8nbjc\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028807 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-audit-dir\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028823 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028839 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-node-pullsecrets\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028852 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-serving-ca\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.029546 master-0 kubenswrapper[10003]: I0216 17:00:46.028881 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-config\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131189 master-0 kubenswrapper[10003]: I0216 17:00:46.130635 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131189 master-0 kubenswrapper[10003]: I0216 17:00:46.130998 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-node-pullsecrets\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131189 master-0 kubenswrapper[10003]: I0216 17:00:46.131049 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-serving-ca\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131189 master-0 kubenswrapper[10003]: I0216 17:00:46.131074 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-config\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131189 master-0 kubenswrapper[10003]: I0216 17:00:46.131120 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-trusted-ca-bundle\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131189 master-0 kubenswrapper[10003]: I0216 17:00:46.131160 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131189 master-0 kubenswrapper[10003]: I0216 17:00:46.131183 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-client\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131636 master-0 kubenswrapper[10003]: I0216 17:00:46.131601 10003 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-encryption-config\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131707 master-0 kubenswrapper[10003]: I0216 17:00:46.131681 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-image-import-ca\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131780 master-0 kubenswrapper[10003]: I0216 17:00:46.131718 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nbjc\" (UniqueName: \"kubernetes.io/projected/5d108e8b-620e-4523-a97d-3e4d2073f137-kube-api-access-8nbjc\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131780 master-0 kubenswrapper[10003]: I0216 17:00:46.131743 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-audit-dir\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131847 master-0 kubenswrapper[10003]: I0216 17:00:46.131824 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-audit-dir\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.131890 master-0 kubenswrapper[10003]: I0216 17:00:46.131868 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-869cbbd595-47pjz"] Feb 16 17:00:46.132389 master-0 kubenswrapper[10003]: E0216 17:00:46.132354 10003 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" podUID="01921947-c416-44b6-953d-75b935ad8977" Feb 16 17:00:46.132505 master-0 kubenswrapper[10003]: E0216 17:00:46.132479 10003 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 17:00:46.132593 master-0 kubenswrapper[10003]: E0216 17:00:46.132576 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert podName:5d108e8b-620e-4523-a97d-3e4d2073f137 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:46.632558772 +0000 UTC m=+16.148044443 (durationBeforeRetry 500ms). 
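NOTE: This retry is scheduled at 500ms (its "Error:" detail continues below), the same serving-cert volume is re-queued at 1s a moment later, and the older failures above already sit at 8s: the per-operation delay doubles on each consecutive failure. A toy model of that doubling, not the kubelet's actual implementation; the cap below is an assumption chosen for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // First failure waits 500ms, the next 1s, and so on, matching the
        // durationBeforeRetry values visible in these records.
        delay := 500 * time.Millisecond
        maxDelay := 2*time.Minute + 2*time.Second // assumed cap
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
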
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert") pod "apiserver-74f47b695f-rbr8c" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137") : secret "serving-cert" not found Feb 16 17:00:46.132848 master-0 kubenswrapper[10003]: I0216 17:00:46.132819 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-node-pullsecrets\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.133874 master-0 kubenswrapper[10003]: I0216 17:00:46.133840 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-serving-ca\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.138382 master-0 kubenswrapper[10003]: I0216 17:00:46.134764 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-config\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.138382 master-0 kubenswrapper[10003]: I0216 17:00:46.136459 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-trusted-ca-bundle\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.138382 master-0 kubenswrapper[10003]: E0216 17:00:46.136599 10003 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 17:00:46.138382 master-0 kubenswrapper[10003]: E0216 17:00:46.136653 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit podName:5d108e8b-620e-4523-a97d-3e4d2073f137 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:46.636634253 +0000 UTC m=+16.152119924 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit") pod "apiserver-74f47b695f-rbr8c" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137") : configmap "audit-0" not found Feb 16 17:00:46.139045 master-0 kubenswrapper[10003]: I0216 17:00:46.139006 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-image-import-ca\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.144834 master-0 kubenswrapper[10003]: I0216 17:00:46.144791 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-encryption-config\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.145085 master-0 kubenswrapper[10003]: I0216 17:00:46.145046 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-client\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.162138 master-0 kubenswrapper[10003]: I0216 17:00:46.162100 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nbjc\" (UniqueName: \"kubernetes.io/projected/5d108e8b-620e-4523-a97d-3e4d2073f137-kube-api-access-8nbjc\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.181978 master-0 kubenswrapper[10003]: I0216 17:00:46.181902 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"] Feb 16 17:00:46.242208 master-0 kubenswrapper[10003]: I0216 17:00:46.242150 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"] Feb 16 17:00:46.252317 master-0 kubenswrapper[10003]: W0216 17:00:46.252261 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9609a4f3_b947_47af_a685_baae26c50fa3.slice/crio-8d9325183d87d503ed41689fd08cf0ecd5e5cd428a5bae6824cddf556b030e2a WatchSource:0}: Error finding container 8d9325183d87d503ed41689fd08cf0ecd5e5cd428a5bae6824cddf556b030e2a: Status 404 returned error can't find the container with id 8d9325183d87d503ed41689fd08cf0ecd5e5cd428a5bae6824cddf556b030e2a Feb 16 17:00:46.452439 master-0 kubenswrapper[10003]: I0216 17:00:46.452382 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-l5kbz"] Feb 16 17:00:46.453059 master-0 kubenswrapper[10003]: I0216 17:00:46.453036 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538151 master-0 kubenswrapper[10003]: I0216 17:00:46.538091 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538151 master-0 kubenswrapper[10003]: I0216 17:00:46.538137 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538449 master-0 kubenswrapper[10003]: I0216 17:00:46.538247 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538449 master-0 kubenswrapper[10003]: I0216 17:00:46.538297 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538449 master-0 kubenswrapper[10003]: I0216 17:00:46.538332 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538581 master-0 kubenswrapper[10003]: I0216 17:00:46.538459 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538666 master-0 kubenswrapper[10003]: I0216 17:00:46.538623 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538722 master-0 kubenswrapper[10003]: I0216 17:00:46.538677 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538722 master-0 kubenswrapper[10003]: I0216 17:00:46.538695 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538722 master-0 kubenswrapper[10003]: I0216 17:00:46.538716 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538884 master-0 kubenswrapper[10003]: I0216 17:00:46.538733 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538884 master-0 kubenswrapper[10003]: I0216 17:00:46.538814 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538884 master-0 kubenswrapper[10003]: I0216 17:00:46.538856 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.538884 master-0 kubenswrapper[10003]: I0216 17:00:46.538875 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640392 master-0 kubenswrapper[10003]: I0216 17:00:46.640328 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.640392 master-0 kubenswrapper[10003]: I0216 17:00:46.640393 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640682 master-0 kubenswrapper[10003]: I0216 17:00:46.640425 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640682 master-0 kubenswrapper[10003]: I0216 17:00:46.640446 10003 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640682 master-0 kubenswrapper[10003]: I0216 17:00:46.640616 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640906 master-0 kubenswrapper[10003]: I0216 17:00:46.640677 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640906 master-0 kubenswrapper[10003]: I0216 17:00:46.640779 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640906 master-0 kubenswrapper[10003]: I0216 17:00:46.640804 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640906 master-0 kubenswrapper[10003]: I0216 17:00:46.640810 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640906 master-0 kubenswrapper[10003]: I0216 17:00:46.640861 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.640906 master-0 kubenswrapper[10003]: I0216 17:00:46.640904 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641178 master-0 kubenswrapper[10003]: I0216 17:00:46.640956 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641178 master-0 kubenswrapper[10003]: I0216 17:00:46.641103 10003 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641257 master-0 kubenswrapper[10003]: I0216 17:00:46.641133 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641326 master-0 kubenswrapper[10003]: I0216 17:00:46.641306 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641369 master-0 kubenswrapper[10003]: I0216 17:00:46.641287 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:46.641410 master-0 kubenswrapper[10003]: I0216 17:00:46.641361 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641410 master-0 kubenswrapper[10003]: I0216 17:00:46.641394 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641491 master-0 kubenswrapper[10003]: I0216 17:00:46.641408 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641540 master-0 kubenswrapper[10003]: E0216 17:00:46.641480 10003 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 17:00:46.641583 master-0 kubenswrapper[10003]: I0216 17:00:46.641542 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641583 master-0 kubenswrapper[10003]: I0216 17:00:46.641562 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 
17:00:46.641660 master-0 kubenswrapper[10003]: E0216 17:00:46.641601 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert podName:5d108e8b-620e-4523-a97d-3e4d2073f137 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.641579642 +0000 UTC m=+17.157065313 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert") pod "apiserver-74f47b695f-rbr8c" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137") : secret "serving-cert" not found Feb 16 17:00:46.641660 master-0 kubenswrapper[10003]: I0216 17:00:46.641622 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641660 master-0 kubenswrapper[10003]: E0216 17:00:46.641629 10003 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 17:00:46.641783 master-0 kubenswrapper[10003]: I0216 17:00:46.641724 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641783 master-0 kubenswrapper[10003]: E0216 17:00:46.641765 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit podName:5d108e8b-620e-4523-a97d-3e4d2073f137 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.641740656 +0000 UTC m=+17.157226327 (durationBeforeRetry 1s). 
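NOTE: Every retry deadline above carries two clock readings, e.g. "2026-02-16 17:00:47.641579642 +0000 UTC m=+17.157065313". The "m=+" suffix is Go's monotonic clock reading, which time.Time's String method appends for values obtained from time.Now(); it counts seconds from the process's monotonic zero (roughly, kubelet start), so two offsets subtract to an exact in-process duration even if the wall clock steps. A small demonstration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(50 * time.Millisecond)
        now := time.Now()
        fmt.Println(now)            // prints a "... m=+0.05..." style suffix
        fmt.Println(now.Sub(start)) // duration computed from the monotonic readings
    }
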
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit") pod "apiserver-74f47b695f-rbr8c" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137") : configmap "audit-0" not found Feb 16 17:00:46.641859 master-0 kubenswrapper[10003]: I0216 17:00:46.641794 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641859 master-0 kubenswrapper[10003]: I0216 17:00:46.641815 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641859 master-0 kubenswrapper[10003]: I0216 17:00:46.641845 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.641985 master-0 kubenswrapper[10003]: I0216 17:00:46.641795 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.645797 master-0 kubenswrapper[10003]: I0216 17:00:46.645420 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.646204 master-0 kubenswrapper[10003]: I0216 17:00:46.646178 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.658263 master-0 kubenswrapper[10003]: I0216 17:00:46.658215 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.793042 master-0 kubenswrapper[10003]: I0216 17:00:46.792420 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:00:46.808957 master-0 kubenswrapper[10003]: W0216 17:00:46.808489 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc45ce0e5_c50b_4210_b7bb_82db2b2bc1db.slice/crio-038865dec18a0d018db14cc0ef7eef2e93b57b0f6f010be3c036aa9f30e0bec0 WatchSource:0}: Error finding container 038865dec18a0d018db14cc0ef7eef2e93b57b0f6f010be3c036aa9f30e0bec0: Status 404 returned error can't find the container with id 038865dec18a0d018db14cc0ef7eef2e93b57b0f6f010be3c036aa9f30e0bec0 Feb 16 17:00:46.832115 master-0 kubenswrapper[10003]: I0216 17:00:46.832028 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:00:47.017129 master-0 kubenswrapper[10003]: I0216 17:00:47.016398 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" event={"ID":"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db","Type":"ContainerStarted","Data":"303f8e1c362195fd4193cfe61e0e57d53326ff15f9ff7312804a028571094c23"} Feb 16 17:00:47.017129 master-0 kubenswrapper[10003]: I0216 17:00:47.016734 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" event={"ID":"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db","Type":"ContainerStarted","Data":"038865dec18a0d018db14cc0ef7eef2e93b57b0f6f010be3c036aa9f30e0bec0"} Feb 16 17:00:47.033978 master-0 kubenswrapper[10003]: I0216 17:00:47.031566 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"06940a658879a063b012c2bf76a3258fbdd61e5203f5587e2a2a955dfa358b02"} Feb 16 17:00:47.033978 master-0 kubenswrapper[10003]: I0216 17:00:47.032311 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"fa96b9440dbcead07a8e8a2883de97575b011436686f4fab2170bdfcc0a3f79e"} Feb 16 17:00:47.034406 master-0 kubenswrapper[10003]: I0216 17:00:47.034217 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" event={"ID":"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd","Type":"ContainerStarted","Data":"b595f395aac0332f79c685e4f9b8d1184bc8d65ea7662129777c88a2f4b6d75c"} Feb 16 17:00:47.037428 master-0 kubenswrapper[10003]: I0216 17:00:47.037390 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv_4e51bba5-0ebe-4e55-a588-38b71548c605/cluster-olm-operator/0.log" Feb 16 17:00:47.037915 master-0 kubenswrapper[10003]: I0216 17:00:47.037881 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"079e840529eb6d74a125e4d8873e01bd5f48d0a6e891c798f77f912c0e2b6249"} Feb 16 17:00:47.038577 master-0 kubenswrapper[10003]: I0216 17:00:47.038548 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" 
event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"8d9325183d87d503ed41689fd08cf0ecd5e5cd428a5bae6824cddf556b030e2a"} Feb 16 17:00:47.039809 master-0 kubenswrapper[10003]: I0216 17:00:47.039786 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" event={"ID":"5192fa49-d81c-47ce-b2ab-f90996cc0bd5","Type":"ContainerStarted","Data":"a0b5d7ea986d410582d28daade693c0e0c2c5f11b8996357f83090e55f5232a7"} Feb 16 17:00:47.041239 master-0 kubenswrapper[10003]: I0216 17:00:47.041210 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:47.042032 master-0 kubenswrapper[10003]: I0216 17:00:47.041844 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" event={"ID":"568b22df-b454-4d74-bc21-6c84daf17c8c","Type":"ContainerStarted","Data":"3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f"} Feb 16 17:00:47.050846 master-0 kubenswrapper[10003]: I0216 17:00:47.050539 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qcgxx"] Feb 16 17:00:47.051057 master-0 kubenswrapper[10003]: I0216 17:00:47.050832 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" podStartSLOduration=1.050822381 podStartE2EDuration="1.050822381s" podCreationTimestamp="2026-02-16 17:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:00:47.048983091 +0000 UTC m=+16.564468762" watchObservedRunningTime="2026-02-16 17:00:47.050822381 +0000 UTC m=+16.566308052" Feb 16 17:00:47.052003 master-0 kubenswrapper[10003]: I0216 17:00:47.051572 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.054782 master-0 kubenswrapper[10003]: I0216 17:00:47.053601 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 17:00:47.054782 master-0 kubenswrapper[10003]: I0216 17:00:47.053936 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 17:00:47.054782 master-0 kubenswrapper[10003]: I0216 17:00:47.054111 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:00:47.054782 master-0 kubenswrapper[10003]: I0216 17:00:47.054421 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 17:00:47.061045 master-0 kubenswrapper[10003]: I0216 17:00:47.061006 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qcgxx"] Feb 16 17:00:47.075653 master-0 kubenswrapper[10003]: I0216 17:00:47.075532 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:47.149749 master-0 kubenswrapper[10003]: I0216 17:00:47.146749 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47bnn\" (UniqueName: \"kubernetes.io/projected/01921947-c416-44b6-953d-75b935ad8977-kube-api-access-47bnn\") pod \"01921947-c416-44b6-953d-75b935ad8977\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " Feb 16 17:00:47.149749 master-0 kubenswrapper[10003]: I0216 17:00:47.146788 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-config\") pod \"01921947-c416-44b6-953d-75b935ad8977\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " Feb 16 17:00:47.149749 master-0 kubenswrapper[10003]: I0216 17:00:47.146842 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-proxy-ca-bundles\") pod \"01921947-c416-44b6-953d-75b935ad8977\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " Feb 16 17:00:47.149749 master-0 kubenswrapper[10003]: I0216 17:00:47.146895 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") pod \"01921947-c416-44b6-953d-75b935ad8977\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " Feb 16 17:00:47.149749 master-0 kubenswrapper[10003]: I0216 17:00:47.147108 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.149749 master-0 kubenswrapper[10003]: I0216 17:00:47.147128 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.149749 master-0 kubenswrapper[10003]: I0216 17:00:47.147289 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.149749 master-0 kubenswrapper[10003]: I0216 17:00:47.149598 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-config" (OuterVolumeSpecName: "config") pod "01921947-c416-44b6-953d-75b935ad8977" (UID: "01921947-c416-44b6-953d-75b935ad8977"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:47.150634 master-0 kubenswrapper[10003]: I0216 17:00:47.150592 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "01921947-c416-44b6-953d-75b935ad8977" (UID: "01921947-c416-44b6-953d-75b935ad8977"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:00:47.151324 master-0 kubenswrapper[10003]: I0216 17:00:47.151296 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01921947-c416-44b6-953d-75b935ad8977-kube-api-access-47bnn" (OuterVolumeSpecName: "kube-api-access-47bnn") pod "01921947-c416-44b6-953d-75b935ad8977" (UID: "01921947-c416-44b6-953d-75b935ad8977"). InnerVolumeSpecName "kube-api-access-47bnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:00:47.152781 master-0 kubenswrapper[10003]: I0216 17:00:47.152716 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01921947-c416-44b6-953d-75b935ad8977" (UID: "01921947-c416-44b6-953d-75b935ad8977"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:00:47.248890 master-0 kubenswrapper[10003]: I0216 17:00:47.248838 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.249079 master-0 kubenswrapper[10003]: I0216 17:00:47.248978 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.249079 master-0 kubenswrapper[10003]: I0216 17:00:47.249001 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.249079 master-0 kubenswrapper[10003]: I0216 17:00:47.249076 10003 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:47.249171 master-0 kubenswrapper[10003]: I0216 17:00:47.249093 10003 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01921947-c416-44b6-953d-75b935ad8977-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:47.249171 master-0 kubenswrapper[10003]: I0216 17:00:47.249107 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47bnn\" (UniqueName: \"kubernetes.io/projected/01921947-c416-44b6-953d-75b935ad8977-kube-api-access-47bnn\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:47.249171 master-0 kubenswrapper[10003]: I0216 17:00:47.249117 10003 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:47.249433 master-0 kubenswrapper[10003]: E0216 17:00:47.249408 10003 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 16 17:00:47.249478 master-0 kubenswrapper[10003]: E0216 17:00:47.249453 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:47.749440946 +0000 UTC m=+17.264926617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : secret "dns-default-metrics-tls" not found Feb 16 17:00:47.250979 master-0 kubenswrapper[10003]: I0216 17:00:47.250690 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.266280 master-0 kubenswrapper[10003]: I0216 17:00:47.266228 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.406642 master-0 kubenswrapper[10003]: I0216 17:00:47.406129 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-vfxj4"] Feb 16 17:00:47.406799 master-0 kubenswrapper[10003]: I0216 17:00:47.406728 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:00:47.451233 master-0 kubenswrapper[10003]: I0216 17:00:47.451185 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:00:47.451463 master-0 kubenswrapper[10003]: I0216 17:00:47.451268 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:00:47.556327 master-0 kubenswrapper[10003]: I0216 17:00:47.556283 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:00:47.556497 master-0 kubenswrapper[10003]: I0216 17:00:47.556417 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:00:47.556669 master-0 kubenswrapper[10003]: I0216 17:00:47.556628 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:00:47.576034 master-0 kubenswrapper[10003]: I0216 17:00:47.575993 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:00:47.577147 master-0 kubenswrapper[10003]: I0216 17:00:47.577116 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 17:00:47.577666 master-0 kubenswrapper[10003]: I0216 17:00:47.577645 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.579442 master-0 kubenswrapper[10003]: I0216 17:00:47.579408 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 16 17:00:47.592400 master-0 kubenswrapper[10003]: I0216 17:00:47.592332 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: I0216 17:00:47.657229 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: I0216 17:00:47.657420 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: I0216 17:00:47.657482 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: I0216 17:00:47.657511 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-var-lock\") pod \"installer-1-master-0\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: E0216 17:00:47.657473 10003 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: E0216 17:00:47.657636 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert podName:5d108e8b-620e-4523-a97d-3e4d2073f137 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:49.657597376 +0000 UTC m=+19.173083227 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert") pod "apiserver-74f47b695f-rbr8c" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137") : secret "serving-cert" not found Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: I0216 17:00:47.657762 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb59921b-7279-4258-a342-554b2878dca1-kube-api-access\") pod \"installer-1-master-0\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: I0216 17:00:47.657894 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: E0216 17:00:47.658002 10003 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: E0216 17:00:47.658082 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit podName:5d108e8b-620e-4523-a97d-3e4d2073f137 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:49.658053799 +0000 UTC m=+19.173539460 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit") pod "apiserver-74f47b695f-rbr8c" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137") : configmap "audit-0" not found Feb 16 17:00:47.665099 master-0 kubenswrapper[10003]: I0216 17:00:47.660734 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:47.736807 master-0 kubenswrapper[10003]: I0216 17:00:47.736755 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:00:47.755948 master-0 kubenswrapper[10003]: W0216 17:00:47.755881 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6fe41b0_1a42_4f07_8220_d9aaa50788ad.slice/crio-631bd21307bb03e494699eafea36a1ef9835bef8edb1d35b0dd997809ea4fddf WatchSource:0}: Error finding container 631bd21307bb03e494699eafea36a1ef9835bef8edb1d35b0dd997809ea4fddf: Status 404 returned error can't find the container with id 631bd21307bb03e494699eafea36a1ef9835bef8edb1d35b0dd997809ea4fddf Feb 16 17:00:47.758982 master-0 kubenswrapper[10003]: I0216 17:00:47.758904 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") pod \"controller-manager-869cbbd595-47pjz\" (UID: \"01921947-c416-44b6-953d-75b935ad8977\") " pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:47.758982 master-0 kubenswrapper[10003]: I0216 17:00:47.758980 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:47.759142 master-0 kubenswrapper[10003]: I0216 17:00:47.759005 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:47.759142 master-0 kubenswrapper[10003]: I0216 17:00:47.759038 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:47.759142 master-0 kubenswrapper[10003]: I0216 17:00:47.759071 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:47.759142 master-0 kubenswrapper[10003]: I0216 17:00:47.759112 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:47.759300 master-0 kubenswrapper[10003]: I0216 17:00:47.759146 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod 
\"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:47.759300 master-0 kubenswrapper[10003]: I0216 17:00:47.759207 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.759300 master-0 kubenswrapper[10003]: I0216 17:00:47.759237 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-var-lock\") pod \"installer-1-master-0\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.759416 master-0 kubenswrapper[10003]: I0216 17:00:47.759353 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.759416 master-0 kubenswrapper[10003]: E0216 17:00:47.759404 10003 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Feb 16 17:00:47.759496 master-0 kubenswrapper[10003]: E0216 17:00:47.759447 10003 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:47.759540 master-0 kubenswrapper[10003]: E0216 17:00:47.759521 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:48.759497315 +0000 UTC m=+18.274983156 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : secret "dns-default-metrics-tls" not found Feb 16 17:00:47.759585 master-0 kubenswrapper[10003]: E0216 17:00:47.759549 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:01:03.759537296 +0000 UTC m=+33.275023207 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:00:47.760584 master-0 kubenswrapper[10003]: E0216 17:00:47.760568 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:47.760712 master-0 kubenswrapper[10003]: E0216 17:00:47.760697 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca podName:01921947-c416-44b6-953d-75b935ad8977 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:01:03.760684267 +0000 UTC m=+33.276170118 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca") pod "controller-manager-869cbbd595-47pjz" (UID: "01921947-c416-44b6-953d-75b935ad8977") : configmap "client-ca" not found Feb 16 17:00:47.760965 master-0 kubenswrapper[10003]: I0216 17:00:47.760568 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb59921b-7279-4258-a342-554b2878dca1-kube-api-access\") pod \"installer-1-master-0\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.761111 master-0 kubenswrapper[10003]: I0216 17:00:47.761070 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-var-lock\") pod \"installer-1-master-0\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.761244 master-0 kubenswrapper[10003]: I0216 17:00:47.761218 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:47.762568 master-0 kubenswrapper[10003]: I0216 17:00:47.762541 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:47.763737 master-0 kubenswrapper[10003]: I0216 17:00:47.763692 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:47.763964 master-0 kubenswrapper[10003]: I0216 17:00:47.763938 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:47.767544 master-0 kubenswrapper[10003]: I0216 17:00:47.767504 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:00:47.775109 master-0 kubenswrapper[10003]: I0216 17:00:47.775055 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:47.779097 master-0 kubenswrapper[10003]: I0216 17:00:47.779034 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb59921b-7279-4258-a342-554b2878dca1-kube-api-access\") pod \"installer-1-master-0\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:47.901072 master-0 kubenswrapper[10003]: I0216 17:00:47.897562 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:00:47.901072 master-0 kubenswrapper[10003]: I0216 17:00:47.899721 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:47.901072 master-0 kubenswrapper[10003]: I0216 17:00:47.899998 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:00:47.914250 master-0 kubenswrapper[10003]: I0216 17:00:47.914186 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:00:47.914388 master-0 kubenswrapper[10003]: I0216 17:00:47.914199 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:00:47.930684 master-0 kubenswrapper[10003]: I0216 17:00:47.930636 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:00:48.059162 master-0 kubenswrapper[10003]: I0216 17:00:48.058686 10003 generic.go:334] "Generic (PLEG): container finished" podID="6b3e071c-1c62-489b-91c1-aef0d197f40b" containerID="925f178f46a1d5c4c22dbeed05e4d6e9975a60d252305dcd17064d2bc8dfab6e" exitCode=0 Feb 16 17:00:48.059953 master-0 kubenswrapper[10003]: I0216 17:00:48.058913 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerDied","Data":"925f178f46a1d5c4c22dbeed05e4d6e9975a60d252305dcd17064d2bc8dfab6e"} Feb 16 17:00:48.061109 master-0 kubenswrapper[10003]: I0216 17:00:48.061054 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vfxj4" event={"ID":"a6fe41b0-1a42-4f07-8220-d9aaa50788ad","Type":"ContainerStarted","Data":"f10912c30fd4a11ea42c60b953841baf59f4219d858a735d1f2aa7871453e0dd"} Feb 16 17:00:48.061175 master-0 kubenswrapper[10003]: I0216 17:00:48.061122 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vfxj4" event={"ID":"a6fe41b0-1a42-4f07-8220-d9aaa50788ad","Type":"ContainerStarted","Data":"631bd21307bb03e494699eafea36a1ef9835bef8edb1d35b0dd997809ea4fddf"} Feb 16 17:00:48.061175 master-0 kubenswrapper[10003]: I0216 17:00:48.061122 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-869cbbd595-47pjz" Feb 16 17:00:48.062124 master-0 kubenswrapper[10003]: I0216 17:00:48.062094 10003 scope.go:117] "RemoveContainer" containerID="410ef06e22d76a946b4285857693ab64a631161fcff4dd55a6b1f8d6e54ed325" Feb 16 17:00:48.062226 master-0 kubenswrapper[10003]: I0216 17:00:48.062201 10003 scope.go:117] "RemoveContainer" containerID="925f178f46a1d5c4c22dbeed05e4d6e9975a60d252305dcd17064d2bc8dfab6e" Feb 16 17:00:48.062397 master-0 kubenswrapper[10003]: E0216 17:00:48.062366 10003 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=etcd-operator pod=etcd-operator-67bf55ccdd-cppj8_openshift-etcd-operator(6b3e071c-1c62-489b-91c1-aef0d197f40b)\"" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:00:48.106800 master-0 kubenswrapper[10003]: I0216 17:00:48.106708 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"] Feb 16 17:00:48.118246 master-0 kubenswrapper[10003]: I0216 17:00:48.117565 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-vfxj4" podStartSLOduration=1.117548398 podStartE2EDuration="1.117548398s" podCreationTimestamp="2026-02-16 17:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:00:48.116839599 +0000 UTC m=+17.632325270" watchObservedRunningTime="2026-02-16 17:00:48.117548398 +0000 UTC m=+17.633034079" Feb 16 17:00:48.137515 master-0 kubenswrapper[10003]: I0216 17:00:48.132330 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"] Feb 16 17:00:48.146744 master-0 kubenswrapper[10003]: I0216 17:00:48.146452 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"] Feb 16 17:00:48.148103 master-0 kubenswrapper[10003]: I0216 17:00:48.148037 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.149134 master-0 kubenswrapper[10003]: I0216 17:00:48.149087 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-869cbbd595-47pjz"] Feb 16 17:00:48.151090 master-0 kubenswrapper[10003]: I0216 17:00:48.150427 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-869cbbd595-47pjz"] Feb 16 17:00:48.151330 master-0 kubenswrapper[10003]: I0216 17:00:48.151277 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:00:48.151625 master-0 kubenswrapper[10003]: I0216 17:00:48.151603 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:00:48.154432 master-0 kubenswrapper[10003]: I0216 17:00:48.151776 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:00:48.154432 master-0 kubenswrapper[10003]: I0216 17:00:48.152853 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:00:48.154432 master-0 kubenswrapper[10003]: I0216 17:00:48.153342 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:00:48.158013 master-0 kubenswrapper[10003]: I0216 17:00:48.156159 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"] Feb 16 17:00:48.165987 master-0 kubenswrapper[10003]: I0216 17:00:48.165464 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:00:48.166161 master-0 kubenswrapper[10003]: I0216 17:00:48.166129 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.168695 master-0 kubenswrapper[10003]: I0216 17:00:48.166536 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-proxy-ca-bundles\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.168695 master-0 kubenswrapper[10003]: I0216 17:00:48.166604 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-config\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.168695 master-0 kubenswrapper[10003]: I0216 17:00:48.166716 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zqt6\" (UniqueName: \"kubernetes.io/projected/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-kube-api-access-9zqt6\") pod 
\"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.168695 master-0 kubenswrapper[10003]: I0216 17:00:48.166806 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-serving-cert\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.168695 master-0 kubenswrapper[10003]: I0216 17:00:48.166893 10003 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/01921947-c416-44b6-953d-75b935ad8977-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:00:48.268241 master-0 kubenswrapper[10003]: I0216 17:00:48.268195 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.268241 master-0 kubenswrapper[10003]: I0216 17:00:48.268251 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-proxy-ca-bundles\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.268493 master-0 kubenswrapper[10003]: I0216 17:00:48.268273 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-config\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.268493 master-0 kubenswrapper[10003]: I0216 17:00:48.268468 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zqt6\" (UniqueName: \"kubernetes.io/projected/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-kube-api-access-9zqt6\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.268585 master-0 kubenswrapper[10003]: I0216 17:00:48.268545 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-serving-cert\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.269158 master-0 kubenswrapper[10003]: E0216 17:00:48.269115 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:48.269247 master-0 kubenswrapper[10003]: E0216 17:00:48.269231 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca podName:f9e4abf5-7fdb-4aad-a69a-b0999c617acb nodeName:}" failed. 
No retries permitted until 2026-02-16 17:00:48.769207754 +0000 UTC m=+18.284693425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca") pod "controller-manager-5bf97f7775-zn8fd" (UID: "f9e4abf5-7fdb-4aad-a69a-b0999c617acb") : configmap "client-ca" not found Feb 16 17:00:48.269914 master-0 kubenswrapper[10003]: I0216 17:00:48.269712 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-config\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.271269 master-0 kubenswrapper[10003]: I0216 17:00:48.271231 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-proxy-ca-bundles\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.274083 master-0 kubenswrapper[10003]: I0216 17:00:48.274050 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-serving-cert\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.296876 master-0 kubenswrapper[10003]: I0216 17:00:48.296820 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zqt6\" (UniqueName: \"kubernetes.io/projected/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-kube-api-access-9zqt6\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.719685 master-0 kubenswrapper[10003]: W0216 17:00:48.719556 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode10d0b0c_4c2a_45b3_8d69_3070d566b97d.slice/crio-add87c6ecc390c11dd4bb671cf6c85cf8d43a3b5be958bf533ae60889482daca WatchSource:0}: Error finding container add87c6ecc390c11dd4bb671cf6c85cf8d43a3b5be958bf533ae60889482daca: Status 404 returned error can't find the container with id add87c6ecc390c11dd4bb671cf6c85cf8d43a3b5be958bf533ae60889482daca Feb 16 17:00:48.720169 master-0 kubenswrapper[10003]: W0216 17:00:48.720152 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74b2561b_933b_4c58_a63a_7a8c671d0ae9.slice/crio-a60e6d4793a7edacd573013c98b5733c94b908b9bbe63d3fd698f772eed289c7 WatchSource:0}: Error finding container a60e6d4793a7edacd573013c98b5733c94b908b9bbe63d3fd698f772eed289c7: Status 404 returned error can't find the container with id a60e6d4793a7edacd573013c98b5733c94b908b9bbe63d3fd698f772eed289c7 Feb 16 17:00:48.779048 master-0 kubenswrapper[10003]: I0216 17:00:48.779001 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " 
pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:48.779283 master-0 kubenswrapper[10003]: I0216 17:00:48.779075 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:48.779283 master-0 kubenswrapper[10003]: E0216 17:00:48.779216 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:48.779283 master-0 kubenswrapper[10003]: E0216 17:00:48.779270 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca podName:f9e4abf5-7fdb-4aad-a69a-b0999c617acb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:49.779254202 +0000 UTC m=+19.294739873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca") pod "controller-manager-5bf97f7775-zn8fd" (UID: "f9e4abf5-7fdb-4aad-a69a-b0999c617acb") : configmap "client-ca" not found Feb 16 17:00:48.792247 master-0 kubenswrapper[10003]: I0216 17:00:48.792186 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:48.818440 master-0 kubenswrapper[10003]: I0216 17:00:48.818116 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01921947-c416-44b6-953d-75b935ad8977" path="/var/lib/kubelet/pods/01921947-c416-44b6-953d-75b935ad8977/volumes" Feb 16 17:00:48.886566 master-0 kubenswrapper[10003]: I0216 17:00:48.886504 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"] Feb 16 17:00:48.892073 master-0 kubenswrapper[10003]: I0216 17:00:48.892024 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:00:49.066493 master-0 kubenswrapper[10003]: I0216 17:00:49.066422 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerStarted","Data":"add87c6ecc390c11dd4bb671cf6c85cf8d43a3b5be958bf533ae60889482daca"} Feb 16 17:00:49.067496 master-0 kubenswrapper[10003]: I0216 17:00:49.067448 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"a60e6d4793a7edacd573013c98b5733c94b908b9bbe63d3fd698f772eed289c7"} Feb 16 17:00:49.264348 master-0 kubenswrapper[10003]: I0216 17:00:49.264284 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-74f47b695f-rbr8c"] Feb 16 17:00:49.265797 master-0 kubenswrapper[10003]: E0216 17:00:49.264542 10003 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" podUID="5d108e8b-620e-4523-a97d-3e4d2073f137" Feb 16 17:00:49.688562 master-0 kubenswrapper[10003]: I0216 17:00:49.688486 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:49.688741 master-0 kubenswrapper[10003]: I0216 17:00:49.688581 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:49.688741 master-0 kubenswrapper[10003]: E0216 17:00:49.688685 10003 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Feb 16 17:00:49.688741 master-0 kubenswrapper[10003]: E0216 17:00:49.688734 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit podName:5d108e8b-620e-4523-a97d-3e4d2073f137 nodeName:}" failed. No retries permitted until 2026-02-16 17:00:53.688720482 +0000 UTC m=+23.204206153 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit") pod "apiserver-74f47b695f-rbr8c" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137") : configmap "audit-0" not found Feb 16 17:00:49.697324 master-0 kubenswrapper[10003]: I0216 17:00:49.697259 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert\") pod \"apiserver-74f47b695f-rbr8c\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") " pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:49.731516 master-0 kubenswrapper[10003]: W0216 17:00:49.731472 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e9a9d3_9b18_4c19_9558_f33c68101922.slice/crio-f9d46acb28343da01106746c6081478cf22731025778bc59637f1078390a8865 WatchSource:0}: Error finding container f9d46acb28343da01106746c6081478cf22731025778bc59637f1078390a8865: Status 404 returned error can't find the container with id f9d46acb28343da01106746c6081478cf22731025778bc59637f1078390a8865 Feb 16 17:00:49.789621 master-0 kubenswrapper[10003]: I0216 17:00:49.789511 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:00:49.789723 master-0 kubenswrapper[10003]: E0216 17:00:49.789623 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:00:49.789723 master-0 kubenswrapper[10003]: E0216 17:00:49.789703 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca podName:f9e4abf5-7fdb-4aad-a69a-b0999c617acb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:51.789685595 +0000 UTC m=+21.305171266 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca") pod "controller-manager-5bf97f7775-zn8fd" (UID: "f9e4abf5-7fdb-4aad-a69a-b0999c617acb") : configmap "client-ca" not found Feb 16 17:00:50.010478 master-0 kubenswrapper[10003]: I0216 17:00:50.005894 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"] Feb 16 17:00:50.030659 master-0 kubenswrapper[10003]: W0216 17:00:50.022112 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab6e5720_2c30_4962_9c67_89f1607d137f.slice/crio-2ab21ee08c6858b29e0d5402811c6a5058510ebfc99fdee4dceca48abf0ebb37 WatchSource:0}: Error finding container 2ab21ee08c6858b29e0d5402811c6a5058510ebfc99fdee4dceca48abf0ebb37: Status 404 returned error can't find the container with id 2ab21ee08c6858b29e0d5402811c6a5058510ebfc99fdee4dceca48abf0ebb37 Feb 16 17:00:50.096158 master-0 kubenswrapper[10003]: I0216 17:00:50.094984 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" event={"ID":"ab6e5720-2c30-4962-9c67-89f1607d137f","Type":"ContainerStarted","Data":"2ab21ee08c6858b29e0d5402811c6a5058510ebfc99fdee4dceca48abf0ebb37"} Feb 16 17:00:50.097681 master-0 kubenswrapper[10003]: I0216 17:00:50.096902 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"be513303bf4ed878c6c5f6ef9c7437f58c4a298f57e9e8964fc17527ef538c38"} Feb 16 17:00:50.097681 master-0 kubenswrapper[10003]: I0216 17:00:50.096978 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"f9d46acb28343da01106746c6081478cf22731025778bc59637f1078390a8865"} Feb 16 17:00:50.098548 master-0 kubenswrapper[10003]: I0216 17:00:50.098503 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" event={"ID":"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd","Type":"ContainerStarted","Data":"a620812d00ec1d27fd80352b095f2c12e6234eb0d4bf84ea3c70b1cbd9af080b"} Feb 16 17:00:50.102433 master-0 kubenswrapper[10003]: I0216 17:00:50.102393 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"b2ea1bb15f78693382433f2c7f09878ee2e059e95bab8649c9ca7870ea580187"} Feb 16 17:00:50.102504 master-0 kubenswrapper[10003]: I0216 17:00:50.102456 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-74f47b695f-rbr8c" Feb 16 17:00:50.119950 master-0 kubenswrapper[10003]: I0216 17:00:50.119336 10003 util.go:30] "No sandbox for pod can be found. 
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.201411 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.201590 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-encryption-config\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.201727 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-client\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.201868 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-audit-dir\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.202068 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-trusted-ca-bundle\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.202104 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nbjc\" (UniqueName: \"kubernetes.io/projected/5d108e8b-620e-4523-a97d-3e4d2073f137-kube-api-access-8nbjc\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.202177 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-config\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.202198 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-node-pullsecrets\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.202240 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-image-import-ca\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.202308 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-serving-ca\") pod \"5d108e8b-620e-4523-a97d-3e4d2073f137\" (UID: \"5d108e8b-620e-4523-a97d-3e4d2073f137\") "
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.205462 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:00:50.207207 master-0 kubenswrapper[10003]: I0216 17:00:50.206844 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:00:50.216782 master-0 kubenswrapper[10003]: I0216 17:00:50.216682 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:00:50.222474 master-0 kubenswrapper[10003]: I0216 17:00:50.222424 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:00:50.226941 master-0 kubenswrapper[10003]: I0216 17:00:50.226343 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:00:50.226941 master-0 kubenswrapper[10003]: I0216 17:00:50.226505 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-config" (OuterVolumeSpecName: "config") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:00:50.228719 master-0 kubenswrapper[10003]: I0216 17:00:50.228669 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:00:50.229806 master-0 kubenswrapper[10003]: I0216 17:00:50.229213 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d108e8b-620e-4523-a97d-3e4d2073f137-kube-api-access-8nbjc" (OuterVolumeSpecName: "kube-api-access-8nbjc") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "kube-api-access-8nbjc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:00:50.231405 master-0 kubenswrapper[10003]: I0216 17:00:50.231081 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:00:50.246862 master-0 kubenswrapper[10003]: I0216 17:00:50.246791 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d108e8b-620e-4523-a97d-3e4d2073f137" (UID: "5d108e8b-620e-4523-a97d-3e4d2073f137"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:00:50.274504 master-0 kubenswrapper[10003]: I0216 17:00:50.274218 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304826 10003 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304866 10003 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-encryption-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304878 10003 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-client\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304886 10003 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304895 10003 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304905 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nbjc\" (UniqueName: \"kubernetes.io/projected/5d108e8b-620e-4523-a97d-3e4d2073f137-kube-api-access-8nbjc\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304913 10003 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304938 10003 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5d108e8b-620e-4523-a97d-3e4d2073f137-node-pullsecrets\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304950 10003 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-image-import-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.306959 master-0 kubenswrapper[10003]: I0216 17:00:50.304960 10003 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-etcd-serving-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:50.324694 master-0 kubenswrapper[10003]: I0216 17:00:50.324661 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-279g6"]
Feb 16 17:00:50.332348 master-0 kubenswrapper[10003]: I0216 17:00:50.330086 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qcgxx"]
Feb 16 17:00:50.332348 master-0 kubenswrapper[10003]: W0216 17:00:50.332313 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d96ccdc_0b09_437d_bfca_1958af5d9953.slice/crio-1d90441ff6782f784fce85c87a44597213ff8f98913ae13bfc6b95e97a8d2532 WatchSource:0}: Error finding container 1d90441ff6782f784fce85c87a44597213ff8f98913ae13bfc6b95e97a8d2532: Status 404 returned error can't find the container with id 1d90441ff6782f784fce85c87a44597213ff8f98913ae13bfc6b95e97a8d2532
Feb 16 17:00:50.355767 master-0 kubenswrapper[10003]: I0216 17:00:50.353115 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"]
Feb 16 17:00:50.355767 master-0 kubenswrapper[10003]: I0216 17:00:50.354141 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
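[editor's note] The reconciler_common.go sequence above (UnmountVolume started, then UnmountVolume.TearDown succeeded, then Volume detached) is the kubelet's volume manager converging actual state toward desired state after pod 5d108e8b-620e-4523-a97d-3e4d2073f137 was deleted. A rough stdlib-only sketch of that diff-and-act idea, with invented names; the real reconciler tracks far more state per volume:

    package main

    import "fmt"

    // reconcile compares the desired set of mounted volumes against what
    // is actually mounted and emits the operations seen in the log above.
    func reconcile(desired, actual map[string]bool) {
        for vol := range actual {
            if !desired[vol] {
                fmt.Printf("UnmountVolume started for volume %q\n", vol)
                fmt.Printf("Volume detached for volume %q\n", vol)
            }
        }
        for vol := range desired {
            if !actual[vol] {
                fmt.Printf("MountVolume started for volume %q\n", vol)
            }
        }
    }

    func main() {
        // Pod deleted: nothing is desired any more, so every volume of
        // the old apiserver pod gets unmounted and detached.
        desired := map[string]bool{}
        actual := map[string]bool{
            "serving-cert": true, "encryption-config": true, "etcd-client": true,
            "audit-dir": true, "trusted-ca-bundle": true, "config": true,
        }
        reconcile(desired, actual)
    }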
Feb 16 17:00:50.355767 master-0 kubenswrapper[10003]: I0216 17:00:50.355401 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"]
Feb 16 17:00:50.355767 master-0 kubenswrapper[10003]: I0216 17:00:50.355418 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 17:00:50.360008 master-0 kubenswrapper[10003]: I0216 17:00:50.356665 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 17:00:50.360008 master-0 kubenswrapper[10003]: I0216 17:00:50.356859 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 17:00:50.360008 master-0 kubenswrapper[10003]: I0216 17:00:50.357179 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 17:00:50.360008 master-0 kubenswrapper[10003]: I0216 17:00:50.357361 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 17:00:50.360008 master-0 kubenswrapper[10003]: I0216 17:00:50.357656 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 16 17:00:50.360008 master-0 kubenswrapper[10003]: I0216 17:00:50.357777 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 17:00:50.366106 master-0 kubenswrapper[10003]: I0216 17:00:50.360686 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 17:00:50.407101 master-0 kubenswrapper[10003]: I0216 17:00:50.407008 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.407101 master-0 kubenswrapper[10003]: I0216 17:00:50.407063 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.407101 master-0 kubenswrapper[10003]: I0216 17:00:50.407094 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.407398 master-0 kubenswrapper[10003]: I0216 17:00:50.407138 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.407398 master-0 kubenswrapper[10003]: I0216 17:00:50.407229 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.407398 master-0 kubenswrapper[10003]: I0216 17:00:50.407291 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.407398 master-0 kubenswrapper[10003]: I0216 17:00:50.407393 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.407539 master-0 kubenswrapper[10003]: I0216 17:00:50.407454 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.508603 master-0 kubenswrapper[10003]: I0216 17:00:50.508563 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.508767 master-0 kubenswrapper[10003]: I0216 17:00:50.508651 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.508767 master-0 kubenswrapper[10003]: I0216 17:00:50.508694 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.508767 master-0 kubenswrapper[10003]: I0216 17:00:50.508730 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.509703 master-0 kubenswrapper[10003]: I0216 17:00:50.508981 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.509703 master-0 kubenswrapper[10003]: I0216 17:00:50.509030 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.509703 master-0 kubenswrapper[10003]: I0216 17:00:50.509113 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.509703 master-0 kubenswrapper[10003]: I0216 17:00:50.509205 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.509703 master-0 kubenswrapper[10003]: I0216 17:00:50.509231 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.509703 master-0 kubenswrapper[10003]: I0216 17:00:50.509473 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.509703 master-0 kubenswrapper[10003]: I0216 17:00:50.509486 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.510149 master-0 kubenswrapper[10003]: I0216 17:00:50.509771 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.511797 master-0 kubenswrapper[10003]: I0216 17:00:50.511769 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.512490 master-0 kubenswrapper[10003]: I0216 17:00:50.512471 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.514435 master-0 kubenswrapper[10003]: I0216 17:00:50.514413 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.522594 master-0 kubenswrapper[10003]: I0216 17:00:50.522576 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:50.684165 master-0 kubenswrapper[10003]: I0216 17:00:50.684052 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:00:51.102856 master-0 kubenswrapper[10003]: I0216 17:00:51.102774 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"]
Feb 16 17:00:51.108281 master-0 kubenswrapper[10003]: I0216 17:00:51.108211 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"bb59921b-7279-4258-a342-554b2878dca1","Type":"ContainerStarted","Data":"fad88901989abbb6486cff5122d32135bc1a86edd8ffd3eb270671a3c15b9193"}
Feb 16 17:00:51.108281 master-0 kubenswrapper[10003]: I0216 17:00:51.108256 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"bb59921b-7279-4258-a342-554b2878dca1","Type":"ContainerStarted","Data":"2e4d127d6ee2504af15db182b0f08077405b5010605551e55d9383d6127a2697"}
Feb 16 17:00:51.110961 master-0 kubenswrapper[10003]: I0216 17:00:51.110886 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"7c0822a4b748eb1f3f4a4167fcf68aef3951b37e78e3f357e137483a9da93da7"}
Feb 16 17:00:51.115041 master-0 kubenswrapper[10003]: I0216 17:00:51.114706 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"f3e3e8e94dc6c217da7c3312700e3c981cf01212e798fb2c9ea5fc2b31f6b8aa"}
Feb 16 17:00:51.117091 master-0 kubenswrapper[10003]: I0216 17:00:51.116826 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"1d90441ff6782f784fce85c87a44597213ff8f98913ae13bfc6b95e97a8d2532"}
Feb 16 17:00:51.117091 master-0 kubenswrapper[10003]: I0216 17:00:51.116884 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-74f47b695f-rbr8c"
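[editor's note] Each new pod above follows the same order: informer caches for its secrets and configmaps populate, every volume is verified and mounted, and only then does the kubelet report "No sandbox for pod can be found. Need to start a new one" and create the sandbox. A stdlib-only sketch of that gate; the helper name and signature are invented (the kubelet's own gate is its volume manager's wait-for-attach-and-mount step):

    package main

    import (
        "errors"
        "fmt"
    )

    // allVolumesMounted reports whether every volume a pod needs is in the
    // mounted set; sandbox creation waits until this returns nil.
    func allVolumesMounted(needed []string, mounted map[string]bool) error {
        for _, v := range needed {
            if !mounted[v] {
                return errors.New("unmounted volumes=[" + v + "]")
            }
        }
        return nil
    }

    func main() {
        needed := []string{"etcd-client", "serving-cert", "kube-api-access-5v65g"}
        mounted := map[string]bool{
            "etcd-client": true, "serving-cert": true, "kube-api-access-5v65g": true,
        }
        if err := allVolumesMounted(needed, mounted); err != nil {
            fmt.Println("waiting:", err)
            return
        }
        fmt.Println("No sandbox for pod can be found. Need to start a new one")
    }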
Feb 16 17:00:51.140426 master-0 kubenswrapper[10003]: I0216 17:00:51.140336 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=4.140248242 podStartE2EDuration="4.140248242s" podCreationTimestamp="2026-02-16 17:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:00:51.138572476 +0000 UTC m=+20.654058157" watchObservedRunningTime="2026-02-16 17:00:51.140248242 +0000 UTC m=+20.655733913"
Feb 16 17:00:51.174462 master-0 kubenswrapper[10003]: I0216 17:00:51.174392 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-fc4bf7f79-tqnlw"]
Feb 16 17:00:51.177875 master-0 kubenswrapper[10003]: I0216 17:00:51.175274 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.177875 master-0 kubenswrapper[10003]: I0216 17:00:51.177785 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 17:00:51.178126 master-0 kubenswrapper[10003]: I0216 17:00:51.177979 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 17:00:51.183288 master-0 kubenswrapper[10003]: I0216 17:00:51.180270 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 17:00:51.183288 master-0 kubenswrapper[10003]: I0216 17:00:51.180563 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 17:00:51.183288 master-0 kubenswrapper[10003]: I0216 17:00:51.180605 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 17:00:51.183288 master-0 kubenswrapper[10003]: I0216 17:00:51.180616 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 17:00:51.183288 master-0 kubenswrapper[10003]: I0216 17:00:51.180743 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 17:00:51.183288 master-0 kubenswrapper[10003]: I0216 17:00:51.180830 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 17:00:51.183288 master-0 kubenswrapper[10003]: I0216 17:00:51.180940 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 17:00:51.192007 master-0 kubenswrapper[10003]: I0216 17:00:51.186177 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 17:00:51.194267 master-0 kubenswrapper[10003]: I0216 17:00:51.194191 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-74f47b695f-rbr8c"]
Feb 16 17:00:51.197090 master-0 kubenswrapper[10003]: I0216 17:00:51.197038 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-fc4bf7f79-tqnlw"]
Feb 16 17:00:51.199492 master-0 kubenswrapper[10003]: I0216 17:00:51.199441 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-74f47b695f-rbr8c"]
Feb 16 17:00:51.219402 master-0 kubenswrapper[10003]: I0216 17:00:51.219358 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219589 master-0 kubenswrapper[10003]: I0216 17:00:51.219406 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219589 master-0 kubenswrapper[10003]: I0216 17:00:51.219436 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219589 master-0 kubenswrapper[10003]: I0216 17:00:51.219462 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219589 master-0 kubenswrapper[10003]: I0216 17:00:51.219483 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219589 master-0 kubenswrapper[10003]: I0216 17:00:51.219519 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219766 master-0 kubenswrapper[10003]: I0216 17:00:51.219641 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219766 master-0 kubenswrapper[10003]: I0216 17:00:51.219686 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219766 master-0 kubenswrapper[10003]: I0216 17:00:51.219711 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219884 master-0 kubenswrapper[10003]: I0216 17:00:51.219796 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.219884 master-0 kubenswrapper[10003]: I0216 17:00:51.219813 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.321880 master-0 kubenswrapper[10003]: I0216 17:00:51.321625 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.321880 master-0 kubenswrapper[10003]: I0216 17:00:51.321726 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.321880 master-0 kubenswrapper[10003]: I0216 17:00:51.321748 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.321880 master-0 kubenswrapper[10003]: I0216 17:00:51.321770 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.321880 master-0 kubenswrapper[10003]: I0216 17:00:51.321787 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.321880 master-0 kubenswrapper[10003]: I0216 17:00:51.321808 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.322344 master-0 kubenswrapper[10003]: I0216 17:00:51.322081 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
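[editor's note] The UniqueName strings throughout these entries follow the pattern <plugin>/<pod-UID>-<volume-name>, e.g. kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client. A small sketch that assembles and splits that form; this is just the format visible in the log, not an official parser:

    package main

    import (
        "fmt"
        "strings"
    )

    // uniqueName builds the <plugin>/<podUID>-<volume> form seen in this log.
    func uniqueName(plugin, podUID, volume string) string {
        return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
    }

    // splitUnique undoes uniqueName, given the pod UID. Illustrative only.
    func splitUnique(name, podUID string) (plugin, volume string, ok bool) {
        i := strings.LastIndex(name, "/"+podUID+"-")
        if i < 0 {
            return "", "", false
        }
        return name[:i], name[i+len(podUID)+2:], true
    }

    func main() {
        n := uniqueName("kubernetes.io/secret", "dce85b5e-6e92-4e0e-bee7-07b1a3634302", "etcd-client")
        fmt.Println(n)
        plugin, vol, _ := splitUnique(n, "dce85b5e-6e92-4e0e-bee7-07b1a3634302")
        fmt.Println(plugin, vol) // kubernetes.io/secret etcd-client
    }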
Feb 16 17:00:51.322344 master-0 kubenswrapper[10003]: I0216 17:00:51.322192 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.322344 master-0 kubenswrapper[10003]: I0216 17:00:51.322219 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.322344 master-0 kubenswrapper[10003]: I0216 17:00:51.322298 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.322344 master-0 kubenswrapper[10003]: I0216 17:00:51.322322 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.322478 master-0 kubenswrapper[10003]: I0216 17:00:51.322400 10003 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5d108e8b-620e-4523-a97d-3e4d2073f137-audit\") on node \"master-0\" DevicePath \"\""
Feb 16 17:00:51.322974 master-0 kubenswrapper[10003]: I0216 17:00:51.322769 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.322974 master-0 kubenswrapper[10003]: I0216 17:00:51.322823 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.323068 master-0 kubenswrapper[10003]: I0216 17:00:51.323009 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.323580 master-0 kubenswrapper[10003]: I0216 17:00:51.323537 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.323627 master-0 kubenswrapper[10003]: I0216 17:00:51.323582 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.323627 master-0 kubenswrapper[10003]: I0216 17:00:51.323616 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.323733 master-0 kubenswrapper[10003]: I0216 17:00:51.323699 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.326733 master-0 kubenswrapper[10003]: I0216 17:00:51.324969 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.326733 master-0 kubenswrapper[10003]: I0216 17:00:51.325727 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.330364 master-0 kubenswrapper[10003]: I0216 17:00:51.330326 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.407245 master-0 kubenswrapper[10003]: I0216 17:00:51.406976 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.495809 master-0 kubenswrapper[10003]: I0216 17:00:51.495338 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:00:51.607779 master-0 kubenswrapper[10003]: W0216 17:00:51.607713 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7390ccc6_dfbe_4f51_960c_7628f49bffb7.slice/crio-6518a84f5d47511f5f25592fd8ed06e7ac0d8f38709f9e4fcd73acdf3eb6490c WatchSource:0}: Error finding container 6518a84f5d47511f5f25592fd8ed06e7ac0d8f38709f9e4fcd73acdf3eb6490c: Status 404 returned error can't find the container with id 6518a84f5d47511f5f25592fd8ed06e7ac0d8f38709f9e4fcd73acdf3eb6490c
Feb 16 17:00:51.827596 master-0 kubenswrapper[10003]: I0216 17:00:51.827546 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"
Feb 16 17:00:51.827822 master-0 kubenswrapper[10003]: E0216 17:00:51.827698 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 17:00:51.827822 master-0 kubenswrapper[10003]: E0216 17:00:51.827744 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca podName:f9e4abf5-7fdb-4aad-a69a-b0999c617acb nodeName:}" failed. No retries permitted until 2026-02-16 17:00:55.827729688 +0000 UTC m=+25.343215359 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca") pod "controller-manager-5bf97f7775-zn8fd" (UID: "f9e4abf5-7fdb-4aad-a69a-b0999c617acb") : configmap "client-ca" not found
Feb 16 17:00:52.133211 master-0 kubenswrapper[10003]: I0216 17:00:52.133005 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerStarted","Data":"6518a84f5d47511f5f25592fd8ed06e7ac0d8f38709f9e4fcd73acdf3eb6490c"}
Feb 16 17:00:52.804116 master-0 kubenswrapper[10003]: I0216 17:00:52.804073 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d108e8b-620e-4523-a97d-3e4d2073f137" path="/var/lib/kubelet/pods/5d108e8b-620e-4523-a97d-3e4d2073f137/volumes"
Feb 16 17:00:53.421165 master-0 kubenswrapper[10003]: I0216 17:00:53.420034 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-fc4bf7f79-tqnlw"]
Feb 16 17:00:53.484360 master-0 kubenswrapper[10003]: W0216 17:00:53.484241 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddce85b5e_6e92_4e0e_bee7_07b1a3634302.slice/crio-34d279c74bd940d5ab6f0f7e4b7983d57ebc4d60ff3c8f38850791761b56d54b WatchSource:0}: Error finding container 34d279c74bd940d5ab6f0f7e4b7983d57ebc4d60ff3c8f38850791761b56d54b: Status 404 returned error can't find the container with id 34d279c74bd940d5ab6f0f7e4b7983d57ebc4d60ff3c8f38850791761b56d54b
Feb 16 17:00:54.145388 master-0 kubenswrapper[10003]: I0216 17:00:54.145327 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"41145f961148dffbd55b7be77a9591605ef99767213da81b0ba442326c4b3012"}
Feb 16 17:00:54.145388 master-0 kubenswrapper[10003]: I0216 17:00:54.145381 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"8621c35772b0fa0c74746882d26cde088c3ee0e7e232d2738c23c769fa66118c"}
Feb 16 17:00:54.149754 master-0 kubenswrapper[10003]: I0216 17:00:54.148891 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"e89782b445c861c527809a14cee2fd06738fb3945a0c6a3181ecbf78934d2bda"}
Feb 16 17:00:54.149754 master-0 kubenswrapper[10003]: I0216 17:00:54.149550 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:00:54.154618 master-0 kubenswrapper[10003]: I0216 17:00:54.154079 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:00:54.154854 master-0 kubenswrapper[10003]: I0216 17:00:54.154767 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" event={"ID":"ab6e5720-2c30-4962-9c67-89f1607d137f","Type":"ContainerStarted","Data":"4a3f00327e72eb182ca9d24f6345e55e740d3bc96d139c82176b1ad867248cfd"}
Feb 16 17:00:54.154854 master-0 kubenswrapper[10003]: I0216 17:00:54.154844 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" event={"ID":"ab6e5720-2c30-4962-9c67-89f1607d137f","Type":"ContainerStarted","Data":"8596ea544be0a448a19f843f8fb2963353f75aac2b39d7b1fc12540532ae6bdc"}
Feb 16 17:00:54.157321 master-0 kubenswrapper[10003]: I0216 17:00:54.157296 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerStarted","Data":"81ddf55b61540f7b5e030d229eea51d26c8a5bda0650c33851cbe3fbbeefd261"}
Feb 16 17:00:54.159159 master-0 kubenswrapper[10003]: I0216 17:00:54.159118 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"34d279c74bd940d5ab6f0f7e4b7983d57ebc4d60ff3c8f38850791761b56d54b"}
Feb 16 17:00:55.877775 master-0 kubenswrapper[10003]: I0216 17:00:55.877728 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"
Feb 16 17:00:55.878308 master-0 kubenswrapper[10003]: E0216 17:00:55.877871 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Feb 16 17:00:55.878308 master-0 kubenswrapper[10003]: E0216 17:00:55.877933 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca podName:f9e4abf5-7fdb-4aad-a69a-b0999c617acb nodeName:}" failed. No retries permitted until 2026-02-16 17:01:03.877904399 +0000 UTC m=+33.393390070 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca") pod "controller-manager-5bf97f7775-zn8fd" (UID: "f9e4abf5-7fdb-4aad-a69a-b0999c617acb") : configmap "client-ca" not found
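[editor's note] The two SyncLoop (probe) lines for marketplace-operator above show a readiness probe flipping from unknown (status="") to "ready" right after its container starts. A toy sketch of that transition; the stubbed probe results and the loop shape are invented for illustration:

    package main

    import "fmt"

    func main() {
        results := []bool{false, true, true} // stubbed probe outcomes
        status := ""                         // unknown until the first success
        for _, ok := range results {
            if ok && status != "ready" {
                status = "ready"
                fmt.Printf("SyncLoop (probe) probe=%q status=%q\n", "readiness", status)
            }
        }
    }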
Feb 16 17:00:56.867583 master-0 kubenswrapper[10003]: I0216 17:00:56.867526 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"]
Feb 16 17:00:56.868131 master-0 kubenswrapper[10003]: I0216 17:00:56.868110 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.873097 master-0 kubenswrapper[10003]: I0216 17:00:56.871998 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 16 17:00:56.873097 master-0 kubenswrapper[10003]: I0216 17:00:56.872270 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 16 17:00:56.873097 master-0 kubenswrapper[10003]: I0216 17:00:56.872447 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 16 17:00:56.879003 master-0 kubenswrapper[10003]: I0216 17:00:56.878947 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"]
Feb 16 17:00:56.879576 master-0 kubenswrapper[10003]: I0216 17:00:56.879536 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 16 17:00:56.891164 master-0 kubenswrapper[10003]: I0216 17:00:56.891114 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.891164 master-0 kubenswrapper[10003]: I0216 17:00:56.891166 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.891406 master-0 kubenswrapper[10003]: I0216 17:00:56.891189 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.891406 master-0 kubenswrapper[10003]: I0216 17:00:56.891207 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.891406 master-0 kubenswrapper[10003]: I0216 17:00:56.891239 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.891406 master-0 kubenswrapper[10003]: I0216 17:00:56.891280 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.980304 master-0 kubenswrapper[10003]: I0216 17:00:56.980237 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"]
Feb 16 17:00:56.980858 master-0 kubenswrapper[10003]: I0216 17:00:56.980828 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:00:56.983180 master-0 kubenswrapper[10003]: I0216 17:00:56.983122 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 16 17:00:56.983718 master-0 kubenswrapper[10003]: I0216 17:00:56.983306 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 16 17:00:56.983718 master-0 kubenswrapper[10003]: I0216 17:00:56.983438 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 16 17:00:56.989397 master-0 kubenswrapper[10003]: I0216 17:00:56.989347 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"]
Feb 16 17:00:56.996950 master-0 kubenswrapper[10003]: I0216 17:00:56.996234 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.996950 master-0 kubenswrapper[10003]: I0216 17:00:56.996309 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.996950 master-0 kubenswrapper[10003]: I0216 17:00:56.996340 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.996950 master-0 kubenswrapper[10003]: I0216 17:00:56.996373 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.996950 master-0 kubenswrapper[10003]: I0216 17:00:56.996392 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.996950 master-0 kubenswrapper[10003]: I0216 17:00:56.996410 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.997265 master-0 kubenswrapper[10003]: I0216 17:00:56.997086 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.997631 master-0 kubenswrapper[10003]: I0216 17:00:56.997421 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:56.997631 master-0 kubenswrapper[10003]: I0216 17:00:56.997529 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:57.000954 master-0 kubenswrapper[10003]: I0216 17:00:56.999470 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:57.001967 master-0 kubenswrapper[10003]: I0216 17:00:57.001444 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:57.015557 master-0 kubenswrapper[10003]: I0216 17:00:57.015515 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:00:57.097255 master-0 kubenswrapper[10003]: I0216 17:00:57.097201 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:00:57.097255 master-0 kubenswrapper[10003]: I0216 17:00:57.097255 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:00:57.097484 master-0 kubenswrapper[10003]: I0216 17:00:57.097274 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:00:57.097484 master-0 kubenswrapper[10003]: I0216 17:00:57.097294 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:00:57.097612 master-0 kubenswrapper[10003]: I0216 17:00:57.097574 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:00:57.191100 master-0 kubenswrapper[10003]: I0216 17:00:57.190970 10003 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:00:57.198417 master-0 kubenswrapper[10003]: I0216 17:00:57.198362 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.198541 master-0 kubenswrapper[10003]: I0216 17:00:57.198441 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.198541 master-0 kubenswrapper[10003]: I0216 17:00:57.198527 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.198629 master-0 kubenswrapper[10003]: E0216 17:00:57.198553 10003 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: configmap "operator-controller-trusted-ca-bundle" not found Feb 16 17:00:57.198629 master-0 kubenswrapper[10003]: I0216 17:00:57.198572 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.198629 master-0 kubenswrapper[10003]: E0216 17:00:57.198583 10003 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: configmap "operator-controller-trusted-ca-bundle" not found Feb 16 17:00:57.198629 master-0 kubenswrapper[10003]: I0216 17:00:57.198600 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.198629 master-0 kubenswrapper[10003]: I0216 17:00:57.198618 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.198807 master-0 kubenswrapper[10003]: E0216 17:00:57.198643 
10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:00:57.698625564 +0000 UTC m=+27.214111235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : configmap "operator-controller-trusted-ca-bundle" not found Feb 16 17:00:57.198944 master-0 kubenswrapper[10003]: I0216 17:00:57.198888 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.199001 master-0 kubenswrapper[10003]: I0216 17:00:57.198991 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.216999 master-0 kubenswrapper[10003]: I0216 17:00:57.216913 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.504570 master-0 kubenswrapper[10003]: I0216 17:00:57.504502 10003 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:00:57.505003 master-0 kubenswrapper[10003]: I0216 17:00:57.504965 10003 scope.go:117] "RemoveContainer" containerID="925f178f46a1d5c4c22dbeed05e4d6e9975a60d252305dcd17064d2bc8dfab6e" Feb 16 17:00:57.704143 master-0 kubenswrapper[10003]: I0216 17:00:57.704075 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.707545 master-0 kubenswrapper[10003]: I0216 17:00:57.707507 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:57.901828 master-0 kubenswrapper[10003]: I0216 17:00:57.901675 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:00:58.184889 master-0 kubenswrapper[10003]: I0216 17:00:58.184769 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 17:00:58.185107 master-0 kubenswrapper[10003]: I0216 17:00:58.185000 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="bb59921b-7279-4258-a342-554b2878dca1" containerName="installer" containerID="cri-o://fad88901989abbb6486cff5122d32135bc1a86edd8ffd3eb270671a3c15b9193" gracePeriod=30 Feb 16 17:00:59.197993 master-0 kubenswrapper[10003]: I0216 17:00:59.197952 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"8b5f186fc636a0c8960f76cfef6732841109955fd2f4967d010972e20332e869"} Feb 16 17:00:59.208438 master-0 kubenswrapper[10003]: I0216 17:00:59.205324 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"179f3f8e9463125ded1c5a4f832192a17edba6e13a5506acf48e86abcd40cda7"} Feb 16 17:00:59.208438 master-0 kubenswrapper[10003]: I0216 17:00:59.205444 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:00:59.287320 master-0 kubenswrapper[10003]: I0216 17:00:59.287281 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"] Feb 16 17:00:59.351086 master-0 kubenswrapper[10003]: I0216 17:00:59.351036 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"] Feb 16 17:01:00.198596 master-0 kubenswrapper[10003]: I0216 17:01:00.198490 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l"] Feb 16 17:01:00.199359 master-0 kubenswrapper[10003]: I0216 17:01:00.199327 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" podUID="568b22df-b454-4d74-bc21-6c84daf17c8c" containerName="cluster-version-operator" containerID="cri-o://3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f" gracePeriod=130 Feb 16 17:01:00.224726 master-0 kubenswrapper[10003]: I0216 17:01:00.224673 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"e843fe6093fddb4f0608997e3c887c540c5790a52102b7ba7e769e8ae9904f7d"} Feb 16 17:01:00.224726 master-0 kubenswrapper[10003]: I0216 17:01:00.224731 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"af4f06c8656e24dc76c11a21937d73b5e139ad31b06bedcdf3957bacba32069a"} Feb 16 17:01:00.225021 master-0 kubenswrapper[10003]: I0216 17:01:00.224745 10003 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"c68b78ea048e3e05fd1fcd40eae1c2d97a33dc3cbf3cea258f66da49798e5912"} Feb 16 17:01:00.229273 master-0 kubenswrapper[10003]: I0216 17:01:00.229215 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"370077a3a36bce27e444d5d7ac12daf42269e596b3cbd5fa257c45ddbfe8edf1"} Feb 16 17:01:00.229273 master-0 kubenswrapper[10003]: I0216 17:01:00.229263 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"0bee485c8968ce0e68dba41fcbcee4d323847661d4d7322172f3a42844676150"} Feb 16 17:01:00.229698 master-0 kubenswrapper[10003]: I0216 17:01:00.229661 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:01:00.230999 master-0 kubenswrapper[10003]: I0216 17:01:00.230935 10003 generic.go:334] "Generic (PLEG): container finished" podID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerID="0c316f0475ab0d19308e3571553a8196d11f7628c2f61de84b97dea8ed48cf58" exitCode=0 Feb 16 17:01:00.231130 master-0 kubenswrapper[10003]: I0216 17:01:00.231109 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerDied","Data":"0c316f0475ab0d19308e3571553a8196d11f7628c2f61de84b97dea8ed48cf58"} Feb 16 17:01:00.233580 master-0 kubenswrapper[10003]: I0216 17:01:00.233517 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"11df536ab46de7aea5d67794ede57f343d242c66232b1933e38f8621505f15f7"} Feb 16 17:01:00.233580 master-0 kubenswrapper[10003]: I0216 17:01:00.233575 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"8e6333d17c854be811265371ff3fa3a77118514f88a15fbd08c26eea148ad400"} Feb 16 17:01:00.233700 master-0 kubenswrapper[10003]: I0216 17:01:00.233589 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"f6a17f679ed7a7fbe57a462f9ffd2577eef58e5ba226eff8515fa879120c4750"} Feb 16 17:01:00.233700 master-0 kubenswrapper[10003]: I0216 17:01:00.233605 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:01:00.234808 master-0 kubenswrapper[10003]: I0216 17:01:00.234771 10003 generic.go:334] "Generic (PLEG): container finished" podID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" containerID="97d671c2a336b225236f0499e973eab6ef7683203f7b46f7e3767de75b466dd3" exitCode=0 Feb 16 17:01:00.234890 master-0 kubenswrapper[10003]: I0216 17:01:00.234854 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" 
event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerDied","Data":"97d671c2a336b225236f0499e973eab6ef7683203f7b46f7e3767de75b466dd3"} Feb 16 17:01:00.234965 master-0 kubenswrapper[10003]: I0216 17:01:00.234893 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerStarted","Data":"fb06e1ce2942ea95f146315a11dd8bc05e374eacc49a86ee457b9eb98dde18f6"} Feb 16 17:01:00.248739 master-0 kubenswrapper[10003]: I0216 17:01:00.248656 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podStartSLOduration=4.248637872 podStartE2EDuration="4.248637872s" podCreationTimestamp="2026-02-16 17:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:00.24525687 +0000 UTC m=+29.760742551" watchObservedRunningTime="2026-02-16 17:01:00.248637872 +0000 UTC m=+29.764123553" Feb 16 17:01:00.263169 master-0 kubenswrapper[10003]: I0216 17:01:00.262763 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podStartSLOduration=2.946205807 podStartE2EDuration="10.262734556s" podCreationTimestamp="2026-02-16 17:00:50 +0000 UTC" firstStartedPulling="2026-02-16 17:00:51.609552619 +0000 UTC m=+21.125038290" lastFinishedPulling="2026-02-16 17:00:58.926081368 +0000 UTC m=+28.441567039" observedRunningTime="2026-02-16 17:01:00.262346866 +0000 UTC m=+29.777832547" watchObservedRunningTime="2026-02-16 17:01:00.262734556 +0000 UTC m=+29.778220237" Feb 16 17:01:00.282422 master-0 kubenswrapper[10003]: I0216 17:01:00.280182 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qcgxx" podStartSLOduration=4.689655134 podStartE2EDuration="13.280162961s" podCreationTimestamp="2026-02-16 17:00:47 +0000 UTC" firstStartedPulling="2026-02-16 17:00:50.335574861 +0000 UTC m=+19.851060522" lastFinishedPulling="2026-02-16 17:00:58.926082678 +0000 UTC m=+28.441568349" observedRunningTime="2026-02-16 17:01:00.278972009 +0000 UTC m=+29.794457690" watchObservedRunningTime="2026-02-16 17:01:00.280162961 +0000 UTC m=+29.795648632" Feb 16 17:01:00.319041 master-0 kubenswrapper[10003]: I0216 17:01:00.318099 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podStartSLOduration=4.318078405 podStartE2EDuration="4.318078405s" podCreationTimestamp="2026-02-16 17:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:00.318030234 +0000 UTC m=+29.833515925" watchObservedRunningTime="2026-02-16 17:01:00.318078405 +0000 UTC m=+29.833564076" Feb 16 17:01:00.341120 master-0 kubenswrapper[10003]: I0216 17:01:00.340734 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:01:00.452643 master-0 kubenswrapper[10003]: I0216 17:01:00.445321 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/568b22df-b454-4d74-bc21-6c84daf17c8c-kube-api-access\") pod \"568b22df-b454-4d74-bc21-6c84daf17c8c\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " Feb 16 17:01:00.452643 master-0 kubenswrapper[10003]: I0216 17:01:00.445439 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/568b22df-b454-4d74-bc21-6c84daf17c8c-service-ca\") pod \"568b22df-b454-4d74-bc21-6c84daf17c8c\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " Feb 16 17:01:00.452643 master-0 kubenswrapper[10003]: I0216 17:01:00.445475 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-ssl-certs\") pod \"568b22df-b454-4d74-bc21-6c84daf17c8c\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " Feb 16 17:01:00.452643 master-0 kubenswrapper[10003]: I0216 17:01:00.445612 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") pod \"568b22df-b454-4d74-bc21-6c84daf17c8c\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " Feb 16 17:01:00.452643 master-0 kubenswrapper[10003]: I0216 17:01:00.445668 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-cvo-updatepayloads\") pod \"568b22df-b454-4d74-bc21-6c84daf17c8c\" (UID: \"568b22df-b454-4d74-bc21-6c84daf17c8c\") " Feb 16 17:01:00.452643 master-0 kubenswrapper[10003]: I0216 17:01:00.446048 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "568b22df-b454-4d74-bc21-6c84daf17c8c" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:00.452643 master-0 kubenswrapper[10003]: I0216 17:01:00.449160 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568b22df-b454-4d74-bc21-6c84daf17c8c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "568b22df-b454-4d74-bc21-6c84daf17c8c" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:00.452643 master-0 kubenswrapper[10003]: I0216 17:01:00.449785 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568b22df-b454-4d74-bc21-6c84daf17c8c-service-ca" (OuterVolumeSpecName: "service-ca") pod "568b22df-b454-4d74-bc21-6c84daf17c8c" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:00.452643 master-0 kubenswrapper[10003]: I0216 17:01:00.449838 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "568b22df-b454-4d74-bc21-6c84daf17c8c" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:00.468459 master-0 kubenswrapper[10003]: I0216 17:01:00.456525 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "568b22df-b454-4d74-bc21-6c84daf17c8c" (UID: "568b22df-b454-4d74-bc21-6c84daf17c8c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:01:00.547208 master-0 kubenswrapper[10003]: I0216 17:01:00.547172 10003 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/568b22df-b454-4d74-bc21-6c84daf17c8c-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:00.547310 master-0 kubenswrapper[10003]: I0216 17:01:00.547215 10003 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:00.547310 master-0 kubenswrapper[10003]: I0216 17:01:00.547237 10003 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/568b22df-b454-4d74-bc21-6c84daf17c8c-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:00.547310 master-0 kubenswrapper[10003]: I0216 17:01:00.547261 10003 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/568b22df-b454-4d74-bc21-6c84daf17c8c-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:00.547310 master-0 kubenswrapper[10003]: I0216 17:01:00.547277 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/568b22df-b454-4d74-bc21-6c84daf17c8c-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:00.685170 master-0 kubenswrapper[10003]: I0216 17:01:00.685124 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:01:00.685170 master-0 kubenswrapper[10003]: I0216 17:01:00.685169 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:01:00.694537 master-0 kubenswrapper[10003]: I0216 17:01:00.694500 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:01:00.773132 master-0 kubenswrapper[10003]: I0216 17:01:00.773071 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 17:01:00.773329 master-0 kubenswrapper[10003]: E0216 17:01:00.773234 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568b22df-b454-4d74-bc21-6c84daf17c8c" containerName="cluster-version-operator" Feb 16 17:01:00.773329 master-0 kubenswrapper[10003]: I0216 17:01:00.773252 10003 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="568b22df-b454-4d74-bc21-6c84daf17c8c" containerName="cluster-version-operator" Feb 16 17:01:00.773448 master-0 kubenswrapper[10003]: I0216 17:01:00.773335 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="568b22df-b454-4d74-bc21-6c84daf17c8c" containerName="cluster-version-operator" Feb 16 17:01:00.773646 master-0 kubenswrapper[10003]: I0216 17:01:00.773616 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:00.787890 master-0 kubenswrapper[10003]: I0216 17:01:00.787837 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 17:01:00.850882 master-0 kubenswrapper[10003]: I0216 17:01:00.850824 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-var-lock\") pod \"installer-2-master-0\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:00.851084 master-0 kubenswrapper[10003]: I0216 17:01:00.850946 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:00.851084 master-0 kubenswrapper[10003]: I0216 17:01:00.851039 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f91197b-2619-4744-b739-e1ec4b3ab447-kube-api-access\") pod \"installer-2-master-0\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:00.952379 master-0 kubenswrapper[10003]: I0216 17:01:00.952309 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-var-lock\") pod \"installer-2-master-0\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:00.952626 master-0 kubenswrapper[10003]: I0216 17:01:00.952456 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-var-lock\") pod \"installer-2-master-0\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:00.952681 master-0 kubenswrapper[10003]: I0216 17:01:00.952635 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:00.952735 master-0 kubenswrapper[10003]: I0216 17:01:00.952704 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f91197b-2619-4744-b739-e1ec4b3ab447-kube-api-access\") pod \"installer-2-master-0\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:00.952842 master-0 
kubenswrapper[10003]: I0216 17:01:00.952799 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:00.975218 master-0 kubenswrapper[10003]: I0216 17:01:00.975113 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f91197b-2619-4744-b739-e1ec4b3ab447-kube-api-access\") pod \"installer-2-master-0\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:01.091537 master-0 kubenswrapper[10003]: I0216 17:01:01.091472 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:01.265020 master-0 kubenswrapper[10003]: I0216 17:01:01.253485 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"461a2f0f61f0fcc0eb519485188a2e4212d395f0c1a67321cce2d8f4b7ef3e1c"} Feb 16 17:01:01.265020 master-0 kubenswrapper[10003]: I0216 17:01:01.253545 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"3af7f55a17ec60042c0482aa69809fbba4e6ed0269b1409544298283f99ef1ef"} Feb 16 17:01:01.265020 master-0 kubenswrapper[10003]: I0216 17:01:01.260940 10003 generic.go:334] "Generic (PLEG): container finished" podID="568b22df-b454-4d74-bc21-6c84daf17c8c" containerID="3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f" exitCode=0 Feb 16 17:01:01.265020 master-0 kubenswrapper[10003]: I0216 17:01:01.261487 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" Feb 16 17:01:01.265020 master-0 kubenswrapper[10003]: I0216 17:01:01.261883 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" event={"ID":"568b22df-b454-4d74-bc21-6c84daf17c8c","Type":"ContainerDied","Data":"3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f"} Feb 16 17:01:01.265020 master-0 kubenswrapper[10003]: I0216 17:01:01.261910 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l" event={"ID":"568b22df-b454-4d74-bc21-6c84daf17c8c","Type":"ContainerDied","Data":"d4c4164857bca7a77dce556ef190218992857c42d5628a4f2140aa29651cbc3e"} Feb 16 17:01:01.265020 master-0 kubenswrapper[10003]: I0216 17:01:01.261947 10003 scope.go:117] "RemoveContainer" containerID="3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f" Feb 16 17:01:01.265020 master-0 kubenswrapper[10003]: I0216 17:01:01.262694 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:01:01.269298 master-0 kubenswrapper[10003]: I0216 17:01:01.269088 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:01:01.282084 master-0 kubenswrapper[10003]: I0216 17:01:01.282053 10003 scope.go:117] "RemoveContainer" containerID="3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f" Feb 16 17:01:01.282729 master-0 kubenswrapper[10003]: E0216 17:01:01.282683 10003 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f\": container with ID starting with 3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f not found: ID does not exist" containerID="3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f" Feb 16 17:01:01.282878 master-0 kubenswrapper[10003]: I0216 17:01:01.282732 10003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f"} err="failed to get container status \"3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f\": rpc error: code = NotFound desc = could not find container \"3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f\": container with ID starting with 3cf535557ef474da496049f2fbeee50256220278f92b2366b4416bba5db2a11f not found: ID does not exist" Feb 16 17:01:01.282957 master-0 kubenswrapper[10003]: I0216 17:01:01.282850 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podStartSLOduration=6.785721077 podStartE2EDuration="12.282830663s" podCreationTimestamp="2026-02-16 17:00:49 +0000 UTC" firstStartedPulling="2026-02-16 17:00:53.486972143 +0000 UTC m=+23.002457814" lastFinishedPulling="2026-02-16 17:00:58.984081729 +0000 UTC m=+28.499567400" observedRunningTime="2026-02-16 17:01:01.278974537 +0000 UTC m=+30.794460218" watchObservedRunningTime="2026-02-16 17:01:01.282830663 +0000 UTC m=+30.798316334" Feb 16 17:01:01.293378 master-0 kubenswrapper[10003]: I0216 17:01:01.293186 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l"] Feb 16 17:01:01.294842 master-0 kubenswrapper[10003]: I0216 17:01:01.294796 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-76959b6567-wnh7l"] Feb 16 17:01:01.337127 master-0 kubenswrapper[10003]: I0216 17:01:01.334930 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"] Feb 16 17:01:01.337127 master-0 kubenswrapper[10003]: I0216 17:01:01.335449 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.338277 master-0 kubenswrapper[10003]: I0216 17:01:01.338231 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 17:01:01.338546 master-0 kubenswrapper[10003]: I0216 17:01:01.338522 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:01:01.338898 master-0 kubenswrapper[10003]: I0216 17:01:01.338864 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:01:01.355587 master-0 kubenswrapper[10003]: I0216 17:01:01.355540 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.355792 master-0 kubenswrapper[10003]: I0216 17:01:01.355614 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.355792 master-0 kubenswrapper[10003]: I0216 17:01:01.355649 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.355792 master-0 kubenswrapper[10003]: I0216 17:01:01.355689 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.355792 master-0 kubenswrapper[10003]: I0216 17:01:01.355705 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: 
\"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.456994 master-0 kubenswrapper[10003]: I0216 17:01:01.456942 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.456994 master-0 kubenswrapper[10003]: I0216 17:01:01.456992 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.457221 master-0 kubenswrapper[10003]: I0216 17:01:01.457017 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.457221 master-0 kubenswrapper[10003]: I0216 17:01:01.457100 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.457221 master-0 kubenswrapper[10003]: I0216 17:01:01.457155 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.457366 master-0 kubenswrapper[10003]: I0216 17:01:01.457335 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.459971 master-0 kubenswrapper[10003]: I0216 17:01:01.457777 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.459971 master-0 kubenswrapper[10003]: I0216 17:01:01.458533 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: 
\"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.465831 master-0 kubenswrapper[10003]: I0216 17:01:01.465394 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 17:01:01.466020 master-0 kubenswrapper[10003]: I0216 17:01:01.465866 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.477682 master-0 kubenswrapper[10003]: W0216 17:01:01.476291 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9f91197b_2619_4744_b739_e1ec4b3ab447.slice/crio-15ef05289713a1ba25a32eabe280c627821bd43f611d7302ac2def0585d6cb1e WatchSource:0}: Error finding container 15ef05289713a1ba25a32eabe280c627821bd43f611d7302ac2def0585d6cb1e: Status 404 returned error can't find the container with id 15ef05289713a1ba25a32eabe280c627821bd43f611d7302ac2def0585d6cb1e Feb 16 17:01:01.480884 master-0 kubenswrapper[10003]: I0216 17:01:01.480676 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.496709 master-0 kubenswrapper[10003]: I0216 17:01:01.496309 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:01:01.496709 master-0 kubenswrapper[10003]: I0216 17:01:01.496348 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: I0216 17:01:01.502992 10003 patch_prober.go:28] interesting pod/apiserver-fc4bf7f79-tqnlw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]log ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]etcd ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]poststarthook/max-in-flight-filter ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]poststarthook/project.openshift.io-projectcache ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: 
[+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]poststarthook/openshift.io-startinformers ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 17:01:01.503042 master-0 kubenswrapper[10003]: livez check failed Feb 16 17:01:01.503443 master-0 kubenswrapper[10003]: I0216 17:01:01.503061 10003 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:01:01.703341 master-0 kubenswrapper[10003]: I0216 17:01:01.703279 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:01:01.732342 master-0 kubenswrapper[10003]: W0216 17:01:01.732271 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6ad958f_25e4_40cb_89ec_5da9cb6395c7.slice/crio-82c53f2c6633be154a699c2073aab7e95ac6eb9355d0da7ab4ff59d5ab695ebf WatchSource:0}: Error finding container 82c53f2c6633be154a699c2073aab7e95ac6eb9355d0da7ab4ff59d5ab695ebf: Status 404 returned error can't find the container with id 82c53f2c6633be154a699c2073aab7e95ac6eb9355d0da7ab4ff59d5ab695ebf Feb 16 17:01:02.268029 master-0 kubenswrapper[10003]: I0216 17:01:02.267975 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"9f91197b-2619-4744-b739-e1ec4b3ab447","Type":"ContainerStarted","Data":"3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532"} Feb 16 17:01:02.268029 master-0 kubenswrapper[10003]: I0216 17:01:02.268029 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"9f91197b-2619-4744-b739-e1ec4b3ab447","Type":"ContainerStarted","Data":"15ef05289713a1ba25a32eabe280c627821bd43f611d7302ac2def0585d6cb1e"} Feb 16 17:01:02.269531 master-0 kubenswrapper[10003]: I0216 17:01:02.269491 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" event={"ID":"b6ad958f-25e4-40cb-89ec-5da9cb6395c7","Type":"ContainerStarted","Data":"2dafd39a483160a1d39cbe9a3a9409c939da33f2a648ec553387255240b550e9"} Feb 16 17:01:02.269607 master-0 kubenswrapper[10003]: I0216 17:01:02.269536 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" event={"ID":"b6ad958f-25e4-40cb-89ec-5da9cb6395c7","Type":"ContainerStarted","Data":"82c53f2c6633be154a699c2073aab7e95ac6eb9355d0da7ab4ff59d5ab695ebf"} Feb 16 17:01:02.294992 master-0 kubenswrapper[10003]: I0216 17:01:02.294890 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=2.294869918 podStartE2EDuration="2.294869918s" podCreationTimestamp="2026-02-16 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:02.281628437 +0000 UTC m=+31.797114108" watchObservedRunningTime="2026-02-16 17:01:02.294869918 +0000 UTC m=+31.810355589" Feb 
16 17:01:02.295238 master-0 kubenswrapper[10003]: I0216 17:01:02.295206 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" podStartSLOduration=1.295202847 podStartE2EDuration="1.295202847s" podCreationTimestamp="2026-02-16 17:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:02.294409796 +0000 UTC m=+31.809895477" watchObservedRunningTime="2026-02-16 17:01:02.295202847 +0000 UTC m=+31.810688508" Feb 16 17:01:02.804861 master-0 kubenswrapper[10003]: I0216 17:01:02.804825 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568b22df-b454-4d74-bc21-6c84daf17c8c" path="/var/lib/kubelet/pods/568b22df-b454-4d74-bc21-6c84daf17c8c/volumes" Feb 16 17:01:03.791123 master-0 kubenswrapper[10003]: I0216 17:01:03.791010 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") pod \"route-controller-manager-78fb76f597-46pj4\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") " pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" Feb 16 17:01:03.791964 master-0 kubenswrapper[10003]: E0216 17:01:03.791172 10003 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:01:03.791964 master-0 kubenswrapper[10003]: E0216 17:01:03.791256 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca podName:cff91b5b-3cbb-489a-94e7-9f279ae6cbbb nodeName:}" failed. No retries permitted until 2026-02-16 17:01:35.791237762 +0000 UTC m=+65.306723433 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca") pod "route-controller-manager-78fb76f597-46pj4" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb") : configmap "client-ca" not found Feb 16 17:01:03.854591 master-0 kubenswrapper[10003]: I0216 17:01:03.854529 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 16 17:01:03.855212 master-0 kubenswrapper[10003]: I0216 17:01:03.855158 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 17:01:03.856913 master-0 kubenswrapper[10003]: I0216 17:01:03.856835 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 16 17:01:03.863357 master-0 kubenswrapper[10003]: I0216 17:01:03.863256 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 16 17:01:03.892174 master-0 kubenswrapper[10003]: I0216 17:01:03.892074 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " pod="openshift-etcd/installer-1-master-0" Feb 16 17:01:03.892424 master-0 kubenswrapper[10003]: I0216 17:01:03.892207 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca\") pod \"controller-manager-5bf97f7775-zn8fd\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") " pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" Feb 16 17:01:03.892424 master-0 kubenswrapper[10003]: I0216 17:01:03.892244 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8ea978-99b0-4957-8f9d-0d074263b235-kube-api-access\") pod \"installer-1-master-0\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " pod="openshift-etcd/installer-1-master-0" Feb 16 17:01:03.892424 master-0 kubenswrapper[10003]: E0216 17:01:03.892286 10003 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Feb 16 17:01:03.892424 master-0 kubenswrapper[10003]: I0216 17:01:03.892309 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-var-lock\") pod \"installer-1-master-0\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " pod="openshift-etcd/installer-1-master-0" Feb 16 17:01:03.892424 master-0 kubenswrapper[10003]: E0216 17:01:03.892359 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca podName:f9e4abf5-7fdb-4aad-a69a-b0999c617acb nodeName:}" failed. No retries permitted until 2026-02-16 17:01:19.892341959 +0000 UTC m=+49.407827630 (durationBeforeRetry 16s). 
Feb 16 17:01:03.993993 master-0 kubenswrapper[10003]: I0216 17:01:03.993943 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 17:01:03.994212 master-0 kubenswrapper[10003]: I0216 17:01:03.994187 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8ea978-99b0-4957-8f9d-0d074263b235-kube-api-access\") pod \"installer-1-master-0\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 17:01:03.994345 master-0 kubenswrapper[10003]: I0216 17:01:03.994321 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-var-lock\") pod \"installer-1-master-0\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 17:01:03.994446 master-0 kubenswrapper[10003]: I0216 17:01:03.994422 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-var-lock\") pod \"installer-1-master-0\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 17:01:03.994779 master-0 kubenswrapper[10003]: I0216 17:01:03.994126 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 17:01:04.013297 master-0 kubenswrapper[10003]: I0216 17:01:04.013252 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8ea978-99b0-4957-8f9d-0d074263b235-kube-api-access\") pod \"installer-1-master-0\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " pod="openshift-etcd/installer-1-master-0"
Feb 16 17:01:04.173875 master-0 kubenswrapper[10003]: I0216 17:01:04.173706 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Feb 16 17:01:04.587083 master-0 kubenswrapper[10003]: I0216 17:01:04.586990 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Feb 16 17:01:04.606584 master-0 kubenswrapper[10003]: W0216 17:01:04.606458 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcf8ea978_99b0_4957_8f9d_0d074263b235.slice/crio-bcbd7bae801dad72473e19db1c8cfcee94d4e54c1e79aef5d145c86534ef5b38 WatchSource:0}: Error finding container bcbd7bae801dad72473e19db1c8cfcee94d4e54c1e79aef5d145c86534ef5b38: Status 404 returned error can't find the container with id bcbd7bae801dad72473e19db1c8cfcee94d4e54c1e79aef5d145c86534ef5b38
Feb 16 17:01:05.283285 master-0 kubenswrapper[10003]: I0216 17:01:05.283153 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"cf8ea978-99b0-4957-8f9d-0d074263b235","Type":"ContainerStarted","Data":"22e732f70ca6f94cfd7098b649223ea91275dcc9a8431d38a472ff475c10f789"}
Feb 16 17:01:05.283285 master-0 kubenswrapper[10003]: I0216 17:01:05.283223 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"cf8ea978-99b0-4957-8f9d-0d074263b235","Type":"ContainerStarted","Data":"bcbd7bae801dad72473e19db1c8cfcee94d4e54c1e79aef5d145c86534ef5b38"}
Feb 16 17:01:05.298638 master-0 kubenswrapper[10003]: I0216 17:01:05.298509 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=2.298456021 podStartE2EDuration="2.298456021s" podCreationTimestamp="2026-02-16 17:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:05.295713886 +0000 UTC m=+34.811199567" watchObservedRunningTime="2026-02-16 17:01:05.298456021 +0000 UTC m=+34.813941712"
Feb 16 17:01:06.662142 master-0 kubenswrapper[10003]: I0216 17:01:06.662058 10003 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:01:06.682585 master-0 kubenswrapper[10003]: I0216 17:01:06.682520 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:01:06.820431 master-0 kubenswrapper[10003]: I0216 17:01:06.819983 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"]
Feb 16 17:01:06.820431 master-0 kubenswrapper[10003]: E0216 17:01:06.820345 10003 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd" podUID="f9e4abf5-7fdb-4aad-a69a-b0999c617acb"
Feb 16 17:01:06.850948 master-0 kubenswrapper[10003]: I0216 17:01:06.848215 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4"]
Feb 16 17:01:06.850948 master-0 kubenswrapper[10003]: E0216 17:01:06.848720 10003 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4" podUID="cff91b5b-3cbb-489a-94e7-9f279ae6cbbb"
Feb 16 17:01:07.196180 master-0 kubenswrapper[10003]: I0216 17:01:07.196111 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:01:07.300353 master-0 kubenswrapper[10003]: I0216 17:01:07.300287 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4"
Feb 16 17:01:07.300353 master-0 kubenswrapper[10003]: I0216 17:01:07.300345 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"
Feb 16 17:01:07.316029 master-0 kubenswrapper[10003]: I0216 17:01:07.315963 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4"
Feb 16 17:01:07.322659 master-0 kubenswrapper[10003]: I0216 17:01:07.322611 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"
Feb 16 17:01:07.442942 master-0 kubenswrapper[10003]: I0216 17:01:07.442867 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zqt6\" (UniqueName: \"kubernetes.io/projected/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-kube-api-access-9zqt6\") pod \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") "
Feb 16 17:01:07.443177 master-0 kubenswrapper[10003]: I0216 17:01:07.442950 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") pod \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") "
Feb 16 17:01:07.443177 master-0 kubenswrapper[10003]: I0216 17:01:07.442980 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcsw6\" (UniqueName: \"kubernetes.io/projected/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-kube-api-access-qcsw6\") pod \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") "
Feb 16 17:01:07.443177 master-0 kubenswrapper[10003]: I0216 17:01:07.443004 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-config\") pod \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") "
Feb 16 17:01:07.443177 master-0 kubenswrapper[10003]: I0216 17:01:07.443025 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-config\") pod \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\" (UID: \"cff91b5b-3cbb-489a-94e7-9f279ae6cbbb\") "
Feb 16 17:01:07.443177 master-0 kubenswrapper[10003]: I0216 17:01:07.443066 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-serving-cert\") pod \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") "
Feb 16 17:01:07.443177 master-0 kubenswrapper[10003]: I0216 17:01:07.443123 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-proxy-ca-bundles\") pod \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\" (UID: \"f9e4abf5-7fdb-4aad-a69a-b0999c617acb\") "
Feb 16 17:01:07.443894 master-0 kubenswrapper[10003]: I0216 17:01:07.443851 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-config" (OuterVolumeSpecName: "config") pod "f9e4abf5-7fdb-4aad-a69a-b0999c617acb" (UID: "f9e4abf5-7fdb-4aad-a69a-b0999c617acb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:01:07.444009 master-0 kubenswrapper[10003]: I0216 17:01:07.443870 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-config" (OuterVolumeSpecName: "config") pod "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:01:07.444009 master-0 kubenswrapper[10003]: I0216 17:01:07.443910 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f9e4abf5-7fdb-4aad-a69a-b0999c617acb" (UID: "f9e4abf5-7fdb-4aad-a69a-b0999c617acb"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:01:07.446960 master-0 kubenswrapper[10003]: I0216 17:01:07.446849 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:01:07.447060 master-0 kubenswrapper[10003]: I0216 17:01:07.446956 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f9e4abf5-7fdb-4aad-a69a-b0999c617acb" (UID: "f9e4abf5-7fdb-4aad-a69a-b0999c617acb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:01:07.448054 master-0 kubenswrapper[10003]: I0216 17:01:07.447980 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-kube-api-access-qcsw6" (OuterVolumeSpecName: "kube-api-access-qcsw6") pod "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb" (UID: "cff91b5b-3cbb-489a-94e7-9f279ae6cbbb"). InnerVolumeSpecName "kube-api-access-qcsw6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:01:07.448562 master-0 kubenswrapper[10003]: I0216 17:01:07.448516 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-kube-api-access-9zqt6" (OuterVolumeSpecName: "kube-api-access-9zqt6") pod "f9e4abf5-7fdb-4aad-a69a-b0999c617acb" (UID: "f9e4abf5-7fdb-4aad-a69a-b0999c617acb"). InnerVolumeSpecName "kube-api-access-9zqt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:01:07.546136 master-0 kubenswrapper[10003]: I0216 17:01:07.546089 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zqt6\" (UniqueName: \"kubernetes.io/projected/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-kube-api-access-9zqt6\") on node \"master-0\" DevicePath \"\""
Feb 16 17:01:07.546136 master-0 kubenswrapper[10003]: I0216 17:01:07.546134 10003 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 17:01:07.546136 master-0 kubenswrapper[10003]: I0216 17:01:07.546148 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcsw6\" (UniqueName: \"kubernetes.io/projected/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-kube-api-access-qcsw6\") on node \"master-0\" DevicePath \"\""
Feb 16 17:01:07.546468 master-0 kubenswrapper[10003]: I0216 17:01:07.546161 10003 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:01:07.546468 master-0 kubenswrapper[10003]: I0216 17:01:07.546173 10003 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:01:07.546468 master-0 kubenswrapper[10003]: I0216 17:01:07.546184 10003 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-serving-cert\") on node \"master-0\" DevicePath \"\""
Feb 16 17:01:07.546468 master-0 kubenswrapper[10003]: I0216 17:01:07.546197 10003 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Feb 16 17:01:07.903687 master-0 kubenswrapper[10003]: I0216 17:01:07.903524 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:01:08.304119 master-0 kubenswrapper[10003]: I0216 17:01:08.304056 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"
Feb 16 17:01:08.305072 master-0 kubenswrapper[10003]: I0216 17:01:08.304082 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4"
Feb 16 17:01:08.350279 master-0 kubenswrapper[10003]: I0216 17:01:08.350184 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r"]
Feb 16 17:01:08.350814 master-0 kubenswrapper[10003]: I0216 17:01:08.350784 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r"
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.354955 master-0 kubenswrapper[10003]: I0216 17:01:08.352974 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:01:08.354955 master-0 kubenswrapper[10003]: I0216 17:01:08.353121 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:01:08.354955 master-0 kubenswrapper[10003]: I0216 17:01:08.353174 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:01:08.354955 master-0 kubenswrapper[10003]: I0216 17:01:08.353300 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:01:08.361005 master-0 kubenswrapper[10003]: I0216 17:01:08.357965 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:01:08.361207 master-0 kubenswrapper[10003]: I0216 17:01:08.361045 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4"] Feb 16 17:01:08.383064 master-0 kubenswrapper[10003]: I0216 17:01:08.383007 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-78fb76f597-46pj4"] Feb 16 17:01:08.384852 master-0 kubenswrapper[10003]: I0216 17:01:08.384784 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r"] Feb 16 17:01:08.395136 master-0 kubenswrapper[10003]: I0216 17:01:08.391975 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 17:01:08.395136 master-0 kubenswrapper[10003]: I0216 17:01:08.392226 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="9f91197b-2619-4744-b739-e1ec4b3ab447" containerName="installer" containerID="cri-o://3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532" gracePeriod=30 Feb 16 17:01:08.438812 master-0 kubenswrapper[10003]: I0216 17:01:08.438753 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"] Feb 16 17:01:08.448657 master-0 kubenswrapper[10003]: I0216 17:01:08.448616 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bf97f7775-zn8fd"] Feb 16 17:01:08.457453 master-0 kubenswrapper[10003]: I0216 17:01:08.457415 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-config\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.457453 master-0 kubenswrapper[10003]: I0216 17:01:08.457448 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-serving-cert\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: 
\"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.457578 master-0 kubenswrapper[10003]: I0216 17:01:08.457515 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-client-ca\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.457578 master-0 kubenswrapper[10003]: I0216 17:01:08.457564 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rsvb\" (UniqueName: \"kubernetes.io/projected/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-kube-api-access-7rsvb\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.457637 master-0 kubenswrapper[10003]: I0216 17:01:08.457595 10003 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:08.559121 master-0 kubenswrapper[10003]: I0216 17:01:08.559010 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-client-ca\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.559121 master-0 kubenswrapper[10003]: I0216 17:01:08.559115 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rsvb\" (UniqueName: \"kubernetes.io/projected/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-kube-api-access-7rsvb\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.559305 master-0 kubenswrapper[10003]: I0216 17:01:08.559142 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-config\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.559305 master-0 kubenswrapper[10003]: I0216 17:01:08.559165 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-serving-cert\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.559305 master-0 kubenswrapper[10003]: I0216 17:01:08.559281 10003 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f9e4abf5-7fdb-4aad-a69a-b0999c617acb-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:08.563948 master-0 kubenswrapper[10003]: I0216 17:01:08.562890 10003 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-config\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.567728 master-0 kubenswrapper[10003]: I0216 17:01:08.566570 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-serving-cert\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.567728 master-0 kubenswrapper[10003]: I0216 17:01:08.567369 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-client-ca\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.581581 master-0 kubenswrapper[10003]: I0216 17:01:08.581538 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rsvb\" (UniqueName: \"kubernetes.io/projected/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-kube-api-access-7rsvb\") pod \"route-controller-manager-6d88b87bb8-wfs4r\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.670759 master-0 kubenswrapper[10003]: I0216 17:01:08.670714 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:08.756165 master-0 kubenswrapper[10003]: I0216 17:01:08.756127 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_9f91197b-2619-4744-b739-e1ec4b3ab447/installer/0.log" Feb 16 17:01:08.756332 master-0 kubenswrapper[10003]: I0216 17:01:08.756206 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:08.805542 master-0 kubenswrapper[10003]: I0216 17:01:08.805491 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cff91b5b-3cbb-489a-94e7-9f279ae6cbbb" path="/var/lib/kubelet/pods/cff91b5b-3cbb-489a-94e7-9f279ae6cbbb/volumes" Feb 16 17:01:08.805915 master-0 kubenswrapper[10003]: I0216 17:01:08.805882 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e4abf5-7fdb-4aad-a69a-b0999c617acb" path="/var/lib/kubelet/pods/f9e4abf5-7fdb-4aad-a69a-b0999c617acb/volumes" Feb 16 17:01:08.861734 master-0 kubenswrapper[10003]: I0216 17:01:08.861584 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-var-lock\") pod \"9f91197b-2619-4744-b739-e1ec4b3ab447\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " Feb 16 17:01:08.861734 master-0 kubenswrapper[10003]: I0216 17:01:08.861661 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-kubelet-dir\") pod \"9f91197b-2619-4744-b739-e1ec4b3ab447\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " Feb 16 17:01:08.861734 master-0 kubenswrapper[10003]: I0216 17:01:08.861711 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f91197b-2619-4744-b739-e1ec4b3ab447-kube-api-access\") pod \"9f91197b-2619-4744-b739-e1ec4b3ab447\" (UID: \"9f91197b-2619-4744-b739-e1ec4b3ab447\") " Feb 16 17:01:08.862265 master-0 kubenswrapper[10003]: I0216 17:01:08.862049 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-var-lock" (OuterVolumeSpecName: "var-lock") pod "9f91197b-2619-4744-b739-e1ec4b3ab447" (UID: "9f91197b-2619-4744-b739-e1ec4b3ab447"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:08.862265 master-0 kubenswrapper[10003]: I0216 17:01:08.862065 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9f91197b-2619-4744-b739-e1ec4b3ab447" (UID: "9f91197b-2619-4744-b739-e1ec4b3ab447"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:08.865329 master-0 kubenswrapper[10003]: I0216 17:01:08.865270 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f91197b-2619-4744-b739-e1ec4b3ab447-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9f91197b-2619-4744-b739-e1ec4b3ab447" (UID: "9f91197b-2619-4744-b739-e1ec4b3ab447"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:08.962978 master-0 kubenswrapper[10003]: I0216 17:01:08.962801 10003 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:08.962978 master-0 kubenswrapper[10003]: I0216 17:01:08.962842 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f91197b-2619-4744-b739-e1ec4b3ab447-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:08.962978 master-0 kubenswrapper[10003]: I0216 17:01:08.962856 10003 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9f91197b-2619-4744-b739-e1ec4b3ab447-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:09.047402 master-0 kubenswrapper[10003]: I0216 17:01:09.047290 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r"] Feb 16 17:01:09.310863 master-0 kubenswrapper[10003]: I0216 17:01:09.310727 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_9f91197b-2619-4744-b739-e1ec4b3ab447/installer/0.log" Feb 16 17:01:09.310863 master-0 kubenswrapper[10003]: I0216 17:01:09.310777 10003 generic.go:334] "Generic (PLEG): container finished" podID="9f91197b-2619-4744-b739-e1ec4b3ab447" containerID="3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532" exitCode=1 Feb 16 17:01:09.310863 master-0 kubenswrapper[10003]: I0216 17:01:09.310833 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"9f91197b-2619-4744-b739-e1ec4b3ab447","Type":"ContainerDied","Data":"3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532"} Feb 16 17:01:09.310863 master-0 kubenswrapper[10003]: I0216 17:01:09.310867 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"9f91197b-2619-4744-b739-e1ec4b3ab447","Type":"ContainerDied","Data":"15ef05289713a1ba25a32eabe280c627821bd43f611d7302ac2def0585d6cb1e"} Feb 16 17:01:09.311383 master-0 kubenswrapper[10003]: I0216 17:01:09.310887 10003 scope.go:117] "RemoveContainer" containerID="3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532" Feb 16 17:01:09.311383 master-0 kubenswrapper[10003]: I0216 17:01:09.311011 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Feb 16 17:01:09.317599 master-0 kubenswrapper[10003]: I0216 17:01:09.317554 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" event={"ID":"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d","Type":"ContainerStarted","Data":"f2f8872c1e11a1b425867c8d5c4a87bd6af6c98273220473b1408998fe1195b7"} Feb 16 17:01:09.330538 master-0 kubenswrapper[10003]: I0216 17:01:09.330460 10003 scope.go:117] "RemoveContainer" containerID="3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532" Feb 16 17:01:09.330974 master-0 kubenswrapper[10003]: E0216 17:01:09.330943 10003 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532\": container with ID starting with 3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532 not found: ID does not exist" containerID="3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532" Feb 16 17:01:09.331078 master-0 kubenswrapper[10003]: I0216 17:01:09.331001 10003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532"} err="failed to get container status \"3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532\": rpc error: code = NotFound desc = could not find container \"3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532\": container with ID starting with 3a99d15a7e2411aa51974edf473f4546983bb2f97f193a98437d18fe2f623532 not found: ID does not exist" Feb 16 17:01:09.344268 master-0 kubenswrapper[10003]: I0216 17:01:09.344210 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 17:01:09.347613 master-0 kubenswrapper[10003]: I0216 17:01:09.347568 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Feb 16 17:01:09.894411 master-0 kubenswrapper[10003]: I0216 17:01:09.894375 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:01:10.804940 master-0 kubenswrapper[10003]: I0216 17:01:10.804883 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f91197b-2619-4744-b739-e1ec4b3ab447" path="/var/lib/kubelet/pods/9f91197b-2619-4744-b739-e1ec4b3ab447/volumes" Feb 16 17:01:10.860162 master-0 kubenswrapper[10003]: I0216 17:01:10.860061 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"] Feb 16 17:01:10.860458 master-0 kubenswrapper[10003]: E0216 17:01:10.860366 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f91197b-2619-4744-b739-e1ec4b3ab447" containerName="installer" Feb 16 17:01:10.860458 master-0 kubenswrapper[10003]: I0216 17:01:10.860393 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f91197b-2619-4744-b739-e1ec4b3ab447" containerName="installer" Feb 16 17:01:10.860654 master-0 kubenswrapper[10003]: I0216 17:01:10.860613 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f91197b-2619-4744-b739-e1ec4b3ab447" containerName="installer" Feb 16 17:01:10.861228 master-0 kubenswrapper[10003]: I0216 17:01:10.861172 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:10.863844 master-0 kubenswrapper[10003]: I0216 17:01:10.863783 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:01:10.864130 master-0 kubenswrapper[10003]: I0216 17:01:10.864074 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:01:10.864251 master-0 kubenswrapper[10003]: I0216 17:01:10.864225 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:01:10.864764 master-0 kubenswrapper[10003]: I0216 17:01:10.864715 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:01:10.865589 master-0 kubenswrapper[10003]: I0216 17:01:10.865466 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:01:10.874604 master-0 kubenswrapper[10003]: I0216 17:01:10.874542 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:01:10.887713 master-0 kubenswrapper[10003]: I0216 17:01:10.887433 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-config\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:10.887713 master-0 kubenswrapper[10003]: I0216 17:01:10.887502 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-client-ca\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:10.887713 master-0 kubenswrapper[10003]: I0216 17:01:10.887553 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5c4cde-289c-49d9-9b17-176c368267d2-serving-cert\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:10.887713 master-0 kubenswrapper[10003]: I0216 17:01:10.887570 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-proxy-ca-bundles\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:10.887713 master-0 kubenswrapper[10003]: I0216 17:01:10.887609 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5g4z\" (UniqueName: \"kubernetes.io/projected/8f5c4cde-289c-49d9-9b17-176c368267d2-kube-api-access-c5g4z\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:10.888200 
Feb 16 17:01:10.888200 master-0 kubenswrapper[10003]: I0216 17:01:10.888166 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"]
Feb 16 17:01:10.977722 master-0 kubenswrapper[10003]: I0216 17:01:10.975464 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 16 17:01:10.977722 master-0 kubenswrapper[10003]: I0216 17:01:10.976403 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 17:01:10.990907 master-0 kubenswrapper[10003]: I0216 17:01:10.990833 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5g4z\" (UniqueName: \"kubernetes.io/projected/8f5c4cde-289c-49d9-9b17-176c368267d2-kube-api-access-c5g4z\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:10.990907 master-0 kubenswrapper[10003]: I0216 17:01:10.990908 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-config\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:10.991233 master-0 kubenswrapper[10003]: I0216 17:01:10.990962 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-client-ca\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:10.991233 master-0 kubenswrapper[10003]: I0216 17:01:10.991020 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5c4cde-289c-49d9-9b17-176c368267d2-serving-cert\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:10.991233 master-0 kubenswrapper[10003]: I0216 17:01:10.991038 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-proxy-ca-bundles\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:10.993509 master-0 kubenswrapper[10003]: I0216 17:01:10.992065 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-client-ca\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:10.993509 master-0 kubenswrapper[10003]: I0216 17:01:10.992439 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-config\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:10.993509 master-0 kubenswrapper[10003]: I0216 17:01:10.992546 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-proxy-ca-bundles\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:10.997791 master-0 kubenswrapper[10003]: I0216 17:01:10.997742 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5c4cde-289c-49d9-9b17-176c368267d2-serving-cert\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:11.033107 master-0 kubenswrapper[10003]: I0216 17:01:11.033053 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Feb 16 17:01:11.094997 master-0 kubenswrapper[10003]: I0216 17:01:11.092149 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e9a6b6-b1c0-4d95-9534-198d828d4548-kube-api-access\") pod \"installer-3-master-0\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 17:01:11.094997 master-0 kubenswrapper[10003]: I0216 17:01:11.092215 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-var-lock\") pod \"installer-3-master-0\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 17:01:11.094997 master-0 kubenswrapper[10003]: I0216 17:01:11.092249 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " pod="openshift-kube-scheduler/installer-3-master-0"
Feb 16 17:01:11.152994 master-0 kubenswrapper[10003]: I0216 17:01:11.152939 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5g4z\" (UniqueName: \"kubernetes.io/projected/8f5c4cde-289c-49d9-9b17-176c368267d2-kube-api-access-c5g4z\") pod \"controller-manager-6cb7f5cc48-l2768\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Feb 16 17:01:11.188573 master-0 kubenswrapper[10003]: I0216 17:01:11.188495 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"
Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:11.193313 master-0 kubenswrapper[10003]: I0216 17:01:11.193267 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-var-lock\") pod \"installer-3-master-0\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 17:01:11.193383 master-0 kubenswrapper[10003]: I0216 17:01:11.193318 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 17:01:11.193423 master-0 kubenswrapper[10003]: I0216 17:01:11.193398 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 17:01:11.193423 master-0 kubenswrapper[10003]: I0216 17:01:11.193403 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-var-lock\") pod \"installer-3-master-0\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 17:01:11.193479 master-0 kubenswrapper[10003]: I0216 17:01:11.193439 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e9a6b6-b1c0-4d95-9534-198d828d4548-kube-api-access\") pod \"installer-3-master-0\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 17:01:11.209580 master-0 kubenswrapper[10003]: I0216 17:01:11.209316 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e9a6b6-b1c0-4d95-9534-198d828d4548-kube-api-access\") pod \"installer-3-master-0\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 17:01:11.315610 master-0 kubenswrapper[10003]: I0216 17:01:11.315553 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 17:01:11.596489 master-0 kubenswrapper[10003]: I0216 17:01:11.596414 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"] Feb 16 17:01:11.604001 master-0 kubenswrapper[10003]: W0216 17:01:11.603966 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f5c4cde_289c_49d9_9b17_176c368267d2.slice/crio-843cba8436efc8a69794429be90a7519d875093d79d2def3641614c864e2b2dd WatchSource:0}: Error finding container 843cba8436efc8a69794429be90a7519d875093d79d2def3641614c864e2b2dd: Status 404 returned error can't find the container with id 843cba8436efc8a69794429be90a7519d875093d79d2def3641614c864e2b2dd Feb 16 17:01:11.718803 master-0 kubenswrapper[10003]: I0216 17:01:11.718757 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 17:01:11.724911 master-0 kubenswrapper[10003]: W0216 17:01:11.724864 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod40e9a6b6_b1c0_4d95_9534_198d828d4548.slice/crio-1aff09269d1a38316375acc63348d15689e20589f9cca35c5085fa60b2b3e2de WatchSource:0}: Error finding container 1aff09269d1a38316375acc63348d15689e20589f9cca35c5085fa60b2b3e2de: Status 404 returned error can't find the container with id 1aff09269d1a38316375acc63348d15689e20589f9cca35c5085fa60b2b3e2de Feb 16 17:01:12.338862 master-0 kubenswrapper[10003]: I0216 17:01:12.338793 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" event={"ID":"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d","Type":"ContainerStarted","Data":"97cbf0dab61f16f4856e8045183318000795ee2cded73dac7a1a281cb2b7e077"} Feb 16 17:01:12.339346 master-0 kubenswrapper[10003]: I0216 17:01:12.339007 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:12.341623 master-0 kubenswrapper[10003]: I0216 17:01:12.341571 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" event={"ID":"8f5c4cde-289c-49d9-9b17-176c368267d2","Type":"ContainerStarted","Data":"843cba8436efc8a69794429be90a7519d875093d79d2def3641614c864e2b2dd"} Feb 16 17:01:12.343481 master-0 kubenswrapper[10003]: I0216 17:01:12.343455 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"40e9a6b6-b1c0-4d95-9534-198d828d4548","Type":"ContainerStarted","Data":"26885a3a8871743215a6d399e764e7a7e1cc57ea6e165592fea5874dd60c31a7"} Feb 16 17:01:12.343547 master-0 kubenswrapper[10003]: I0216 17:01:12.343486 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"40e9a6b6-b1c0-4d95-9534-198d828d4548","Type":"ContainerStarted","Data":"1aff09269d1a38316375acc63348d15689e20589f9cca35c5085fa60b2b3e2de"} Feb 16 17:01:12.345904 master-0 kubenswrapper[10003]: I0216 17:01:12.345879 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:13.093876 master-0 kubenswrapper[10003]: I0216 17:01:13.093676 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" podStartSLOduration=4.904968798 podStartE2EDuration="7.09364443s" podCreationTimestamp="2026-02-16 17:01:06 +0000 UTC" firstStartedPulling="2026-02-16 17:01:09.056296968 +0000 UTC m=+38.571782639" lastFinishedPulling="2026-02-16 17:01:11.2449726 +0000 UTC m=+40.760458271" observedRunningTime="2026-02-16 17:01:13.060521237 +0000 UTC m=+42.576006908" watchObservedRunningTime="2026-02-16 17:01:13.09364443 +0000 UTC m=+42.609130101" Feb 16 17:01:15.025124 master-0 kubenswrapper[10003]: I0216 17:01:15.025005 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=5.024988295 podStartE2EDuration="5.024988295s" podCreationTimestamp="2026-02-16 17:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:13.368454703 +0000 UTC m=+42.883940374" watchObservedRunningTime="2026-02-16 17:01:15.024988295 +0000 UTC m=+44.540473966" Feb 16 17:01:15.025880 master-0 kubenswrapper[10003]: I0216 17:01:15.025849 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"] Feb 16 17:01:15.026431 master-0 kubenswrapper[10003]: I0216 17:01:15.026400 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:01:15.028836 master-0 kubenswrapper[10003]: I0216 17:01:15.028786 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 17:01:15.028975 master-0 kubenswrapper[10003]: I0216 17:01:15.028822 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 17:01:15.029020 master-0 kubenswrapper[10003]: I0216 17:01:15.028968 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 17:01:15.035341 master-0 kubenswrapper[10003]: I0216 17:01:15.035287 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"] Feb 16 17:01:15.162232 master-0 kubenswrapper[10003]: I0216 17:01:15.162128 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:01:15.162435 master-0 kubenswrapper[10003]: I0216 17:01:15.162366 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:01:15.264354 master-0 kubenswrapper[10003]: I0216 17:01:15.264292 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" 
(UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:01:15.264555 master-0 kubenswrapper[10003]: I0216 17:01:15.264393 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:01:15.267430 master-0 kubenswrapper[10003]: I0216 17:01:15.267384 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:01:15.279751 master-0 kubenswrapper[10003]: I0216 17:01:15.278942 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:01:15.399453 master-0 kubenswrapper[10003]: I0216 17:01:15.399397 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:01:15.597019 master-0 kubenswrapper[10003]: I0216 17:01:15.593476 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 17:01:15.597019 master-0 kubenswrapper[10003]: I0216 17:01:15.594390 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:15.597250 master-0 kubenswrapper[10003]: I0216 17:01:15.597128 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 17:01:15.604243 master-0 kubenswrapper[10003]: I0216 17:01:15.604203 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 17:01:15.668862 master-0 kubenswrapper[10003]: I0216 17:01:15.668781 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:15.669130 master-0 kubenswrapper[10003]: I0216 17:01:15.668901 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-var-lock\") pod \"installer-1-master-0\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:15.669130 master-0 kubenswrapper[10003]: I0216 17:01:15.668965 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:15.770476 master-0 kubenswrapper[10003]: I0216 17:01:15.770420 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-var-lock\") pod \"installer-1-master-0\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:15.770688 master-0 kubenswrapper[10003]: I0216 17:01:15.770499 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:15.770688 master-0 kubenswrapper[10003]: I0216 17:01:15.770594 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-var-lock\") pod \"installer-1-master-0\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:15.770688 master-0 kubenswrapper[10003]: I0216 17:01:15.770642 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:15.770865 master-0 kubenswrapper[10003]: I0216 17:01:15.770816 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:16.364878 master-0 kubenswrapper[10003]: I0216 17:01:16.364818 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" event={"ID":"8f5c4cde-289c-49d9-9b17-176c368267d2","Type":"ContainerStarted","Data":"af85a978226a1ad38f26ff25527b186b68bc9fc211af6ef3a866e7c1644f287f"} Feb 16 17:01:16.365395 master-0 kubenswrapper[10003]: I0216 17:01:16.365057 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:16.370226 master-0 kubenswrapper[10003]: I0216 17:01:16.370188 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:16.894621 master-0 kubenswrapper[10003]: I0216 17:01:16.894568 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"] Feb 16 17:01:16.899374 master-0 kubenswrapper[10003]: I0216 17:01:16.899304 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kube-api-access\") pod \"installer-1-master-0\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:16.913976 master-0 kubenswrapper[10003]: I0216 17:01:16.913884 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" podStartSLOduration=7.19781044 podStartE2EDuration="10.91386493s" podCreationTimestamp="2026-02-16 17:01:06 +0000 UTC" firstStartedPulling="2026-02-16 17:01:11.606219611 +0000 UTC m=+41.121705282" lastFinishedPulling="2026-02-16 17:01:15.322274101 +0000 UTC m=+44.837759772" observedRunningTime="2026-02-16 17:01:16.895194361 +0000 UTC m=+46.410680032" watchObservedRunningTime="2026-02-16 17:01:16.91386493 +0000 UTC m=+46.429350611" Feb 16 17:01:17.065084 master-0 kubenswrapper[10003]: I0216 17:01:17.063487 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 16 17:01:17.065084 master-0 kubenswrapper[10003]: I0216 17:01:17.063749 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/installer-1-master-0" podUID="cf8ea978-99b0-4957-8f9d-0d074263b235" containerName="installer" containerID="cri-o://22e732f70ca6f94cfd7098b649223ea91275dcc9a8431d38a472ff475c10f789" gracePeriod=30 Feb 16 17:01:17.128958 master-0 kubenswrapper[10003]: I0216 17:01:17.127766 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:01:17.173995 master-0 kubenswrapper[10003]: I0216 17:01:17.172748 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 16 17:01:17.175112 master-0 kubenswrapper[10003]: I0216 17:01:17.175080 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.187960 master-0 kubenswrapper[10003]: I0216 17:01:17.183738 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 17:01:17.190734 master-0 kubenswrapper[10003]: I0216 17:01:17.190674 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 16 17:01:17.301667 master-0 kubenswrapper[10003]: I0216 17:01:17.295429 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.301667 master-0 kubenswrapper[10003]: I0216 17:01:17.295514 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-var-lock\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.301667 master-0 kubenswrapper[10003]: I0216 17:01:17.295557 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.382251 master-0 kubenswrapper[10003]: I0216 17:01:17.382023 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerStarted","Data":"b077d967ff0915e46adebbfea57fba17bebbd700385551a20b3c9d4bda18abd6"} Feb 16 17:01:17.397821 master-0 kubenswrapper[10003]: I0216 17:01:17.397626 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.397821 master-0 kubenswrapper[10003]: I0216 17:01:17.397792 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.398076 master-0 kubenswrapper[10003]: I0216 17:01:17.397866 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-var-lock\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.398076 master-0 kubenswrapper[10003]: I0216 17:01:17.397970 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-var-lock\") pod \"installer-1-master-0\" (UID: 
\"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.398076 master-0 kubenswrapper[10003]: I0216 17:01:17.398014 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.423264 master-0 kubenswrapper[10003]: I0216 17:01:17.423217 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.471520 master-0 kubenswrapper[10003]: I0216 17:01:17.471462 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk"] Feb 16 17:01:17.473956 master-0 kubenswrapper[10003]: I0216 17:01:17.473523 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.475684 master-0 kubenswrapper[10003]: I0216 17:01:17.475480 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:01:17.475684 master-0 kubenswrapper[10003]: I0216 17:01:17.475527 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:01:17.475814 master-0 kubenswrapper[10003]: I0216 17:01:17.475772 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:01:17.476639 master-0 kubenswrapper[10003]: I0216 17:01:17.475879 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:01:17.476639 master-0 kubenswrapper[10003]: I0216 17:01:17.476228 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:01:17.507964 master-0 kubenswrapper[10003]: I0216 17:01:17.507899 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:01:17.600541 master-0 kubenswrapper[10003]: I0216 17:01:17.600502 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/32a6b902-917f-4529-92c5-1a4975237501-machine-approver-tls\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.600684 master-0 kubenswrapper[10003]: I0216 17:01:17.600643 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzsp\" (UniqueName: \"kubernetes.io/projected/32a6b902-917f-4529-92c5-1a4975237501-kube-api-access-dhzsp\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.600684 master-0 kubenswrapper[10003]: I0216 17:01:17.600664 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-auth-proxy-config\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.600768 master-0 kubenswrapper[10003]: I0216 17:01:17.600733 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-config\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.651225 master-0 kubenswrapper[10003]: I0216 17:01:17.647526 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 17:01:17.702122 master-0 kubenswrapper[10003]: I0216 17:01:17.702072 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhzsp\" (UniqueName: \"kubernetes.io/projected/32a6b902-917f-4529-92c5-1a4975237501-kube-api-access-dhzsp\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.702299 master-0 kubenswrapper[10003]: I0216 17:01:17.702129 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-auth-proxy-config\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.702299 master-0 kubenswrapper[10003]: I0216 17:01:17.702162 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-config\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.702299 master-0 kubenswrapper[10003]: I0216 17:01:17.702210 10003 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/32a6b902-917f-4529-92c5-1a4975237501-machine-approver-tls\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.718673 master-0 kubenswrapper[10003]: I0216 17:01:17.718594 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-config\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.718876 master-0 kubenswrapper[10003]: I0216 17:01:17.718829 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-auth-proxy-config\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.718942 master-0 kubenswrapper[10003]: I0216 17:01:17.718836 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/32a6b902-917f-4529-92c5-1a4975237501-machine-approver-tls\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.746000 master-0 kubenswrapper[10003]: I0216 17:01:17.745126 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhzsp\" (UniqueName: \"kubernetes.io/projected/32a6b902-917f-4529-92c5-1a4975237501-kube-api-access-dhzsp\") pod \"machine-approver-6c46d95f74-kp5vk\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.810380 master-0 kubenswrapper[10003]: I0216 17:01:17.810327 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:17.934339 master-0 kubenswrapper[10003]: I0216 17:01:17.934288 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Feb 16 17:01:17.952937 master-0 kubenswrapper[10003]: W0216 17:01:17.952872 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod86c571b6_0f65_41f0_b1be_f63d7a974782.slice/crio-cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a WatchSource:0}: Error finding container cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a: Status 404 returned error can't find the container with id cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a Feb 16 17:01:18.392674 master-0 kubenswrapper[10003]: I0216 17:01:18.392616 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" event={"ID":"32a6b902-917f-4529-92c5-1a4975237501","Type":"ContainerStarted","Data":"a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973"} Feb 16 17:01:18.392674 master-0 kubenswrapper[10003]: I0216 17:01:18.392662 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" event={"ID":"32a6b902-917f-4529-92c5-1a4975237501","Type":"ContainerStarted","Data":"30bcef361054bdc1ea0385e93e587e6cc354acc58a166dd68c254ed816d32245"} Feb 16 17:01:18.393839 master-0 kubenswrapper[10003]: I0216 17:01:18.393816 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f","Type":"ContainerStarted","Data":"90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99"} Feb 16 17:01:18.393898 master-0 kubenswrapper[10003]: I0216 17:01:18.393841 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f","Type":"ContainerStarted","Data":"dd3ceeb0da8db938eae2cfa500166d7af7a50e381f011dcd54ec971db54cfcba"} Feb 16 17:01:18.395229 master-0 kubenswrapper[10003]: I0216 17:01:18.395205 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"86c571b6-0f65-41f0-b1be-f63d7a974782","Type":"ContainerStarted","Data":"cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a"} Feb 16 17:01:19.089769 master-0 kubenswrapper[10003]: I0216 17:01:19.089712 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 17:01:19.090028 master-0 kubenswrapper[10003]: I0216 17:01:19.089898 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="40e9a6b6-b1c0-4d95-9534-198d828d4548" containerName="installer" containerID="cri-o://26885a3a8871743215a6d399e764e7a7e1cc57ea6e165592fea5874dd60c31a7" gracePeriod=30 Feb 16 17:01:19.400187 master-0 kubenswrapper[10003]: I0216 17:01:19.400069 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"86c571b6-0f65-41f0-b1be-f63d7a974782","Type":"ContainerStarted","Data":"e607db32e1640f4a53c9cd19e2f52a26fa9cbdb5cdabb553570529d03baa71fa"} Feb 16 17:01:19.411309 master-0 kubenswrapper[10003]: I0216 17:01:19.411265 10003 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_40e9a6b6-b1c0-4d95-9534-198d828d4548/installer/0.log" Feb 16 17:01:19.411536 master-0 kubenswrapper[10003]: I0216 17:01:19.411319 10003 generic.go:334] "Generic (PLEG): container finished" podID="40e9a6b6-b1c0-4d95-9534-198d828d4548" containerID="26885a3a8871743215a6d399e764e7a7e1cc57ea6e165592fea5874dd60c31a7" exitCode=1 Feb 16 17:01:19.411536 master-0 kubenswrapper[10003]: I0216 17:01:19.411419 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"40e9a6b6-b1c0-4d95-9534-198d828d4548","Type":"ContainerDied","Data":"26885a3a8871743215a6d399e764e7a7e1cc57ea6e165592fea5874dd60c31a7"} Feb 16 17:01:19.429366 master-0 kubenswrapper[10003]: I0216 17:01:19.428623 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=2.428603733 podStartE2EDuration="2.428603733s" podCreationTimestamp="2026-02-16 17:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:19.426075074 +0000 UTC m=+48.941560755" watchObservedRunningTime="2026-02-16 17:01:19.428603733 +0000 UTC m=+48.944089404" Feb 16 17:01:19.451744 master-0 kubenswrapper[10003]: I0216 17:01:19.450770 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=4.4507500669999995 podStartE2EDuration="4.450750067s" podCreationTimestamp="2026-02-16 17:01:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:19.449061511 +0000 UTC m=+48.964547192" watchObservedRunningTime="2026-02-16 17:01:19.450750067 +0000 UTC m=+48.966235738" Feb 16 17:01:19.944948 master-0 kubenswrapper[10003]: I0216 17:01:19.940638 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"] Feb 16 17:01:19.944948 master-0 kubenswrapper[10003]: I0216 17:01:19.941502 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:19.944948 master-0 kubenswrapper[10003]: I0216 17:01:19.943044 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l" Feb 16 17:01:19.944948 master-0 kubenswrapper[10003]: I0216 17:01:19.943249 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 16 17:01:19.944948 master-0 kubenswrapper[10003]: I0216 17:01:19.943470 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 16 17:01:19.948736 master-0 kubenswrapper[10003]: I0216 17:01:19.946818 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 16 17:01:19.956142 master-0 kubenswrapper[10003]: I0216 17:01:19.950620 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 16 17:01:19.957442 master-0 kubenswrapper[10003]: I0216 17:01:19.957390 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"] Feb 16 17:01:20.054985 master-0 kubenswrapper[10003]: I0216 17:01:20.054875 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.054985 master-0 kubenswrapper[10003]: I0216 17:01:20.054948 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.054985 master-0 kubenswrapper[10003]: I0216 17:01:20.055002 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.156378 master-0 kubenswrapper[10003]: I0216 17:01:20.156317 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.156485 master-0 kubenswrapper[10003]: I0216 17:01:20.156401 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.156485 master-0 kubenswrapper[10003]: I0216 17:01:20.156445 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.171112 master-0 kubenswrapper[10003]: I0216 17:01:20.171048 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.173543 master-0 kubenswrapper[10003]: I0216 17:01:20.173483 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.179217 master-0 kubenswrapper[10003]: I0216 17:01:20.178812 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.260451 master-0 kubenswrapper[10003]: I0216 17:01:20.258535 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 16 17:01:20.260451 master-0 kubenswrapper[10003]: I0216 17:01:20.259098 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:01:20.260703 master-0 kubenswrapper[10003]: I0216 17:01:20.260661 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.263740 master-0 kubenswrapper[10003]: I0216 17:01:20.263700 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-rxv66" Feb 16 17:01:20.282985 master-0 kubenswrapper[10003]: I0216 17:01:20.272661 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 16 17:01:20.361090 master-0 kubenswrapper[10003]: I0216 17:01:20.361049 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.361219 master-0 kubenswrapper[10003]: I0216 17:01:20.361117 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-var-lock\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.361219 master-0 kubenswrapper[10003]: I0216 17:01:20.361154 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.366055 master-0 kubenswrapper[10003]: I0216 17:01:20.363441 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_40e9a6b6-b1c0-4d95-9534-198d828d4548/installer/0.log" Feb 16 17:01:20.366055 master-0 kubenswrapper[10003]: I0216 17:01:20.363495 10003 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 17:01:20.423769 master-0 kubenswrapper[10003]: I0216 17:01:20.423722 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_40e9a6b6-b1c0-4d95-9534-198d828d4548/installer/0.log" Feb 16 17:01:20.424242 master-0 kubenswrapper[10003]: I0216 17:01:20.423959 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Feb 16 17:01:20.424301 master-0 kubenswrapper[10003]: I0216 17:01:20.423946 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"40e9a6b6-b1c0-4d95-9534-198d828d4548","Type":"ContainerDied","Data":"1aff09269d1a38316375acc63348d15689e20589f9cca35c5085fa60b2b3e2de"} Feb 16 17:01:20.424341 master-0 kubenswrapper[10003]: I0216 17:01:20.424322 10003 scope.go:117] "RemoveContainer" containerID="26885a3a8871743215a6d399e764e7a7e1cc57ea6e165592fea5874dd60c31a7" Feb 16 17:01:20.429131 master-0 kubenswrapper[10003]: I0216 17:01:20.428562 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerStarted","Data":"221dc64441e450195317a3ad8eacbbb293523d0726dbd96812217d44d6f1da31"} Feb 16 17:01:20.448946 master-0 kubenswrapper[10003]: I0216 17:01:20.448849 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podStartSLOduration=2.232063128 podStartE2EDuration="5.448827663s" podCreationTimestamp="2026-02-16 17:01:15 +0000 UTC" firstStartedPulling="2026-02-16 17:01:16.904227758 +0000 UTC m=+46.419713429" lastFinishedPulling="2026-02-16 17:01:20.120992293 +0000 UTC m=+49.636477964" observedRunningTime="2026-02-16 17:01:20.448660588 +0000 UTC m=+49.964146279" watchObservedRunningTime="2026-02-16 17:01:20.448827663 +0000 UTC m=+49.964313334" Feb 16 17:01:20.462224 master-0 kubenswrapper[10003]: I0216 17:01:20.462167 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-var-lock\") pod \"40e9a6b6-b1c0-4d95-9534-198d828d4548\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " Feb 16 17:01:20.462465 master-0 kubenswrapper[10003]: I0216 17:01:20.462242 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-kubelet-dir\") pod \"40e9a6b6-b1c0-4d95-9534-198d828d4548\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " Feb 16 17:01:20.462465 master-0 kubenswrapper[10003]: I0216 17:01:20.462307 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e9a6b6-b1c0-4d95-9534-198d828d4548-kube-api-access\") pod \"40e9a6b6-b1c0-4d95-9534-198d828d4548\" (UID: \"40e9a6b6-b1c0-4d95-9534-198d828d4548\") " Feb 16 17:01:20.462465 master-0 kubenswrapper[10003]: I0216 17:01:20.462434 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-var-lock\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.462777 master-0 kubenswrapper[10003]: I0216 17:01:20.462474 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.462777 master-0 kubenswrapper[10003]: I0216 
17:01:20.462512 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.462777 master-0 kubenswrapper[10003]: I0216 17:01:20.462579 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.462777 master-0 kubenswrapper[10003]: I0216 17:01:20.462653 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-var-lock" (OuterVolumeSpecName: "var-lock") pod "40e9a6b6-b1c0-4d95-9534-198d828d4548" (UID: "40e9a6b6-b1c0-4d95-9534-198d828d4548"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:20.462777 master-0 kubenswrapper[10003]: I0216 17:01:20.462685 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "40e9a6b6-b1c0-4d95-9534-198d828d4548" (UID: "40e9a6b6-b1c0-4d95-9534-198d828d4548"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:20.463905 master-0 kubenswrapper[10003]: I0216 17:01:20.463052 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-var-lock\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.469306 master-0 kubenswrapper[10003]: I0216 17:01:20.469235 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40e9a6b6-b1c0-4d95-9534-198d828d4548-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "40e9a6b6-b1c0-4d95-9534-198d828d4548" (UID: "40e9a6b6-b1c0-4d95-9534-198d828d4548"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:20.489661 master-0 kubenswrapper[10003]: I0216 17:01:20.488723 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.567011 master-0 kubenswrapper[10003]: I0216 17:01:20.566389 10003 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:20.567011 master-0 kubenswrapper[10003]: I0216 17:01:20.566432 10003 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40e9a6b6-b1c0-4d95-9534-198d828d4548-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:20.567011 master-0 kubenswrapper[10003]: I0216 17:01:20.566447 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/40e9a6b6-b1c0-4d95-9534-198d828d4548-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:20.582490 master-0 kubenswrapper[10003]: I0216 17:01:20.582183 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 17:01:20.771962 master-0 kubenswrapper[10003]: I0216 17:01:20.770837 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 17:01:20.773855 master-0 kubenswrapper[10003]: I0216 17:01:20.773793 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"] Feb 16 17:01:20.776024 master-0 kubenswrapper[10003]: I0216 17:01:20.775420 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Feb 16 17:01:20.819100 master-0 kubenswrapper[10003]: I0216 17:01:20.815480 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40e9a6b6-b1c0-4d95-9534-198d828d4548" path="/var/lib/kubelet/pods/40e9a6b6-b1c0-4d95-9534-198d828d4548/volumes" Feb 16 17:01:21.013302 master-0 kubenswrapper[10003]: I0216 17:01:21.013265 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Feb 16 17:01:21.181485 master-0 kubenswrapper[10003]: I0216 17:01:21.181065 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 16 17:01:21.181485 master-0 kubenswrapper[10003]: E0216 17:01:21.181430 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40e9a6b6-b1c0-4d95-9534-198d828d4548" containerName="installer" Feb 16 17:01:21.181485 master-0 kubenswrapper[10003]: I0216 17:01:21.181471 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="40e9a6b6-b1c0-4d95-9534-198d828d4548" containerName="installer" Feb 16 17:01:21.183123 master-0 kubenswrapper[10003]: I0216 17:01:21.182441 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="40e9a6b6-b1c0-4d95-9534-198d828d4548" containerName="installer" Feb 16 17:01:21.183123 master-0 kubenswrapper[10003]: I0216 17:01:21.183028 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.185231 master-0 kubenswrapper[10003]: I0216 17:01:21.185197 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-cwb2w" Feb 16 17:01:21.200867 master-0 kubenswrapper[10003]: I0216 17:01:21.200820 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 16 17:01:21.279014 master-0 kubenswrapper[10003]: I0216 17:01:21.274793 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.279014 master-0 kubenswrapper[10003]: I0216 17:01:21.274867 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-var-lock\") pod \"installer-4-master-0\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.279014 master-0 kubenswrapper[10003]: I0216 17:01:21.274899 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kube-api-access\") pod \"installer-4-master-0\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.375664 master-0 kubenswrapper[10003]: I0216 17:01:21.375543 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.375664 master-0 kubenswrapper[10003]: I0216 17:01:21.375600 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-var-lock\") pod \"installer-4-master-0\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.375664 master-0 kubenswrapper[10003]: I0216 17:01:21.375627 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kube-api-access\") pod \"installer-4-master-0\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.375948 master-0 kubenswrapper[10003]: I0216 17:01:21.375724 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.375948 master-0 kubenswrapper[10003]: I0216 17:01:21.375792 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-var-lock\") pod 
\"installer-4-master-0\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.401653 master-0 kubenswrapper[10003]: I0216 17:01:21.401525 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kube-api-access\") pod \"installer-4-master-0\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.435471 master-0 kubenswrapper[10003]: I0216 17:01:21.435413 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b1b4fccc-6bf6-47ac-8ae1-32cad23734da","Type":"ContainerStarted","Data":"a6f2cc640b5de57d7f65239e3dfae00a6c9cda6decad3cf4c15c3e87bd2e0a2d"} Feb 16 17:01:21.435471 master-0 kubenswrapper[10003]: I0216 17:01:21.435472 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b1b4fccc-6bf6-47ac-8ae1-32cad23734da","Type":"ContainerStarted","Data":"aa37dd5bc712a6e66397e0efcad2c702b51d3841d761278b212389b13ad668e0"} Feb 16 17:01:21.438809 master-0 kubenswrapper[10003]: I0216 17:01:21.438768 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"ebcc47375c8090ea868a5deccf7dc1e91eebca2d21948753da2f002b09800231"} Feb 16 17:01:21.438880 master-0 kubenswrapper[10003]: I0216 17:01:21.438818 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"d0a7109bce95d1a32301d6e84ffc12bd1d37b091b1ee1ee044686d1a38898e0f"} Feb 16 17:01:21.441282 master-0 kubenswrapper[10003]: I0216 17:01:21.441258 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_bb59921b-7279-4258-a342-554b2878dca1/installer/0.log" Feb 16 17:01:21.441414 master-0 kubenswrapper[10003]: I0216 17:01:21.441393 10003 generic.go:334] "Generic (PLEG): container finished" podID="bb59921b-7279-4258-a342-554b2878dca1" containerID="fad88901989abbb6486cff5122d32135bc1a86edd8ffd3eb270671a3c15b9193" exitCode=1 Feb 16 17:01:21.441598 master-0 kubenswrapper[10003]: I0216 17:01:21.441425 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"bb59921b-7279-4258-a342-554b2878dca1","Type":"ContainerDied","Data":"fad88901989abbb6486cff5122d32135bc1a86edd8ffd3eb270671a3c15b9193"} Feb 16 17:01:21.452011 master-0 kubenswrapper[10003]: I0216 17:01:21.451949 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=1.4519124749999999 podStartE2EDuration="1.451912475s" podCreationTimestamp="2026-02-16 17:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:21.451437952 +0000 UTC m=+50.966923633" watchObservedRunningTime="2026-02-16 17:01:21.451912475 +0000 UTC m=+50.967398146" Feb 16 17:01:21.520391 master-0 kubenswrapper[10003]: I0216 17:01:21.520219 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:01:21.554272 master-0 kubenswrapper[10003]: I0216 17:01:21.554235 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"] Feb 16 17:01:21.555434 master-0 kubenswrapper[10003]: I0216 17:01:21.555418 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:01:21.558273 master-0 kubenswrapper[10003]: I0216 17:01:21.558228 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 17:01:21.560175 master-0 kubenswrapper[10003]: I0216 17:01:21.558491 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 17:01:21.560175 master-0 kubenswrapper[10003]: I0216 17:01:21.558702 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb" Feb 16 17:01:21.560175 master-0 kubenswrapper[10003]: I0216 17:01:21.558971 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 17:01:21.565838 master-0 kubenswrapper[10003]: I0216 17:01:21.565734 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"] Feb 16 17:01:21.678680 master-0 kubenswrapper[10003]: I0216 17:01:21.678631 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:01:21.678967 master-0 kubenswrapper[10003]: I0216 17:01:21.678692 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:01:21.779820 master-0 kubenswrapper[10003]: I0216 17:01:21.779769 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:01:21.779820 master-0 kubenswrapper[10003]: I0216 17:01:21.779817 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:01:21.783513 master-0 kubenswrapper[10003]: I0216 17:01:21.783483 10003 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:01:21.798576 master-0 kubenswrapper[10003]: I0216 17:01:21.798523 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:01:21.820468 master-0 kubenswrapper[10003]: I0216 17:01:21.819956 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_bb59921b-7279-4258-a342-554b2878dca1/installer/0.log" Feb 16 17:01:21.820468 master-0 kubenswrapper[10003]: I0216 17:01:21.820034 10003 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:01:21.881269 master-0 kubenswrapper[10003]: I0216 17:01:21.880966 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb59921b-7279-4258-a342-554b2878dca1-kube-api-access\") pod \"bb59921b-7279-4258-a342-554b2878dca1\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " Feb 16 17:01:21.881269 master-0 kubenswrapper[10003]: I0216 17:01:21.881237 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-var-lock\") pod \"bb59921b-7279-4258-a342-554b2878dca1\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " Feb 16 17:01:21.881471 master-0 kubenswrapper[10003]: I0216 17:01:21.881291 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-kubelet-dir\") pod \"bb59921b-7279-4258-a342-554b2878dca1\" (UID: \"bb59921b-7279-4258-a342-554b2878dca1\") " Feb 16 17:01:21.883241 master-0 kubenswrapper[10003]: I0216 17:01:21.881538 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bb59921b-7279-4258-a342-554b2878dca1" (UID: "bb59921b-7279-4258-a342-554b2878dca1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:21.883241 master-0 kubenswrapper[10003]: I0216 17:01:21.881811 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-var-lock" (OuterVolumeSpecName: "var-lock") pod "bb59921b-7279-4258-a342-554b2878dca1" (UID: "bb59921b-7279-4258-a342-554b2878dca1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:21.886080 master-0 kubenswrapper[10003]: I0216 17:01:21.885241 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb59921b-7279-4258-a342-554b2878dca1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bb59921b-7279-4258-a342-554b2878dca1" (UID: "bb59921b-7279-4258-a342-554b2878dca1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:21.886481 master-0 kubenswrapper[10003]: I0216 17:01:21.886344 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:01:21.984064 master-0 kubenswrapper[10003]: I0216 17:01:21.982857 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bb59921b-7279-4258-a342-554b2878dca1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:21.984064 master-0 kubenswrapper[10003]: I0216 17:01:21.982898 10003 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:21.984064 master-0 kubenswrapper[10003]: I0216 17:01:21.982913 10003 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb59921b-7279-4258-a342-554b2878dca1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:22.259889 master-0 kubenswrapper[10003]: I0216 17:01:22.259842 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Feb 16 17:01:22.271001 master-0 kubenswrapper[10003]: W0216 17:01:22.270942 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod035c8af0_95f3_4ab6_939c_d7fa8bda40a3.slice/crio-eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc WatchSource:0}: Error finding container eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc: Status 404 returned error can't find the container with id eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc Feb 16 17:01:22.334048 master-0 kubenswrapper[10003]: I0216 17:01:22.331200 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"] Feb 16 17:01:22.452079 master-0 kubenswrapper[10003]: I0216 17:01:22.452022 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" event={"ID":"32a6b902-917f-4529-92c5-1a4975237501","Type":"ContainerStarted","Data":"7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0"} Feb 16 17:01:22.457324 master-0 kubenswrapper[10003]: I0216 17:01:22.455432 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_bb59921b-7279-4258-a342-554b2878dca1/installer/0.log" Feb 16 17:01:22.457324 master-0 kubenswrapper[10003]: I0216 17:01:22.455560 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Feb 16 17:01:22.457324 master-0 kubenswrapper[10003]: I0216 17:01:22.455567 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"bb59921b-7279-4258-a342-554b2878dca1","Type":"ContainerDied","Data":"2e4d127d6ee2504af15db182b0f08077405b5010605551e55d9383d6127a2697"} Feb 16 17:01:22.457324 master-0 kubenswrapper[10003]: I0216 17:01:22.455655 10003 scope.go:117] "RemoveContainer" containerID="fad88901989abbb6486cff5122d32135bc1a86edd8ffd3eb270671a3c15b9193" Feb 16 17:01:22.457324 master-0 kubenswrapper[10003]: I0216 17:01:22.456526 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"035c8af0-95f3-4ab6-939c-d7fa8bda40a3","Type":"ContainerStarted","Data":"eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc"} Feb 16 17:01:22.473652 master-0 kubenswrapper[10003]: I0216 17:01:22.473580 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" podStartSLOduration=2.057126034 podStartE2EDuration="5.473562494s" podCreationTimestamp="2026-02-16 17:01:17 +0000 UTC" firstStartedPulling="2026-02-16 17:01:18.419425845 +0000 UTC m=+47.934911506" lastFinishedPulling="2026-02-16 17:01:21.835862295 +0000 UTC m=+51.351347966" observedRunningTime="2026-02-16 17:01:22.473491922 +0000 UTC m=+51.988977603" watchObservedRunningTime="2026-02-16 17:01:22.473562494 +0000 UTC m=+51.989048155" Feb 16 17:01:22.505705 master-0 kubenswrapper[10003]: I0216 17:01:22.505683 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 17:01:22.507507 master-0 kubenswrapper[10003]: I0216 17:01:22.507457 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Feb 16 17:01:22.809906 master-0 kubenswrapper[10003]: I0216 17:01:22.809854 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb59921b-7279-4258-a342-554b2878dca1" path="/var/lib/kubelet/pods/bb59921b-7279-4258-a342-554b2878dca1/volumes" Feb 16 17:01:22.828005 master-0 kubenswrapper[10003]: I0216 17:01:22.827959 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"] Feb 16 17:01:22.828204 master-0 kubenswrapper[10003]: E0216 17:01:22.828185 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb59921b-7279-4258-a342-554b2878dca1" containerName="installer" Feb 16 17:01:22.828204 master-0 kubenswrapper[10003]: I0216 17:01:22.828203 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb59921b-7279-4258-a342-554b2878dca1" containerName="installer" Feb 16 17:01:22.828318 master-0 kubenswrapper[10003]: I0216 17:01:22.828296 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb59921b-7279-4258-a342-554b2878dca1" containerName="installer" Feb 16 17:01:22.829045 master-0 kubenswrapper[10003]: I0216 17:01:22.828996 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:22.831801 master-0 kubenswrapper[10003]: I0216 17:01:22.831560 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 16 17:01:22.832105 master-0 kubenswrapper[10003]: I0216 17:01:22.831852 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw" Feb 16 17:01:22.837994 master-0 kubenswrapper[10003]: I0216 17:01:22.835010 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 16 17:01:22.844303 master-0 kubenswrapper[10003]: I0216 17:01:22.844098 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"] Feb 16 17:01:22.896003 master-0 kubenswrapper[10003]: I0216 17:01:22.895585 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:22.896003 master-0 kubenswrapper[10003]: I0216 17:01:22.895660 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:22.896003 master-0 kubenswrapper[10003]: I0216 17:01:22.895705 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:22.997345 master-0 kubenswrapper[10003]: I0216 17:01:22.997267 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:22.997345 master-0 kubenswrapper[10003]: I0216 17:01:22.997353 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:22.997576 master-0 kubenswrapper[10003]: I0216 17:01:22.997391 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: 
\"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:22.998576 master-0 kubenswrapper[10003]: I0216 17:01:22.998530 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:23.001588 master-0 kubenswrapper[10003]: I0216 17:01:23.001548 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:23.009037 master-0 kubenswrapper[10003]: I0216 17:01:23.008963 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"] Feb 16 17:01:23.010295 master-0 kubenswrapper[10003]: I0216 17:01:23.010259 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.013341 master-0 kubenswrapper[10003]: I0216 17:01:23.013300 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 16 17:01:23.013506 master-0 kubenswrapper[10003]: I0216 17:01:23.013472 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 16 17:01:23.013594 master-0 kubenswrapper[10003]: I0216 17:01:23.013568 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 17:01:23.013734 master-0 kubenswrapper[10003]: I0216 17:01:23.013713 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s" Feb 16 17:01:23.013936 master-0 kubenswrapper[10003]: I0216 17:01:23.013868 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 16 17:01:23.020950 master-0 kubenswrapper[10003]: I0216 17:01:23.020478 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:23.024949 master-0 kubenswrapper[10003]: I0216 17:01:23.022791 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"] Feb 16 17:01:23.098398 master-0 kubenswrapper[10003]: I0216 17:01:23.098341 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 
17:01:23.098565 master-0 kubenswrapper[10003]: I0216 17:01:23.098402 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.098730 master-0 kubenswrapper[10003]: I0216 17:01:23.098666 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.098820 master-0 kubenswrapper[10003]: I0216 17:01:23.098799 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.098886 master-0 kubenswrapper[10003]: I0216 17:01:23.098870 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.149594 master-0 kubenswrapper[10003]: I0216 17:01:23.149160 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:01:23.200045 master-0 kubenswrapper[10003]: I0216 17:01:23.199910 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.200045 master-0 kubenswrapper[10003]: I0216 17:01:23.200051 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.200272 master-0 kubenswrapper[10003]: I0216 17:01:23.200085 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.200272 master-0 kubenswrapper[10003]: I0216 17:01:23.200119 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.200272 master-0 kubenswrapper[10003]: I0216 17:01:23.200147 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.202938 master-0 kubenswrapper[10003]: I0216 17:01:23.202856 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.203888 master-0 kubenswrapper[10003]: I0216 17:01:23.203841 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.206299 master-0 kubenswrapper[10003]: I0216 17:01:23.206265 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " 
pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.207226 master-0 kubenswrapper[10003]: I0216 17:01:23.207192 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.216900 master-0 kubenswrapper[10003]: I0216 17:01:23.216829 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.397057 master-0 kubenswrapper[10003]: I0216 17:01:23.396939 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:01:23.495414 master-0 kubenswrapper[10003]: I0216 17:01:23.495370 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"837a858734b801f62c18bbc1ac1678d7076080812a795cc7c558fa08b748a43c"} Feb 16 17:01:23.498845 master-0 kubenswrapper[10003]: I0216 17:01:23.498707 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"035c8af0-95f3-4ab6-939c-d7fa8bda40a3","Type":"ContainerStarted","Data":"8d78fa623e175273ca9fb1b430de0aa7e6c7b81ae465f33ce572879406853709"} Feb 16 17:01:23.528732 master-0 kubenswrapper[10003]: I0216 17:01:23.528397 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=2.528365066 podStartE2EDuration="2.528365066s" podCreationTimestamp="2026-02-16 17:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:23.522881486 +0000 UTC m=+53.038367167" watchObservedRunningTime="2026-02-16 17:01:23.528365066 +0000 UTC m=+53.043850747" Feb 16 17:01:23.562142 master-0 kubenswrapper[10003]: I0216 17:01:23.562096 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"] Feb 16 17:01:23.567140 master-0 kubenswrapper[10003]: W0216 17:01:23.567083 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee84198d_6357_4429_a90c_455c3850a788.slice/crio-d377c24744b60cc35617b8e88be818c3a9283d990df16a27d5112c6aed9ce981 WatchSource:0}: Error finding container d377c24744b60cc35617b8e88be818c3a9283d990df16a27d5112c6aed9ce981: Status 404 returned error can't find the container with id d377c24744b60cc35617b8e88be818c3a9283d990df16a27d5112c6aed9ce981 Feb 16 17:01:23.650020 master-0 kubenswrapper[10003]: I0216 17:01:23.649978 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-6qrw5"] Feb 16 17:01:23.651647 master-0 kubenswrapper[10003]: I0216 17:01:23.651275 10003 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.654612 master-0 kubenswrapper[10003]: I0216 17:01:23.654572 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 16 17:01:23.654863 master-0 kubenswrapper[10003]: I0216 17:01:23.654832 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 16 17:01:23.654863 master-0 kubenswrapper[10003]: I0216 17:01:23.654857 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 16 17:01:23.655139 master-0 kubenswrapper[10003]: I0216 17:01:23.655118 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rzjlw" Feb 16 17:01:23.655179 master-0 kubenswrapper[10003]: I0216 17:01:23.655145 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 16 17:01:23.667116 master-0 kubenswrapper[10003]: I0216 17:01:23.666174 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 16 17:01:23.676276 master-0 kubenswrapper[10003]: I0216 17:01:23.676208 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-6qrw5"] Feb 16 17:01:23.710652 master-0 kubenswrapper[10003]: I0216 17:01:23.710572 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.710869 master-0 kubenswrapper[10003]: I0216 17:01:23.710678 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.710869 master-0 kubenswrapper[10003]: I0216 17:01:23.710770 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.710869 master-0 kubenswrapper[10003]: I0216 17:01:23.710829 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.711093 master-0 kubenswrapper[10003]: I0216 17:01:23.710897 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod 
\"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.780069 master-0 kubenswrapper[10003]: I0216 17:01:23.779688 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"] Feb 16 17:01:23.780512 master-0 kubenswrapper[10003]: I0216 17:01:23.780484 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:23.782402 master-0 kubenswrapper[10003]: I0216 17:01:23.782353 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 17:01:23.782482 master-0 kubenswrapper[10003]: I0216 17:01:23.782454 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk" Feb 16 17:01:23.782826 master-0 kubenswrapper[10003]: I0216 17:01:23.782795 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 17:01:23.791522 master-0 kubenswrapper[10003]: I0216 17:01:23.791480 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"] Feb 16 17:01:23.806635 master-0 kubenswrapper[10003]: I0216 17:01:23.805623 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"] Feb 16 17:01:23.810584 master-0 kubenswrapper[10003]: W0216 17:01:23.810535 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4488757c_f0fd_48fa_a3f9_6373b0bcafe4.slice/crio-80fdc50e531795c33b265621c0e851281169f624db10a2cc59cfc4a7fd66173e WatchSource:0}: Error finding container 80fdc50e531795c33b265621c0e851281169f624db10a2cc59cfc4a7fd66173e: Status 404 returned error can't find the container with id 80fdc50e531795c33b265621c0e851281169f624db10a2cc59cfc4a7fd66173e Feb 16 17:01:23.811684 master-0 kubenswrapper[10003]: I0216 17:01:23.811647 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.811755 master-0 kubenswrapper[10003]: I0216 17:01:23.811692 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.811877 master-0 kubenswrapper[10003]: I0216 17:01:23.811831 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.811938 master-0 kubenswrapper[10003]: I0216 17:01:23.811893 10003 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.811979 master-0 kubenswrapper[10003]: I0216 17:01:23.811957 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.812969 master-0 kubenswrapper[10003]: I0216 17:01:23.812933 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.813292 master-0 kubenswrapper[10003]: I0216 17:01:23.813259 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.815167 master-0 kubenswrapper[10003]: I0216 17:01:23.815129 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.816532 master-0 kubenswrapper[10003]: I0216 17:01:23.816486 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.831399 master-0 kubenswrapper[10003]: I0216 17:01:23.830904 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:23.888506 master-0 kubenswrapper[10003]: I0216 17:01:23.887859 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"] Feb 16 17:01:23.889386 master-0 kubenswrapper[10003]: I0216 17:01:23.889024 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:01:23.893681 master-0 kubenswrapper[10003]: I0216 17:01:23.892766 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982" Feb 16 17:01:23.899789 master-0 kubenswrapper[10003]: I0216 17:01:23.894060 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 17:01:23.908329 master-0 kubenswrapper[10003]: I0216 17:01:23.908295 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"] Feb 16 17:01:23.912555 master-0 kubenswrapper[10003]: I0216 17:01:23.912530 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:23.913078 master-0 kubenswrapper[10003]: I0216 17:01:23.912652 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:23.913078 master-0 kubenswrapper[10003]: I0216 17:01:23.913073 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:23.991578 master-0 kubenswrapper[10003]: I0216 17:01:23.991490 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:01:24.014444 master-0 kubenswrapper[10003]: I0216 17:01:24.014346 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:24.014566 master-0 kubenswrapper[10003]: I0216 17:01:24.014518 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:24.014648 master-0 kubenswrapper[10003]: I0216 17:01:24.014601 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:01:24.014696 master-0 kubenswrapper[10003]: I0216 17:01:24.014672 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:01:24.014765 master-0 kubenswrapper[10003]: I0216 17:01:24.014742 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:24.018106 master-0 kubenswrapper[10003]: I0216 17:01:24.018068 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:24.022459 master-0 kubenswrapper[10003]: I0216 17:01:24.022420 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:24.039070 master-0 kubenswrapper[10003]: I0216 17:01:24.039029 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod 
\"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:24.051991 master-0 kubenswrapper[10003]: I0216 17:01:24.051938 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"] Feb 16 17:01:24.052879 master-0 kubenswrapper[10003]: I0216 17:01:24.052854 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.057412 master-0 kubenswrapper[10003]: I0216 17:01:24.057287 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 17:01:24.068534 master-0 kubenswrapper[10003]: I0216 17:01:24.068484 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"] Feb 16 17:01:24.105518 master-0 kubenswrapper[10003]: I0216 17:01:24.105468 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:24.150736 master-0 kubenswrapper[10003]: I0216 17:01:24.150548 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:01:24.150736 master-0 kubenswrapper[10003]: I0216 17:01:24.150660 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.151188 master-0 kubenswrapper[10003]: I0216 17:01:24.150785 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.151188 master-0 kubenswrapper[10003]: I0216 17:01:24.150830 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.151188 master-0 kubenswrapper[10003]: I0216 17:01:24.150855 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:01:24.156397 master-0 kubenswrapper[10003]: I0216 17:01:24.154448 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:01:24.185242 master-0 kubenswrapper[10003]: I0216 17:01:24.184996 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:01:24.220420 master-0 kubenswrapper[10003]: I0216 17:01:24.220390 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:01:24.254370 master-0 kubenswrapper[10003]: I0216 17:01:24.252298 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.254370 master-0 kubenswrapper[10003]: I0216 17:01:24.252379 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.254370 master-0 kubenswrapper[10003]: I0216 17:01:24.252855 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.258607 master-0 kubenswrapper[10003]: I0216 17:01:24.257616 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.263024 master-0 kubenswrapper[10003]: I0216 17:01:24.262993 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.505564 master-0 kubenswrapper[10003]: I0216 17:01:24.505332 10003 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"510990a72db12a97eef2b9c9fbdaec55abf5d52c68ce419a7f5a87a3062f73f1"} Feb 16 17:01:24.505564 master-0 kubenswrapper[10003]: I0216 17:01:24.505387 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"d377c24744b60cc35617b8e88be818c3a9283d990df16a27d5112c6aed9ce981"} Feb 16 17:01:24.506912 master-0 kubenswrapper[10003]: I0216 17:01:24.506863 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"80fdc50e531795c33b265621c0e851281169f624db10a2cc59cfc4a7fd66173e"} Feb 16 17:01:24.855856 master-0 kubenswrapper[10003]: I0216 17:01:24.855727 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:24.980698 master-0 kubenswrapper[10003]: I0216 17:01:24.980638 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:25.111132 master-0 kubenswrapper[10003]: I0216 17:01:25.105371 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"] Feb 16 17:01:25.111132 master-0 kubenswrapper[10003]: I0216 17:01:25.107427 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"] Feb 16 17:01:25.111132 master-0 kubenswrapper[10003]: I0216 17:01:25.108950 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-cb4f7b4cf-6qrw5"] Feb 16 17:01:25.122244 master-0 kubenswrapper[10003]: W0216 17:01:25.122178 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a275679_b7b6_4c28_b389_94cd2b014d6c.slice/crio-fbcae0a406fe6ca88ac22ca96551fc1de219ee3e9705034ce16efd7971fc9fed WatchSource:0}: Error finding container fbcae0a406fe6ca88ac22ca96551fc1de219ee3e9705034ce16efd7971fc9fed: Status 404 returned error can't find the container with id fbcae0a406fe6ca88ac22ca96551fc1de219ee3e9705034ce16efd7971fc9fed Feb 16 17:01:25.125296 master-0 kubenswrapper[10003]: W0216 17:01:25.125255 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2511146_1d04_4ecd_a28e_79662ef7b9d3.slice/crio-fbbdab5ef2164d5b878fbbf6c9e025a67f22281db5fb649f6dbfc4b829160d91 WatchSource:0}: Error finding container fbbdab5ef2164d5b878fbbf6c9e025a67f22281db5fb649f6dbfc4b829160d91: Status 404 returned error can't find the container with id fbbdab5ef2164d5b878fbbf6c9e025a67f22281db5fb649f6dbfc4b829160d91 Feb 16 17:01:25.516627 master-0 kubenswrapper[10003]: I0216 17:01:25.516561 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerStarted","Data":"976b039c9b06af0f3723d83d4469ee022692218ee590a3983454ac89413005ba"} Feb 16 17:01:25.516627 master-0 kubenswrapper[10003]: I0216 17:01:25.516604 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerStarted","Data":"f1e81c2caa02917ae2e1efaeab30f34c00bb80423dce6819a41e6640d4fdc6d5"} Feb 16 17:01:25.517718 master-0 kubenswrapper[10003]: I0216 17:01:25.517678 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerStarted","Data":"fbbdab5ef2164d5b878fbbf6c9e025a67f22281db5fb649f6dbfc4b829160d91"} Feb 16 17:01:25.518999 master-0 kubenswrapper[10003]: I0216 17:01:25.518467 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" event={"ID":"5a275679-b7b6-4c28-b389-94cd2b014d6c","Type":"ContainerStarted","Data":"fbcae0a406fe6ca88ac22ca96551fc1de219ee3e9705034ce16efd7971fc9fed"} Feb 16 17:01:26.292636 master-0 kubenswrapper[10003]: I0216 17:01:26.292589 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"] Feb 16 17:01:26.523708 master-0 kubenswrapper[10003]: I0216 17:01:26.523639 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:26.547852 master-0 kubenswrapper[10003]: I0216 17:01:26.547642 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"] Feb 16 17:01:26.552477 master-0 kubenswrapper[10003]: I0216 17:01:26.551353 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:01:26.552477 master-0 kubenswrapper[10003]: I0216 17:01:26.551448 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.580979 master-0 kubenswrapper[10003]: I0216 17:01:26.564572 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 17:01:26.580979 master-0 kubenswrapper[10003]: I0216 17:01:26.564856 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:01:26.580979 master-0 kubenswrapper[10003]: I0216 17:01:26.565028 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:01:26.580979 master-0 kubenswrapper[10003]: I0216 17:01:26.579520 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"] Feb 16 17:01:26.592590 master-0 kubenswrapper[10003]: I0216 17:01:26.591673 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 17:01:26.592590 master-0 kubenswrapper[10003]: I0216 17:01:26.591753 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj" Feb 16 17:01:26.592590 master-0 kubenswrapper[10003]: I0216 17:01:26.591989 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 17:01:26.684941 master-0 kubenswrapper[10003]: I0216 17:01:26.683956 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podStartSLOduration=3.6839405530000002 podStartE2EDuration="3.683940553s" podCreationTimestamp="2026-02-16 17:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:26.61085234 +0000 UTC m=+56.126338031" watchObservedRunningTime="2026-02-16 17:01:26.683940553 +0000 UTC m=+56.199426224" Feb 16 17:01:26.696894 master-0 kubenswrapper[10003]: I0216 17:01:26.696857 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.697120 master-0 kubenswrapper[10003]: I0216 17:01:26.697103 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.697219 master-0 kubenswrapper[10003]: I0216 17:01:26.697207 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " 
pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.697323 master-0 kubenswrapper[10003]: I0216 17:01:26.697309 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.761990 master-0 kubenswrapper[10003]: I0216 17:01:26.761848 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm"] Feb 16 17:01:26.774110 master-0 kubenswrapper[10003]: I0216 17:01:26.762693 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:26.774110 master-0 kubenswrapper[10003]: I0216 17:01:26.768653 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 17:01:26.774110 master-0 kubenswrapper[10003]: I0216 17:01:26.768796 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:01:26.774110 master-0 kubenswrapper[10003]: I0216 17:01:26.768938 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2" Feb 16 17:01:26.774110 master-0 kubenswrapper[10003]: I0216 17:01:26.769050 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:01:26.774110 master-0 kubenswrapper[10003]: I0216 17:01:26.769211 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 17:01:26.774110 master-0 kubenswrapper[10003]: I0216 17:01:26.769302 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 17:01:26.793095 master-0 kubenswrapper[10003]: I0216 17:01:26.790628 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"] Feb 16 17:01:26.793095 master-0 kubenswrapper[10003]: I0216 17:01:26.790824 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" podUID="8f5c4cde-289c-49d9-9b17-176c368267d2" containerName="controller-manager" containerID="cri-o://af85a978226a1ad38f26ff25527b186b68bc9fc211af6ef3a866e7c1644f287f" gracePeriod=30 Feb 16 17:01:26.798642 master-0 kubenswrapper[10003]: I0216 17:01:26.798537 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.798642 master-0 kubenswrapper[10003]: I0216 17:01:26.798594 10003 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.798642 master-0 kubenswrapper[10003]: I0216 17:01:26.798615 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.798642 master-0 kubenswrapper[10003]: I0216 17:01:26.798647 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.805838 master-0 kubenswrapper[10003]: I0216 17:01:26.801216 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.805838 master-0 kubenswrapper[10003]: I0216 17:01:26.802541 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.805838 master-0 kubenswrapper[10003]: I0216 17:01:26.802789 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.819698 master-0 kubenswrapper[10003]: I0216 17:01:26.819646 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:26.841478 master-0 kubenswrapper[10003]: I0216 17:01:26.841386 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r"] Feb 16 17:01:26.842121 master-0 kubenswrapper[10003]: I0216 17:01:26.842088 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" podUID="4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" containerName="route-controller-manager" 
containerID="cri-o://97cbf0dab61f16f4856e8045183318000795ee2cded73dac7a1a281cb2b7e077" gracePeriod=30 Feb 16 17:01:26.909735 master-0 kubenswrapper[10003]: I0216 17:01:26.909689 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:26.909904 master-0 kubenswrapper[10003]: I0216 17:01:26.909748 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnmbv\" (UniqueName: \"kubernetes.io/projected/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-kube-api-access-nnmbv\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:26.909904 master-0 kubenswrapper[10003]: I0216 17:01:26.909779 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:26.909904 master-0 kubenswrapper[10003]: I0216 17:01:26.909811 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:26.909904 master-0 kubenswrapper[10003]: I0216 17:01:26.909840 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:26.932651 master-0 kubenswrapper[10003]: I0216 17:01:26.932596 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:01:27.011333 master-0 kubenswrapper[10003]: I0216 17:01:27.011176 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.011333 master-0 kubenswrapper[10003]: I0216 17:01:27.011264 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnmbv\" (UniqueName: \"kubernetes.io/projected/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-kube-api-access-nnmbv\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.011333 master-0 kubenswrapper[10003]: I0216 17:01:27.011298 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.011601 master-0 kubenswrapper[10003]: I0216 17:01:27.011416 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.011601 master-0 kubenswrapper[10003]: I0216 17:01:27.011450 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.011601 master-0 kubenswrapper[10003]: I0216 17:01:27.011533 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.012470 master-0 kubenswrapper[10003]: I0216 17:01:27.012438 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.012536 master-0 kubenswrapper[10003]: I0216 17:01:27.012482 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-images\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.016599 master-0 kubenswrapper[10003]: I0216 17:01:27.016540 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.028502 master-0 kubenswrapper[10003]: I0216 17:01:27.028432 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnmbv\" (UniqueName: \"kubernetes.io/projected/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-kube-api-access-nnmbv\") pod \"cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.076258 master-0 kubenswrapper[10003]: I0216 17:01:27.074208 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"] Feb 16 17:01:27.076258 master-0 kubenswrapper[10003]: I0216 17:01:27.075111 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.081205 master-0 kubenswrapper[10003]: I0216 17:01:27.080123 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 17:01:27.081205 master-0 kubenswrapper[10003]: I0216 17:01:27.080333 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 17:01:27.081205 master-0 kubenswrapper[10003]: I0216 17:01:27.080577 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:27.083064 master-0 kubenswrapper[10003]: I0216 17:01:27.082158 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4" Feb 16 17:01:27.083064 master-0 kubenswrapper[10003]: I0216 17:01:27.082711 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 17:01:27.095633 master-0 kubenswrapper[10003]: I0216 17:01:27.095594 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"] Feb 16 17:01:27.222984 master-0 kubenswrapper[10003]: I0216 17:01:27.220047 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.222984 master-0 kubenswrapper[10003]: I0216 17:01:27.220239 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.222984 master-0 kubenswrapper[10003]: I0216 17:01:27.220275 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.222984 master-0 kubenswrapper[10003]: I0216 17:01:27.220309 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.322150 master-0 kubenswrapper[10003]: I0216 17:01:27.322083 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.322150 master-0 kubenswrapper[10003]: I0216 17:01:27.322148 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.322343 master-0 kubenswrapper[10003]: I0216 17:01:27.322191 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.322343 master-0 kubenswrapper[10003]: I0216 17:01:27.322265 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.323211 master-0 kubenswrapper[10003]: E0216 17:01:27.323171 10003 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: secret "machine-api-operator-tls" not found Feb 16 17:01:27.323261 master-0 kubenswrapper[10003]: E0216 17:01:27.323251 10003 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:01:27.823234195 +0000 UTC m=+57.338719866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : secret "machine-api-operator-tls" not found Feb 16 17:01:27.323595 master-0 kubenswrapper[10003]: I0216 17:01:27.323558 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.323653 master-0 kubenswrapper[10003]: I0216 17:01:27.323592 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.530256 master-0 kubenswrapper[10003]: I0216 17:01:27.530204 10003 generic.go:334] "Generic (PLEG): container finished" podID="8f5c4cde-289c-49d9-9b17-176c368267d2" containerID="af85a978226a1ad38f26ff25527b186b68bc9fc211af6ef3a866e7c1644f287f" exitCode=0 Feb 16 17:01:27.530838 master-0 kubenswrapper[10003]: I0216 17:01:27.530274 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" event={"ID":"8f5c4cde-289c-49d9-9b17-176c368267d2","Type":"ContainerDied","Data":"af85a978226a1ad38f26ff25527b186b68bc9fc211af6ef3a866e7c1644f287f"} Feb 16 17:01:27.532003 master-0 kubenswrapper[10003]: I0216 17:01:27.531969 10003 generic.go:334] "Generic (PLEG): container finished" podID="4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" containerID="97cbf0dab61f16f4856e8045183318000795ee2cded73dac7a1a281cb2b7e077" exitCode=0 Feb 16 17:01:27.532077 master-0 kubenswrapper[10003]: I0216 17:01:27.532006 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" 
event={"ID":"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d","Type":"ContainerDied","Data":"97cbf0dab61f16f4856e8045183318000795ee2cded73dac7a1a281cb2b7e077"} Feb 16 17:01:27.623947 master-0 kubenswrapper[10003]: I0216 17:01:27.615817 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.827713 master-0 kubenswrapper[10003]: I0216 17:01:27.827654 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.831153 master-0 kubenswrapper[10003]: I0216 17:01:27.831106 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:27.917357 master-0 kubenswrapper[10003]: I0216 17:01:27.917308 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"] Feb 16 17:01:27.919128 master-0 kubenswrapper[10003]: I0216 17:01:27.919092 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:27.921026 master-0 kubenswrapper[10003]: I0216 17:01:27.920986 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 17:01:27.931376 master-0 kubenswrapper[10003]: I0216 17:01:27.930168 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"] Feb 16 17:01:28.026032 master-0 kubenswrapper[10003]: I0216 17:01:28.025959 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:01:28.029953 master-0 kubenswrapper[10003]: I0216 17:01:28.029887 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.030201 master-0 kubenswrapper[10003]: I0216 17:01:28.030138 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.030332 master-0 kubenswrapper[10003]: I0216 17:01:28.030287 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.030457 master-0 kubenswrapper[10003]: I0216 17:01:28.030430 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.131844 master-0 kubenswrapper[10003]: I0216 17:01:28.131716 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.131844 master-0 kubenswrapper[10003]: I0216 17:01:28.131803 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.132099 master-0 kubenswrapper[10003]: I0216 17:01:28.131849 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.132099 master-0 kubenswrapper[10003]: I0216 17:01:28.131898 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " 
pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.133807 master-0 kubenswrapper[10003]: I0216 17:01:28.132458 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.136590 master-0 kubenswrapper[10003]: I0216 17:01:28.136567 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.136877 master-0 kubenswrapper[10003]: I0216 17:01:28.136849 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.148470 master-0 kubenswrapper[10003]: I0216 17:01:28.148434 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.242318 master-0 kubenswrapper[10003]: I0216 17:01:28.242257 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:28.538126 master-0 kubenswrapper[10003]: I0216 17:01:28.538067 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerStarted","Data":"960647e5dd274ec370d3ea843747f832b88bbc5e8bbea57e384a265bf5609dcc"} Feb 16 17:01:29.672342 master-0 kubenswrapper[10003]: I0216 17:01:29.672120 10003 patch_prober.go:28] interesting pod/route-controller-manager-6d88b87bb8-wfs4r container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 17:01:29.672342 master-0 kubenswrapper[10003]: I0216 17:01:29.672262 10003 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" podUID="4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.41:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:01:30.452009 master-0 kubenswrapper[10003]: I0216 17:01:30.451600 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:30.459695 master-0 kubenswrapper[10003]: I0216 17:01:30.459641 10003 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:30.480349 master-0 kubenswrapper[10003]: I0216 17:01:30.480271 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"] Feb 16 17:01:30.480683 master-0 kubenswrapper[10003]: E0216 17:01:30.480505 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" containerName="route-controller-manager" Feb 16 17:01:30.480683 master-0 kubenswrapper[10003]: I0216 17:01:30.480521 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" containerName="route-controller-manager" Feb 16 17:01:30.480683 master-0 kubenswrapper[10003]: E0216 17:01:30.480531 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5c4cde-289c-49d9-9b17-176c368267d2" containerName="controller-manager" Feb 16 17:01:30.480683 master-0 kubenswrapper[10003]: I0216 17:01:30.480537 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5c4cde-289c-49d9-9b17-176c368267d2" containerName="controller-manager" Feb 16 17:01:30.480683 master-0 kubenswrapper[10003]: I0216 17:01:30.480630 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" containerName="route-controller-manager" Feb 16 17:01:30.480683 master-0 kubenswrapper[10003]: I0216 17:01:30.480670 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5c4cde-289c-49d9-9b17-176c368267d2" containerName="controller-manager" Feb 16 17:01:30.481077 master-0 kubenswrapper[10003]: I0216 17:01:30.481050 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.483075 master-0 kubenswrapper[10003]: I0216 17:01:30.483018 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8" Feb 16 17:01:30.500372 master-0 kubenswrapper[10003]: I0216 17:01:30.500319 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"] Feb 16 17:01:30.549625 master-0 kubenswrapper[10003]: I0216 17:01:30.549551 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" event={"ID":"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d","Type":"ContainerDied","Data":"f2f8872c1e11a1b425867c8d5c4a87bd6af6c98273220473b1408998fe1195b7"} Feb 16 17:01:30.549625 master-0 kubenswrapper[10003]: I0216 17:01:30.549608 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r" Feb 16 17:01:30.549625 master-0 kubenswrapper[10003]: I0216 17:01:30.549627 10003 scope.go:117] "RemoveContainer" containerID="97cbf0dab61f16f4856e8045183318000795ee2cded73dac7a1a281cb2b7e077" Feb 16 17:01:30.552185 master-0 kubenswrapper[10003]: I0216 17:01:30.551783 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" event={"ID":"8f5c4cde-289c-49d9-9b17-176c368267d2","Type":"ContainerDied","Data":"843cba8436efc8a69794429be90a7519d875093d79d2def3641614c864e2b2dd"} Feb 16 17:01:30.552185 master-0 kubenswrapper[10003]: I0216 17:01:30.551882 10003 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb7f5cc48-l2768" Feb 16 17:01:30.565995 master-0 kubenswrapper[10003]: I0216 17:01:30.565955 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5c4cde-289c-49d9-9b17-176c368267d2-serving-cert\") pod \"8f5c4cde-289c-49d9-9b17-176c368267d2\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " Feb 16 17:01:30.566134 master-0 kubenswrapper[10003]: I0216 17:01:30.566057 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-client-ca\") pod \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " Feb 16 17:01:30.566172 master-0 kubenswrapper[10003]: I0216 17:01:30.566140 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-config\") pod \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566268 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5g4z\" (UniqueName: \"kubernetes.io/projected/8f5c4cde-289c-49d9-9b17-176c368267d2-kube-api-access-c5g4z\") pod \"8f5c4cde-289c-49d9-9b17-176c368267d2\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566321 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-serving-cert\") pod \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566357 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-client-ca\") pod \"8f5c4cde-289c-49d9-9b17-176c368267d2\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566385 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rsvb\" (UniqueName: \"kubernetes.io/projected/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-kube-api-access-7rsvb\") pod \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\" (UID: \"4fa68fac-a6bc-461c-8edb-cf4e6a1a802d\") " Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566421 10003 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-proxy-ca-bundles\") pod \"8f5c4cde-289c-49d9-9b17-176c368267d2\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566437 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-config\") pod \"8f5c4cde-289c-49d9-9b17-176c368267d2\" (UID: \"8f5c4cde-289c-49d9-9b17-176c368267d2\") " Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566585 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-client-ca" (OuterVolumeSpecName: "client-ca") pod "4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" (UID: "4fa68fac-a6bc-461c-8edb-cf4e6a1a802d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566613 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566682 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566697 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-config" (OuterVolumeSpecName: "config") pod "4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" (UID: "4fa68fac-a6bc-461c-8edb-cf4e6a1a802d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:30.566837 master-0 kubenswrapper[10003]: I0216 17:01:30.566707 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.567280 master-0 kubenswrapper[10003]: I0216 17:01:30.566936 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.567280 master-0 kubenswrapper[10003]: I0216 17:01:30.567058 10003 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:30.567280 master-0 kubenswrapper[10003]: I0216 17:01:30.567075 10003 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:30.567280 master-0 kubenswrapper[10003]: I0216 17:01:30.567254 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-config" (OuterVolumeSpecName: "config") pod "8f5c4cde-289c-49d9-9b17-176c368267d2" (UID: "8f5c4cde-289c-49d9-9b17-176c368267d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:30.567433 master-0 kubenswrapper[10003]: I0216 17:01:30.567328 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8f5c4cde-289c-49d9-9b17-176c368267d2" (UID: "8f5c4cde-289c-49d9-9b17-176c368267d2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:30.567664 master-0 kubenswrapper[10003]: I0216 17:01:30.567632 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "8f5c4cde-289c-49d9-9b17-176c368267d2" (UID: "8f5c4cde-289c-49d9-9b17-176c368267d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:30.569138 master-0 kubenswrapper[10003]: I0216 17:01:30.569110 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5c4cde-289c-49d9-9b17-176c368267d2-kube-api-access-c5g4z" (OuterVolumeSpecName: "kube-api-access-c5g4z") pod "8f5c4cde-289c-49d9-9b17-176c368267d2" (UID: "8f5c4cde-289c-49d9-9b17-176c368267d2"). InnerVolumeSpecName "kube-api-access-c5g4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:30.569331 master-0 kubenswrapper[10003]: I0216 17:01:30.569281 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5c4cde-289c-49d9-9b17-176c368267d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8f5c4cde-289c-49d9-9b17-176c368267d2" (UID: "8f5c4cde-289c-49d9-9b17-176c368267d2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:01:30.569381 master-0 kubenswrapper[10003]: I0216 17:01:30.569290 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-kube-api-access-7rsvb" (OuterVolumeSpecName: "kube-api-access-7rsvb") pod "4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" (UID: "4fa68fac-a6bc-461c-8edb-cf4e6a1a802d"). InnerVolumeSpecName "kube-api-access-7rsvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:30.569636 master-0 kubenswrapper[10003]: I0216 17:01:30.569596 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" (UID: "4fa68fac-a6bc-461c-8edb-cf4e6a1a802d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:01:30.585179 master-0 kubenswrapper[10003]: I0216 17:01:30.585133 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Feb 16 17:01:30.585380 master-0 kubenswrapper[10003]: I0216 17:01:30.585326 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="7fe1c16d-061a-4a57-aea4-cf1d4b24d02f" containerName="installer" containerID="cri-o://90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99" gracePeriod=30 Feb 16 17:01:30.667912 master-0 kubenswrapper[10003]: I0216 17:01:30.667804 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.667912 master-0 kubenswrapper[10003]: I0216 17:01:30.667857 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.667912 master-0 kubenswrapper[10003]: I0216 17:01:30.667883 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.668181 master-0 kubenswrapper[10003]: I0216 17:01:30.667939 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: 
\"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.668181 master-0 kubenswrapper[10003]: I0216 17:01:30.667973 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5g4z\" (UniqueName: \"kubernetes.io/projected/8f5c4cde-289c-49d9-9b17-176c368267d2-kube-api-access-c5g4z\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:30.668181 master-0 kubenswrapper[10003]: I0216 17:01:30.667984 10003 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:30.668181 master-0 kubenswrapper[10003]: I0216 17:01:30.667994 10003 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:30.668181 master-0 kubenswrapper[10003]: I0216 17:01:30.668002 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rsvb\" (UniqueName: \"kubernetes.io/projected/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d-kube-api-access-7rsvb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:30.668181 master-0 kubenswrapper[10003]: I0216 17:01:30.668011 10003 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:30.668181 master-0 kubenswrapper[10003]: I0216 17:01:30.668019 10003 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5c4cde-289c-49d9-9b17-176c368267d2-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:30.668181 master-0 kubenswrapper[10003]: I0216 17:01:30.668027 10003 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5c4cde-289c-49d9-9b17-176c368267d2-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:30.669073 master-0 kubenswrapper[10003]: I0216 17:01:30.669039 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.669575 master-0 kubenswrapper[10003]: I0216 17:01:30.669523 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.671240 master-0 kubenswrapper[10003]: I0216 17:01:30.671163 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " 
pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.690839 master-0 kubenswrapper[10003]: I0216 17:01:30.690789 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.808875 master-0 kubenswrapper[10003]: I0216 17:01:30.808555 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8" Feb 16 17:01:30.817202 master-0 kubenswrapper[10003]: I0216 17:01:30.817148 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:30.940940 master-0 kubenswrapper[10003]: I0216 17:01:30.939611 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r"] Feb 16 17:01:30.943171 master-0 kubenswrapper[10003]: I0216 17:01:30.943131 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d88b87bb8-wfs4r"] Feb 16 17:01:30.952252 master-0 kubenswrapper[10003]: I0216 17:01:30.951483 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"] Feb 16 17:01:30.953801 master-0 kubenswrapper[10003]: I0216 17:01:30.953753 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cb7f5cc48-l2768"] Feb 16 17:01:31.412986 master-0 kubenswrapper[10003]: I0216 17:01:31.412912 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"] Feb 16 17:01:31.557073 master-0 kubenswrapper[10003]: I0216 17:01:31.556860 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" event={"ID":"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7","Type":"ContainerStarted","Data":"2882ab80b1900a943d704e6e04c01d1cc470047c64e522580e4e2608630c5a4a"} Feb 16 17:01:32.803943 master-0 kubenswrapper[10003]: I0216 17:01:32.803815 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa68fac-a6bc-461c-8edb-cf4e6a1a802d" path="/var/lib/kubelet/pods/4fa68fac-a6bc-461c-8edb-cf4e6a1a802d/volumes" Feb 16 17:01:32.804748 master-0 kubenswrapper[10003]: I0216 17:01:32.804719 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5c4cde-289c-49d9-9b17-176c368267d2" path="/var/lib/kubelet/pods/8f5c4cde-289c-49d9-9b17-176c368267d2/volumes" Feb 16 17:01:33.237063 master-0 kubenswrapper[10003]: I0216 17:01:33.236159 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"] Feb 16 17:01:33.237063 master-0 kubenswrapper[10003]: I0216 17:01:33.237034 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.241705 master-0 kubenswrapper[10003]: I0216 17:01:33.241648 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:01:33.241992 master-0 kubenswrapper[10003]: I0216 17:01:33.241699 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:01:33.241992 master-0 kubenswrapper[10003]: I0216 17:01:33.241905 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:01:33.241992 master-0 kubenswrapper[10003]: I0216 17:01:33.241961 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:01:33.242276 master-0 kubenswrapper[10003]: I0216 17:01:33.241911 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:01:33.242534 master-0 kubenswrapper[10003]: I0216 17:01:33.242496 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn" Feb 16 17:01:33.247955 master-0 kubenswrapper[10003]: I0216 17:01:33.247881 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 16 17:01:33.250411 master-0 kubenswrapper[10003]: I0216 17:01:33.250373 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.254686 master-0 kubenswrapper[10003]: I0216 17:01:33.254621 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-qlqr4" Feb 16 17:01:33.257794 master-0 kubenswrapper[10003]: I0216 17:01:33.257745 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:01:33.407338 master-0 kubenswrapper[10003]: I0216 17:01:33.407247 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.407338 master-0 kubenswrapper[10003]: I0216 17:01:33.407311 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.407338 master-0 kubenswrapper[10003]: I0216 17:01:33.407335 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.407632 master-0 kubenswrapper[10003]: I0216 17:01:33.407438 10003 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.407632 master-0 kubenswrapper[10003]: I0216 17:01:33.407575 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-var-lock\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.407698 master-0 kubenswrapper[10003]: I0216 17:01:33.407684 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.407788 master-0 kubenswrapper[10003]: I0216 17:01:33.407750 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.407939 master-0 kubenswrapper[10003]: I0216 17:01:33.407871 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.472668 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.475781 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"] Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.500411 10003 scope.go:117] "RemoveContainer" containerID="af85a978226a1ad38f26ff25527b186b68bc9fc211af6ef3a866e7c1644f287f" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.508801 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.508843 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " 
pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.508872 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.508900 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.508940 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-var-lock\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.508981 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.509012 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.509040 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.518881 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-var-lock\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.519561 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.519634 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.520293 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.526643 master-0 kubenswrapper[10003]: I0216 17:01:33.520706 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.549948 master-0 kubenswrapper[10003]: I0216 17:01:33.533270 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.552655 master-0 kubenswrapper[10003]: I0216 17:01:33.552605 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.587265 master-0 kubenswrapper[10003]: I0216 17:01:33.587145 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.593284 master-0 kubenswrapper[10003]: I0216 17:01:33.589198 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:33.593284 master-0 kubenswrapper[10003]: I0216 17:01:33.589557 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"f669ceacf6e4215d33879fd75925e984def643e57187c462c685b966c75f2673"} Feb 16 17:01:33.620283 master-0 kubenswrapper[10003]: I0216 17:01:33.609274 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:01:33.983618 master-0 kubenswrapper[10003]: I0216 17:01:33.973764 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"] Feb 16 17:01:34.146129 master-0 kubenswrapper[10003]: I0216 17:01:34.142016 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"] Feb 16 17:01:34.200436 master-0 kubenswrapper[10003]: I0216 17:01:34.200386 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"] Feb 16 17:01:34.303106 master-0 kubenswrapper[10003]: I0216 17:01:34.302972 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Feb 16 17:01:34.333891 master-0 kubenswrapper[10003]: I0216 17:01:34.333492 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"] Feb 16 17:01:34.598122 master-0 kubenswrapper[10003]: I0216 17:01:34.598072 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerStarted","Data":"abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad"} Feb 16 17:01:34.598122 master-0 kubenswrapper[10003]: I0216 17:01:34.598117 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerStarted","Data":"95008d005493fc2ada0d9b7ff7c718284548b7f519269f9c8d8a7c1fae08fbf6"} Feb 16 17:01:34.598400 master-0 kubenswrapper[10003]: I0216 17:01:34.598311 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:34.602139 master-0 kubenswrapper[10003]: I0216 17:01:34.602030 10003 patch_prober.go:28] interesting pod/controller-manager-7fc9897cf8-9rjwd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" start-of-body= Feb 16 17:01:34.602139 master-0 kubenswrapper[10003]: I0216 17:01:34.602088 10003 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" Feb 16 17:01:34.615138 master-0 kubenswrapper[10003]: I0216 17:01:34.615091 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerStarted","Data":"1a75bfcb3d6ee6e289b7323fbc3d24c63e7fcd67393fd211cfa30edcae278f7a"} Feb 16 17:01:34.626409 master-0 kubenswrapper[10003]: I0216 17:01:34.626350 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"c7a678a1566dce1a83b3b33b3d0dd73aa2c7ba1c17bac97e5cf444e5f241b28a"} Feb 16 17:01:34.626409 
master-0 kubenswrapper[10003]: I0216 17:01:34.626399 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"836a8b0540247df6e45c7363dec062ae3f1c759c61215fa36d1a8c35a0e755fb"} Feb 16 17:01:34.634162 master-0 kubenswrapper[10003]: I0216 17:01:34.634112 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" event={"ID":"5a275679-b7b6-4c28-b389-94cd2b014d6c","Type":"ContainerStarted","Data":"e6cda4ab3c9867af7a01fb5a090799b3598cbcc97267a527ff61d80c779d1d83"} Feb 16 17:01:34.636817 master-0 kubenswrapper[10003]: I0216 17:01:34.636120 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"4a3fbb1a388ca141e061ddd3f456a30e0ea19e4b3d5d971ef21b891853ddad88"} Feb 16 17:01:34.641238 master-0 kubenswrapper[10003]: I0216 17:01:34.641185 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"b0576ce377a5cee2ae182a3190bd7d01c4057d29cbcd5c8c32f7d95440a684f0"} Feb 16 17:01:34.641238 master-0 kubenswrapper[10003]: I0216 17:01:34.641231 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"45b31aa01f6a85e0dcf85670319be85b9e6d0c112d9bd0004ef655a9654d75f6"} Feb 16 17:01:34.642616 master-0 kubenswrapper[10003]: I0216 17:01:34.642529 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"f779b4dca39b490013fbc325e5d662f7110932224ce922a8167a1dd7f4ad51ac"} Feb 16 17:01:34.645805 master-0 kubenswrapper[10003]: I0216 17:01:34.644097 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerStarted","Data":"7ac85a9051d1d62aecdb0aea9f364c312e41f09a8b1c3d2e9cdedd31994406f5"} Feb 16 17:01:34.645805 master-0 kubenswrapper[10003]: I0216 17:01:34.644125 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerStarted","Data":"a8742926579beb3bc6f4cff1fa7c25f0bdd68039ed37e9a331f3e110c7838ff1"} Feb 16 17:01:34.645805 master-0 kubenswrapper[10003]: I0216 17:01:34.644696 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:34.646158 master-0 kubenswrapper[10003]: I0216 17:01:34.646117 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerStarted","Data":"7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9"} Feb 16 17:01:34.646158 master-0 kubenswrapper[10003]: I0216 17:01:34.646146 10003 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerStarted","Data":"e596a971faed7fb65d78d19abb83585c95e9a5de18c154df5de65c3d54692d18"} Feb 16 17:01:34.646375 master-0 kubenswrapper[10003]: I0216 17:01:34.646343 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:34.648645 master-0 kubenswrapper[10003]: I0216 17:01:34.648600 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerStarted","Data":"22751fbbdf7aa3224dae4e546afa76f6812f3b8e22c34ed3ba395d1643038f1f"} Feb 16 17:01:34.650949 master-0 kubenswrapper[10003]: I0216 17:01:34.649341 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:34.652162 master-0 kubenswrapper[10003]: I0216 17:01:34.652072 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"32282e3210e204263457f22f7fb6c9b2c61db1832f983d1236a1034b1a5140d4"} Feb 16 17:01:34.652162 master-0 kubenswrapper[10003]: I0216 17:01:34.652098 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"e33dd133299981fac9e32c9766093c6f93957b6afe0a293539ddeda20c06cf82"} Feb 16 17:01:34.653485 master-0 kubenswrapper[10003]: I0216 17:01:34.653447 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"5d39ed24-4301-4cea-8a42-a08f4ba8b479","Type":"ContainerStarted","Data":"4b0c06fa22c4b9fdff535e3051201dc0bd36447aab43eba5f5549b527b9cff7e"} Feb 16 17:01:34.655209 master-0 kubenswrapper[10003]: I0216 17:01:34.655105 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"4680b8c2d5e31d1d35cc0e3e5320c2ad6ac1474aaaf6f05440e71e203962ad7d"} Feb 16 17:01:34.655209 master-0 kubenswrapper[10003]: I0216 17:01:34.655127 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"a4258db92ccb0e5bfb5051d02b4ac371ae71dd3a55d7950001a7b771cb5d1c29"} Feb 16 17:01:34.655727 master-0 kubenswrapper[10003]: I0216 17:01:34.655693 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:01:34.717947 master-0 kubenswrapper[10003]: I0216 17:01:34.717859 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podStartSLOduration=8.717837793 podStartE2EDuration="8.717837793s" podCreationTimestamp="2026-02-16 17:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
17:01:34.716423665 +0000 UTC m=+64.231909336" watchObservedRunningTime="2026-02-16 17:01:34.717837793 +0000 UTC m=+64.233323484" Feb 16 17:01:34.753004 master-0 kubenswrapper[10003]: I0216 17:01:34.752116 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podStartSLOduration=2.869947228 podStartE2EDuration="12.752097627s" podCreationTimestamp="2026-02-16 17:01:22 +0000 UTC" firstStartedPulling="2026-02-16 17:01:23.813906692 +0000 UTC m=+53.329392363" lastFinishedPulling="2026-02-16 17:01:33.696057091 +0000 UTC m=+63.211542762" observedRunningTime="2026-02-16 17:01:34.750423502 +0000 UTC m=+64.265909183" watchObservedRunningTime="2026-02-16 17:01:34.752097627 +0000 UTC m=+64.267583298" Feb 16 17:01:34.854999 master-0 kubenswrapper[10003]: I0216 17:01:34.854791 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podStartSLOduration=8.854774447 podStartE2EDuration="8.854774447s" podCreationTimestamp="2026-02-16 17:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.793350382 +0000 UTC m=+64.308836063" watchObservedRunningTime="2026-02-16 17:01:34.854774447 +0000 UTC m=+64.370260118" Feb 16 17:01:34.863947 master-0 kubenswrapper[10003]: I0216 17:01:34.856233 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podStartSLOduration=5.166511125 podStartE2EDuration="13.856223527s" podCreationTimestamp="2026-02-16 17:01:21 +0000 UTC" firstStartedPulling="2026-02-16 17:01:22.501120096 +0000 UTC m=+52.016605757" lastFinishedPulling="2026-02-16 17:01:31.190832488 +0000 UTC m=+60.706318159" observedRunningTime="2026-02-16 17:01:34.854087508 +0000 UTC m=+64.369573190" watchObservedRunningTime="2026-02-16 17:01:34.856223527 +0000 UTC m=+64.371709198" Feb 16 17:01:34.894629 master-0 kubenswrapper[10003]: I0216 17:01:34.892027 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n7kjr"] Feb 16 17:01:34.894629 master-0 kubenswrapper[10003]: I0216 17:01:34.893102 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podStartSLOduration=5.381846374 podStartE2EDuration="12.893054761s" podCreationTimestamp="2026-02-16 17:01:22 +0000 UTC" firstStartedPulling="2026-02-16 17:01:23.690550279 +0000 UTC m=+53.206035950" lastFinishedPulling="2026-02-16 17:01:31.201758666 +0000 UTC m=+60.717244337" observedRunningTime="2026-02-16 17:01:34.88750201 +0000 UTC m=+64.402987681" watchObservedRunningTime="2026-02-16 17:01:34.893054761 +0000 UTC m=+64.408540432" Feb 16 17:01:34.894629 master-0 kubenswrapper[10003]: I0216 17:01:34.893259 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:01:34.894629 master-0 kubenswrapper[10003]: I0216 17:01:34.893356 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:34.909945 master-0 kubenswrapper[10003]: I0216 17:01:34.905050 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7kjr"] Feb 16 17:01:34.919953 master-0 kubenswrapper[10003]: I0216 17:01:34.919668 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podStartSLOduration=10.919648566 podStartE2EDuration="10.919648566s" podCreationTimestamp="2026-02-16 17:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.918467594 +0000 UTC m=+64.433953265" watchObservedRunningTime="2026-02-16 17:01:34.919648566 +0000 UTC m=+64.435134237" Feb 16 17:01:34.986933 master-0 kubenswrapper[10003]: I0216 17:01:34.986834 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podStartSLOduration=3.4894123280000002 podStartE2EDuration="11.986816198s" podCreationTimestamp="2026-02-16 17:01:23 +0000 UTC" firstStartedPulling="2026-02-16 17:01:25.130624087 +0000 UTC m=+54.646109748" lastFinishedPulling="2026-02-16 17:01:33.628027947 +0000 UTC m=+63.143513618" observedRunningTime="2026-02-16 17:01:34.957662923 +0000 UTC m=+64.473148594" watchObservedRunningTime="2026-02-16 17:01:34.986816198 +0000 UTC m=+64.502301869" Feb 16 17:01:34.987335 master-0 kubenswrapper[10003]: I0216 17:01:34.987046 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podStartSLOduration=7.987040804 podStartE2EDuration="7.987040804s" podCreationTimestamp="2026-02-16 17:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:34.986266693 +0000 UTC m=+64.501752364" watchObservedRunningTime="2026-02-16 17:01:34.987040804 +0000 UTC m=+64.502526475" Feb 16 17:01:35.049457 master-0 kubenswrapper[10003]: I0216 17:01:35.048427 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podStartSLOduration=9.048405817 podStartE2EDuration="9.048405817s" podCreationTimestamp="2026-02-16 17:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.037835339 +0000 UTC m=+64.553321010" watchObservedRunningTime="2026-02-16 17:01:35.048405817 +0000 UTC m=+64.563891498" Feb 16 17:01:35.050394 master-0 kubenswrapper[10003]: I0216 17:01:35.050351 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-catalog-content\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.050496 master-0 kubenswrapper[10003]: I0216 17:01:35.050472 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-utilities\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " 
pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.050527 master-0 kubenswrapper[10003]: I0216 17:01:35.050514 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kkl7"] Feb 16 17:01:35.050564 master-0 kubenswrapper[10003]: I0216 17:01:35.050543 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfkd9\" (UniqueName: \"kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.052502 master-0 kubenswrapper[10003]: I0216 17:01:35.052469 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.067681 master-0 kubenswrapper[10003]: I0216 17:01:35.067626 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kkl7"] Feb 16 17:01:35.076961 master-0 kubenswrapper[10003]: I0216 17:01:35.074785 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podStartSLOduration=3.5433974900000003 podStartE2EDuration="12.074766096s" podCreationTimestamp="2026-02-16 17:01:23 +0000 UTC" firstStartedPulling="2026-02-16 17:01:25.126036022 +0000 UTC m=+54.641521693" lastFinishedPulling="2026-02-16 17:01:33.657404628 +0000 UTC m=+63.172890299" observedRunningTime="2026-02-16 17:01:35.074661933 +0000 UTC m=+64.590147614" watchObservedRunningTime="2026-02-16 17:01:35.074766096 +0000 UTC m=+64.590251767" Feb 16 17:01:35.104718 master-0 kubenswrapper[10003]: I0216 17:01:35.103866 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:01:35.143599 master-0 kubenswrapper[10003]: I0216 17:01:35.143467 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podStartSLOduration=3.386754906 podStartE2EDuration="16.143446529s" podCreationTimestamp="2026-02-16 17:01:19 +0000 UTC" firstStartedPulling="2026-02-16 17:01:20.885351996 +0000 UTC m=+50.400837667" lastFinishedPulling="2026-02-16 17:01:33.642043599 +0000 UTC m=+63.157529290" observedRunningTime="2026-02-16 17:01:35.101081804 +0000 UTC m=+64.616567475" watchObservedRunningTime="2026-02-16 17:01:35.143446529 +0000 UTC m=+64.658932200" Feb 16 17:01:35.153427 master-0 kubenswrapper[10003]: I0216 17:01:35.151748 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-utilities\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.153427 master-0 kubenswrapper[10003]: I0216 17:01:35.151829 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkd9\" (UniqueName: \"kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.153427 master-0 kubenswrapper[10003]: I0216 17:01:35.151863 
10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxhk5\" (UniqueName: \"kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.153427 master-0 kubenswrapper[10003]: I0216 17:01:35.151904 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-catalog-content\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.153427 master-0 kubenswrapper[10003]: I0216 17:01:35.151977 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-catalog-content\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.153427 master-0 kubenswrapper[10003]: I0216 17:01:35.152005 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-utilities\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.153427 master-0 kubenswrapper[10003]: I0216 17:01:35.152493 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-utilities\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.153427 master-0 kubenswrapper[10003]: I0216 17:01:35.153115 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-catalog-content\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.182804 master-0 kubenswrapper[10003]: I0216 17:01:35.182756 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfkd9\" (UniqueName: \"kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.228074 master-0 kubenswrapper[10003]: I0216 17:01:35.227865 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:01:35.265365 master-0 kubenswrapper[10003]: I0216 17:01:35.264274 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-utilities\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.265365 master-0 kubenswrapper[10003]: I0216 17:01:35.264380 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhk5\" (UniqueName: \"kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.265365 master-0 kubenswrapper[10003]: I0216 17:01:35.264435 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-catalog-content\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.265365 master-0 kubenswrapper[10003]: I0216 17:01:35.264857 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-catalog-content\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.265365 master-0 kubenswrapper[10003]: I0216 17:01:35.265174 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-utilities\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.285006 master-0 kubenswrapper[10003]: I0216 17:01:35.284978 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxhk5\" (UniqueName: \"kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.464944 master-0 kubenswrapper[10003]: I0216 17:01:35.464722 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:01:35.611042 master-0 kubenswrapper[10003]: W0216 17:01:35.610973 10003 container.go:586] Failed to update stats for container "/kubepods.slice/kubepods-podcf8ea978_99b0_4957_8f9d_0d074263b235.slice/crio-bcbd7bae801dad72473e19db1c8cfcee94d4e54c1e79aef5d145c86534ef5b38": error while statting cgroup v2: [read /sys/fs/cgroup/kubepods.slice/kubepods-podcf8ea978_99b0_4957_8f9d_0d074263b235.slice/crio-bcbd7bae801dad72473e19db1c8cfcee94d4e54c1e79aef5d145c86534ef5b38/memory.current: no such device], continuing to push stats Feb 16 17:01:35.645442 master-0 kubenswrapper[10003]: W0216 17:01:35.645394 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e51a0d9_d1bd_4b32_9196_5f756b1fa8aa.slice/crio-bd5dcd2c4add7ffc4e409d02664e000a9abb556798c746bb479a7c76fa9d67b8 WatchSource:0}: Error finding container bd5dcd2c4add7ffc4e409d02664e000a9abb556798c746bb479a7c76fa9d67b8: Status 404 returned error can't find the container with id bd5dcd2c4add7ffc4e409d02664e000a9abb556798c746bb479a7c76fa9d67b8 Feb 16 17:01:35.649688 master-0 kubenswrapper[10003]: I0216 17:01:35.647322 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n7kjr"] Feb 16 17:01:35.673791 master-0 kubenswrapper[10003]: I0216 17:01:35.673727 10003 generic.go:334] "Generic (PLEG): container finished" podID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" containerID="8fa44b1ac9949e31fd12e8a885f114d1074a93f335ef9c428586ae9835e14643" exitCode=0 Feb 16 17:01:35.673983 master-0 kubenswrapper[10003]: I0216 17:01:35.673808 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerDied","Data":"8fa44b1ac9949e31fd12e8a885f114d1074a93f335ef9c428586ae9835e14643"} Feb 16 17:01:35.674327 master-0 kubenswrapper[10003]: I0216 17:01:35.674289 10003 scope.go:117] "RemoveContainer" containerID="8fa44b1ac9949e31fd12e8a885f114d1074a93f335ef9c428586ae9835e14643" Feb 16 17:01:35.688250 master-0 kubenswrapper[10003]: I0216 17:01:35.688201 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"5d39ed24-4301-4cea-8a42-a08f4ba8b479","Type":"ContainerStarted","Data":"2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf"} Feb 16 17:01:35.692202 master-0 kubenswrapper[10003]: I0216 17:01:35.691485 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_cf8ea978-99b0-4957-8f9d-0d074263b235/installer/0.log" Feb 16 17:01:35.692202 master-0 kubenswrapper[10003]: I0216 17:01:35.691551 10003 generic.go:334] "Generic (PLEG): container finished" podID="cf8ea978-99b0-4957-8f9d-0d074263b235" containerID="22e732f70ca6f94cfd7098b649223ea91275dcc9a8431d38a472ff475c10f789" exitCode=1 Feb 16 17:01:35.692202 master-0 kubenswrapper[10003]: I0216 17:01:35.691647 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"cf8ea978-99b0-4957-8f9d-0d074263b235","Type":"ContainerDied","Data":"22e732f70ca6f94cfd7098b649223ea91275dcc9a8431d38a472ff475c10f789"} Feb 16 17:01:35.693848 master-0 kubenswrapper[10003]: I0216 17:01:35.693111 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7kjr" 
event={"ID":"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa","Type":"ContainerStarted","Data":"bd5dcd2c4add7ffc4e409d02664e000a9abb556798c746bb479a7c76fa9d67b8"} Feb 16 17:01:35.697980 master-0 kubenswrapper[10003]: E0216 17:01:35.697940 10003 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podcf8ea978_99b0_4957_8f9d_0d074263b235.slice/crio-22e732f70ca6f94cfd7098b649223ea91275dcc9a8431d38a472ff475c10f789.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-podcf8ea978_99b0_4957_8f9d_0d074263b235.slice/crio-bcbd7bae801dad72473e19db1c8cfcee94d4e54c1e79aef5d145c86534ef5b38\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aa57eb4_c511_4ab8_a5d7_385e1ed9ee41.slice/crio-conmon-8fa44b1ac9949e31fd12e8a885f114d1074a93f335ef9c428586ae9835e14643.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:01:35.700860 master-0 kubenswrapper[10003]: I0216 17:01:35.700823 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:01:35.746205 master-0 kubenswrapper[10003]: I0216 17:01:35.746131 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=3.746104452 podStartE2EDuration="3.746104452s" podCreationTimestamp="2026-02-16 17:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:35.710844281 +0000 UTC m=+65.226329952" watchObservedRunningTime="2026-02-16 17:01:35.746104452 +0000 UTC m=+65.261590123" Feb 16 17:01:35.901058 master-0 kubenswrapper[10003]: I0216 17:01:35.901025 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_cf8ea978-99b0-4957-8f9d-0d074263b235/installer/0.log" Feb 16 17:01:35.901344 master-0 kubenswrapper[10003]: I0216 17:01:35.901113 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 17:01:35.921145 master-0 kubenswrapper[10003]: I0216 17:01:35.921034 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kkl7"] Feb 16 17:01:35.998226 master-0 kubenswrapper[10003]: I0216 17:01:35.983084 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-kubelet-dir\") pod \"cf8ea978-99b0-4957-8f9d-0d074263b235\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " Feb 16 17:01:35.998226 master-0 kubenswrapper[10003]: I0216 17:01:35.983140 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-var-lock\") pod \"cf8ea978-99b0-4957-8f9d-0d074263b235\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " Feb 16 17:01:35.998226 master-0 kubenswrapper[10003]: I0216 17:01:35.983416 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8ea978-99b0-4957-8f9d-0d074263b235-kube-api-access\") pod \"cf8ea978-99b0-4957-8f9d-0d074263b235\" (UID: \"cf8ea978-99b0-4957-8f9d-0d074263b235\") " Feb 16 17:01:35.998226 master-0 kubenswrapper[10003]: I0216 17:01:35.984011 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cf8ea978-99b0-4957-8f9d-0d074263b235" (UID: "cf8ea978-99b0-4957-8f9d-0d074263b235"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:35.998226 master-0 kubenswrapper[10003]: I0216 17:01:35.984123 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-var-lock" (OuterVolumeSpecName: "var-lock") pod "cf8ea978-99b0-4957-8f9d-0d074263b235" (UID: "cf8ea978-99b0-4957-8f9d-0d074263b235"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:35.998226 master-0 kubenswrapper[10003]: I0216 17:01:35.989711 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf8ea978-99b0-4957-8f9d-0d074263b235-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cf8ea978-99b0-4957-8f9d-0d074263b235" (UID: "cf8ea978-99b0-4957-8f9d-0d074263b235"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:36.084445 master-0 kubenswrapper[10003]: I0216 17:01:36.084389 10003 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:36.084445 master-0 kubenswrapper[10003]: I0216 17:01:36.084432 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cf8ea978-99b0-4957-8f9d-0d074263b235-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:36.084445 master-0 kubenswrapper[10003]: I0216 17:01:36.084445 10003 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cf8ea978-99b0-4957-8f9d-0d074263b235-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:36.456778 master-0 kubenswrapper[10003]: I0216 17:01:36.456678 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4kd66"] Feb 16 17:01:36.456955 master-0 kubenswrapper[10003]: E0216 17:01:36.456867 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf8ea978-99b0-4957-8f9d-0d074263b235" containerName="installer" Feb 16 17:01:36.456955 master-0 kubenswrapper[10003]: I0216 17:01:36.456880 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf8ea978-99b0-4957-8f9d-0d074263b235" containerName="installer" Feb 16 17:01:36.457035 master-0 kubenswrapper[10003]: I0216 17:01:36.457017 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf8ea978-99b0-4957-8f9d-0d074263b235" containerName="installer" Feb 16 17:01:36.457734 master-0 kubenswrapper[10003]: I0216 17:01:36.457645 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.459396 master-0 kubenswrapper[10003]: I0216 17:01:36.459356 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m" Feb 16 17:01:36.465959 master-0 kubenswrapper[10003]: I0216 17:01:36.465891 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kd66"] Feb 16 17:01:36.590192 master-0 kubenswrapper[10003]: I0216 17:01:36.590138 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.590192 master-0 kubenswrapper[10003]: I0216 17:01:36.590189 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.590561 master-0 kubenswrapper[10003]: I0216 17:01:36.590254 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.691221 master-0 kubenswrapper[10003]: I0216 17:01:36.691165 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.691392 master-0 kubenswrapper[10003]: I0216 17:01:36.691230 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.691629 master-0 kubenswrapper[10003]: I0216 17:01:36.691600 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.693370 master-0 kubenswrapper[10003]: I0216 17:01:36.693336 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.694105 master-0 kubenswrapper[10003]: I0216 17:01:36.694073 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.699561 master-0 kubenswrapper[10003]: I0216 17:01:36.699499 10003 generic.go:334] "Generic (PLEG): container finished" podID="a6d86b04-1d3f-4f27-a262-b732c1295997" containerID="b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1" exitCode=0 Feb 16 17:01:36.699647 master-0 kubenswrapper[10003]: I0216 17:01:36.699618 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kkl7" event={"ID":"a6d86b04-1d3f-4f27-a262-b732c1295997","Type":"ContainerDied","Data":"b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1"} Feb 16 17:01:36.699682 master-0 kubenswrapper[10003]: I0216 17:01:36.699650 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kkl7" event={"ID":"a6d86b04-1d3f-4f27-a262-b732c1295997","Type":"ContainerStarted","Data":"69df1f56628ba74129afddfead0ccc135e8c4f4c22ab04aa82de33d67e1e6121"} Feb 16 17:01:36.706216 master-0 kubenswrapper[10003]: I0216 17:01:36.706189 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_cf8ea978-99b0-4957-8f9d-0d074263b235/installer/0.log" Feb 16 17:01:36.706553 master-0 kubenswrapper[10003]: I0216 17:01:36.706531 10003 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Feb 16 17:01:36.707186 master-0 kubenswrapper[10003]: I0216 17:01:36.706793 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"cf8ea978-99b0-4957-8f9d-0d074263b235","Type":"ContainerDied","Data":"bcbd7bae801dad72473e19db1c8cfcee94d4e54c1e79aef5d145c86534ef5b38"} Feb 16 17:01:36.707186 master-0 kubenswrapper[10003]: I0216 17:01:36.707151 10003 scope.go:117] "RemoveContainer" containerID="22e732f70ca6f94cfd7098b649223ea91275dcc9a8431d38a472ff475c10f789" Feb 16 17:01:36.709598 master-0 kubenswrapper[10003]: I0216 17:01:36.709480 10003 generic.go:334] "Generic (PLEG): container finished" podID="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" containerID="12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b" exitCode=0 Feb 16 17:01:36.709675 master-0 kubenswrapper[10003]: I0216 17:01:36.709577 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7kjr" event={"ID":"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa","Type":"ContainerDied","Data":"12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b"} Feb 16 17:01:36.714833 master-0 kubenswrapper[10003]: I0216 17:01:36.714767 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.718275 master-0 kubenswrapper[10003]: I0216 17:01:36.718234 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerStarted","Data":"12b0103601aa9d452d5c380b8174f625698cf75c8ec9ba10415964e9b65d2f4f"} Feb 16 17:01:36.775432 master-0 kubenswrapper[10003]: 
I0216 17:01:36.775387 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:01:36.814476 master-0 kubenswrapper[10003]: I0216 17:01:36.814423 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 16 17:01:36.819889 master-0 kubenswrapper[10003]: I0216 17:01:36.819844 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/installer-1-master-0"] Feb 16 17:01:37.473901 master-0 kubenswrapper[10003]: I0216 17:01:37.473506 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-98q6v"] Feb 16 17:01:37.474452 master-0 kubenswrapper[10003]: I0216 17:01:37.474433 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.476172 master-0 kubenswrapper[10003]: I0216 17:01:37.476038 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:01:37.476172 master-0 kubenswrapper[10003]: I0216 17:01:37.476059 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t" Feb 16 17:01:37.603724 master-0 kubenswrapper[10003]: I0216 17:01:37.603620 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.603937 master-0 kubenswrapper[10003]: I0216 17:01:37.603710 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.603991 master-0 kubenswrapper[10003]: I0216 17:01:37.603769 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.604063 master-0 kubenswrapper[10003]: I0216 17:01:37.604037 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.647047 master-0 kubenswrapper[10003]: I0216 17:01:37.646977 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lnzfx"] Feb 16 17:01:37.649033 master-0 kubenswrapper[10003]: I0216 17:01:37.648991 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.651053 master-0 kubenswrapper[10003]: I0216 17:01:37.650851 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84" Feb 16 17:01:37.660772 master-0 kubenswrapper[10003]: I0216 17:01:37.660730 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lnzfx"] Feb 16 17:01:37.705387 master-0 kubenswrapper[10003]: I0216 17:01:37.705330 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.705616 master-0 kubenswrapper[10003]: I0216 17:01:37.705410 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.705977 master-0 kubenswrapper[10003]: I0216 17:01:37.705741 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.705977 master-0 kubenswrapper[10003]: I0216 17:01:37.705903 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.706111 master-0 kubenswrapper[10003]: I0216 17:01:37.705994 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.707038 master-0 kubenswrapper[10003]: I0216 17:01:37.706981 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.709155 master-0 kubenswrapper[10003]: I0216 17:01:37.709124 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.727014 master-0 kubenswrapper[10003]: I0216 17:01:37.726567 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.794453 master-0 kubenswrapper[10003]: I0216 17:01:37.794358 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:01:37.807071 master-0 kubenswrapper[10003]: I0216 17:01:37.807025 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.807071 master-0 kubenswrapper[10003]: I0216 17:01:37.807075 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.807346 master-0 kubenswrapper[10003]: I0216 17:01:37.807136 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.899658 master-0 kubenswrapper[10003]: W0216 17:01:37.899262 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod648abb6c_9c81_4e5c_b5f1_3b7eb254f743.slice/crio-b3cfd9fdcf09c3cddf191f185c0ac9e10d9889f1bf6c1c0a015d8feabfcf56b7 WatchSource:0}: Error finding container b3cfd9fdcf09c3cddf191f185c0ac9e10d9889f1bf6c1c0a015d8feabfcf56b7: Status 404 returned error can't find the container with id b3cfd9fdcf09c3cddf191f185c0ac9e10d9889f1bf6c1c0a015d8feabfcf56b7 Feb 16 17:01:37.905931 master-0 kubenswrapper[10003]: I0216 17:01:37.905837 10003 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:01:37.908293 master-0 kubenswrapper[10003]: I0216 17:01:37.908248 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.908409 master-0 kubenswrapper[10003]: I0216 17:01:37.908306 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.908409 master-0 kubenswrapper[10003]: I0216 17:01:37.908404 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.909495 master-0 kubenswrapper[10003]: I0216 17:01:37.909419 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.910363 master-0 kubenswrapper[10003]: I0216 17:01:37.910306 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.929189 master-0 kubenswrapper[10003]: I0216 17:01:37.929131 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:37.969236 master-0 kubenswrapper[10003]: I0216 17:01:37.968105 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:01:38.293091 master-0 kubenswrapper[10003]: I0216 17:01:38.292057 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4kd66"] Feb 16 17:01:38.396429 master-0 kubenswrapper[10003]: I0216 17:01:38.396180 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lnzfx"] Feb 16 17:01:38.401819 master-0 kubenswrapper[10003]: W0216 17:01:38.401749 10003 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod822e1750_652e_4ceb_8fea_b2c1c905b0f1.slice/crio-dbac153ecd4a3f99cd0e69d237edceb4a48ab6c1d711adc4be460a302948a462 WatchSource:0}: Error finding container dbac153ecd4a3f99cd0e69d237edceb4a48ab6c1d711adc4be460a302948a462: Status 404 returned error can't find the container with id dbac153ecd4a3f99cd0e69d237edceb4a48ab6c1d711adc4be460a302948a462 Feb 16 17:01:38.736299 master-0 kubenswrapper[10003]: I0216 17:01:38.736233 10003 generic.go:334] "Generic (PLEG): container finished" podID="0393fe12-2533-4c9c-a8e4-a58003c88f36" containerID="923a71501b419dfeeea5a3bc9e6232ad282276a9f4cb4239a8c0e6dc182d5ef7" exitCode=0 Feb 16 17:01:38.736844 master-0 kubenswrapper[10003]: I0216 17:01:38.736396 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerDied","Data":"923a71501b419dfeeea5a3bc9e6232ad282276a9f4cb4239a8c0e6dc182d5ef7"} Feb 16 17:01:38.736844 master-0 kubenswrapper[10003]: I0216 17:01:38.736431 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerStarted","Data":"0c31bbb582da4a5c2f2c01e8ab5dbd9246ddce55c685733c6872e97a601d53de"} Feb 16 17:01:38.738952 master-0 kubenswrapper[10003]: I0216 
17:01:38.738905 10003 generic.go:334] "Generic (PLEG): container finished" podID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerID="dc9e8dbf3a74fb329eb23f61fe7acc2cbbecad6e0ad9994f107aa3c7b0c60d14" exitCode=0 Feb 16 17:01:38.739037 master-0 kubenswrapper[10003]: I0216 17:01:38.738975 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerDied","Data":"dc9e8dbf3a74fb329eb23f61fe7acc2cbbecad6e0ad9994f107aa3c7b0c60d14"} Feb 16 17:01:38.739037 master-0 kubenswrapper[10003]: I0216 17:01:38.738997 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerStarted","Data":"dbac153ecd4a3f99cd0e69d237edceb4a48ab6c1d711adc4be460a302948a462"} Feb 16 17:01:38.750858 master-0 kubenswrapper[10003]: I0216 17:01:38.750784 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" event={"ID":"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7","Type":"ContainerStarted","Data":"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169"} Feb 16 17:01:38.750858 master-0 kubenswrapper[10003]: I0216 17:01:38.750839 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" event={"ID":"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7","Type":"ContainerStarted","Data":"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2"} Feb 16 17:01:38.754316 master-0 kubenswrapper[10003]: I0216 17:01:38.753544 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"d333f86f0a8ab06d569bfb3d4f4ee86bbc505f7ff52162d4fe6868c5e30caf74"} Feb 16 17:01:38.754316 master-0 kubenswrapper[10003]: I0216 17:01:38.753576 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"df55732c40e933ea372f1214a91cde4306eb5555441b3143bbda5066dd5d87f2"} Feb 16 17:01:38.754316 master-0 kubenswrapper[10003]: I0216 17:01:38.753588 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"b3cfd9fdcf09c3cddf191f185c0ac9e10d9889f1bf6c1c0a015d8feabfcf56b7"} Feb 16 17:01:38.803742 master-0 kubenswrapper[10003]: I0216 17:01:38.803666 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podStartSLOduration=1.8036447359999999 podStartE2EDuration="1.803644736s" podCreationTimestamp="2026-02-16 17:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:01:38.779871018 +0000 UTC m=+68.295356679" watchObservedRunningTime="2026-02-16 17:01:38.803644736 +0000 UTC m=+68.319130427" Feb 16 17:01:38.820976 master-0 kubenswrapper[10003]: I0216 17:01:38.809365 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf8ea978-99b0-4957-8f9d-0d074263b235" 
path="/var/lib/kubelet/pods/cf8ea978-99b0-4957-8f9d-0d074263b235/volumes" Feb 16 17:01:39.254292 master-0 kubenswrapper[10003]: I0216 17:01:39.254219 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n7kjr"] Feb 16 17:01:39.322197 master-0 kubenswrapper[10003]: I0216 17:01:39.322103 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk"] Feb 16 17:01:39.322530 master-0 kubenswrapper[10003]: I0216 17:01:39.322484 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" podUID="32a6b902-917f-4529-92c5-1a4975237501" containerName="kube-rbac-proxy" containerID="cri-o://a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973" gracePeriod=30 Feb 16 17:01:39.322672 master-0 kubenswrapper[10003]: I0216 17:01:39.322595 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" podUID="32a6b902-917f-4529-92c5-1a4975237501" containerName="machine-approver-controller" containerID="cri-o://7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0" gracePeriod=30 Feb 16 17:01:39.539013 master-0 kubenswrapper[10003]: I0216 17:01:39.531108 10003 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:39.645497 master-0 kubenswrapper[10003]: I0216 17:01:39.645384 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-config\") pod \"32a6b902-917f-4529-92c5-1a4975237501\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " Feb 16 17:01:39.645497 master-0 kubenswrapper[10003]: I0216 17:01:39.645449 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-auth-proxy-config\") pod \"32a6b902-917f-4529-92c5-1a4975237501\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " Feb 16 17:01:39.645704 master-0 kubenswrapper[10003]: I0216 17:01:39.645548 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/32a6b902-917f-4529-92c5-1a4975237501-machine-approver-tls\") pod \"32a6b902-917f-4529-92c5-1a4975237501\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " Feb 16 17:01:39.646459 master-0 kubenswrapper[10003]: I0216 17:01:39.646371 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "32a6b902-917f-4529-92c5-1a4975237501" (UID: "32a6b902-917f-4529-92c5-1a4975237501"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:39.646459 master-0 kubenswrapper[10003]: I0216 17:01:39.646444 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhzsp\" (UniqueName: \"kubernetes.io/projected/32a6b902-917f-4529-92c5-1a4975237501-kube-api-access-dhzsp\") pod \"32a6b902-917f-4529-92c5-1a4975237501\" (UID: \"32a6b902-917f-4529-92c5-1a4975237501\") " Feb 16 17:01:39.647020 master-0 kubenswrapper[10003]: I0216 17:01:39.646518 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-config" (OuterVolumeSpecName: "config") pod "32a6b902-917f-4529-92c5-1a4975237501" (UID: "32a6b902-917f-4529-92c5-1a4975237501"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:39.647084 master-0 kubenswrapper[10003]: I0216 17:01:39.647050 10003 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:39.647084 master-0 kubenswrapper[10003]: I0216 17:01:39.647072 10003 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/32a6b902-917f-4529-92c5-1a4975237501-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:39.648876 master-0 kubenswrapper[10003]: I0216 17:01:39.648431 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7w4km"] Feb 16 17:01:39.648876 master-0 kubenswrapper[10003]: E0216 17:01:39.648657 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a6b902-917f-4529-92c5-1a4975237501" containerName="machine-approver-controller" Feb 16 17:01:39.648876 master-0 kubenswrapper[10003]: I0216 17:01:39.648668 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a6b902-917f-4529-92c5-1a4975237501" containerName="machine-approver-controller" Feb 16 17:01:39.649347 master-0 kubenswrapper[10003]: E0216 17:01:39.649126 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32a6b902-917f-4529-92c5-1a4975237501" containerName="kube-rbac-proxy" Feb 16 17:01:39.649347 master-0 kubenswrapper[10003]: I0216 17:01:39.649140 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="32a6b902-917f-4529-92c5-1a4975237501" containerName="kube-rbac-proxy" Feb 16 17:01:39.649347 master-0 kubenswrapper[10003]: I0216 17:01:39.649247 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a6b902-917f-4529-92c5-1a4975237501" containerName="machine-approver-controller" Feb 16 17:01:39.649347 master-0 kubenswrapper[10003]: I0216 17:01:39.649263 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="32a6b902-917f-4529-92c5-1a4975237501" containerName="kube-rbac-proxy" Feb 16 17:01:39.650683 master-0 kubenswrapper[10003]: I0216 17:01:39.649746 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32a6b902-917f-4529-92c5-1a4975237501-kube-api-access-dhzsp" (OuterVolumeSpecName: "kube-api-access-dhzsp") pod "32a6b902-917f-4529-92c5-1a4975237501" (UID: "32a6b902-917f-4529-92c5-1a4975237501"). InnerVolumeSpecName "kube-api-access-dhzsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:39.651677 master-0 kubenswrapper[10003]: I0216 17:01:39.651023 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.653152 master-0 kubenswrapper[10003]: I0216 17:01:39.652827 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6858s" Feb 16 17:01:39.660821 master-0 kubenswrapper[10003]: I0216 17:01:39.660767 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7w4km"] Feb 16 17:01:39.691307 master-0 kubenswrapper[10003]: I0216 17:01:39.691033 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32a6b902-917f-4529-92c5-1a4975237501-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "32a6b902-917f-4529-92c5-1a4975237501" (UID: "32a6b902-917f-4529-92c5-1a4975237501"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:01:39.760071 master-0 kubenswrapper[10003]: I0216 17:01:39.748360 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.760071 master-0 kubenswrapper[10003]: I0216 17:01:39.748472 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.760071 master-0 kubenswrapper[10003]: I0216 17:01:39.748533 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.760071 master-0 kubenswrapper[10003]: I0216 17:01:39.748634 10003 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/32a6b902-917f-4529-92c5-1a4975237501-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:39.760071 master-0 kubenswrapper[10003]: I0216 17:01:39.748730 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhzsp\" (UniqueName: \"kubernetes.io/projected/32a6b902-917f-4529-92c5-1a4975237501-kube-api-access-dhzsp\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:39.775159 master-0 kubenswrapper[10003]: I0216 17:01:39.775108 10003 generic.go:334] "Generic (PLEG): container finished" podID="32a6b902-917f-4529-92c5-1a4975237501" containerID="7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0" exitCode=0 Feb 16 17:01:39.775159 master-0 kubenswrapper[10003]: I0216 17:01:39.775151 10003 generic.go:334] "Generic (PLEG): container finished" podID="32a6b902-917f-4529-92c5-1a4975237501" containerID="a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973" exitCode=0 Feb 16 17:01:39.776982 master-0 kubenswrapper[10003]: I0216 17:01:39.775206 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" Feb 16 17:01:39.776982 master-0 kubenswrapper[10003]: I0216 17:01:39.775251 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" event={"ID":"32a6b902-917f-4529-92c5-1a4975237501","Type":"ContainerDied","Data":"7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0"} Feb 16 17:01:39.776982 master-0 kubenswrapper[10003]: I0216 17:01:39.775401 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" event={"ID":"32a6b902-917f-4529-92c5-1a4975237501","Type":"ContainerDied","Data":"a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973"} Feb 16 17:01:39.776982 master-0 kubenswrapper[10003]: I0216 17:01:39.775430 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk" event={"ID":"32a6b902-917f-4529-92c5-1a4975237501","Type":"ContainerDied","Data":"30bcef361054bdc1ea0385e93e587e6cc354acc58a166dd68c254ed816d32245"} Feb 16 17:01:39.776982 master-0 kubenswrapper[10003]: I0216 17:01:39.775462 10003 scope.go:117] "RemoveContainer" containerID="7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0" Feb 16 17:01:39.781882 master-0 kubenswrapper[10003]: I0216 17:01:39.781694 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" event={"ID":"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7","Type":"ContainerStarted","Data":"09c67126c6de3668502bac14b9e7abd00e8d4219a805f89a15ceed509ea0a832"} Feb 16 17:01:39.812030 master-0 kubenswrapper[10003]: I0216 17:01:39.811407 10003 scope.go:117] "RemoveContainer" containerID="a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973" Feb 16 17:01:39.817388 master-0 kubenswrapper[10003]: I0216 17:01:39.815629 10003 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" podStartSLOduration=7.113164267 podStartE2EDuration="13.81560397s" podCreationTimestamp="2026-02-16 17:01:26 +0000 UTC" firstStartedPulling="2026-02-16 17:01:31.221033152 +0000 UTC m=+60.736518823" lastFinishedPulling="2026-02-16 17:01:37.923472855 +0000 UTC m=+67.438958526" observedRunningTime="2026-02-16 17:01:39.804008404 +0000 UTC m=+69.319494075" watchObservedRunningTime="2026-02-16 17:01:39.81560397 +0000 UTC m=+69.331089641" Feb 16 17:01:39.830790 master-0 kubenswrapper[10003]: I0216 17:01:39.830692 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk"] Feb 16 17:01:39.835152 master-0 kubenswrapper[10003]: I0216 17:01:39.835126 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6c46d95f74-kp5vk"] Feb 16 17:01:39.851172 master-0 kubenswrapper[10003]: I0216 17:01:39.849730 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.851739 master-0 kubenswrapper[10003]: I0216 17:01:39.851519 10003 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.852219 master-0 kubenswrapper[10003]: I0216 17:01:39.852142 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.852401 master-0 kubenswrapper[10003]: I0216 17:01:39.852314 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.852942 master-0 kubenswrapper[10003]: I0216 17:01:39.852859 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.861187 master-0 kubenswrapper[10003]: I0216 17:01:39.861139 10003 scope.go:117] "RemoveContainer" containerID="7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0" Feb 16 17:01:39.861901 master-0 kubenswrapper[10003]: E0216 17:01:39.861870 10003 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0\": container with ID starting with 7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0 not found: ID does not exist" containerID="7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0" Feb 16 17:01:39.861986 master-0 kubenswrapper[10003]: I0216 17:01:39.861907 10003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0"} err="failed to get container status \"7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0\": rpc error: code = NotFound desc = could not find container \"7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0\": container with ID starting with 7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0 not found: ID does not exist" Feb 16 17:01:39.861986 master-0 kubenswrapper[10003]: I0216 17:01:39.861960 10003 scope.go:117] "RemoveContainer" containerID="a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973" Feb 16 17:01:39.862316 master-0 kubenswrapper[10003]: E0216 17:01:39.862283 10003 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973\": container with ID starting with a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973 not found: ID does not exist" containerID="a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973" Feb 16 17:01:39.862380 master-0 kubenswrapper[10003]: 
I0216 17:01:39.862318 10003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973"} err="failed to get container status \"a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973\": rpc error: code = NotFound desc = could not find container \"a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973\": container with ID starting with a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973 not found: ID does not exist" Feb 16 17:01:39.862380 master-0 kubenswrapper[10003]: I0216 17:01:39.862338 10003 scope.go:117] "RemoveContainer" containerID="7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0" Feb 16 17:01:39.862759 master-0 kubenswrapper[10003]: I0216 17:01:39.862736 10003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0"} err="failed to get container status \"7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0\": rpc error: code = NotFound desc = could not find container \"7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0\": container with ID starting with 7369bdc021be670ec3e59fc31fe53eaf8786e7274b41956aa9f218a58707ecd0 not found: ID does not exist" Feb 16 17:01:39.862759 master-0 kubenswrapper[10003]: I0216 17:01:39.862755 10003 scope.go:117] "RemoveContainer" containerID="a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973" Feb 16 17:01:39.863015 master-0 kubenswrapper[10003]: I0216 17:01:39.862969 10003 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973"} err="failed to get container status \"a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973\": rpc error: code = NotFound desc = could not find container \"a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973\": container with ID starting with a9a6e644fb78057bbf1088592b0e45ca8812ace4f9ac9219cc5ae5bc6f25d973 not found: ID does not exist" Feb 16 17:01:39.863815 master-0 kubenswrapper[10003]: I0216 17:01:39.863777 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"] Feb 16 17:01:39.864698 master-0 kubenswrapper[10003]: I0216 17:01:39.864673 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:39.866384 master-0 kubenswrapper[10003]: I0216 17:01:39.866349 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:01:39.866471 master-0 kubenswrapper[10003]: I0216 17:01:39.866459 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:01:39.866589 master-0 kubenswrapper[10003]: I0216 17:01:39.866566 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:01:39.866831 master-0 kubenswrapper[10003]: I0216 17:01:39.866812 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7" Feb 16 17:01:39.866945 master-0 kubenswrapper[10003]: I0216 17:01:39.866915 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:01:39.867031 master-0 kubenswrapper[10003]: I0216 17:01:39.867015 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:01:39.889404 master-0 kubenswrapper[10003]: I0216 17:01:39.889298 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:39.955118 master-0 kubenswrapper[10003]: I0216 17:01:39.954838 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:39.955118 master-0 kubenswrapper[10003]: I0216 17:01:39.954968 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:39.955118 master-0 kubenswrapper[10003]: I0216 17:01:39.954998 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:39.955118 master-0 kubenswrapper[10003]: I0216 17:01:39.955028 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " 
pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:39.970043 master-0 kubenswrapper[10003]: I0216 17:01:39.969978 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:01:40.056564 master-0 kubenswrapper[10003]: I0216 17:01:40.056415 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:40.056564 master-0 kubenswrapper[10003]: I0216 17:01:40.056556 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:40.056846 master-0 kubenswrapper[10003]: I0216 17:01:40.056598 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:40.056846 master-0 kubenswrapper[10003]: I0216 17:01:40.056639 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:40.058219 master-0 kubenswrapper[10003]: I0216 17:01:40.058041 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:40.058532 master-0 kubenswrapper[10003]: I0216 17:01:40.058461 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:40.074537 master-0 kubenswrapper[10003]: I0216 17:01:40.074499 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:40.086439 master-0 kubenswrapper[10003]: I0216 17:01:40.086385 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:40.212416 master-0 kubenswrapper[10003]: I0216 17:01:40.212330 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:01:40.441833 master-0 kubenswrapper[10003]: I0216 17:01:40.441719 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kkl7"] Feb 16 17:01:40.816121 master-0 kubenswrapper[10003]: I0216 17:01:40.816075 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32a6b902-917f-4529-92c5-1a4975237501" path="/var/lib/kubelet/pods/32a6b902-917f-4529-92c5-1a4975237501/volumes" Feb 16 17:01:40.854946 master-0 kubenswrapper[10003]: I0216 17:01:40.854807 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z69zq"] Feb 16 17:01:40.861385 master-0 kubenswrapper[10003]: I0216 17:01:40.861235 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:40.866134 master-0 kubenswrapper[10003]: I0216 17:01:40.863030 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl" Feb 16 17:01:40.874553 master-0 kubenswrapper[10003]: I0216 17:01:40.874517 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z69zq"] Feb 16 17:01:40.975692 master-0 kubenswrapper[10003]: I0216 17:01:40.975289 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:40.975692 master-0 kubenswrapper[10003]: I0216 17:01:40.975525 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:40.976831 master-0 kubenswrapper[10003]: I0216 17:01:40.976793 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:41.079023 master-0 kubenswrapper[10003]: I0216 17:01:41.078943 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:41.079338 master-0 kubenswrapper[10003]: I0216 17:01:41.079080 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:41.079338 master-0 kubenswrapper[10003]: I0216 17:01:41.079127 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:41.079507 master-0 kubenswrapper[10003]: I0216 17:01:41.079465 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:41.080029 master-0 kubenswrapper[10003]: I0216 17:01:41.079951 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:41.096544 master-0 kubenswrapper[10003]: I0216 17:01:41.096495 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:41.180551 master-0 kubenswrapper[10003]: I0216 17:01:41.180497 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:01:41.710089 master-0 kubenswrapper[10003]: I0216 17:01:41.710027 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"] Feb 16 17:01:41.711650 master-0 kubenswrapper[10003]: I0216 17:01:41.711615 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:41.713299 master-0 kubenswrapper[10003]: I0216 17:01:41.713261 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw" Feb 16 17:01:41.715482 master-0 kubenswrapper[10003]: I0216 17:01:41.714605 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 17:01:41.720311 master-0 kubenswrapper[10003]: I0216 17:01:41.719759 10003 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"] Feb 16 17:01:41.794167 master-0 kubenswrapper[10003]: I0216 17:01:41.794115 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:41.794167 master-0 kubenswrapper[10003]: I0216 17:01:41.794171 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:41.794384 master-0 kubenswrapper[10003]: I0216 17:01:41.794343 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:41.895207 master-0 kubenswrapper[10003]: I0216 17:01:41.895137 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:41.895648 master-0 kubenswrapper[10003]: I0216 17:01:41.895394 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:41.895648 master-0 kubenswrapper[10003]: I0216 17:01:41.895494 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " 
pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:41.897812 master-0 kubenswrapper[10003]: I0216 17:01:41.897299 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:41.901240 master-0 kubenswrapper[10003]: I0216 17:01:41.901026 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:41.915584 master-0 kubenswrapper[10003]: I0216 17:01:41.915537 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:42.040261 master-0 kubenswrapper[10003]: I0216 17:01:42.040198 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:01:45.712734 master-0 kubenswrapper[10003]: I0216 17:01:45.712658 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm"] Feb 16 17:01:45.713366 master-0 kubenswrapper[10003]: I0216 17:01:45.713040 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="cluster-cloud-controller-manager" containerID="cri-o://5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2" gracePeriod=30 Feb 16 17:01:45.713902 master-0 kubenswrapper[10003]: I0216 17:01:45.713414 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="config-sync-controllers" containerID="cri-o://3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169" gracePeriod=30 Feb 16 17:01:45.713902 master-0 kubenswrapper[10003]: I0216 17:01:45.713510 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="kube-rbac-proxy" containerID="cri-o://09c67126c6de3668502bac14b9e7abd00e8d4219a805f89a15ceed509ea0a832" gracePeriod=30 Feb 16 17:01:45.822428 master-0 kubenswrapper[10003]: E0216 17:01:45.822359 10003 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b5c9593_e93c_40f4_966d_8fb2a4edd5b7.slice/crio-conmon-09c67126c6de3668502bac14b9e7abd00e8d4219a805f89a15ceed509ea0a832.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:01:45.845225 master-0 kubenswrapper[10003]: I0216 17:01:45.845179 10003 generic.go:334] "Generic (PLEG): container finished" podID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerID="09c67126c6de3668502bac14b9e7abd00e8d4219a805f89a15ceed509ea0a832" exitCode=0 Feb 16 17:01:45.845225 master-0 kubenswrapper[10003]: I0216 17:01:45.845226 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" event={"ID":"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7","Type":"ContainerDied","Data":"09c67126c6de3668502bac14b9e7abd00e8d4219a805f89a15ceed509ea0a832"} Feb 16 17:01:46.852058 master-0 kubenswrapper[10003]: I0216 17:01:46.852012 10003 generic.go:334] "Generic (PLEG): container finished" podID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerID="3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169" exitCode=0 Feb 16 17:01:46.852058 master-0 kubenswrapper[10003]: I0216 17:01:46.852042 10003 generic.go:334] "Generic (PLEG): container finished" podID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerID="5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2" exitCode=0 Feb 16 17:01:46.852058 master-0 kubenswrapper[10003]: I0216 17:01:46.852061 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" event={"ID":"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7","Type":"ContainerDied","Data":"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169"} Feb 16 17:01:46.852613 master-0 kubenswrapper[10003]: I0216 17:01:46.852089 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" event={"ID":"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7","Type":"ContainerDied","Data":"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2"} Feb 16 17:01:49.825839 master-0 kubenswrapper[10003]: I0216 17:01:49.825769 10003 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:49.870750 master-0 kubenswrapper[10003]: I0216 17:01:49.870671 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" event={"ID":"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7","Type":"ContainerDied","Data":"2882ab80b1900a943d704e6e04c01d1cc470047c64e522580e4e2608630c5a4a"} Feb 16 17:01:49.870750 master-0 kubenswrapper[10003]: I0216 17:01:49.870742 10003 scope.go:117] "RemoveContainer" containerID="09c67126c6de3668502bac14b9e7abd00e8d4219a805f89a15ceed509ea0a832" Feb 16 17:01:49.870901 master-0 kubenswrapper[10003]: I0216 17:01:49.870869 10003 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm" Feb 16 17:01:49.878132 master-0 kubenswrapper[10003]: I0216 17:01:49.878069 10003 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_7fe1c16d-061a-4a57-aea4-cf1d4b24d02f/installer/0.log" Feb 16 17:01:49.878261 master-0 kubenswrapper[10003]: I0216 17:01:49.878135 10003 generic.go:334] "Generic (PLEG): container finished" podID="7fe1c16d-061a-4a57-aea4-cf1d4b24d02f" containerID="90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99" exitCode=1 Feb 16 17:01:49.878261 master-0 kubenswrapper[10003]: I0216 17:01:49.878176 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f","Type":"ContainerDied","Data":"90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99"} Feb 16 17:01:49.912861 master-0 kubenswrapper[10003]: I0216 17:01:49.912803 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-images\") pod \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " Feb 16 17:01:49.912861 master-0 kubenswrapper[10003]: I0216 17:01:49.912854 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnmbv\" (UniqueName: \"kubernetes.io/projected/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-kube-api-access-nnmbv\") pod \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " Feb 16 17:01:49.912861 master-0 kubenswrapper[10003]: I0216 17:01:49.912890 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-auth-proxy-config\") pod \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " Feb 16 17:01:49.913385 master-0 kubenswrapper[10003]: I0216 17:01:49.912946 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-host-etc-kube\") pod \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " Feb 16 17:01:49.913385 master-0 kubenswrapper[10003]: I0216 17:01:49.912980 10003 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-cloud-controller-manager-operator-tls\") pod \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\" (UID: \"8b5c9593-e93c-40f4-966d-8fb2a4edd5b7\") " Feb 16 17:01:49.913992 master-0 kubenswrapper[10003]: I0216 17:01:49.913963 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-images" (OuterVolumeSpecName: "images") pod "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" (UID: "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:49.914956 master-0 kubenswrapper[10003]: I0216 17:01:49.914857 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" (UID: "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:01:49.915440 master-0 kubenswrapper[10003]: I0216 17:01:49.915392 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" (UID: "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:01:49.917813 master-0 kubenswrapper[10003]: I0216 17:01:49.917768 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" (UID: "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:01:49.917959 master-0 kubenswrapper[10003]: I0216 17:01:49.917911 10003 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-kube-api-access-nnmbv" (OuterVolumeSpecName: "kube-api-access-nnmbv") pod "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" (UID: "8b5c9593-e93c-40f4-966d-8fb2a4edd5b7"). InnerVolumeSpecName "kube-api-access-nnmbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:01:50.017330 master-0 kubenswrapper[10003]: I0216 17:01:50.017284 10003 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-images\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:50.017330 master-0 kubenswrapper[10003]: I0216 17:01:50.017322 10003 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnmbv\" (UniqueName: \"kubernetes.io/projected/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-kube-api-access-nnmbv\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:50.017330 master-0 kubenswrapper[10003]: I0216 17:01:50.017356 10003 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:50.017330 master-0 kubenswrapper[10003]: I0216 17:01:50.017364 10003 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:50.018767 master-0 kubenswrapper[10003]: I0216 17:01:50.017374 10003 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 17:01:51.175815 master-0 kubenswrapper[10003]: I0216 17:01:51.175753 10003 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm"] Feb 16 17:01:51.464787 master-0 kubenswrapper[10003]: I0216 17:01:51.463349 10003 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5b487c8bfc-rdtdm"] Feb 16 17:01:51.861078 master-0 kubenswrapper[10003]: I0216 17:01:51.861014 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"] Feb 16 17:01:51.861274 master-0 kubenswrapper[10003]: E0216 17:01:51.861218 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="kube-rbac-proxy" Feb 16 17:01:51.861274 master-0 kubenswrapper[10003]: I0216 17:01:51.861230 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="kube-rbac-proxy" Feb 16 17:01:51.861274 master-0 kubenswrapper[10003]: E0216 17:01:51.861245 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="cluster-cloud-controller-manager" Feb 16 17:01:51.861274 master-0 kubenswrapper[10003]: I0216 17:01:51.861251 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="cluster-cloud-controller-manager" Feb 16 17:01:51.861274 master-0 kubenswrapper[10003]: E0216 17:01:51.861259 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="config-sync-controllers" Feb 16 17:01:51.861274 master-0 kubenswrapper[10003]: I0216 17:01:51.861268 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" 
containerName="config-sync-controllers" Feb 16 17:01:51.861524 master-0 kubenswrapper[10003]: I0216 17:01:51.861404 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="cluster-cloud-controller-manager" Feb 16 17:01:51.861524 master-0 kubenswrapper[10003]: I0216 17:01:51.861418 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="config-sync-controllers" Feb 16 17:01:51.861524 master-0 kubenswrapper[10003]: I0216 17:01:51.861425 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="kube-rbac-proxy" Feb 16 17:01:51.862108 master-0 kubenswrapper[10003]: I0216 17:01:51.862082 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:51.867879 master-0 kubenswrapper[10003]: I0216 17:01:51.866954 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2" Feb 16 17:01:51.867879 master-0 kubenswrapper[10003]: I0216 17:01:51.867236 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:01:51.867879 master-0 kubenswrapper[10003]: I0216 17:01:51.867494 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 17:01:51.867879 master-0 kubenswrapper[10003]: I0216 17:01:51.867621 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 17:01:51.867879 master-0 kubenswrapper[10003]: I0216 17:01:51.867683 10003 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:01:51.868187 master-0 kubenswrapper[10003]: I0216 17:01:51.867837 10003 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 17:01:51.949543 master-0 kubenswrapper[10003]: I0216 17:01:51.949488 10003 generic.go:334] "Generic (PLEG): container finished" podID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" containerID="8399cb1f8a954f603085247154bb48084a1a6283fe2b99aa8facab4cb78f381d" exitCode=0 Feb 16 17:01:51.949725 master-0 kubenswrapper[10003]: I0216 17:01:51.949566 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerDied","Data":"8399cb1f8a954f603085247154bb48084a1a6283fe2b99aa8facab4cb78f381d"} Feb 16 17:01:51.950164 master-0 kubenswrapper[10003]: I0216 17:01:51.950141 10003 scope.go:117] "RemoveContainer" containerID="8399cb1f8a954f603085247154bb48084a1a6283fe2b99aa8facab4cb78f381d" Feb 16 17:01:51.951215 master-0 kubenswrapper[10003]: I0216 17:01:51.951180 10003 generic.go:334] "Generic (PLEG): container finished" podID="29402454-a920-471e-895e-764235d16eb4" containerID="f31ff62ede3b23583193a8479095d460885c4665f91f714a80c48601aa1a71ad" exitCode=0 Feb 16 17:01:51.951265 master-0 kubenswrapper[10003]: I0216 17:01:51.951223 10003 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerDied","Data":"f31ff62ede3b23583193a8479095d460885c4665f91f714a80c48601aa1a71ad"} Feb 16 17:01:51.951394 master-0 kubenswrapper[10003]: I0216 17:01:51.951365 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:51.951443 master-0 kubenswrapper[10003]: I0216 17:01:51.951418 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:51.951479 master-0 kubenswrapper[10003]: I0216 17:01:51.951465 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:51.951523 master-0 kubenswrapper[10003]: I0216 17:01:51.951492 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:51.951662 master-0 kubenswrapper[10003]: I0216 17:01:51.951645 10003 scope.go:117] "RemoveContainer" containerID="f31ff62ede3b23583193a8479095d460885c4665f91f714a80c48601aa1a71ad" Feb 16 17:01:51.951969 master-0 kubenswrapper[10003]: I0216 17:01:51.951839 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.053612 master-0 kubenswrapper[10003]: I0216 17:01:52.053552 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.053805 master-0 kubenswrapper[10003]: I0216 17:01:52.053627 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.053805 master-0 kubenswrapper[10003]: I0216 17:01:52.053690 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.053805 master-0 kubenswrapper[10003]: I0216 17:01:52.053727 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.054132 master-0 kubenswrapper[10003]: I0216 17:01:52.054046 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.054393 master-0 kubenswrapper[10003]: I0216 17:01:52.054346 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.054620 master-0 kubenswrapper[10003]: I0216 17:01:52.054568 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.055277 master-0 kubenswrapper[10003]: I0216 17:01:52.055238 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.057140 master-0 kubenswrapper[10003]: I0216 17:01:52.057108 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:52.817977 master-0 kubenswrapper[10003]: I0216 17:01:52.815415 10003 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" path="/var/lib/kubelet/pods/8b5c9593-e93c-40f4-966d-8fb2a4edd5b7/volumes" Feb 16 17:01:53.597686 master-0 kubenswrapper[10003]: I0216 17:01:53.597582 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:53.686550 master-0 kubenswrapper[10003]: I0216 17:01:53.686482 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:01:54.967280 master-0 kubenswrapper[10003]: I0216 17:01:54.967109 10003 generic.go:334] "Generic (PLEG): container finished" podID="d020c902-2adb-4919-8dd9-0c2109830580" containerID="e310e36fd740b75515307293e697ecd768c9c8241ff939db071d778913f35a7a" exitCode=0 Feb 16 17:01:54.967280 master-0 kubenswrapper[10003]: I0216 17:01:54.967156 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerDied","Data":"e310e36fd740b75515307293e697ecd768c9c8241ff939db071d778913f35a7a"} Feb 16 17:01:54.968147 master-0 kubenswrapper[10003]: I0216 17:01:54.967618 10003 scope.go:117] "RemoveContainer" containerID="e310e36fd740b75515307293e697ecd768c9c8241ff939db071d778913f35a7a" Feb 16 17:01:56.494339 master-0 kubenswrapper[10003]: I0216 17:01:56.494276 10003 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Feb 16 17:01:56.494988 master-0 kubenswrapper[10003]: I0216 17:01:56.494566 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" containerID="cri-o://e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098" gracePeriod=30 Feb 16 17:01:56.501912 master-0 kubenswrapper[10003]: I0216 17:01:56.501861 10003 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 17:01:56.502159 master-0 kubenswrapper[10003]: E0216 17:01:56.502131 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 17:01:56.502159 master-0 kubenswrapper[10003]: I0216 17:01:56.502150 10003 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 17:01:56.502273 master-0 kubenswrapper[10003]: I0216 17:01:56.502244 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 17:01:56.503174 master-0 kubenswrapper[10003]: I0216 17:01:56.503139 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:01:56.633631 master-0 kubenswrapper[10003]: I0216 17:01:56.633557 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:01:56.633856 master-0 kubenswrapper[10003]: I0216 17:01:56.633661 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:01:56.735619 master-0 kubenswrapper[10003]: I0216 17:01:56.735544 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:01:56.735856 master-0 kubenswrapper[10003]: I0216 17:01:56.735635 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:01:56.735856 master-0 kubenswrapper[10003]: I0216 17:01:56.735718 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:01:56.735856 master-0 kubenswrapper[10003]: I0216 17:01:56.735750 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:01:56.867243 master-0 kubenswrapper[10003]: I0216 17:01:56.867185 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:01:56.878811 master-0 kubenswrapper[10003]: I0216 17:01:56.878745 10003 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 17:01:56.978909 master-0 kubenswrapper[10003]: I0216 17:01:56.978831 10003 generic.go:334] "Generic (PLEG): container finished" podID="035c8af0-95f3-4ab6-939c-d7fa8bda40a3" containerID="8d78fa623e175273ca9fb1b430de0aa7e6c7b81ae465f33ce572879406853709" exitCode=0 Feb 16 17:01:56.979176 master-0 kubenswrapper[10003]: I0216 17:01:56.978940 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"035c8af0-95f3-4ab6-939c-d7fa8bda40a3","Type":"ContainerDied","Data":"8d78fa623e175273ca9fb1b430de0aa7e6c7b81ae465f33ce572879406853709"} Feb 16 17:01:56.980868 master-0 kubenswrapper[10003]: I0216 17:01:56.980807 10003 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098" exitCode=0 Feb 16 17:01:56.982814 master-0 kubenswrapper[10003]: I0216 17:01:56.982774 10003 generic.go:334] "Generic (PLEG): container finished" podID="4549ea98-7379-49e1-8452-5efb643137ca" containerID="01bf42c6c3bf4f293fd2294a37aff703b4c469002ae6a87f7c50eefa7c6ae11b" exitCode=0 Feb 16 17:01:56.982892 master-0 kubenswrapper[10003]: I0216 17:01:56.982815 10003 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerDied","Data":"01bf42c6c3bf4f293fd2294a37aff703b4c469002ae6a87f7c50eefa7c6ae11b"} Feb 16 17:01:56.983366 master-0 kubenswrapper[10003]: I0216 17:01:56.983337 10003 scope.go:117] "RemoveContainer" containerID="01bf42c6c3bf4f293fd2294a37aff703b4c469002ae6a87f7c50eefa7c6ae11b" Feb 16 17:02:00.257105 master-0 kubenswrapper[10003]: I0216 17:02:00.257034 10003 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Feb 16 17:02:00.257659 master-0 kubenswrapper[10003]: I0216 17:02:00.257131 10003 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:02:00.257659 master-0 kubenswrapper[10003]: I0216 17:02:00.257440 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" containerID="cri-o://a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b" gracePeriod=15 Feb 16 17:02:00.257749 master-0 kubenswrapper[10003]: I0216 17:02:00.257693 10003 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31" gracePeriod=15 Feb 16 17:02:00.257814 master-0 kubenswrapper[10003]: E0216 17:02:00.257790 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" Feb 16 17:02:00.257814 master-0 kubenswrapper[10003]: I0216 17:02:00.257811 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" 
containerName="kube-apiserver" Feb 16 17:02:00.257912 master-0 kubenswrapper[10003]: E0216 17:02:00.257829 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup" Feb 16 17:02:00.257912 master-0 kubenswrapper[10003]: I0216 17:02:00.257836 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup" Feb 16 17:02:00.257912 master-0 kubenswrapper[10003]: E0216 17:02:00.257844 10003 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" Feb 16 17:02:00.257912 master-0 kubenswrapper[10003]: I0216 17:02:00.257851 10003 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" Feb 16 17:02:00.258049 master-0 kubenswrapper[10003]: I0216 17:02:00.257992 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup" Feb 16 17:02:00.258049 master-0 kubenswrapper[10003]: I0216 17:02:00.258005 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" Feb 16 17:02:00.258049 master-0 kubenswrapper[10003]: I0216 17:02:00.258014 10003 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" Feb 16 17:02:00.259262 master-0 kubenswrapper[10003]: I0216 17:02:00.259240 10003 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 17:02:00.259663 master-0 kubenswrapper[10003]: I0216 17:02:00.259648 10003 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.260106 master-0 kubenswrapper[10003]: I0216 17:02:00.259988 10003 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.387110 master-0 kubenswrapper[10003]: I0216 17:02:00.387037 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.387110 master-0 kubenswrapper[10003]: I0216 17:02:00.387088 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.387429 master-0 kubenswrapper[10003]: I0216 17:02:00.387263 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.387429 master-0 kubenswrapper[10003]: I0216 17:02:00.387323 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.387526 master-0 kubenswrapper[10003]: I0216 17:02:00.387435 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.387572 master-0 kubenswrapper[10003]: I0216 17:02:00.387521 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.387572 master-0 kubenswrapper[10003]: I0216 17:02:00.387547 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.387653 master-0 kubenswrapper[10003]: I0216 17:02:00.387579 10003 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.489411 
master-0 kubenswrapper[10003]: I0216 17:02:00.489289 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489411 master-0 kubenswrapper[10003]: I0216 17:02:00.489343 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489411 master-0 kubenswrapper[10003]: I0216 17:02:00.489390 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489430 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489470 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489489 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489540 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489561 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489580 10003 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489621 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489645 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489663 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489683 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489702 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489720 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.489904 master-0 kubenswrapper[10003]: I0216 17:02:00.489741 10003 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:00.761714 master-0 kubenswrapper[10003]: I0216 17:02:00.761408 10003 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:02:00.761472 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 17:02:00.788398 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 17:02:00.788824 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 17:02:00.790544 master-0 systemd[1]: kubelet.service: Consumed 13.650s CPU time. 
Feb 16 17:02:00.811772 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: I0216 17:02:00.910583 15493 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: W0216 17:02:00.913434 15493 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: W0216 17:02:00.913444 15493 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: W0216 17:02:00.913448 15493 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: W0216 17:02:00.913453 15493 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:02:00.916143 master-0 kubenswrapper[15493]: W0216 17:02:00.913457 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913461 15493 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913465 15493 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913469 15493 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913473 15493 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913476 15493 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913485 15493 feature_gate.go:330] unrecognized 
feature gate: VolumeGroupSnapshot Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913489 15493 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913493 15493 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913498 15493 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913503 15493 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913507 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913511 15493 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913515 15493 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913520 15493 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913525 15493 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913529 15493 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913533 15493 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913536 15493 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913540 15493 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:02:00.918163 master-0 kubenswrapper[15493]: W0216 17:02:00.913543 15493 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913547 15493 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913552 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913567 15493 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913571 15493 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913576 15493 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
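When the kubelet comes back as kubenswrapper[15493], the deprecation warnings at the head of this restart all point at the same remedy: move the flag into the configuration file named by --config (here /etc/kubernetes/kubelet.conf, per the FLAG: dump further down). As a rough translation, the four config-file-deprecated flags, with the values this restart's flag dump actually reports, would look something like the following in a kubelet.config.k8s.io/v1beta1 file; this is a hand-written sketch, not a file taken from the node.

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: /var/run/crio/crio.sock             # was --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # was --volume-plugin-dir
    registerWithTaints:                                           # was --register-with-taints
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    systemReserved:                                               # was --system-reserved
      cpu: 500m
      ephemeral-storage: 1Gi
      memory: 1Gi

--minimum-container-ttl-duration (6m0s in the dump below) has no direct config-file field; its own warning points at the eviction settings (evictionHard/evictionSoft) instead. The surrounding "unrecognized feature gate" warnings are a separate, cosmetic matter: they appear to be cluster-level OpenShift gates handed to a kubelet whose upstream gate table does not know those names, so each is logged once per startup.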
Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913581 15493 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913585 15493 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913589 15493 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913593 15493 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913596 15493 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913600 15493 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913604 15493 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913608 15493 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913612 15493 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913615 15493 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913619 15493 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913623 15493 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913626 15493 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913630 15493 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:02:00.918889 master-0 kubenswrapper[15493]: W0216 17:02:00.913633 15493 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913637 15493 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913640 15493 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913644 15493 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913647 15493 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913651 15493 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913669 15493 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913672 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913676 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913679 15493 feature_gate.go:330] 
unrecognized feature gate: MachineConfigNodes Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913683 15493 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913687 15493 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913690 15493 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913694 15493 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913698 15493 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913705 15493 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913709 15493 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913713 15493 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913717 15493 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913725 15493 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:02:00.919671 master-0 kubenswrapper[15493]: W0216 17:02:00.913730 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: W0216 17:02:00.913733 15493 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: W0216 17:02:00.913737 15493 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: W0216 17:02:00.913741 15493 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: W0216 17:02:00.913744 15493 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: W0216 17:02:00.913747 15493 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: W0216 17:02:00.913751 15493 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: W0216 17:02:00.913754 15493 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913838 15493 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913849 15493 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913860 15493 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913866 15493 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913873 15493 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 17:02:00.920406 master-0 
kubenswrapper[15493]: I0216 17:02:00.913879 15493 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913887 15493 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913893 15493 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913899 15493 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913905 15493 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913910 15493 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913929 15493 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913935 15493 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913940 15493 flags.go:64] FLAG: --cgroup-root="" Feb 16 17:02:00.920406 master-0 kubenswrapper[15493]: I0216 17:02:00.913944 15493 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913948 15493 flags.go:64] FLAG: --client-ca-file="" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913952 15493 flags.go:64] FLAG: --cloud-config="" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913956 15493 flags.go:64] FLAG: --cloud-provider="" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913960 15493 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913972 15493 flags.go:64] FLAG: --cluster-domain="" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913977 15493 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913981 15493 flags.go:64] FLAG: --config-dir="" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913985 15493 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913989 15493 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913995 15493 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.913999 15493 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914003 15493 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914013 15493 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914017 15493 flags.go:64] FLAG: --contention-profiling="false" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914021 15493 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914025 15493 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914030 
15493 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914034 15493 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914039 15493 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914043 15493 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914047 15493 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914051 15493 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914055 15493 flags.go:64] FLAG: --enable-server="true" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914060 15493 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 17:02:00.921208 master-0 kubenswrapper[15493]: I0216 17:02:00.914086 15493 flags.go:64] FLAG: --event-burst="100" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914093 15493 flags.go:64] FLAG: --event-qps="50" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914097 15493 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914102 15493 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914106 15493 flags.go:64] FLAG: --eviction-hard="" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914112 15493 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914116 15493 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914120 15493 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914124 15493 flags.go:64] FLAG: --eviction-soft="" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914128 15493 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914132 15493 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914136 15493 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914142 15493 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914146 15493 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914150 15493 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914155 15493 flags.go:64] FLAG: --feature-gates="" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914164 15493 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914169 15493 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914173 15493 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 
17:02:00.914177 15493 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914182 15493 flags.go:64] FLAG: --healthz-port="10248" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914186 15493 flags.go:64] FLAG: --help="false" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914190 15493 flags.go:64] FLAG: --hostname-override="" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914193 15493 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914203 15493 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 17:02:00.922107 master-0 kubenswrapper[15493]: I0216 17:02:00.914207 15493 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914211 15493 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914215 15493 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914219 15493 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914223 15493 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914227 15493 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914232 15493 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914236 15493 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914240 15493 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914244 15493 flags.go:64] FLAG: --kube-reserved="" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914248 15493 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914252 15493 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914257 15493 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914260 15493 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914264 15493 flags.go:64] FLAG: --lock-file="" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914268 15493 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914272 15493 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914277 15493 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914283 15493 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914288 15493 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914292 15493 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914305 
15493 flags.go:64] FLAG: --logging-format="text" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914310 15493 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914315 15493 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914319 15493 flags.go:64] FLAG: --manifest-url="" Feb 16 17:02:00.922943 master-0 kubenswrapper[15493]: I0216 17:02:00.914323 15493 flags.go:64] FLAG: --manifest-url-header="" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914328 15493 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914332 15493 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914337 15493 flags.go:64] FLAG: --max-pods="110" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914341 15493 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914345 15493 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914349 15493 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914353 15493 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914357 15493 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914361 15493 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914370 15493 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914380 15493 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914384 15493 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914388 15493 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914392 15493 flags.go:64] FLAG: --pod-cidr="" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914396 15493 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914403 15493 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914408 15493 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914413 15493 flags.go:64] FLAG: --pods-per-core="0" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914417 15493 flags.go:64] FLAG: --port="10250" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914422 15493 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914427 15493 flags.go:64] FLAG: --provider-id="" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: 
I0216 17:02:00.914431 15493 flags.go:64] FLAG: --qos-reserved="" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914435 15493 flags.go:64] FLAG: --read-only-port="10255" Feb 16 17:02:00.923771 master-0 kubenswrapper[15493]: I0216 17:02:00.914439 15493 flags.go:64] FLAG: --register-node="true" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914443 15493 flags.go:64] FLAG: --register-schedulable="true" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914447 15493 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914456 15493 flags.go:64] FLAG: --registry-burst="10" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914460 15493 flags.go:64] FLAG: --registry-qps="5" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914464 15493 flags.go:64] FLAG: --reserved-cpus="" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914468 15493 flags.go:64] FLAG: --reserved-memory="" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914473 15493 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914477 15493 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914481 15493 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914485 15493 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914489 15493 flags.go:64] FLAG: --runonce="false" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914493 15493 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914497 15493 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914501 15493 flags.go:64] FLAG: --seccomp-default="false" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914506 15493 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914510 15493 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914514 15493 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914518 15493 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914522 15493 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914526 15493 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914531 15493 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914540 15493 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914544 15493 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 17:02:00.924365 master-0 kubenswrapper[15493]: I0216 17:02:00.914548 15493 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 17:02:00.924365 master-0 
kubenswrapper[15493]: I0216 17:02:00.914552 15493 flags.go:64] FLAG: --system-cgroups="" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914556 15493 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914562 15493 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914567 15493 flags.go:64] FLAG: --tls-cert-file="" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914570 15493 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914577 15493 flags.go:64] FLAG: --tls-min-version="" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914581 15493 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914584 15493 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914589 15493 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914593 15493 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914599 15493 flags.go:64] FLAG: --v="2" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914604 15493 flags.go:64] FLAG: --version="false" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914610 15493 flags.go:64] FLAG: --vmodule="" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914615 15493 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: I0216 17:02:00.914620 15493 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: W0216 17:02:00.914731 15493 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: W0216 17:02:00.914736 15493 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: W0216 17:02:00.914740 15493 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: W0216 17:02:00.914744 15493 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: W0216 17:02:00.914748 15493 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: W0216 17:02:00.914752 15493 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: W0216 17:02:00.914755 15493 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: W0216 17:02:00.914759 15493 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:02:00.925870 master-0 kubenswrapper[15493]: W0216 17:02:00.914763 15493 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914766 15493 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914770 15493 feature_gate.go:330] unrecognized 
feature gate: NodeDisruptionPolicy Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914774 15493 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914778 15493 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914782 15493 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914786 15493 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914790 15493 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914794 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914797 15493 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914806 15493 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914810 15493 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914813 15493 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914817 15493 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914821 15493 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914825 15493 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914830 15493 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914834 15493 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:02:00.926493 master-0 kubenswrapper[15493]: W0216 17:02:00.914839 15493 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
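The flag dump and the "unrecognized feature gate" warnings above follow two fixed textual patterns — FLAG: --name="value" and "unrecognized feature gate: Name" — so the effective kubelet flag set and the list of gates the kubelet rejected can be recovered mechanically from a journal excerpt. A minimal sketch, assuming only the line formats visible above; the regexes and the summarize helper are illustrative, not part of any kubelet or journalctl tooling:

import re
import sys

# Patterns taken from the journal lines above.
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')
GATE_RE = re.compile(r'unrecognized feature gate: (\w+)')

def summarize(journal_text):
    """Collect kubelet flags and rejected feature gates from a journal excerpt."""
    flags = dict(FLAG_RE.findall(journal_text))            # last occurrence wins
    rejected = sorted(set(GATE_RE.findall(journal_text)))  # de-duplicate repeat passes
    return flags, rejected

if __name__ == "__main__":
    flags, rejected = summarize(sys.stdin.read())
    print(len(flags), "flags; --max-pods =", flags.get("--max-pods"))
    print(len(rejected), "unrecognized gates, e.g.", rejected[:3])

Fed this section on stdin, it would report --max-pods = 110 and gates such as AdminNetworkPolicy that the kubelet's upstream gate table does not recognize.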
Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914845 15493 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914849 15493 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914853 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914857 15493 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914860 15493 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914865 15493 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914869 15493 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914873 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914877 15493 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914880 15493 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914884 15493 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914887 15493 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914891 15493 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914895 15493 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914899 15493 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914902 15493 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914906 15493 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914910 15493 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914913 15493 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:02:00.926954 master-0 kubenswrapper[15493]: W0216 17:02:00.914932 15493 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914935 15493 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914940 15493 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914944 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 
17:02:00.914949 15493 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914953 15493 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914957 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914961 15493 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914975 15493 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914979 15493 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914984 15493 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914988 15493 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914993 15493 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.914999 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.915004 15493 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.915009 15493 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.915013 15493 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.915017 15493 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.915022 15493 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.915026 15493 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.915031 15493 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:02:00.927436 master-0 kubenswrapper[15493]: W0216 17:02:00.915034 15493 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.915039 15493 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.915044 15493 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.915047 15493 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.915052 15493 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: I0216 17:02:00.915064 15493 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: I0216 17:02:00.927452 15493 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: I0216 17:02:00.927493 15493 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.927563 15493 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.927571 15493 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.927575 15493 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.927580 15493 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.927584 15493 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.927588 15493 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.927592 15493 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:02:00.927937 master-0 kubenswrapper[15493]: W0216 17:02:00.927596 15493 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927599 15493 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927603 15493 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927607 15493 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927610 15493 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927614 15493 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927618 15493 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 
17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927623 15493 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927629 15493 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927636 15493 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927640 15493 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927644 15493 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927649 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927654 15493 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927658 15493 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927662 15493 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927666 15493 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927670 15493 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927673 15493 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:02:00.928306 master-0 kubenswrapper[15493]: W0216 17:02:00.927677 15493 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927681 15493 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927684 15493 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927689 15493 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927692 15493 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927697 15493 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927701 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927705 15493 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927709 15493 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927713 15493 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 
17:02:00.927716 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927720 15493 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927724 15493 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927728 15493 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927732 15493 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927735 15493 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927739 15493 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927743 15493 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927746 15493 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927750 15493 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:02:00.928768 master-0 kubenswrapper[15493]: W0216 17:02:00.927753 15493 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927758 15493 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927762 15493 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927766 15493 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927770 15493 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927775 15493 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927779 15493 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927783 15493 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927787 15493 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927791 15493 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927797 15493 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927800 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927804 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927809 15493 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927812 15493 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927816 15493 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927819 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927823 15493 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927827 15493 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927831 15493 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:02:00.929338 master-0 kubenswrapper[15493]: W0216 17:02:00.927834 15493 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.927838 15493 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.927842 15493 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.927846 15493 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.927849 15493 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.927853 15493 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: I0216 17:02:00.927860 15493 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.927999 15493 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.928009 15493 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.928013 15493 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.928018 15493 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.928022 15493 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.928026 15493 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.928030 15493 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.928035 15493 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:02:00.929806 master-0 kubenswrapper[15493]: W0216 17:02:00.928039 15493 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928043 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928046 15493 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928050 15493 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928054 15493 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928057 15493 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928061 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928066 15493 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928070 15493 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928073 15493 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928077 15493 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928081 15493 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928085 15493 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928089 15493 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:02:00.930202 
master-0 kubenswrapper[15493]: W0216 17:02:00.928093 15493 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928097 15493 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928101 15493 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928104 15493 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928109 15493 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928112 15493 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:02:00.930202 master-0 kubenswrapper[15493]: W0216 17:02:00.928117 15493 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928122 15493 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928126 15493 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928130 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928134 15493 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928138 15493 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928142 15493 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928146 15493 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928150 15493 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928154 15493 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928160 15493 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928164 15493 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928167 15493 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928171 15493 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928175 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928178 15493 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928182 15493 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:02:00.930711 master-0 
kubenswrapper[15493]: W0216 17:02:00.928186 15493 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928190 15493 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928241 15493 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:02:00.930711 master-0 kubenswrapper[15493]: W0216 17:02:00.928246 15493 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928249 15493 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928253 15493 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928257 15493 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928261 15493 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928264 15493 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928268 15493 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928272 15493 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928276 15493 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928279 15493 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928283 15493 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928288 15493 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928293 15493 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
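Each gate parse pass finishes with an INFO line of the form feature_gate.go:386] feature gates: {map[Name:bool ...]} — twice above and once more just below — and the resulting maps should agree across passes. A minimal sketch that parses every occurrence and reports any gate whose value differs; the regex and helper names are illustrative assumptions matching only the format shown here:

import re

# Matches the INFO lines above: feature gates: {map[Key:bool Key:bool ...]}
MAP_RE = re.compile(r'feature gates: \{map\[([^\]]*)\]\}')

def parse_gate_maps(journal_text):
    """Return one {gate: bool} dict per 'feature gates:' line found."""
    maps = []
    for body in MAP_RE.findall(journal_text):
        maps.append({k: v == "true"
                     for k, v in (item.split(":", 1) for item in body.split())})
    return maps

def diff_passes(maps):
    """Gates whose value is not identical across every parse pass."""
    keys = set().union(*maps) if maps else set()
    return sorted(k for k in keys if len({m.get(k) for m in maps}) > 1)

For the three passes in this restart, diff_passes should return an empty list: each map reports the same values, e.g. KMSv1:true, ValidatingAdmissionPolicy:true, VolumeAttributesClass:false.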
Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928297 15493 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928302 15493 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928306 15493 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928310 15493 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928314 15493 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928318 15493 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:02:00.931272 master-0 kubenswrapper[15493]: W0216 17:02:00.928323 15493 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: W0216 17:02:00.928328 15493 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: W0216 17:02:00.928333 15493 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: W0216 17:02:00.928339 15493 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: W0216 17:02:00.928343 15493 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: I0216 17:02:00.928351 15493 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: I0216 17:02:00.928530 15493 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: I0216 17:02:00.930143 15493 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: I0216 17:02:00.930230 15493 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
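The bootstrap lines above load the client certificate pair from /var/lib/kubelet/pki/kubelet-client-current.pem, and the rotation messages that follow report its expiration together with an earlier rotation deadline that the kubelet computes internally. The PEM itself can be inspected independently. A minimal sketch using the third-party "cryptography" package; the expiry check is illustrative, and it extracts the first CERTIFICATE block on the assumption that the file holds the certificate and private key concatenated:

import re
from datetime import datetime, timezone
from cryptography import x509  # pip install cryptography (>= 42 for *_utc)

PEM_PATH = "/var/lib/kubelet/pki/kubelet-client-current.pem"

def time_to_expiry(path=PEM_PATH):
    """Report remaining validity of the kubelet client certificate at 'path'."""
    with open(path, "rb") as f:
        data = f.read()
    # The file holds cert and key together; take the CERTIFICATE block only.
    block = re.search(
        rb"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", data, re.S
    ).group()
    cert = x509.load_pem_x509_certificate(block)
    return cert.not_valid_after_utc - datetime.now(timezone.utc)

if __name__ == "__main__":
    print("client cert expires in", time_to_expiry())

Run on the node, this should agree with the 2026-02-17 expiration logged in the rotation messages below; the earlier rotation deadline there reflects the kubelet's randomized early-rotation policy rather than the certificate itself.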
Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: I0216 17:02:00.930477 15493 server.go:997] "Starting client certificate rotation" Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: I0216 17:02:00.930487 15493 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: I0216 17:02:00.930669 15493 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 13:56:08.982977378 +0000 UTC Feb 16 17:02:00.931794 master-0 kubenswrapper[15493]: I0216 17:02:00.930789 15493 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h54m8.052190935s for next certificate rotation Feb 16 17:02:00.932124 master-0 kubenswrapper[15493]: I0216 17:02:00.931260 15493 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:02:00.932900 master-0 kubenswrapper[15493]: I0216 17:02:00.932857 15493 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:02:00.935568 master-0 kubenswrapper[15493]: I0216 17:02:00.935532 15493 log.go:25] "Validated CRI v1 runtime API" Feb 16 17:02:00.939896 master-0 kubenswrapper[15493]: I0216 17:02:00.939857 15493 log.go:25] "Validated CRI v1 image API" Feb 16 17:02:00.941490 master-0 kubenswrapper[15493]: I0216 17:02:00.941352 15493 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 17:02:00.953964 master-0 kubenswrapper[15493]: I0216 17:02:00.952534 15493 fs.go:135] Filesystem UUIDs: map[35a0b0cc-84b1-4374-a18a-0f49ad7a8333:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 16 17:02:00.956398 master-0 kubenswrapper[15493]: I0216 17:02:00.952579 15493 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0c31bbb582da4a5c2f2c01e8ab5dbd9246ddce55c685733c6872e97a601d53de/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0c31bbb582da4a5c2f2c01e8ab5dbd9246ddce55c685733c6872e97a601d53de/userdata/shm major:0 minor:1001 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc/userdata/shm major:0 minor:298 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5/userdata/shm major:0 minor:294 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d90441ff6782f784fce85c87a44597213ff8f98913ae13bfc6b95e97a8d2532/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d90441ff6782f784fce85c87a44597213ff8f98913ae13bfc6b95e97a8d2532/userdata/shm major:0 minor:559 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81/userdata/shm major:0 minor:319 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2ab21ee08c6858b29e0d5402811c6a5058510ebfc99fdee4dceca48abf0ebb37/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2ab21ee08c6858b29e0d5402811c6a5058510ebfc99fdee4dceca48abf0ebb37/userdata/shm major:0 minor:554 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/34d279c74bd940d5ab6f0f7e4b7983d57ebc4d60ff3c8f38850791761b56d54b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/34d279c74bd940d5ab6f0f7e4b7983d57ebc4d60ff3c8f38850791761b56d54b/userdata/shm major:0 minor:607 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2/userdata/shm major:0 minor:168 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/45b31aa01f6a85e0dcf85670319be85b9e6d0c112d9bd0004ef655a9654d75f6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/45b31aa01f6a85e0dcf85670319be85b9e6d0c112d9bd0004ef655a9654d75f6/userdata/shm major:0 minor:714 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/48fe704f6b9f25810dcd5004b13a7c413fb8fc4a4e972dfe51f7142aa16f0fee/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/48fe704f6b9f25810dcd5004b13a7c413fb8fc4a4e972dfe51f7142aa16f0fee/userdata/shm major:0 minor:439 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b0c06fa22c4b9fdff535e3051201dc0bd36447aab43eba5f5549b527b9cff7e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b0c06fa22c4b9fdff535e3051201dc0bd36447aab43eba5f5549b527b9cff7e/userdata/shm major:0 minor:890 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810/userdata/shm major:0 minor:108 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4c0337d0eb1672f3cea8f23ec0619b33320b69137b0d2e9bc66e94fe15bbe412/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4c0337d0eb1672f3cea8f23ec0619b33320b69137b0d2e9bc66e94fe15bbe412/userdata/shm major:0 minor:338 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/631bd21307bb03e494699eafea36a1ef9835bef8edb1d35b0dd997809ea4fddf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/631bd21307bb03e494699eafea36a1ef9835bef8edb1d35b0dd997809ea4fddf/userdata/shm major:0 minor:479 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6518a84f5d47511f5f25592fd8ed06e7ac0d8f38709f9e4fcd73acdf3eb6490c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6518a84f5d47511f5f25592fd8ed06e7ac0d8f38709f9e4fcd73acdf3eb6490c/userdata/shm major:0 minor:591 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/69df1f56628ba74129afddfead0ccc135e8c4f4c22ab04aa82de33d67e1e6121/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/69df1f56628ba74129afddfead0ccc135e8c4f4c22ab04aa82de33d67e1e6121/userdata/shm major:0 minor:375 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/74ced4b4e3fdce2aecbb38a4d03ec1a93853cd8aa3de1fd3350c1e935e0a300f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/74ced4b4e3fdce2aecbb38a4d03ec1a93853cd8aa3de1fd3350c1e935e0a300f/userdata/shm major:0 minor:297 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/75a3e91157092df61ab323caf67fd50fd02c9c52e83eb981207e27d0552a17af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/75a3e91157092df61ab323caf67fd50fd02c9c52e83eb981207e27d0552a17af/userdata/shm major:0 minor:109 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7bcf62830ed108bcbaff872a01506e1cfaba1ae290ee01528f3fca2ecf257682/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7bcf62830ed108bcbaff872a01506e1cfaba1ae290ee01528f3fca2ecf257682/userdata/shm major:0 minor:384 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7c0822a4b748eb1f3f4a4167fcf68aef3951b37e78e3f357e137483a9da93da7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7c0822a4b748eb1f3f4a4167fcf68aef3951b37e78e3f357e137483a9da93da7/userdata/shm major:0 minor:552 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf/userdata/shm major:0 minor:397 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/80fdc50e531795c33b265621c0e851281169f624db10a2cc59cfc4a7fd66173e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/80fdc50e531795c33b265621c0e851281169f624db10a2cc59cfc4a7fd66173e/userdata/shm major:0 minor:844 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/82c53f2c6633be154a699c2073aab7e95ac6eb9355d0da7ab4ff59d5ab695ebf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/82c53f2c6633be154a699c2073aab7e95ac6eb9355d0da7ab4ff59d5ab695ebf/userdata/shm major:0 minor:726 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/837a858734b801f62c18bbc1ac1678d7076080812a795cc7c558fa08b748a43c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/837a858734b801f62c18bbc1ac1678d7076080812a795cc7c558fa08b748a43c/userdata/shm major:0 minor:534 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/84fbcf4f8c4afda2e79a3be2b73e332fc9f2d8ce27d17523a1712d76d3ce752e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/84fbcf4f8c4afda2e79a3be2b73e332fc9f2d8ce27d17523a1712d76d3ce752e/userdata/shm major:0 minor:324 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/857c4fe5a333ed5e586a39e2b18bf56aa0347e3ba01b64ce2cc93b89b2b270ec/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/857c4fe5a333ed5e586a39e2b18bf56aa0347e3ba01b64ce2cc93b89b2b270ec/userdata/shm major:0 minor:542 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8d9325183d87d503ed41689fd08cf0ecd5e5cd428a5bae6824cddf556b030e2a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8d9325183d87d503ed41689fd08cf0ecd5e5cd428a5bae6824cddf556b030e2a/userdata/shm major:0 minor:477 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/905e4fdbfe2147706f13434bc2e5b9a3cbf884a588d29ecfca730d73382fb68f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/905e4fdbfe2147706f13434bc2e5b9a3cbf884a588d29ecfca730d73382fb68f/userdata/shm major:0 minor:433 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95008d005493fc2ada0d9b7ff7c718284548b7f519269f9c8d8a7c1fae08fbf6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95008d005493fc2ada0d9b7ff7c718284548b7f519269f9c8d8a7c1fae08fbf6/userdata/shm major:0 minor:901 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290/userdata/shm major:0 minor:299 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/960647e5dd274ec370d3ea843747f832b88bbc5e8bbea57e384a265bf5609dcc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/960647e5dd274ec370d3ea843747f832b88bbc5e8bbea57e384a265bf5609dcc/userdata/shm major:0 minor:872 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d/userdata/shm major:0 minor:337 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a51ce5dfcf0ae0215dfb9bb56d30c910b8c1e31cb77a303efaded16db5c0b84f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a51ce5dfcf0ae0215dfb9bb56d30c910b8c1e31cb77a303efaded16db5c0b84f/userdata/shm major:0 minor:238 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a58bacdfec2737e0bee779689b2175a4a02a49f390a08f9357e0306cdd9834c8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a58bacdfec2737e0bee779689b2175a4a02a49f390a08f9357e0306cdd9834c8/userdata/shm major:0 minor:120 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a60e6d4793a7edacd573013c98b5733c94b908b9bbe63d3fd698f772eed289c7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a60e6d4793a7edacd573013c98b5733c94b908b9bbe63d3fd698f772eed289c7/userdata/shm major:0 minor:537 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a8742926579beb3bc6f4cff1fa7c25f0bdd68039ed37e9a331f3e110c7838ff1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a8742926579beb3bc6f4cff1fa7c25f0bdd68039ed37e9a331f3e110c7838ff1/userdata/shm major:0 minor:716 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aa37dd5bc712a6e66397e0efcad2c702b51d3841d761278b212389b13ad668e0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aa37dd5bc712a6e66397e0efcad2c702b51d3841d761278b212389b13ad668e0/userdata/shm major:0 minor:748 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/add87c6ecc390c11dd4bb671cf6c85cf8d43a3b5be958bf533ae60889482daca/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/add87c6ecc390c11dd4bb671cf6c85cf8d43a3b5be958bf533ae60889482daca/userdata/shm major:0 minor:538 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/af61cb28f0ada5d7c4d2b6d4eb5d095894f589e163adb675a754c13d082c9ab9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/af61cb28f0ada5d7c4d2b6d4eb5d095894f589e163adb675a754c13d082c9ab9/userdata/shm major:0 minor:398 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b077d967ff0915e46adebbfea57fba17bebbd700385551a20b3c9d4bda18abd6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b077d967ff0915e46adebbfea57fba17bebbd700385551a20b3c9d4bda18abd6/userdata/shm major:0 minor:764 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b3cfd9fdcf09c3cddf191f185c0ac9e10d9889f1bf6c1c0a015d8feabfcf56b7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b3cfd9fdcf09c3cddf191f185c0ac9e10d9889f1bf6c1c0a015d8feabfcf56b7/userdata/shm major:0 minor:1000 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b595f395aac0332f79c685e4f9b8d1184bc8d65ea7662129777c88a2f4b6d75c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b595f395aac0332f79c685e4f9b8d1184bc8d65ea7662129777c88a2f4b6d75c/userdata/shm major:0 minor:473 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bacf9b29c15cf47bbdc9ebe2fcba6bca3cfdec92ee8864175a5412cbbe3c9659/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bacf9b29c15cf47bbdc9ebe2fcba6bca3cfdec92ee8864175a5412cbbe3c9659/userdata/shm major:0 minor:340 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bd5dcd2c4add7ffc4e409d02664e000a9abb556798c746bb479a7c76fa9d67b8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bd5dcd2c4add7ffc4e409d02664e000a9abb556798c746bb479a7c76fa9d67b8/userdata/shm major:0 minor:997 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990/userdata/shm major:0 minor:295 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c68b78ea048e3e05fd1fcd40eae1c2d97a33dc3cbf3cea258f66da49798e5912/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c68b78ea048e3e05fd1fcd40eae1c2d97a33dc3cbf3cea258f66da49798e5912/userdata/shm major:0 minor:650 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a/userdata/shm major:0 minor:775 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d0a7109bce95d1a32301d6e84ffc12bd1d37b091b1ee1ee044686d1a38898e0f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d0a7109bce95d1a32301d6e84ffc12bd1d37b091b1ee1ee044686d1a38898e0f/userdata/shm major:0 minor:807 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519/userdata/shm major:0 minor:147 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d377c24744b60cc35617b8e88be818c3a9283d990df16a27d5112c6aed9ce981/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d377c24744b60cc35617b8e88be818c3a9283d990df16a27d5112c6aed9ce981/userdata/shm major:0 minor:839 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dadda19ba6587a75b418addc51b36c2e0a7c53f63977a60f6f393649c7b6d587/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dadda19ba6587a75b418addc51b36c2e0a7c53f63977a60f6f393649c7b6d587/userdata/shm major:0 minor:231 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dbac153ecd4a3f99cd0e69d237edceb4a48ab6c1d711adc4be460a302948a462/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dbac153ecd4a3f99cd0e69d237edceb4a48ab6c1d711adc4be460a302948a462/userdata/shm major:0 minor:1016 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc3b4571309a88f03db49c8f3410740df7ca0758d3a470ee04a34d6d5a032bdd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc3b4571309a88f03db49c8f3410740df7ca0758d3a470ee04a34d6d5a032bdd/userdata/shm major:0 minor:383 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dd3ceeb0da8db938eae2cfa500166d7af7a50e381f011dcd54ec971db54cfcba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dd3ceeb0da8db938eae2cfa500166d7af7a50e381f011dcd54ec971db54cfcba/userdata/shm major:0 minor:772 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c/userdata/shm major:0 minor:331 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de761a393a8022719aa1e6c89bc8322fd69d7f7150a0db01e39124de59cc355c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de761a393a8022719aa1e6c89bc8322fd69d7f7150a0db01e39124de59cc355c/userdata/shm major:0 minor:84 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e596a971faed7fb65d78d19abb83585c95e9a5de18c154df5de65c3d54692d18/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e596a971faed7fb65d78d19abb83585c95e9a5de18c154df5de65c3d54692d18/userdata/shm major:0 minor:891 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc/userdata/shm major:0 minor:505 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1e81c2caa02917ae2e1efaeab30f34c00bb80423dce6819a41e6640d4fdc6d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1e81c2caa02917ae2e1efaeab30f34c00bb80423dce6819a41e6640d4fdc6d5/userdata/shm major:0 minor:863 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f669ceacf6e4215d33879fd75925e984def643e57187c462c685b966c75f2673/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f669ceacf6e4215d33879fd75925e984def643e57187c462c685b966c75f2673/userdata/shm major:0 minor:768 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f6a17f679ed7a7fbe57a462f9ffd2577eef58e5ba226eff8515fa879120c4750/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f6a17f679ed7a7fbe57a462f9ffd2577eef58e5ba226eff8515fa879120c4750/userdata/shm major:0 minor:651 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f9d46acb28343da01106746c6081478cf22731025778bc59637f1078390a8865/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f9d46acb28343da01106746c6081478cf22731025778bc59637f1078390a8865/userdata/shm major:0 minor:540 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fbbdab5ef2164d5b878fbbf6c9e025a67f22281db5fb649f6dbfc4b829160d91/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fbbdab5ef2164d5b878fbbf6c9e025a67f22281db5fb649f6dbfc4b829160d91/userdata/shm major:0 minor:858 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fbcae0a406fe6ca88ac22ca96551fc1de219ee3e9705034ce16efd7971fc9fed/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fbcae0a406fe6ca88ac22ca96551fc1de219ee3e9705034ce16efd7971fc9fed/userdata/shm major:0 minor:867 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/035c8af0-95f3-4ab6-939c-d7fa8bda40a3/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/035c8af0-95f3-4ab6-939c-d7fa8bda40a3/volumes/kubernetes.io~projected/kube-api-access major:0 minor:589 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0393fe12-2533-4c9c-a8e4-a58003c88f36/volumes/kubernetes.io~projected/kube-api-access-p5rwv:{mountpoint:/var/lib/kubelet/pods/0393fe12-2533-4c9c-a8e4-a58003c88f36/volumes/kubernetes.io~projected/kube-api-access-p5rwv major:0 minor:721 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/188e42e5-9f9c-42af-ba15-5548c4fa4b52/volumes/kubernetes.io~projected/kube-api-access-25g7f:{mountpoint:/var/lib/kubelet/pods/188e42e5-9f9c-42af-ba15-5548c4fa4b52/volumes/kubernetes.io~projected/kube-api-access-25g7f major:0 minor:871 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/188e42e5-9f9c-42af-ba15-5548c4fa4b52/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/188e42e5-9f9c-42af-ba15-5548c4fa4b52/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:870 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/188e42e5-9f9c-42af-ba15-5548c4fa4b52/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/188e42e5-9f9c-42af-ba15-5548c4fa4b52/volumes/kubernetes.io~secret/srv-cert major:0 minor:869 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18e9a9d3-9b18-4c19-9558-f33c68101922/volumes/kubernetes.io~projected/kube-api-access-6bbcf:{mountpoint:/var/lib/kubelet/pods/18e9a9d3-9b18-4c19-9558-f33c68101922/volumes/kubernetes.io~projected/kube-api-access-6bbcf major:0 minor:191 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/18e9a9d3-9b18-4c19-9558-f33c68101922/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/18e9a9d3-9b18-4c19-9558-f33c68101922/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:530 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa/volumes/kubernetes.io~projected/kube-api-access-qfkd9:{mountpoint:/var/lib/kubelet/pods/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa/volumes/kubernetes.io~projected/kube-api-access-qfkd9 major:0 minor:993 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~projected/kube-api-access-r9bv7:{mountpoint:/var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~projected/kube-api-access-r9bv7 major:0 minor:195 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~secret/serving-cert major:0 minor:159 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2:{mountpoint:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 major:0 minor:368 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~secret/metrics-tls major:0 minor:550 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert major:0 minor:167 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/404c402a-705f-4352-b9df-b89562070d9c/volumes/kubernetes.io~projected/kube-api-access-vkqml:{mountpoint:/var/lib/kubelet/pods/404c402a-705f-4352-b9df-b89562070d9c/volumes/kubernetes.io~projected/kube-api-access-vkqml major:0 minor:766 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/404c402a-705f-4352-b9df-b89562070d9c/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/404c402a-705f-4352-b9df-b89562070d9c/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:723 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x:{mountpoint:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x major:0 minor:107 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~projected/kube-api-access major:0 minor:201 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~secret/serving-cert major:0 minor:185 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4488757c-f0fd-48fa-a3f9-6373b0bcafe4/volumes/kubernetes.io~projected/kube-api-access-hh2cd:{mountpoint:/var/lib/kubelet/pods/4488757c-f0fd-48fa-a3f9-6373b0bcafe4/volumes/kubernetes.io~projected/kube-api-access-hh2cd major:0 minor:843 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4488757c-f0fd-48fa-a3f9-6373b0bcafe4/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/4488757c-f0fd-48fa-a3f9-6373b0bcafe4/volumes/kubernetes.io~secret/cert major:0 minor:842 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4488757c-f0fd-48fa-a3f9-6373b0bcafe4/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/4488757c-f0fd-48fa-a3f9-6373b0bcafe4/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:841 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt major:0 minor:68 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~projected/kube-api-access-nqfds:{mountpoint:/var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~projected/kube-api-access-nqfds major:0 minor:395 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~secret/signing-key major:0 minor:365 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~projected/kube-api-access-2dxw9:{mountpoint:/var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~projected/kube-api-access-2dxw9 major:0 minor:205 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x:{mountpoint:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x major:0 minor:196 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:438 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:437 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54f29618-42c2-4270-9af7-7d82852d7cec/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/54f29618-42c2-4270-9af7-7d82852d7cec/volumes/kubernetes.io~projected/ca-certs major:0 minor:649 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54f29618-42c2-4270-9af7-7d82852d7cec/volumes/kubernetes.io~projected/kube-api-access-w4wht:{mountpoint:/var/lib/kubelet/pods/54f29618-42c2-4270-9af7-7d82852d7cec/volumes/kubernetes.io~projected/kube-api-access-w4wht major:0 minor:648 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a275679-b7b6-4c28-b389-94cd2b014d6c/volumes/kubernetes.io~projected/kube-api-access-pmbll:{mountpoint:/var/lib/kubelet/pods/5a275679-b7b6-4c28-b389-94cd2b014d6c/volumes/kubernetes.io~projected/kube-api-access-pmbll major:0 minor:866 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a275679-b7b6-4c28-b389-94cd2b014d6c/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/5a275679-b7b6-4c28-b389-94cd2b014d6c/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:865 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw major:0 minor:388 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:387 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5d39ed24-4301-4cea-8a42-a08f4ba8b479/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/5d39ed24-4301-4cea-8a42-a08f4ba8b479/volumes/kubernetes.io~projected/kube-api-access major:0 minor:889 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:209 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd major:0 minor:194 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:463 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62220aa5-4065-472c-8a17-c0a58942ab8a/volumes/kubernetes.io~projected/kube-api-access-xtk9h:{mountpoint:/var/lib/kubelet/pods/62220aa5-4065-472c-8a17-c0a58942ab8a/volumes/kubernetes.io~projected/kube-api-access-xtk9h major:0 minor:862 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62220aa5-4065-472c-8a17-c0a58942ab8a/volumes/kubernetes.io~secret/profile-collector-cert:{mountpoint:/var/lib/kubelet/pods/62220aa5-4065-472c-8a17-c0a58942ab8a/volumes/kubernetes.io~secret/profile-collector-cert major:0 minor:861 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62220aa5-4065-472c-8a17-c0a58942ab8a/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/62220aa5-4065-472c-8a17-c0a58942ab8a/volumes/kubernetes.io~secret/srv-cert major:0 minor:860 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/62fc29f4-557f-4a75-8b78-6ca425c81b81/volumes/kubernetes.io~projected/kube-api-access-bs597:{mountpoint:/var/lib/kubelet/pods/62fc29f4-557f-4a75-8b78-6ca425c81b81/volumes/kubernetes.io~projected/kube-api-access-bs597 major:0 minor:373 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/642e5115-b7f2-4561-bc6b-1a74b6d891c4/volumes/kubernetes.io~projected/kube-api-access-dzpnw:{mountpoint:/var/lib/kubelet/pods/642e5115-b7f2-4561-bc6b-1a74b6d891c4/volumes/kubernetes.io~projected/kube-api-access-dzpnw major:0 minor:760 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/642e5115-b7f2-4561-bc6b-1a74b6d891c4/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/642e5115-b7f2-4561-bc6b-1a74b6d891c4/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:755 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x major:0 minor:745 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls major:0 minor:742 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~projected/kube-api-access-rjd5j:{mountpoint:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~projected/kube-api-access-rjd5j major:0 minor:188 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/etcd-client major:0 minor:184 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/serving-cert major:0 
minor:160 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld major:0 minor:779 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:780 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~projected/kube-api-access-5dpp2:{mountpoint:/var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~projected/kube-api-access-5dpp2 major:0 minor:190 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~secret/serving-cert major:0 minor:176 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~projected/kube-api-access-5v65g:{mountpoint:/var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~projected/kube-api-access-5v65g major:0 minor:599 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~secret/encryption-config major:0 minor:597 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~secret/etcd-client major:0 minor:596 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~secret/serving-cert major:0 minor:598 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74b2561b-933b-4c58-a63a-7a8c671d0ae9/volumes/kubernetes.io~projected/kube-api-access-kx9vc:{mountpoint:/var/lib/kubelet/pods/74b2561b-933b-4c58-a63a-7a8c671d0ae9/volumes/kubernetes.io~projected/kube-api-access-kx9vc major:0 minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74b2561b-933b-4c58-a63a-7a8c671d0ae9/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/74b2561b-933b-4c58-a63a-7a8c671d0ae9/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:465 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78be97a3-18d1-4962-804f-372974dc8ccc/volumes/kubernetes.io~projected/kube-api-access-wzlnz:{mountpoint:/var/lib/kubelet/pods/78be97a3-18d1-4962-804f-372974dc8ccc/volumes/kubernetes.io~projected/kube-api-access-wzlnz major:0 minor:533 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/78be97a3-18d1-4962-804f-372974dc8ccc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/78be97a3-18d1-4962-804f-372974dc8ccc/volumes/kubernetes.io~secret/serving-cert major:0 minor:374 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4/volumes/kubernetes.io~projected/kube-api-access-zdxgd:{mountpoint:/var/lib/kubelet/pods/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4/volumes/kubernetes.io~projected/kube-api-access-zdxgd major:0 
minor:757 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f/volumes/kubernetes.io~projected/kube-api-access major:0 minor:771 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80d3b238-70c3-4e71-96a1-99405352033f/volumes/kubernetes.io~projected/kube-api-access-rxbdv:{mountpoint:/var/lib/kubelet/pods/80d3b238-70c3-4e71-96a1-99405352033f/volumes/kubernetes.io~projected/kube-api-access-rxbdv major:0 minor:432 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/822e1750-652e-4ceb-8fea-b2c1c905b0f1/volumes/kubernetes.io~projected/kube-api-access-djfsw:{mountpoint:/var/lib/kubelet/pods/822e1750-652e-4ceb-8fea-b2c1c905b0f1/volumes/kubernetes.io~projected/kube-api-access-djfsw major:0 minor:1012 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/86c571b6-0f65-41f0-b1be-f63d7a974782/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/86c571b6-0f65-41f0-b1be-f63d7a974782/volumes/kubernetes.io~projected/kube-api-access major:0 minor:774 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~projected/kube-api-access-xvwzr:{mountpoint:/var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~projected/kube-api-access-xvwzr major:0 minor:203 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~secret/serving-cert major:0 minor:154 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/volumes/kubernetes.io~projected/ca-certs major:0 minor:639 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/volumes/kubernetes.io~projected/kube-api-access-7p9ld:{mountpoint:/var/lib/kubelet/pods/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/volumes/kubernetes.io~projected/kube-api-access-7p9ld major:0 minor:647 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:640 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:198 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/kube-api-access-t24jh:{mountpoint:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/kube-api-access-t24jh major:0 minor:204 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~secret/metrics-tls major:0 minor:471 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/970d4376-f299-412c-a8ee-90aa980c689e/volumes/kubernetes.io~projected/kube-api-access-hqstc:{mountpoint:/var/lib/kubelet/pods/970d4376-f299-412c-a8ee-90aa980c689e/volumes/kubernetes.io~projected/kube-api-access-hqstc major:0 minor:189 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~projected/kube-api-access-f42cr:{mountpoint:/var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~projected/kube-api-access-f42cr major:0 minor:200 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~secret/serving-cert major:0 minor:151 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 major:0 minor:212 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:187 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a6d86b04-1d3f-4f27-a262-b732c1295997/volumes/kubernetes.io~projected/kube-api-access-lxhk5:{mountpoint:/var/lib/kubelet/pods/a6d86b04-1d3f-4f27-a262-b732c1295997/volumes/kubernetes.io~projected/kube-api-access-lxhk5 major:0 minor:999 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g:{mountpoint:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g major:0 minor:427 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm:{mountpoint:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes/kubernetes.io~projected/kube-api-access-xmk2b:{mountpoint:/var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes/kubernetes.io~projected/kube-api-access-xmk2b major:0 minor:202 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes/kubernetes.io~secret/webhook-certs major:0 minor:531 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl major:0 minor:146 
fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:142 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5:{mountpoint:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 major:0 minor:135 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~secret/metrics-certs major:0 minor:532 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b1b4fccc-6bf6-47ac-8ae1-32cad23734da/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b1b4fccc-6bf6-47ac-8ae1-32cad23734da/volumes/kubernetes.io~projected/kube-api-access major:0 minor:717 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg:{mountpoint:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg major:0 minor:211 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:718 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert major:0 minor:715 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2511146-1d04-4ecd-a28e-79662ef7b9d3/volumes/kubernetes.io~projected/kube-api-access-hnshv:{mountpoint:/var/lib/kubelet/pods/c2511146-1d04-4ecd-a28e-79662ef7b9d3/volumes/kubernetes.io~projected/kube-api-access-hnshv major:0 minor:857 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c2511146-1d04-4ecd-a28e-79662ef7b9d3/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c2511146-1d04-4ecd-a28e-79662ef7b9d3/volumes/kubernetes.io~secret/serving-cert major:0 minor:856 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c303189e-adae-4fe2-8dd7-cc9b80f73e66/volumes/kubernetes.io~projected/kube-api-access-v2s8l:{mountpoint:/var/lib/kubelet/pods/c303189e-adae-4fe2-8dd7-cc9b80f73e66/volumes/kubernetes.io~projected/kube-api-access-v2s8l major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:519 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp major:0 minor:515 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n major:0 minor:520 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52:{mountpoint:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 major:0 minor:785 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~secret/proxy-tls major:0 minor:784 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cc9a20f4-255a-4312-8f43-174a28c06340/volumes/kubernetes.io~projected/kube-api-access-qwh24:{mountpoint:/var/lib/kubelet/pods/cc9a20f4-255a-4312-8f43-174a28c06340/volumes/kubernetes.io~projected/kube-api-access-qwh24 major:0 minor:595 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~projected/kube-api-access major:0 minor:207 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~secret/serving-cert major:0 minor:177 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d1524fc1-d157-435a-8bf8-7e877c45909d/volumes/kubernetes.io~projected/kube-api-access-nrzjr:{mountpoint:/var/lib/kubelet/pods/d1524fc1-d157-435a-8bf8-7e877c45909d/volumes/kubernetes.io~projected/kube-api-access-nrzjr major:0 minor:560 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d1524fc1-d157-435a-8bf8-7e877c45909d/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/d1524fc1-d157-435a-8bf8-7e877c45909d/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:506 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9859457-f0d1-4754-a6c5-cf05d5abf447/volumes/kubernetes.io~projected/kube-api-access-t4gl5:{mountpoint:/var/lib/kubelet/pods/d9859457-f0d1-4754-a6c5-cf05d5abf447/volumes/kubernetes.io~projected/kube-api-access-t4gl5 major:0 minor:199 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d9859457-f0d1-4754-a6c5-cf05d5abf447/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/d9859457-f0d1-4754-a6c5-cf05d5abf447/volumes/kubernetes.io~secret/metrics-tls major:0 minor:366 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~projected/kube-api-access-fhcw6:{mountpoint:/var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~projected/kube-api-access-fhcw6 major:0 minor:604 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~secret/encryption-config major:0 minor:603 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~secret/etcd-client major:0 minor:602 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~secret/serving-cert major:0 minor:601 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67:{mountpoint:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 major:0 minor:208 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:529 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e1a7c783-2e23-4284-b648-147984cf1022/volumes/kubernetes.io~projected/kube-api-access-2cjmj:{mountpoint:/var/lib/kubelet/pods/e1a7c783-2e23-4284-b648-147984cf1022/volumes/kubernetes.io~projected/kube-api-access-2cjmj major:0 minor:895 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e1a7c783-2e23-4284-b648-147984cf1022/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e1a7c783-2e23-4284-b648-147984cf1022/volumes/kubernetes.io~secret/serving-cert major:0 minor:888 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~projected/kube-api-access-xr8t6:{mountpoint:/var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~projected/kube-api-access-xr8t6 major:0 minor:192 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~secret/serving-cert major:0 minor:186 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e73ee493-de15-44c2-bd51-e12fcbb27a15/volumes/kubernetes.io~projected/kube-api-access-57xvt:{mountpoint:/var/lib/kubelet/pods/e73ee493-de15-44c2-bd51-e12fcbb27a15/volumes/kubernetes.io~projected/kube-api-access-57xvt major:0 minor:767 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e73ee493-de15-44c2-bd51-e12fcbb27a15/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/e73ee493-de15-44c2-bd51-e12fcbb27a15/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:762 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e73ee493-de15-44c2-bd51-e12fcbb27a15/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/e73ee493-de15-44c2-bd51-e12fcbb27a15/volumes/kubernetes.io~secret/webhook-cert major:0 minor:761 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~projected/kube-api-access major:0 minor:197 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~secret/serving-cert major:0 minor:155 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~projected/kube-api-access-dptnc:{mountpoint:/var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~projected/kube-api-access-dptnc major:0 minor:193 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~secret/serving-cert major:0 minor:180 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee84198d-6357-4429-a90c-455c3850a788/volumes/kubernetes.io~projected/kube-api-access-tbq2b:{mountpoint:/var/lib/kubelet/pods/ee84198d-6357-4429-a90c-455c3850a788/volumes/kubernetes.io~projected/kube-api-access-tbq2b major:0 minor:838 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ee84198d-6357-4429-a90c-455c3850a788/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/ee84198d-6357-4429-a90c-455c3850a788/volumes/kubernetes.io~secret/cert major:0 minor:834 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3beb7bf-922f-425d-8a19-fd407a7153a8/volumes/kubernetes.io~projected/kube-api-access-qhz6z:{mountpoint:/var/lib/kubelet/pods/f3beb7bf-922f-425d-8a19-fd407a7153a8/volumes/kubernetes.io~projected/kube-api-access-qhz6z major:0 minor:783 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz:{mountpoint:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz major:0 minor:887 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~secret/proxy-tls major:0 minor:882 fsType:tmpfs blockSize:0} overlay_0-1003:{mountpoint:/var/lib/containers/storage/overlay/0d8904b440f2f91e5671de3162d384a4cbd2db9cf85279aeb361ff630f5deac1/merged major:0 minor:1003 fsType:overlay blockSize:0} overlay_0-1010:{mountpoint:/var/lib/containers/storage/overlay/6dd2fa3a116514592b47e17c871dff7758cf3552c4cf6a0541e6a00182c662c1/merged major:0 minor:1010 fsType:overlay blockSize:0} overlay_0-1015:{mountpoint:/var/lib/containers/storage/overlay/2c4e2c257badad8733a214a45bfd171779886b1ca41da71cead59bb75ccbe17f/merged major:0 minor:1015 fsType:overlay blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/299d5db622db39f92d58928871ced643987ca227c08e8caffed59ed33bee6dc4/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1030:{mountpoint:/var/lib/containers/storage/overlay/91790cde5f60cb74ddc1b1ab41684d77e17dc17fd4484d441fdc2cb9e0990b12/merged major:0 minor:1030 fsType:overlay blockSize:0} overlay_0-1032:{mountpoint:/var/lib/containers/storage/overlay/f0e5b88cfa1dce5d2613e7fa771dc916ade65173a737ce52110bf6f8981980f7/merged major:0 minor:1032 fsType:overlay blockSize:0} overlay_0-1034:{mountpoint:/var/lib/containers/storage/overlay/b49d30a7a64bb6715bf2c0df6c3f7870288b0a66f6f7cc7232822e68c259b6c3/merged major:0 minor:1034 fsType:overlay blockSize:0} overlay_0-1037:{mountpoint:/var/lib/containers/storage/overlay/8a3cb11e435cad6183df5cecf98d617bbf682a9915f5a98666db203028bc34f7/merged major:0 minor:1037 fsType:overlay blockSize:0} overlay_0-112:{mountpoint:/var/lib/containers/storage/overlay/ce81bbecbcc863a95ed0d10f27172967bc7bbe367409b4ecdf486fe896eb3d02/merged major:0 minor:112 fsType:overlay blockSize:0} overlay_0-114:{mountpoint:/var/lib/containers/storage/overlay/240fbd3480795448881e3185229bb5ea2f53a6772328ecb7637f0edd311280b0/merged major:0 minor:114 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/a5d08108c9c597dc05436c7894a3410e90d4ed23214da40f1960d5a943f5b451/merged major:0 
minor:116 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/3b83e5b8c015be60f8807049c4a40c9efb971a4240b73f8d970c3262dea3fae9/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/36df33378cc0b59e98add0ea326634525c0feac09aa99b30d154f5aa997adb28/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-123:{mountpoint:/var/lib/containers/storage/overlay/7759418e25dbe997a51aab5e46e101eabdafb1836c9e3752c1131d01ecca82c4/merged major:0 minor:123 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/29a2374e228420896b7f30eb355ef8c1cb5f374dd23a1a355e03c40a33e522b0/merged major:0 minor:133 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/d733d2aa9e7b209a944a430010e0f22f2d487aa5a3561f6410522a2de89b2558/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/9c5ec6bd02e00979482116988d1b306ffae5e0ebc398bb3e78d6abc6e28f5d62/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/43fa2383e928d5696d56446d213c47bf85788cf4aba45a066c9e4009dfb4527f/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/2b4f28a6ed575a266c31138e7629277c9490bb458d8a976a4cb6850e54a55a41/merged major:0 minor:149 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/1d255e08b7c9baf9634ab59c50f063480d1a9fc4593debd7871eca18732b4738/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-162:{mountpoint:/var/lib/containers/storage/overlay/c2ff895be99f92ccf878588023dd7c7541852816b81b5b175cb6b462033a1875/merged major:0 minor:162 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/428278196b465f9736e75e77fa99c57e448b3415cbe04ee18157ed1473b6c610/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/27e8d9a9aa9a0b69e28767a9145087cae598a6ade3fa40026578198fe4e9b0de/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/44204d75a1494103ada9c6cb43832da1b72d3d4a24fc7fe71a116ed9fd5e0053/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/2d38b5784933ede16604ba6338d79bc805d0f4127afefae2688c1e78f8ed0342/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-178:{mountpoint:/var/lib/containers/storage/overlay/adb7c441e93f719fa2a6e861f64c376bd9f1ea6f9f7c320efd7002b516c99bf0/merged major:0 minor:178 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/ec4e929eaf7048ed063da2d34bbf9678ff3a29a20ee79486633d8b6ea7ea3606/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-213:{mountpoint:/var/lib/containers/storage/overlay/4441d044f3dca641184c1599aa07d2e375fd56e92208ca91a5a1766dc658a7f4/merged major:0 minor:213 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/84eb34ca24454ba6ab143b219301a344fa41bfc3e6933bbd3079d41939ce966f/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-220:{mountpoint:/var/lib/containers/storage/overlay/d342ae8422eeb409e769043cd29a26a61c7f48664e1c32bbaf7cffed2a130d0e/merged major:0 minor:220 fsType:overlay blockSize:0} 
overlay_0-229:{mountpoint:/var/lib/containers/storage/overlay/db861271223daad05feed50aeb8d265d40680f9091c856c41485e8ce6c97e87f/merged major:0 minor:229 fsType:overlay blockSize:0} overlay_0-233:{mountpoint:/var/lib/containers/storage/overlay/ba662e2cc19a324502f74b7a07aa7e03c56d5d2ccef0323325f5c3668d7404fd/merged major:0 minor:233 fsType:overlay blockSize:0} overlay_0-235:{mountpoint:/var/lib/containers/storage/overlay/66c225a6916a9c8a947b2aa323b73fd82a26be7b38370dbee342855b2c1d8194/merged major:0 minor:235 fsType:overlay blockSize:0} overlay_0-240:{mountpoint:/var/lib/containers/storage/overlay/8f63b48314bb4bda0d7f27b15ddd2a76df73759a10dc4c8fc1f54be447f5e426/merged major:0 minor:240 fsType:overlay blockSize:0} overlay_0-245:{mountpoint:/var/lib/containers/storage/overlay/86bd6f1f647ad44ab51cf62d8fca17302a2594b45f63dd7c8c27345540dac403/merged major:0 minor:245 fsType:overlay blockSize:0} overlay_0-250:{mountpoint:/var/lib/containers/storage/overlay/abac6e0d1bb99825429d7282b13a5c2988af0913ef7cc2696c5df950e2a942ee/merged major:0 minor:250 fsType:overlay blockSize:0} overlay_0-255:{mountpoint:/var/lib/containers/storage/overlay/2b39a72e49d1da69ed2a99ff4ab8e9da393360ce8b42c0cfb7cdf17d15d58b3f/merged major:0 minor:255 fsType:overlay blockSize:0} overlay_0-260:{mountpoint:/var/lib/containers/storage/overlay/da70de85edeeaad62089327e90d51985a066903d1f5f280b6b643699410f7b79/merged major:0 minor:260 fsType:overlay blockSize:0} overlay_0-265:{mountpoint:/var/lib/containers/storage/overlay/a69049b3edf8a87c1686e831899d662ac7adebed0c52e8f63153f58fdbaa91c5/merged major:0 minor:265 fsType:overlay blockSize:0} overlay_0-270:{mountpoint:/var/lib/containers/storage/overlay/76ee37cb963ce0292e358b39316bcedd6b0b6521f691a5a3d9a795d4790f0be9/merged major:0 minor:270 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/2cff0aacf676f37753cb56cc7eab4c57b131264f250182900ae88309ac58cde5/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/18c1f9eee7d37fb48d410e39e01faf9ccaa53efd6e3a5bcecbe4471eb380f910/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/8f37690ca7fa757339ce9d364bf14b458b1d51805557c8df990d3458e664987d/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/d0c40a2f0a4af382dda361085134ee98d5c558550c83b449c8e53246c46a972e/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/4b07bcf55655096c5047253d96dca3371494f580f5bfe25ee8718a2cb503bc31/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/9e6baea84ceeb7d0eef954d517afeb3ea512303ae5f2246ae292acf57939fa15/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/475b19fbf2528b81709e07cbf639a3aa6e7853c3f20346b52f552cdacfa25596/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/818c90e2e877c77e2d88df68aeb94819307b02584060ffcefd8dee6cb5c3915a/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/cd07d52935600dd6452e5a409db54885260e02b66813f07c412ed0bd6c2b7ac3/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/08b1469eb210c641a532196fdff472a586b1af1ecfb1a684a1a78d18ab89ff2e/merged 
major:0 minor:317 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/f8b6378b399c05f470eda4fea51aa2abff2521cfec002fa2227a007c45c030bf/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/f0081dbfc14ac87641c84830d07932cba6bf4abed98fc9fe001ad21b51bf3632/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-333:{mountpoint:/var/lib/containers/storage/overlay/fe082c95fb3fb318eefa2ed56ccb274bd7771102a16fd1ef05713a7929916850/merged major:0 minor:333 fsType:overlay blockSize:0} overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/f015eff68f9fb578c4818f6ea865e6aee92861f7e20d01e9851bbbf127ac0563/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-342:{mountpoint:/var/lib/containers/storage/overlay/e37ed7d2de76aa90e2bab8f6d10b5e63df8efc10d72c646ae1529d79feedc254/merged major:0 minor:342 fsType:overlay blockSize:0} overlay_0-345:{mountpoint:/var/lib/containers/storage/overlay/0bd5d8db5ae3eaeee975924d021183d5322f01775ed60f23315d67888a4de726/merged major:0 minor:345 fsType:overlay blockSize:0} overlay_0-347:{mountpoint:/var/lib/containers/storage/overlay/e031b605975c923d2bf4be3a485ee514c9965b833d397a8259ff3a2613fae1bb/merged major:0 minor:347 fsType:overlay blockSize:0} overlay_0-349:{mountpoint:/var/lib/containers/storage/overlay/8d2cc8c7bdef474e075127e354e0878538e0a26cb073cdd65c66469f07554577/merged major:0 minor:349 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/cfe49923389ab9c3faffef71f263229db7d12022c66f5421b495136cc6f969c8/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/b09d2449c1a3c895c7305063a3df5a44eb57a1395a63a5d953c4a9523d95abf2/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-355:{mountpoint:/var/lib/containers/storage/overlay/d1eb9975ede7106d370ea4aa5d9e4e0f11263922dcf6f59aabb9e9a851777c39/merged major:0 minor:355 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/4db3f440cdac4dba8aa04880a9835907bb9be84c7eb0129ec73ee6bc7aef298f/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-359:{mountpoint:/var/lib/containers/storage/overlay/54f43f0cf2134131d13f60486cf880c2e811c34f551c4511f9ff9908670b45d2/merged major:0 minor:359 fsType:overlay blockSize:0} overlay_0-361:{mountpoint:/var/lib/containers/storage/overlay/febd1e0bf295b46ba2601f60b3f3b13119f5b8a64dfcce613a3c5aa9af3dd5d2/merged major:0 minor:361 fsType:overlay blockSize:0} overlay_0-369:{mountpoint:/var/lib/containers/storage/overlay/ea9dc292d37beae68b768200d6faec79dde73a80d15dba89badce35e25709288/merged major:0 minor:369 fsType:overlay blockSize:0} overlay_0-377:{mountpoint:/var/lib/containers/storage/overlay/63b1b2bcdf19672efa02706ac3982215a5ef4eaf654be9380ecb16e42ada4c37/merged major:0 minor:377 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/1e25b66fe6ceca1dccfefef272dd4710ca6f00838d612111f2fc135d6a2433ad/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-401:{mountpoint:/var/lib/containers/storage/overlay/ea7a00ebfeea9c74ae7d9a2b98a9762753c6599ed3893c8951d1c62c49c7a221/merged major:0 minor:401 fsType:overlay blockSize:0} overlay_0-403:{mountpoint:/var/lib/containers/storage/overlay/c889b7393bb92eeb007d2e09ba0d72c9faaebc32e4f2ebd5bcadc02be491bf93/merged major:0 minor:403 fsType:overlay blockSize:0} 
overlay_0-405:{mountpoint:/var/lib/containers/storage/overlay/d5c7bb2bcba729348f2d91e6a2102d80a73ef6f26b723ad973477810e3fcb151/merged major:0 minor:405 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/6ad93cda0a4c36908d10ed13c5fd65365766c0bc33e5f483e6aab4ee1068cebd/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/309eb2cd2a0615b0e652f523b3dce331dbe9f1e8725743adfe0fc3649a5f0fb1/merged major:0 minor:409 fsType:overlay blockSize:0} overlay_0-419:{mountpoint:/var/lib/containers/storage/overlay/5586634a8cc493bb3eec2e9b60cdc01b1be4a015872718ab25ab5e60671eebb4/merged major:0 minor:419 fsType:overlay blockSize:0} overlay_0-421:{mountpoint:/var/lib/containers/storage/overlay/a07091bb63a7a51a48a43f3e7d2c2f1fea2c9004cf47abd4c40e2f494bdc0e0e/merged major:0 minor:421 fsType:overlay blockSize:0} overlay_0-435:{mountpoint:/var/lib/containers/storage/overlay/ffefbb391192fe841b07ef1ecc65acb0d18ed385dd69177ac18092a76d90d76c/merged major:0 minor:435 fsType:overlay blockSize:0} overlay_0-441:{mountpoint:/var/lib/containers/storage/overlay/e0448cc6e483365bf3d126a6a00b819693f974097bf5456b953aa2c572921b32/merged major:0 minor:441 fsType:overlay blockSize:0} overlay_0-443:{mountpoint:/var/lib/containers/storage/overlay/96cd646009c93e922d642c8a4b73184d7c4c35bfc03083ae545ded0d8261ee88/merged major:0 minor:443 fsType:overlay blockSize:0} overlay_0-445:{mountpoint:/var/lib/containers/storage/overlay/81a9aecc63fd1771714fe799e77e46fc37817b80d6a979ff75e637a09a7430d6/merged major:0 minor:445 fsType:overlay blockSize:0} overlay_0-455:{mountpoint:/var/lib/containers/storage/overlay/a6e6a06b59796015ba782eba5443e234dc4f69eb1647e76da76535bb7758d942/merged major:0 minor:455 fsType:overlay blockSize:0} overlay_0-457:{mountpoint:/var/lib/containers/storage/overlay/0d983d215ba50775438552d09e76081dd496e2217e9f0a18e318739ae94f1872/merged major:0 minor:457 fsType:overlay blockSize:0} overlay_0-459:{mountpoint:/var/lib/containers/storage/overlay/f07e7d2d556dc4d928cdc1996195784a224f41607d5bbd78ff3c4128773a1ce6/merged major:0 minor:459 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/e036e1579af4bf1f3b75c25ff983ccbe4e7d2e2e2e0dee660e06a3999b42a2c4/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-461:{mountpoint:/var/lib/containers/storage/overlay/85b5f0d86bbf9feac84c774edb37de76d3951dad626a82da067010c78fbf1996/merged major:0 minor:461 fsType:overlay blockSize:0} overlay_0-464:{mountpoint:/var/lib/containers/storage/overlay/c479a46df152e60a63be282caf031c4406cb289666dd644865161f8962a68fe2/merged major:0 minor:464 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/1a704c7ce5b7e7c23d2dcf60a9a4aa3a78233c063ce33db0f2bd4f81a8f4d6a7/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-481:{mountpoint:/var/lib/containers/storage/overlay/4c68dbc5d6751311ebf4d12820d162a817f229ff2129fc32408ff2b126ffbb6e/merged major:0 minor:481 fsType:overlay blockSize:0} overlay_0-483:{mountpoint:/var/lib/containers/storage/overlay/f7bfc23b24b9900782de45128f132066f13d3fe5011701a38ce0996895559933/merged major:0 minor:483 fsType:overlay blockSize:0} overlay_0-485:{mountpoint:/var/lib/containers/storage/overlay/8a52754559353db869933eecf5ecec3e377dc979c220d892319bf4f929502745/merged major:0 minor:485 fsType:overlay blockSize:0} overlay_0-487:{mountpoint:/var/lib/containers/storage/overlay/9ad93e20af7fc62960eecd788c9455461c54ea253b4191f77080c2db32777af8/merged 
major:0 minor:487 fsType:overlay blockSize:0} overlay_0-492:{mountpoint:/var/lib/containers/storage/overlay/4abd79b32486ede59fd37e10b85a73b30376867e74ab6e8b11df53bd8e9a3796/merged major:0 minor:492 fsType:overlay blockSize:0} overlay_0-493:{mountpoint:/var/lib/containers/storage/overlay/d263332b5c00dd3bcafce428042275a483623a5dafb02d422bad695b654c69cd/merged major:0 minor:493 fsType:overlay blockSize:0} overlay_0-502:{mountpoint:/var/lib/containers/storage/overlay/6b0886c1478e18caaf34013c82f92b988d07d7e2dba9fb193034ebc380e5af02/merged major:0 minor:502 fsType:overlay blockSize:0} overlay_0-507:{mountpoint:/var/lib/containers/storage/overlay/c7d1a3e2f0f4bf00c029eee68a72ed481b6f495d504cd9960a1afb72de0331fb/merged major:0 minor:507 fsType:overlay blockSize:0} overlay_0-509:{mountpoint:/var/lib/containers/storage/overlay/fc090ad0fd809ca6aee6a4c00dd64802d0fbf3bd9729ce0ec4c12d3189e0a0a0/merged major:0 minor:509 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/b6579114c0b8bd65796c72e3b7a472be1e21e3856425a83313186fa9a77ab9ae/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-521:{mountpoint:/var/lib/containers/storage/overlay/7a9b098ebf1a74c61333bab460e7fd78d30ea75cb3282f66183060c83c902540/merged major:0 minor:521 fsType:overlay blockSize:0} overlay_0-523:{mountpoint:/var/lib/containers/storage/overlay/fc673b95c70092564001a29c89c65f0c72fdfac2d15217044c14d77cbc9c53fc/merged major:0 minor:523 fsType:overlay blockSize:0} overlay_0-535:{mountpoint:/var/lib/containers/storage/overlay/5829be45af7915f000969f0c71159a56ab0d1044dc12893bdfd29cf0be76da4b/merged major:0 minor:535 fsType:overlay blockSize:0} overlay_0-547:{mountpoint:/var/lib/containers/storage/overlay/92559bcee54c053d1f1563c8a6cbe4b2f5ce64164277d80901401f99ae24545c/merged major:0 minor:547 fsType:overlay blockSize:0} overlay_0-553:{mountpoint:/var/lib/containers/storage/overlay/742ee1c0bf65ee730c53e145987d7482b95281dd547379fa31511cf30fbed0d6/merged major:0 minor:553 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/5892f8fd8563a7556060a1d1dc18b5a5050d99d83120cfb93e6048162f37e9b7/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-562:{mountpoint:/var/lib/containers/storage/overlay/90540c75c1e21188bc510020a56fdac40e8f067192ae22a7e8f04a20841ad238/merged major:0 minor:562 fsType:overlay blockSize:0} overlay_0-564:{mountpoint:/var/lib/containers/storage/overlay/eb4d3e538ac1e33ba94d23a070ab36c05491433961846f06dd7def777d43ac9f/merged major:0 minor:564 fsType:overlay blockSize:0} overlay_0-566:{mountpoint:/var/lib/containers/storage/overlay/789b2e22ccff052f081040598b934f63315b90a43ce7894bf07704d7e2dd3c75/merged major:0 minor:566 fsType:overlay blockSize:0} overlay_0-573:{mountpoint:/var/lib/containers/storage/overlay/124a6356ba1c6541853d99dd667cdd3499034bd23e2969404c8e8faab6613e6c/merged major:0 minor:573 fsType:overlay blockSize:0} overlay_0-582:{mountpoint:/var/lib/containers/storage/overlay/c766f652068a379906a8565c198743a4687e8a02844e5778d28a784a669b82f6/merged major:0 minor:582 fsType:overlay blockSize:0} overlay_0-584:{mountpoint:/var/lib/containers/storage/overlay/ca21fe6b7d811b7c8318e8b504d511c38378709c33680bc1944011171336b147/merged major:0 minor:584 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/e09d1db4a47874e29f4f0b5e5bb565034881f034c7b3cf3a45c76e13eb16515f/merged major:0 minor:60 fsType:overlay blockSize:0} 
overlay_0-605:{mountpoint:/var/lib/containers/storage/overlay/1af01e29fb32eb582d5f0aaa7e2ec72355fe63a98e5687725b7c7b8444660b70/merged major:0 minor:605 fsType:overlay blockSize:0} overlay_0-609:{mountpoint:/var/lib/containers/storage/overlay/f9ec6c905a499e6169b43ddf7a0ee25f7fc74f2154558a5fd0743d55a3d4fbdd/merged major:0 minor:609 fsType:overlay blockSize:0} overlay_0-611:{mountpoint:/var/lib/containers/storage/overlay/51c8d2391db0aef400a5131c2f765079730ad22a805c39801752e1352ce9b111/merged major:0 minor:611 fsType:overlay blockSize:0} overlay_0-613:{mountpoint:/var/lib/containers/storage/overlay/3ea6bcf7af677b4a0e361606018080819706c3f6c638139e00bf6f9394350702/merged major:0 minor:613 fsType:overlay blockSize:0} overlay_0-615:{mountpoint:/var/lib/containers/storage/overlay/a3dfa002d25aa60c0eaeef0246faede9188311722a958525524466f9aaa7de4e/merged major:0 minor:615 fsType:overlay blockSize:0} overlay_0-618:{mountpoint:/var/lib/containers/storage/overlay/aae71661b8fc1b004b1c2cc7c8687dde9da8caee4e0d0545b550d3fcc84cfdbc/merged major:0 minor:618 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/a0a9765bac0fa4bee18dec3955765a829fa19c3fe212136ef13d36b9d31e5956/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-624:{mountpoint:/var/lib/containers/storage/overlay/7bca151fa6076b76e0af4508e465e02f126948608dd33f4181ce62616e9b2ff6/merged major:0 minor:624 fsType:overlay blockSize:0} overlay_0-637:{mountpoint:/var/lib/containers/storage/overlay/b04671abe19d6b9d14dacb67f65fbd9bcac7fe59861080308eab044d2705d60a/merged major:0 minor:637 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/ea602b2ce4f19a4134a3e28d7d12ee6feb55d2e6fa7f7d8493aea527c49d2b64/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-654:{mountpoint:/var/lib/containers/storage/overlay/dcf1c12b8f23f4cd3897ca06ef608254a4321abbcba0c70c4695e7aa959cace0/merged major:0 minor:654 fsType:overlay blockSize:0} overlay_0-656:{mountpoint:/var/lib/containers/storage/overlay/55c1b3a5787b8530c88b55504a4f3c8c08de5a5ee29dc9725f15fa01ad7b2f99/merged major:0 minor:656 fsType:overlay blockSize:0} overlay_0-658:{mountpoint:/var/lib/containers/storage/overlay/d62fad3896e6fbdf0c1f54e99bb99b7703a21cdf48a89f1a88d9760ecace49ec/merged major:0 minor:658 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/56734c3dd736245d685600d2f315dcaf0c64a5aac28d64e2f6b176280d3b76e6/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-660:{mountpoint:/var/lib/containers/storage/overlay/ed52998c6fdc84ad283c124776efc8d9e70a616daa66d25a1275573444dc9be9/merged major:0 minor:660 fsType:overlay blockSize:0} overlay_0-662:{mountpoint:/var/lib/containers/storage/overlay/2c4d77e95837533ec007c15b61fc27f9755212ebbcfb090735ebd374b544ea4a/merged major:0 minor:662 fsType:overlay blockSize:0} overlay_0-664:{mountpoint:/var/lib/containers/storage/overlay/856f708efab5b180c01a0bab773822adf7d650ad955240413e0f29f43232c9cb/merged major:0 minor:664 fsType:overlay blockSize:0} overlay_0-668:{mountpoint:/var/lib/containers/storage/overlay/78e98c754a37b9513962e2d862e444bdecdd37e0dfe5af677f44a4fcbca1902d/merged major:0 minor:668 fsType:overlay blockSize:0} overlay_0-674:{mountpoint:/var/lib/containers/storage/overlay/938bdf2a2ae95ad988e33ed619f101eb1c0ee379b7bc8dbcef5d591353bf6bbb/merged major:0 minor:674 fsType:overlay blockSize:0} overlay_0-676:{mountpoint:/var/lib/containers/storage/overlay/950a1957e5db6888024ad3b89b4070ca022819ee4e4e1ab1b4f7ddf640c42dbb/merged 
major:0 minor:676 fsType:overlay blockSize:0} overlay_0-684:{mountpoint:/var/lib/containers/storage/overlay/4ce24b60a2c4386cdbe3f572d772e69407622d7aa68c0116c38c18fd0080c610/merged major:0 minor:684 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/fb3c93ae8a43895c10d31cb87b587ddb7e1ae413c6ac26bfb35c40ae37b1cffb/merged major:0 minor:686 fsType:overlay blockSize:0} overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/ed41644de5b186c89b9e2d0af21f9d40df1133d901a406a04c0c53044cabab26/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-691:{mountpoint:/var/lib/containers/storage/overlay/979205db40c169c28f1a505ebfec70564f4f4d7ac0a00c49dc26031959be8fdf/merged major:0 minor:691 fsType:overlay blockSize:0} overlay_0-695:{mountpoint:/var/lib/containers/storage/overlay/d9c4698fba4a5fcd946fed9dc18cc29592156e2887b3e58ff0f8d55b84e091b1/merged major:0 minor:695 fsType:overlay blockSize:0} overlay_0-720:{mountpoint:/var/lib/containers/storage/overlay/2d08a62191383d581cfc58338a4b6c6980159b9ef89a1a40668155ca83dd6fbf/merged major:0 minor:720 fsType:overlay blockSize:0} overlay_0-728:{mountpoint:/var/lib/containers/storage/overlay/6c83246c4ce3743501a5bddfd6d68e62d785c139bc7d4e66f9830e26c7d7a36e/merged major:0 minor:728 fsType:overlay blockSize:0} overlay_0-730:{mountpoint:/var/lib/containers/storage/overlay/ad2872eaf5b18811f100694bf50375a712d1351fca71a1e521674565b758c42f/merged major:0 minor:730 fsType:overlay blockSize:0} overlay_0-735:{mountpoint:/var/lib/containers/storage/overlay/5c6d1814f291ad3fc4d4d0057c528820d3fe1bb011cc60601734061dfe01803b/merged major:0 minor:735 fsType:overlay blockSize:0} overlay_0-737:{mountpoint:/var/lib/containers/storage/overlay/d0bc1ca27c56ee875b03a870f09c07ebf4c82f8ead40d615a3634d13609c9a39/merged major:0 minor:737 fsType:overlay blockSize:0} overlay_0-739:{mountpoint:/var/lib/containers/storage/overlay/59d3acbac88483f6ba844eb6791e9405029feccebae6a78260e55bfcc2202fd8/merged major:0 minor:739 fsType:overlay blockSize:0} overlay_0-743:{mountpoint:/var/lib/containers/storage/overlay/a36ed9173a3af33e7cb031be542da0e92e17b9178a6bd462bdbc0533864ec798/merged major:0 minor:743 fsType:overlay blockSize:0} overlay_0-746:{mountpoint:/var/lib/containers/storage/overlay/9c8a0ec8b19cb9357fdea6ea3581f6e7dbcc499c33d3f6886c4004e68465997b/merged major:0 minor:746 fsType:overlay blockSize:0} overlay_0-750:{mountpoint:/var/lib/containers/storage/overlay/3960a98672b49514aa8bed09d9aec0b13216bc752ca5a8d84eb108bb6a884cb5/merged major:0 minor:750 fsType:overlay blockSize:0} overlay_0-751:{mountpoint:/var/lib/containers/storage/overlay/68920a657a1e773a67886661f70cd3081a84281d7480f39f56e84a1b18da9f3b/merged major:0 minor:751 fsType:overlay blockSize:0} overlay_0-753:{mountpoint:/var/lib/containers/storage/overlay/bb74d7699f04ed88c4b9be26a8ebc83397a4484eef58d7c6c3c1c3c827dc8fc0/merged major:0 minor:753 fsType:overlay blockSize:0} overlay_0-758:{mountpoint:/var/lib/containers/storage/overlay/4ee5d083e0d315945273b7e9ce3294291d2f9adada6f5fdab473a3cb3edf17d0/merged major:0 minor:758 fsType:overlay blockSize:0} overlay_0-763:{mountpoint:/var/lib/containers/storage/overlay/a974cc6c223a93816ce4174fbc6a45c3cd10dec9149fb3c90ce791167b75db23/merged major:0 minor:763 fsType:overlay blockSize:0} overlay_0-777:{mountpoint:/var/lib/containers/storage/overlay/9edde55f1dcc5c255769fb09f6fbbc63185de5cb8e9d515d3e3f4a672d99166e/merged major:0 minor:777 fsType:overlay blockSize:0} 
overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/7fbdb50c7e6683ea976ed49e8d06713ab3dec49be60b888c74dfee03dcbedc03/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-789:{mountpoint:/var/lib/containers/storage/overlay/c89be8ae38d55b6eaa04dce71d0794fa1f3acda41df26cfe07a966aa2e556928/merged major:0 minor:789 fsType:overlay blockSize:0} overlay_0-792:{mountpoint:/var/lib/containers/storage/overlay/e3a51897de8561b8e433fc822c6ffbf4befbbbd0fef28425a9db8f8a18d24917/merged major:0 minor:792 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/ae3f6996321420d811aac85edcd1dcbb17543fd35884df4fac988b2b72986313/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-810:{mountpoint:/var/lib/containers/storage/overlay/8999246bfec13f6573b513061df2d75572aaf71cdc827410fd7aab2ea3c26c30/merged major:0 minor:810 fsType:overlay blockSize:0} overlay_0-816:{mountpoint:/var/lib/containers/storage/overlay/d9cd71f976fd6686a3fb858466340de3dfddc8f22ed10d463e0eb2f0792abdb5/merged major:0 minor:816 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/e614a45cca8f764badc3b965c7fcabc141d5f1ad4d52797fd9d3013def7bd6ec/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-820:{mountpoint:/var/lib/containers/storage/overlay/5126f11d69e741614a57318ed626cd6f43be6c4fc5c4b4e53764db9e9a1eeefc/merged major:0 minor:820 fsType:overlay blockSize:0} overlay_0-825:{mountpoint:/var/lib/containers/storage/overlay/018598bb4089a52a1e58e577dee9de8bc0e3fc69ecced161e3e589165e40e505/merged major:0 minor:825 fsType:overlay blockSize:0} overlay_0-827:{mountpoint:/var/lib/containers/storage/overlay/e4d629043ac936b0cc12548e339691e1b8b9e15b9919107d3c3c7d0bce643431/merged major:0 minor:827 fsType:overlay blockSize:0} overlay_0-846:{mountpoint:/var/lib/containers/storage/overlay/0c9691d36140684de554f47c5025a0c28348a43e60883c44f79661f926e44156/merged major:0 minor:846 fsType:overlay blockSize:0} overlay_0-848:{mountpoint:/var/lib/containers/storage/overlay/a7528af98d2bcbaf6f2335c9b7eace092361f341933d3c91c3a20b883804c720/merged major:0 minor:848 fsType:overlay blockSize:0} overlay_0-85:{mountpoint:/var/lib/containers/storage/overlay/d5a8854703fe26d5c961318fa3ec1e1aa70e4e244fe6d1af7a5187013aed1c73/merged major:0 minor:85 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/eaea4f26bf1074639c1bdf8db52070810156a245b67f9dba0c3226c1ba373b56/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-874:{mountpoint:/var/lib/containers/storage/overlay/aa5d438d7803916d45d868df4d4230ebfa20f352b26f8915b1d7106d1a235617/merged major:0 minor:874 fsType:overlay blockSize:0} overlay_0-876:{mountpoint:/var/lib/containers/storage/overlay/aec75f9bef72c85794ac8bfba6f36e2404406a697af1b32846bebe5316d9731e/merged major:0 minor:876 fsType:overlay blockSize:0} overlay_0-878:{mountpoint:/var/lib/containers/storage/overlay/9402b51d2145e80f5334a908117970c385c959fb212fd1829584c99a2b470e41/merged major:0 minor:878 fsType:overlay blockSize:0} overlay_0-880:{mountpoint:/var/lib/containers/storage/overlay/b415caf35db976e760b6b756db7943da63126846b705588f0279ed815d403bc2/merged major:0 minor:880 fsType:overlay blockSize:0} overlay_0-893:{mountpoint:/var/lib/containers/storage/overlay/aa0076cb830230816f779ccdd7fccd7f925aeeb890c3b2821f5cd64cc310d7a4/merged major:0 minor:893 fsType:overlay blockSize:0} overlay_0-900:{mountpoint:/var/lib/containers/storage/overlay/e5e60ab6bc8992475c31fed5519c6cd7f328f27302dcd68fdb5f222af3303461/merged major:0 
minor:900 fsType:overlay blockSize:0} overlay_0-908:{mountpoint:/var/lib/containers/storage/overlay/7e7f2990f6ad70170840675607780b90da13d700b1ed7bd6df861d5e1be13f6d/merged major:0 minor:908 fsType:overlay blockSize:0} overlay_0-911:{mountpoint:/var/lib/containers/storage/overlay/ce6a5cc410b250da8e8f15a194152487464c9c37a77444691d6ca34005312454/merged major:0 minor:911 fsType:overlay blockSize:0} overlay_0-912:{mountpoint:/var/lib/containers/storage/overlay/1d4950bbab7632e222979712e0230618caa24db6e2ef0e3f302047b29a7944a4/merged major:0 minor:912 fsType:overlay blockSize:0} overlay_0-914:{mountpoint:/var/lib/containers/storage/overlay/c78c5c13b69d8ba4564768b3a10750a231a262e32b9cd4f803ad70ab5698af57/merged major:0 minor:914 fsType:overlay blockSize:0} overlay_0-916:{mountpoint:/var/lib/containers/storage/overlay/a3d06ffc30f9c2cdc9603d19509f01086dded639a71dae2e999e7866cfab7c07/merged major:0 minor:916 fsType:overlay blockSize:0} overlay_0-918:{mountpoint:/var/lib/containers/storage/overlay/e94bfa7bbdc3373465a5f56c8c506a4f5dffc53669a7b9552003e29841233b64/merged major:0 minor:918 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/35d012802d7fbd7b4f0bd596b648020e4e37d52b6358e55a34fef7140ca50d96/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-920:{mountpoint:/var/lib/containers/storage/overlay/6d525181155c6c2dd02145e3ead5b679ebdc7e0568132238510c9601dca7243f/merged major:0 minor:920 fsType:overlay blockSize:0} overlay_0-928:{mountpoint:/var/lib/containers/storage/overlay/631c78be8617c023ebbc3d3b4b937d83faaa8bc9a0b8d18ef7c78b4dae08e304/merged major:0 minor:928 fsType:overlay blockSize:0} overlay_0-930:{mountpoint:/var/lib/containers/storage/overlay/7ef1a80e8dc84a30192971518e1c8bedb36b311546d359bbb60e8482556802b1/merged major:0 minor:930 fsType:overlay blockSize:0} overlay_0-944:{mountpoint:/var/lib/containers/storage/overlay/deb090b5c3f89e2a688858923874261a78b5d13c2ae20052750899f100ae048a/merged major:0 minor:944 fsType:overlay blockSize:0} overlay_0-950:{mountpoint:/var/lib/containers/storage/overlay/98b19491635ff792e77ea20f478b0181c35c8750fea71b7921a8c3143ae42953/merged major:0 minor:950 fsType:overlay blockSize:0} overlay_0-959:{mountpoint:/var/lib/containers/storage/overlay/51580932b12925374fe5327359774c0a2a5197fcc39185c03f3afb524608da83/merged major:0 minor:959 fsType:overlay blockSize:0} overlay_0-961:{mountpoint:/var/lib/containers/storage/overlay/52933cd0aefb4a15da444b232135bc58fffd41ca27d16f51579bd9da6628005e/merged major:0 minor:961 fsType:overlay blockSize:0} overlay_0-963:{mountpoint:/var/lib/containers/storage/overlay/e45278b1afa5676c6322d3d25d23da00a8b98a3ba74556b3f32cede943c20cdd/merged major:0 minor:963 fsType:overlay blockSize:0} overlay_0-965:{mountpoint:/var/lib/containers/storage/overlay/20cc50e3028ab56318164cb10b6636053a6362079964e45299b754681b54cf66/merged major:0 minor:965 fsType:overlay blockSize:0} overlay_0-97:{mountpoint:/var/lib/containers/storage/overlay/34c4ff1d03a2c67584c836ebfec3ee99f0e1fc13a41354f3a97d87c9a1ef3e39/merged major:0 minor:97 fsType:overlay blockSize:0} overlay_0-978:{mountpoint:/var/lib/containers/storage/overlay/146cce84a11bb88f9201ce71289fce467e97c94e27ae55c4b3772d086bb79355/merged major:0 minor:978 fsType:overlay blockSize:0} overlay_0-986:{mountpoint:/var/lib/containers/storage/overlay/f9609f7048aa42a0716150f7535094afdb660ed358528c8ba6bd3de82ebdb8b7/merged major:0 minor:986 fsType:overlay blockSize:0} 
overlay_0-988:{mountpoint:/var/lib/containers/storage/overlay/b85185e21da5f29729311cd849d3ff532bdc309c214e815e9c91421b694b3818/merged major:0 minor:988 fsType:overlay blockSize:0}] Feb 16 17:02:00.996983 master-0 kubenswrapper[15493]: I0216 17:02:00.995311 15493 manager.go:217] Machine: {Timestamp:2026-02-16 17:02:00.993688408 +0000 UTC m=+0.143861518 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514149376 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f BootID:4b8043a6-19a9-42c4-a3dd-d330b8dbba91 Filesystems:[{Device:/var/lib/kubelet/pods/5a275679-b7b6-4c28-b389-94cd2b014d6c/volumes/kubernetes.io~projected/kube-api-access-pmbll DeviceMajor:0 DeviceMinor:866 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:784 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x DeviceMajor:0 DeviceMinor:107 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-355 DeviceMajor:0 DeviceMinor:355 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-848 DeviceMajor:0 DeviceMinor:848 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-566 DeviceMajor:0 DeviceMinor:566 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-654 DeviceMajor:0 DeviceMinor:654 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-662 DeviceMajor:0 DeviceMinor:662 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:177 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/18e9a9d3-9b18-4c19-9558-f33c68101922/volumes/kubernetes.io~projected/kube-api-access-6bbcf DeviceMajor:0 DeviceMinor:191 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/48fe704f6b9f25810dcd5004b13a7c413fb8fc4a4e972dfe51f7142aa16f0fee/userdata/shm DeviceMajor:0 DeviceMinor:439 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:463 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/0393fe12-2533-4c9c-a8e4-a58003c88f36/volumes/kubernetes.io~projected/kube-api-access-p5rwv DeviceMajor:0 DeviceMinor:721 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257074688 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~projected/kube-api-access-xvwzr DeviceMajor:0 DeviceMinor:203 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/78be97a3-18d1-4962-804f-372974dc8ccc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:374 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-737 DeviceMajor:0 DeviceMinor:737 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-911 DeviceMajor:0 DeviceMinor:911 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:437 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8d9325183d87d503ed41689fd08cf0ecd5e5cd428a5bae6824cddf556b030e2a/userdata/shm DeviceMajor:0 DeviceMinor:477 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/18e9a9d3-9b18-4c19-9558-f33c68101922/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:530 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-502 DeviceMajor:0 DeviceMinor:502 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e1a7c783-2e23-4284-b648-147984cf1022/volumes/kubernetes.io~projected/kube-api-access-2cjmj DeviceMajor:0 DeviceMinor:895 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-959 DeviceMajor:0 DeviceMinor:959 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a6d86b04-1d3f-4f27-a262-b732c1295997/volumes/kubernetes.io~projected/kube-api-access-lxhk5 DeviceMajor:0 DeviceMinor:999 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-435 DeviceMajor:0 DeviceMinor:435 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/631bd21307bb03e494699eafea36a1ef9835bef8edb1d35b0dd997809ea4fddf/userdata/shm DeviceMajor:0 DeviceMinor:479 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:601 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:515 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-930 DeviceMajor:0 DeviceMinor:930 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-377 DeviceMajor:0 DeviceMinor:377 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl DeviceMajor:0 DeviceMinor:166 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~projected/kube-api-access-2dxw9 DeviceMajor:0 DeviceMinor:205 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-342 DeviceMajor:0 DeviceMinor:342 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-493 DeviceMajor:0 DeviceMinor:493 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:715 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/857c4fe5a333ed5e586a39e2b18bf56aa0347e3ba01b64ce2cc93b89b2b270ec/userdata/shm DeviceMajor:0 DeviceMinor:542 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:197 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/905e4fdbfe2147706f13434bc2e5b9a3cbf884a588d29ecfca730d73382fb68f/userdata/shm DeviceMajor:0 DeviceMinor:433 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0c31bbb582da4a5c2f2c01e8ab5dbd9246ddce55c685733c6872e97a601d53de/userdata/shm DeviceMajor:0 DeviceMinor:1001 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d/userdata/shm DeviceMajor:0 DeviceMinor:337 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-613 DeviceMajor:0 DeviceMinor:613 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dd3ceeb0da8db938eae2cfa500166d7af7a50e381f011dcd54ec971db54cfcba/userdata/shm DeviceMajor:0 DeviceMinor:772 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-878 DeviceMajor:0 DeviceMinor:878 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~projected/kube-api-access-nqfds DeviceMajor:0 DeviceMinor:395 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/80d3b238-70c3-4e71-96a1-99405352033f/volumes/kubernetes.io~projected/kube-api-access-rxbdv DeviceMajor:0 DeviceMinor:432 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:603 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c2511146-1d04-4ecd-a28e-79662ef7b9d3/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:856 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd DeviceMajor:0 DeviceMinor:194 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-369 DeviceMajor:0 DeviceMinor:369 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-950 DeviceMajor:0 DeviceMinor:950 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bd5dcd2c4add7ffc4e409d02664e000a9abb556798c746bb479a7c76fa9d67b8/userdata/shm DeviceMajor:0 DeviceMinor:997 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b3cfd9fdcf09c3cddf191f185c0ac9e10d9889f1bf6c1c0a015d8feabfcf56b7/userdata/shm DeviceMajor:0 DeviceMinor:1000 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dc3b4571309a88f03db49c8f3410740df7ca0758d3a470ee04a34d6d5a032bdd/userdata/shm DeviceMajor:0 DeviceMinor:383 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-85 DeviceMajor:0 DeviceMinor:85 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/add87c6ecc390c11dd4bb671cf6c85cf8d43a3b5be958bf533ae60889482daca/userdata/shm DeviceMajor:0 DeviceMinor:538 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-637 DeviceMajor:0 DeviceMinor:637 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-730 DeviceMajor:0 DeviceMinor:730 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f6a17f679ed7a7fbe57a462f9ffd2577eef58e5ba226eff8515fa879120c4750/userdata/shm DeviceMajor:0 DeviceMinor:651 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4488757c-f0fd-48fa-a3f9-6373b0bcafe4/volumes/kubernetes.io~projected/kube-api-access-hh2cd DeviceMajor:0 DeviceMinor:843 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-114 DeviceMajor:0 DeviceMinor:114 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-270 DeviceMajor:0 DeviceMinor:270 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b595f395aac0332f79c685e4f9b8d1184bc8d65ea7662129777c88a2f4b6d75c/userdata/shm DeviceMajor:0 DeviceMinor:473 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf/userdata/shm DeviceMajor:0 DeviceMinor:397 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:471 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-521 DeviceMajor:0 DeviceMinor:521 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-564 DeviceMajor:0 DeviceMinor:564 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/75a3e91157092df61ab323caf67fd50fd02c9c52e83eb981207e27d0552a17af/userdata/shm DeviceMajor:0 DeviceMinor:109 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a51ce5dfcf0ae0215dfb9bb56d30c910b8c1e31cb77a303efaded16db5c0b84f/userdata/shm DeviceMajor:0 DeviceMinor:238 Capacity:67108864 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-345 DeviceMajor:0 DeviceMinor:345 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/62220aa5-4065-472c-8a17-c0a58942ab8a/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:860 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-928 DeviceMajor:0 DeviceMinor:928 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-739 DeviceMajor:0 DeviceMinor:739 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-876 DeviceMajor:0 DeviceMinor:876 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-750 DeviceMajor:0 DeviceMinor:750 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm DeviceMajor:0 DeviceMinor:127 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~projected/kube-api-access-5dpp2 DeviceMajor:0 DeviceMinor:190 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-846 DeviceMajor:0 DeviceMinor:846 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-553 DeviceMajor:0 DeviceMinor:553 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-656 DeviceMajor:0 DeviceMinor:656 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/822e1750-652e-4ceb-8fea-b2c1c905b0f1/volumes/kubernetes.io~projected/kube-api-access-djfsw DeviceMajor:0 DeviceMinor:1012 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1037 DeviceMajor:0 DeviceMinor:1037 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-573 DeviceMajor:0 DeviceMinor:573 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/80fdc50e531795c33b265621c0e851281169f624db10a2cc59cfc4a7fd66173e/userdata/shm DeviceMajor:0 DeviceMinor:844 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5a275679-b7b6-4c28-b389-94cd2b014d6c/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:865 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-746 DeviceMajor:0 DeviceMinor:746 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:187 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81/userdata/shm DeviceMajor:0 DeviceMinor:319 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-464 DeviceMajor:0 DeviceMinor:464 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g DeviceMajor:0 DeviceMinor:427 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x DeviceMajor:0 DeviceMinor:745 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw DeviceMajor:0 DeviceMinor:388 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n DeviceMajor:0 DeviceMinor:520 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-615 DeviceMajor:0 DeviceMinor:615 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290/userdata/shm DeviceMajor:0 DeviceMinor:299 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/34d279c74bd940d5ab6f0f7e4b7983d57ebc4d60ff3c8f38850791761b56d54b/userdata/shm DeviceMajor:0 DeviceMinor:607 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a6e1a17cdf628ad1d6c859dc2741c8e5533022bb4b1d4a9deacf8709bd53c33e/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:186 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg DeviceMajor:0 DeviceMinor:211 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-255 DeviceMajor:0 DeviceMinor:255 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ee84198d-6357-4429-a90c-455c3850a788/volumes/kubernetes.io~projected/kube-api-access-tbq2b DeviceMajor:0 DeviceMinor:838 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a58bacdfec2737e0bee779689b2175a4a02a49f390a08f9357e0306cdd9834c8/userdata/shm DeviceMajor:0 DeviceMinor:120 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:151 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6518a84f5d47511f5f25592fd8ed06e7ac0d8f38709f9e4fcd73acdf3eb6490c/userdata/shm DeviceMajor:0 DeviceMinor:591 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/volumes/kubernetes.io~projected/kube-api-access-7p9ld DeviceMajor:0 DeviceMinor:647 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:overlay_0-695 DeviceMajor:0 DeviceMinor:695 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/62220aa5-4065-472c-8a17-c0a58942ab8a/volumes/kubernetes.io~projected/kube-api-access-xtk9h DeviceMajor:0 DeviceMinor:862 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-893 DeviceMajor:0 DeviceMinor:893 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-361 DeviceMajor:0 DeviceMinor:361 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:550 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ee84198d-6357-4429-a90c-455c3850a788/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:834 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-684 DeviceMajor:0 DeviceMinor:684 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-753 DeviceMajor:0 DeviceMinor:753 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-758 DeviceMajor:0 DeviceMinor:758 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d90441ff6782f784fce85c87a44597213ff8f98913ae13bfc6b95e97a8d2532/userdata/shm DeviceMajor:0 DeviceMinor:559 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-691 DeviceMajor:0 DeviceMinor:691 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-743 DeviceMajor:0 DeviceMinor:743 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519/userdata/shm DeviceMajor:0 DeviceMinor:147 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/volumes/kubernetes.io~projected/kube-api-access-f42cr DeviceMajor:0 DeviceMinor:200 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-445 DeviceMajor:0 DeviceMinor:445 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-233 DeviceMajor:0 DeviceMinor:233 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1e81c2caa02917ae2e1efaeab30f34c00bb80423dce6819a41e6640d4fdc6d5/userdata/shm DeviceMajor:0 DeviceMinor:863 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-123 DeviceMajor:0 DeviceMinor:123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eaf7edff-0a89-4ac0-b9dd-511e098b5434/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:155 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-547 DeviceMajor:0 
DeviceMinor:547 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:771 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:882 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-900 DeviceMajor:0 DeviceMinor:900 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/69df1f56628ba74129afddfead0ccc135e8c4f4c22ab04aa82de33d67e1e6121/userdata/shm DeviceMajor:0 DeviceMinor:375 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-250 DeviceMajor:0 DeviceMinor:250 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7bcf62830ed108bcbaff872a01506e1cfaba1ae290ee01528f3fca2ecf257682/userdata/shm DeviceMajor:0 DeviceMinor:384 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:532 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1010 DeviceMajor:0 DeviceMinor:1010 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:201 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/54f29618-42c2-4270-9af7-7d82852d7cec/volumes/kubernetes.io~projected/kube-api-access-w4wht DeviceMajor:0 DeviceMinor:648 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/188e42e5-9f9c-42af-ba15-5548c4fa4b52/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:869 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-916 DeviceMajor:0 DeviceMinor:916 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:531 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e73ee493-de15-44c2-bd51-e12fcbb27a15/volumes/kubernetes.io~projected/kube-api-access-57xvt DeviceMajor:0 DeviceMinor:767 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-961 DeviceMajor:0 DeviceMinor:961 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1003 DeviceMajor:0 DeviceMinor:1003 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d9859457-f0d1-4754-a6c5-cf05d5abf447/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:366 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-720 DeviceMajor:0 DeviceMinor:720 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-162 DeviceMajor:0 DeviceMinor:162 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-333 DeviceMajor:0 DeviceMinor:333 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-457 DeviceMajor:0 DeviceMinor:457 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz DeviceMajor:0 DeviceMinor:887 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5/userdata/shm DeviceMajor:0 DeviceMinor:294 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-347 DeviceMajor:0 DeviceMinor:347 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/188e42e5-9f9c-42af-ba15-5548c4fa4b52/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:870 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e596a971faed7fb65d78d19abb83585c95e9a5de18c154df5de65c3d54692d18/userdata/shm DeviceMajor:0 DeviceMinor:891 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-265 DeviceMajor:0 DeviceMinor:265 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74b2561b-933b-4c58-a63a-7a8c671d0ae9/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:465 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:602 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc/userdata/shm DeviceMajor:0 DeviceMinor:505 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c2511146-1d04-4ecd-a28e-79662ef7b9d3/volumes/kubernetes.io~projected/kube-api-access-hnshv DeviceMajor:0 DeviceMinor:857 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1030 DeviceMajor:0 DeviceMinor:1030 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:780 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl DeviceMajor:0 DeviceMinor:146 Capacity:49335549952 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990/userdata/shm DeviceMajor:0 DeviceMinor:295 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/54f29618-42c2-4270-9af7-7d82852d7cec/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:649 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d1524fc1-d157-435a-8bf8-7e877c45909d/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:506 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-674 DeviceMajor:0 DeviceMinor:674 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d1524fc1-d157-435a-8bf8-7e877c45909d/volumes/kubernetes.io~projected/kube-api-access-nrzjr DeviceMajor:0 DeviceMinor:560 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/62220aa5-4065-472c-8a17-c0a58942ab8a/volumes/kubernetes.io~secret/profile-collector-cert DeviceMajor:0 DeviceMinor:861 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/188e42e5-9f9c-42af-ba15-5548c4fa4b52/volumes/kubernetes.io~projected/kube-api-access-25g7f DeviceMajor:0 DeviceMinor:871 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-359 DeviceMajor:0 DeviceMinor:359 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-421 DeviceMajor:0 DeviceMinor:421 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:598 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-624 DeviceMajor:0 DeviceMinor:624 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-777 DeviceMajor:0 DeviceMinor:777 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:756 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-483 DeviceMajor:0 DeviceMinor:483 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-562 DeviceMajor:0 DeviceMinor:562 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d377c24744b60cc35617b8e88be818c3a9283d990df16a27d5112c6aed9ce981/userdata/shm DeviceMajor:0 DeviceMinor:839 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:387 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/86c571b6-0f65-41f0-b1be-f63d7a974782/volumes/kubernetes.io~projected/kube-api-access 
DeviceMajor:0 DeviceMinor:774 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-792 DeviceMajor:0 DeviceMinor:792 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b1b4fccc-6bf6-47ac-8ae1-32cad23734da/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:717 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1032 DeviceMajor:0 DeviceMinor:1032 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:159 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-419 DeviceMajor:0 DeviceMinor:419 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-481 DeviceMajor:0 DeviceMinor:481 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~projected/kube-api-access-5v65g DeviceMajor:0 DeviceMinor:599 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d0a7109bce95d1a32301d6e84ffc12bd1d37b091b1ee1ee044686d1a38898e0f/userdata/shm DeviceMajor:0 DeviceMinor:807 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-825 DeviceMajor:0 DeviceMinor:825 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~projected/kube-api-access-rjd5j DeviceMajor:0 DeviceMinor:188 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes/kubernetes.io~projected/kube-api-access-xmk2b DeviceMajor:0 DeviceMinor:202 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/kube-api-access-t24jh DeviceMajor:0 DeviceMinor:204 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:529 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-485 DeviceMajor:0 DeviceMinor:485 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-112 DeviceMajor:0 DeviceMinor:112 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d020c902-2adb-4919-8dd9-0c2109830580/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:207 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 DeviceMajor:0 DeviceMinor:208 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/74ced4b4e3fdce2aecbb38a4d03ec1a93853cd8aa3de1fd3350c1e935e0a300f/userdata/shm DeviceMajor:0 DeviceMinor:297 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fbbdab5ef2164d5b878fbbf6c9e025a67f22281db5fb649f6dbfc4b829160d91/userdata/shm DeviceMajor:0 DeviceMinor:858 Capacity:67108864 Type:vfs Inodes:6166278 
HasInodes:true} {Device:overlay_0-443 DeviceMajor:0 DeviceMinor:443 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-609 DeviceMajor:0 DeviceMinor:609 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/960647e5dd274ec370d3ea843747f832b88bbc5e8bbea57e384a265bf5609dcc/userdata/shm DeviceMajor:0 DeviceMinor:872 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-908 DeviceMajor:0 DeviceMinor:908 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt DeviceMajor:0 DeviceMinor:68 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/62fc29f4-557f-4a75-8b78-6ca425c81b81/volumes/kubernetes.io~projected/kube-api-access-bs597 DeviceMajor:0 DeviceMinor:373 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bacf9b29c15cf47bbdc9ebe2fcba6bca3cfdec92ee8864175a5412cbbe3c9659/userdata/shm DeviceMajor:0 DeviceMinor:340 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-455 DeviceMajor:0 DeviceMinor:455 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b077d967ff0915e46adebbfea57fba17bebbd700385551a20b3c9d4bda18abd6/userdata/shm DeviceMajor:0 DeviceMinor:764 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-810 DeviceMajor:0 DeviceMinor:810 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-178 DeviceMajor:0 DeviceMinor:178 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/737fcc7d-d850-4352-9f17-383c85d5bc28/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:176 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e69d8c51-e2a6-4f61-9c26-072784f6cf40/volumes/kubernetes.io~projected/kube-api-access-xr8t6 DeviceMajor:0 DeviceMinor:192 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/29402454-a920-471e-895e-764235d16eb4/volumes/kubernetes.io~projected/kube-api-access-r9bv7 DeviceMajor:0 DeviceMinor:195 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-816 DeviceMajor:0 DeviceMinor:816 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-920 DeviceMajor:0 DeviceMinor:920 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/dce85b5e-6e92-4e0e-bee7-07b1a3634302/volumes/kubernetes.io~projected/kube-api-access-fhcw6 DeviceMajor:0 DeviceMinor:604 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/642e5115-b7f2-4561-bc6b-1a74b6d891c4/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:755 
Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e1a7c783-2e23-4284-b648-147984cf1022/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:888 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e73ee493-de15-44c2-bd51-e12fcbb27a15/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:762 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa/volumes/kubernetes.io~projected/kube-api-access-qfkd9 DeviceMajor:0 DeviceMinor:993 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 DeviceMajor:0 DeviceMinor:785 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810/userdata/shm DeviceMajor:0 DeviceMinor:108 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-260 DeviceMajor:0 DeviceMinor:260 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-492 DeviceMajor:0 DeviceMinor:492 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/642e5115-b7f2-4561-bc6b-1a74b6d891c4/volumes/kubernetes.io~projected/kube-api-access-dzpnw DeviceMajor:0 DeviceMinor:760 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f3beb7bf-922f-425d-8a19-fd407a7153a8/volumes/kubernetes.io~projected/kube-api-access-qhz6z DeviceMajor:0 DeviceMinor:783 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-235 DeviceMajor:0 DeviceMinor:235 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-658 DeviceMajor:0 DeviceMinor:658 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-880 DeviceMajor:0 DeviceMinor:880 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e73ee493-de15-44c2-bd51-e12fcbb27a15/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:761 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-965 DeviceMajor:0 DeviceMinor:965 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/970d4376-f299-412c-a8ee-90aa980c689e/volumes/kubernetes.io~projected/kube-api-access-hqstc DeviceMajor:0 DeviceMinor:189 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x DeviceMajor:0 DeviceMinor:196 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-584 DeviceMajor:0 DeviceMinor:584 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-874 DeviceMajor:0 DeviceMinor:874 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-618 DeviceMajor:0 DeviceMinor:618 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:639 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/82c53f2c6633be154a699c2073aab7e95ac6eb9355d0da7ab4ff59d5ab695ebf/userdata/shm DeviceMajor:0 DeviceMinor:726 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/cc9a20f4-255a-4312-8f43-174a28c06340/volumes/kubernetes.io~projected/kube-api-access-qwh24 DeviceMajor:0 DeviceMinor:595 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:198 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-523 DeviceMajor:0 DeviceMinor:523 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 DeviceMajor:0 DeviceMinor:368 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-441 DeviceMajor:0 DeviceMinor:441 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-664 DeviceMajor:0 DeviceMinor:664 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/040d7d0293a7b20224cd27a16c0bf2020794d17010ab130f879f9e5ce8511a88/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/442600dc-09b2-4fee-9f89-777296b2ee40/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:185 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-509 DeviceMajor:0 DeviceMinor:509 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/035c8af0-95f3-4ab6-939c-d7fa8bda40a3/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:589 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4488757c-f0fd-48fa-a3f9-6373b0bcafe4/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:841 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-1015 DeviceMajor:0 DeviceMinor:1015 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~projected/kube-api-access-dptnc DeviceMajor:0 DeviceMinor:193 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c/userdata/shm DeviceMajor:0 DeviceMinor:331 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-459 DeviceMajor:0 DeviceMinor:459 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1034 DeviceMajor:0 DeviceMinor:1034 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-487 DeviceMajor:0 DeviceMinor:487 Capacity:214143315968 Type:vfs Inodes:104594880
HasInodes:true} {Device:/run/containers/storage/overlay-containers/7c0822a4b748eb1f3f4a4167fcf68aef3951b37e78e3f357e137483a9da93da7/userdata/shm DeviceMajor:0 DeviceMinor:552 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fbcae0a406fe6ca88ac22ca96551fc1de219ee3e9705034ce16efd7971fc9fed/userdata/shm DeviceMajor:0 DeviceMinor:867 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de761a393a8022719aa1e6c89bc8322fd69d7f7150a0db01e39124de59cc355c/userdata/shm DeviceMajor:0 DeviceMinor:84 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2/userdata/shm DeviceMajor:0 DeviceMinor:168 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 DeviceMajor:0 DeviceMinor:212 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8e623376-9e14-4341-9dcf-7a7c218b6f9f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:154 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/95008d005493fc2ada0d9b7ff7c718284548b7f519269f9c8d8a7c1fae08fbf6/userdata/shm DeviceMajor:0 DeviceMinor:901 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-912 DeviceMajor:0 DeviceMinor:912 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a/userdata/shm DeviceMajor:0 DeviceMinor:775 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/837a858734b801f62c18bbc1ac1678d7076080812a795cc7c558fa08b748a43c/userdata/shm DeviceMajor:0 DeviceMinor:534 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/78be97a3-18d1-4962-804f-372974dc8ccc/volumes/kubernetes.io~projected/kube-api-access-wzlnz DeviceMajor:0 DeviceMinor:533 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/af61cb28f0ada5d7c4d2b6d4eb5d095894f589e163adb675a754c13d082c9ab9/userdata/shm DeviceMajor:0 DeviceMinor:398 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-728 DeviceMajor:0 DeviceMinor:728 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c303189e-adae-4fe2-8dd7-cc9b80f73e66/volumes/kubernetes.io~projected/kube-api-access-v2s8l DeviceMajor:0 DeviceMinor:243 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-349 DeviceMajor:0 DeviceMinor:349 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4/volumes/kubernetes.io~projected/kube-api-access-zdxgd DeviceMajor:0 DeviceMinor:757 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-668 DeviceMajor:0 DeviceMinor:668 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:718 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dadda19ba6587a75b418addc51b36c2e0a7c53f63977a60f6f393649c7b6d587/userdata/shm DeviceMajor:0 DeviceMinor:231 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-535 DeviceMajor:0 DeviceMinor:535 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2ab21ee08c6858b29e0d5402811c6a5058510ebfc99fdee4dceca48abf0ebb37/userdata/shm DeviceMajor:0 DeviceMinor:554 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:597 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/404c402a-705f-4352-b9df-b89562070d9c/volumes/kubernetes.io~projected/kube-api-access-vkqml DeviceMajor:0 DeviceMinor:766 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-403 DeviceMajor:0 DeviceMinor:403 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7390ccc6-dfbe-4f51-960c-7628f49bffb7/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:596 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c68b78ea048e3e05fd1fcd40eae1c2d97a33dc3cbf3cea258f66da49798e5912/userdata/shm DeviceMajor:0 DeviceMinor:650 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-827 DeviceMajor:0 DeviceMinor:827 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f669ceacf6e4215d33879fd75925e984def643e57187c462c685b966c75f2673/userdata/shm DeviceMajor:0 DeviceMinor:768 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:142 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-220 DeviceMajor:0 DeviceMinor:220 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-245 DeviceMajor:0 DeviceMinor:245 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 
Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-401 DeviceMajor:0 DeviceMinor:401 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:438 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-507 DeviceMajor:0 DeviceMinor:507 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-763 DeviceMajor:0 DeviceMinor:763 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5d39ed24-4301-4cea-8a42-a08f4ba8b479/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:889 Capacity:200003584 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-944 DeviceMajor:0 DeviceMinor:944 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-735 DeviceMajor:0 DeviceMinor:735 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4e51bba5-0ebe-4e55-a588-38b71548c605/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:161 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:519 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:640 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc/userdata/shm DeviceMajor:0 DeviceMinor:298 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-914 DeviceMajor:0 DeviceMinor:914 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a60e6d4793a7edacd573013c98b5733c94b908b9bbe63d3fd698f772eed289c7/userdata/shm DeviceMajor:0 DeviceMinor:537 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-611 DeviceMajor:0 DeviceMinor:611 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-789 DeviceMajor:0 DeviceMinor:789 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-751 DeviceMajor:0 DeviceMinor:751 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-97 DeviceMajor:0 DeviceMinor:97 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-240 DeviceMajor:0 DeviceMinor:240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-213 DeviceMajor:0 DeviceMinor:213 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/48801344-a48a-493e-aea4-19d998d0b708/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:365 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/404c402a-705f-4352-b9df-b89562070d9c/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:723 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/edbaac23-11f0-4bc7-a7ce-b593c774c0fa/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:180 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-405 DeviceMajor:0 DeviceMinor:405 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aa37dd5bc712a6e66397e0efcad2c702b51d3841d761278b212389b13ad668e0/userdata/shm DeviceMajor:0 DeviceMinor:748 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-820 DeviceMajor:0 DeviceMinor:820 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/45b31aa01f6a85e0dcf85670319be85b9e6d0c112d9bd0004ef655a9654d75f6/userdata/shm DeviceMajor:0 DeviceMinor:714 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-963 DeviceMajor:0 DeviceMinor:963 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:184 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-605 DeviceMajor:0 DeviceMinor:605 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4488757c-f0fd-48fa-a3f9-6373b0bcafe4/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:842 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a8742926579beb3bc6f4cff1fa7c25f0bdd68039ed37e9a331f3e110c7838ff1/userdata/shm DeviceMajor:0 DeviceMinor:716 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-918 DeviceMajor:0 DeviceMinor:918 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 DeviceMajor:0 DeviceMinor:135 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/84fbcf4f8c4afda2e79a3be2b73e332fc9f2d8ce27d17523a1712d76d3ce752e/userdata/shm DeviceMajor:0 DeviceMinor:324 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4c0337d0eb1672f3cea8f23ec0619b33320b69137b0d2e9bc66e94fe15bbe412/userdata/shm DeviceMajor:0 DeviceMinor:338 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-582 DeviceMajor:0 DeviceMinor:582 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:167 Capacity:49335549952 Type:vfs 
Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/d9859457-f0d1-4754-a6c5-cf05d5abf447/volumes/kubernetes.io~projected/kube-api-access-t4gl5 DeviceMajor:0 DeviceMinor:199 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f9d46acb28343da01106746c6081478cf22731025778bc59637f1078390a8865/userdata/shm DeviceMajor:0 DeviceMinor:540 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-660 DeviceMajor:0 DeviceMinor:660 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-676 DeviceMajor:0 DeviceMinor:676 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4b0c06fa22c4b9fdff535e3051201dc0bd36447aab43eba5f5549b527b9cff7e/userdata/shm DeviceMajor:0 DeviceMinor:890 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-978 DeviceMajor:0 DeviceMinor:978 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/74b2561b-933b-4c58-a63a-7a8c671d0ae9/volumes/kubernetes.io~projected/kube-api-access-kx9vc DeviceMajor:0 DeviceMinor:206 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:209 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-461 DeviceMajor:0 DeviceMinor:461 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:742 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dbac153ecd4a3f99cd0e69d237edceb4a48ab6c1d711adc4be460a302948a462/userdata/shm DeviceMajor:0 DeviceMinor:1016 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld DeviceMajor:0 DeviceMinor:779 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-988 DeviceMajor:0 DeviceMinor:988 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6b3e071c-1c62-489b-91c1-aef0d197f40b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:160 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-229 DeviceMajor:0 DeviceMinor:229 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-986 DeviceMajor:0 DeviceMinor:986 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0c31bbb582da4a5 MacAddress:ee:41:87:ec:c3:b7 Speed:10000 Mtu:8900} {Name:0cff847538436e1 MacAddress:be:68:a2:80:ca:17 Speed:10000 Mtu:8900} {Name:11ed7f8e3ea465f MacAddress:7a:82:99:f6:2e:7a Speed:10000 Mtu:8900} 
{Name:1a6fc168713ed89 MacAddress:7a:0e:4d:4c:4c:5b Speed:10000 Mtu:8900} {Name:1d90441ff6782f7 MacAddress:16:1c:1c:46:e7:84 Speed:10000 Mtu:8900} {Name:27e2fd204d60ad6 MacAddress:da:3b:cb:b9:7e:bc Speed:10000 Mtu:8900} {Name:2ab21ee08c6858b MacAddress:16:45:49:d2:0c:e4 Speed:10000 Mtu:8900} {Name:34d279c74bd940d MacAddress:be:9d:1d:69:5f:3c Speed:10000 Mtu:8900} {Name:45b31aa01f6a85e MacAddress:c2:06:fc:40:e4:81 Speed:10000 Mtu:8900} {Name:48fe704f6b9f258 MacAddress:e2:1b:c1:46:09:bc Speed:10000 Mtu:8900} {Name:4b0c06fa22c4b9f MacAddress:da:1a:8f:08:52:ed Speed:10000 Mtu:8900} {Name:4c0337d0eb1672f MacAddress:1a:a7:5e:16:83:86 Speed:10000 Mtu:8900} {Name:6518a84f5d47511 MacAddress:d2:25:83:f2:4f:7d Speed:10000 Mtu:8900} {Name:69df1f56628ba74 MacAddress:f6:d5:67:6d:c5:6f Speed:10000 Mtu:8900} {Name:74ced4b4e3fdce2 MacAddress:d6:ae:d8:0c:3a:1b Speed:10000 Mtu:8900} {Name:7bcf62830ed108b MacAddress:2a:bf:d3:9d:6b:11 Speed:10000 Mtu:8900} {Name:7c0822a4b748eb1 MacAddress:e6:de:e8:f3:db:d4 Speed:10000 Mtu:8900} {Name:7f3624c603b0a3a MacAddress:a2:cb:54:f2:75:55 Speed:10000 Mtu:8900} {Name:80fdc50e531795c MacAddress:66:d2:6d:68:d5:a6 Speed:10000 Mtu:8900} {Name:837a858734b801f MacAddress:1e:78:12:5d:c0:c6 Speed:10000 Mtu:8900} {Name:84fbcf4f8c4afda MacAddress:26:a6:2c:f0:0b:ba Speed:10000 Mtu:8900} {Name:8d9325183d87d50 MacAddress:42:77:f6:8e:98:d5 Speed:10000 Mtu:8900} {Name:905e4fdbfe21477 MacAddress:4e:f7:6a:51:29:8f Speed:10000 Mtu:8900} {Name:95008d005493fc2 MacAddress:1a:99:f3:6c:81:65 Speed:10000 Mtu:8900} {Name:95380b516961f94 MacAddress:c6:89:0b:dd:6a:15 Speed:10000 Mtu:8900} {Name:960647e5dd274ec MacAddress:06:52:b9:0a:c2:bc Speed:10000 Mtu:8900} {Name:99e6140d34fdb87 MacAddress:56:e1:ea:75:6d:e9 Speed:10000 Mtu:8900} {Name:a60e6d4793a7eda MacAddress:3e:a6:8f:b2:12:c9 Speed:10000 Mtu:8900} {Name:a8742926579beb3 MacAddress:36:76:83:23:a9:fb Speed:10000 Mtu:8900} {Name:aa37dd5bc712a6e MacAddress:4a:f0:c8:e7:bb:b8 Speed:10000 Mtu:8900} {Name:add87c6ecc390c1 MacAddress:b2:e7:ad:5f:4a:90 Speed:10000 Mtu:8900} {Name:af61cb28f0ada5d MacAddress:fe:2e:e5:e2:b6:0f Speed:10000 Mtu:8900} {Name:b077d967ff0915e MacAddress:36:ff:58:43:84:21 Speed:10000 Mtu:8900} {Name:b595f395aac0332 MacAddress:d6:6e:2c:35:00:c2 Speed:10000 Mtu:8900} {Name:bacf9b29c15cf47 MacAddress:12:82:fc:2d:93:84 Speed:10000 Mtu:8900} {Name:bd5dcd2c4add7ff MacAddress:e2:e2:4b:07:cc:ca Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:52:47:03:db:66:8a Speed:0 Mtu:8900} {Name:c5fa73884bbf6d8 MacAddress:82:62:d8:02:bd:8b Speed:10000 Mtu:8900} {Name:c68b78ea048e3e0 MacAddress:96:9c:36:ef:3c:f6 Speed:10000 Mtu:8900} {Name:cd7158aca6c004a MacAddress:e2:5b:03:c4:e2:c1 Speed:10000 Mtu:8900} {Name:d0a7109bce95d1a MacAddress:b6:2a:32:39:60:a1 Speed:10000 Mtu:8900} {Name:d377c24744b60cc MacAddress:b2:c1:28:59:40:84 Speed:10000 Mtu:8900} {Name:dbac153ecd4a3f9 MacAddress:da:a3:d2:86:7c:1b Speed:10000 Mtu:8900} {Name:de6b829dcdd7c76 MacAddress:16:b2:d8:e5:f1:d2 Speed:10000 Mtu:8900} {Name:e596a971faed7fb MacAddress:ce:33:20:61:82:e9 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:b9:e2 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:4a:2e:ce Speed:-1 Mtu:9000} {Name:f1e81c2caa02917 MacAddress:0a:20:78:57:ef:d1 Speed:10000 Mtu:8900} {Name:f4ce4120d8890f7 MacAddress:fe:10:91:7a:8d:25 Speed:10000 Mtu:8900} {Name:f669ceacf6e4215 MacAddress:ba:da:e5:44:de:8b Speed:10000 Mtu:8900} {Name:f6a17f679ed7a7f 
MacAddress:6e:67:8c:d6:88:17 Speed:10000 Mtu:8900} {Name:f9d46acb28343da MacAddress:be:9c:e1:32:ed:72 Speed:10000 Mtu:8900} {Name:fbbdab5ef2164d5 MacAddress:7e:35:4b:5f:34:33 Speed:10000 Mtu:8900} {Name:fbcae0a406fe6ca MacAddress:ea:c1:09:bc:d6:dc Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:6a:29:7e:a8:c6:78 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514149376 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] 
SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 17:02:00.997431 master-0 kubenswrapper[15493]: I0216 17:02:00.996891 15493 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 17:02:00.997431 master-0 kubenswrapper[15493]: I0216 17:02:00.997002 15493 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 17:02:00.997431 master-0 kubenswrapper[15493]: I0216 17:02:00.997292 15493 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 17:02:00.997696 master-0 kubenswrapper[15493]: I0216 17:02:00.997543 15493 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 17:02:00.997802 master-0 kubenswrapper[15493]: I0216 17:02:00.997580 15493 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 17:02:00.997861 master-0 kubenswrapper[15493]: I0216 17:02:00.997833 15493 
topology_manager.go:138] "Creating topology manager with none policy" Feb 16 17:02:00.997861 master-0 kubenswrapper[15493]: I0216 17:02:00.997846 15493 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 17:02:00.997861 master-0 kubenswrapper[15493]: I0216 17:02:00.997858 15493 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:02:00.997974 master-0 kubenswrapper[15493]: I0216 17:02:00.997886 15493 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:02:00.998209 master-0 kubenswrapper[15493]: I0216 17:02:00.998157 15493 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:02:00.998326 master-0 kubenswrapper[15493]: I0216 17:02:00.998264 15493 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 17:02:00.998370 master-0 kubenswrapper[15493]: I0216 17:02:00.998355 15493 kubelet.go:418] "Attempting to sync node with API server" Feb 16 17:02:00.998403 master-0 kubenswrapper[15493]: I0216 17:02:00.998369 15493 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 17:02:00.998403 master-0 kubenswrapper[15493]: I0216 17:02:00.998385 15493 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 17:02:00.998403 master-0 kubenswrapper[15493]: I0216 17:02:00.998401 15493 kubelet.go:324] "Adding apiserver pod source" Feb 16 17:02:00.998497 master-0 kubenswrapper[15493]: I0216 17:02:00.998421 15493 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 17:02:01.000730 master-0 kubenswrapper[15493]: I0216 17:02:01.000650 15493 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 17:02:01.000878 master-0 kubenswrapper[15493]: I0216 17:02:01.000851 15493 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
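[Editor's note] The "Creating Container Manager object based on Node Config" record above shows how this node's allocatable resources are constrained: "SystemReserved" carves out cpu=500m, memory=1Gi, and ephemeral-storage=1Gi, "KubeReserved" is null, and the hard eviction threshold for memory.available is 100Mi. Per the documented Kubernetes formula, Allocatable = Capacity - KubeReserved - SystemReserved - HardEvictionThreshold. The following is a minimal Go sketch of that arithmetic, not the kubelet's actual code; the node memory capacity (50514149376 bytes) is taken from the machine info dump above.

// allocatable.go: sketch of the node-allocatable memory calculation
// using the values logged in the NodeConfig above.
package main

import "fmt"

const (
	Mi = int64(1) << 20
	Gi = int64(1) << 30
)

// allocatableMemory applies the documented formula:
// Allocatable = Capacity - KubeReserved - SystemReserved - HardEviction.
func allocatableMemory(capacity, kubeReserved, systemReserved, hardEviction int64) int64 {
	return capacity - kubeReserved - systemReserved - hardEviction
}

func main() {
	capacity := int64(50514149376) // node memory from the Topology dump above
	systemReserved := 1 * Gi       // "SystemReserved":{"memory":"1Gi"}
	kubeReserved := int64(0)       // "KubeReserved":null
	hardEviction := 100 * Mi       // "memory.available" hard threshold "100Mi"

	fmt.Printf("allocatable memory: %d bytes\n",
		allocatableMemory(capacity, kubeReserved, systemReserved, hardEviction))
}

With these inputs the node advertises roughly 49.3 GiB of allocatable memory to the scheduler, which matches the ~49335549952-byte capacities reported for the tmpfs-backed secret and projected volumes throughout the filesystem dump above.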
Feb 16 17:02:01.001393 master-0 kubenswrapper[15493]: I0216 17:02:01.001317 15493 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 17:02:01.001911 master-0 kubenswrapper[15493]: I0216 17:02:01.001885 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 17:02:01.001995 master-0 kubenswrapper[15493]: I0216 17:02:01.001934 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 17:02:01.001995 master-0 kubenswrapper[15493]: I0216 17:02:01.001947 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 17:02:01.001995 master-0 kubenswrapper[15493]: I0216 17:02:01.001955 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 17:02:01.001995 master-0 kubenswrapper[15493]: I0216 17:02:01.001964 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 17:02:01.001995 master-0 kubenswrapper[15493]: I0216 17:02:01.001974 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 17:02:01.001995 master-0 kubenswrapper[15493]: I0216 17:02:01.001983 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 17:02:01.001995 master-0 kubenswrapper[15493]: I0216 17:02:01.001992 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 17:02:01.001995 master-0 kubenswrapper[15493]: I0216 17:02:01.002004 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 17:02:01.002314 master-0 kubenswrapper[15493]: I0216 17:02:01.002014 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 17:02:01.002314 master-0 kubenswrapper[15493]: I0216 17:02:01.002033 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 17:02:01.002314 master-0 kubenswrapper[15493]: I0216 17:02:01.002050 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 17:02:01.002314 master-0 kubenswrapper[15493]: I0216 17:02:01.002090 15493 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 17:02:01.002723 master-0 kubenswrapper[15493]: I0216 17:02:01.002599 15493 server.go:1280] "Started kubelet" Feb 16 17:02:01.002803 master-0 kubenswrapper[15493]: I0216 17:02:01.002767 15493 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 17:02:01.003331 master-0 kubenswrapper[15493]: I0216 17:02:01.003132 15493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 17:02:01.003331 master-0 kubenswrapper[15493]: I0216 17:02:01.003211 15493 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 16 17:02:01.003477 master-0 systemd[1]: Started Kubernetes Kubelet. 
Feb 16 17:02:01.006035 master-0 kubenswrapper[15493]: I0216 17:02:01.005994 15493 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:02:01.008276 master-0 kubenswrapper[15493]: I0216 17:02:01.008237 15493 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 17:02:01.008725 master-0 kubenswrapper[15493]: I0216 17:02:01.008684 15493 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:02:01.008983 master-0 kubenswrapper[15493]: I0216 17:02:01.008941 15493 server.go:449] "Adding debug handlers to kubelet server" Feb 16 17:02:01.015109 master-0 kubenswrapper[15493]: I0216 17:02:01.015074 15493 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 17:02:01.015193 master-0 kubenswrapper[15493]: I0216 17:02:01.015131 15493 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 17:02:01.015193 master-0 kubenswrapper[15493]: I0216 17:02:01.015145 15493 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 13:51:55.898038786 +0000 UTC Feb 16 17:02:01.015193 master-0 kubenswrapper[15493]: I0216 17:02:01.015181 15493 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h49m54.882860309s for next certificate rotation Feb 16 17:02:01.015193 master-0 kubenswrapper[15493]: I0216 17:02:01.015181 15493 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 17:02:01.015329 master-0 kubenswrapper[15493]: I0216 17:02:01.015207 15493 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 17:02:01.015329 master-0 kubenswrapper[15493]: I0216 17:02:01.015193 15493 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 16 17:02:01.016019 master-0 kubenswrapper[15493]: I0216 17:02:01.015942 15493 factory.go:55] Registering systemd factory Feb 16 17:02:01.016019 master-0 kubenswrapper[15493]: I0216 17:02:01.015970 15493 factory.go:221] Registration of the systemd container factory successfully Feb 16 17:02:01.016290 master-0 kubenswrapper[15493]: I0216 17:02:01.016262 15493 factory.go:153] Registering CRI-O factory Feb 16 17:02:01.016341 master-0 kubenswrapper[15493]: I0216 17:02:01.016293 15493 factory.go:221] Registration of the crio container factory successfully Feb 16 17:02:01.016402 master-0 kubenswrapper[15493]: I0216 17:02:01.016382 15493 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 17:02:01.016438 master-0 kubenswrapper[15493]: I0216 17:02:01.016415 15493 factory.go:103] Registering Raw factory Feb 16 17:02:01.016438 master-0 kubenswrapper[15493]: I0216 17:02:01.016432 15493 manager.go:1196] Started watching for new ooms in manager Feb 16 17:02:01.016964 master-0 kubenswrapper[15493]: I0216 17:02:01.016944 15493 manager.go:319] Starting recovery of all containers Feb 16 17:02:01.019176 master-0 kubenswrapper[15493]: I0216 17:02:01.019143 15493 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:02:01.022867 master-0 kubenswrapper[15493]: I0216 17:02:01.022817 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client" seLinuxMountContext="" Feb 16 17:02:01.022867 master-0 kubenswrapper[15493]: I0216 17:02:01.022864 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b1b4fccc-6bf6-47ac-8ae1-32cad23734da" volumeName="kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022876 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022885 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022894 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022902 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022912 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022933 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022943 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022952 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022960 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022968 
15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca" seLinuxMountContext="" Feb 16 17:02:01.022974 master-0 kubenswrapper[15493]: I0216 17:02:01.022976 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.022987 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.022995 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023003 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023012 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023022 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023030 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023038 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023046 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023054 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" 
volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023062 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023071 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023095 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023105 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023115 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="86c571b6-0f65-41f0-b1be-f63d7a974782" volumeName="kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023125 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023134 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023142 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023151 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023160 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023168 15493 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023178 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023187 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023209 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023217 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023226 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023235 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023246 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies" seLinuxMountContext="" Feb 16 17:02:01.023241 master-0 kubenswrapper[15493]: I0216 17:02:01.023254 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023262 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023272 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 
kubenswrapper[15493]: I0216 17:02:01.023280 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023288 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023296 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023305 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023313 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023321 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023330 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023339 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023348 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023361 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023370 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" 
volumeName="kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023380 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023389 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023398 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023407 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023416 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023425 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023433 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023442 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023450 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023459 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023466 15493 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023475 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023483 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023491 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023499 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023507 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023515 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023523 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023531 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023540 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023549 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt" 
seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023558 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023566 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023575 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023583 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023592 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023603 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023612 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023621 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023630 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023638 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023647 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023655 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023664 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023672 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023683 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023691 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023700 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023709 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023717 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023726 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023734 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 
17:02:01.023742 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023753 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5d39ed24-4301-4cea-8a42-a08f4ba8b479" volumeName="kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023761 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023775 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023786 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023798 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023809 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023820 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023835 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023846 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023855 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6d86b04-1d3f-4f27-a262-b732c1295997" 
volumeName="kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-catalog-content" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023901 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6d86b04-1d3f-4f27-a262-b732c1295997" volumeName="kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-utilities" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023914 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023937 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023946 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="970d4376-f299-412c-a8ee-90aa980c689e" volumeName="kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023955 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023963 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023976 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.023987 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.024001 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.024014 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.024027 15493 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config" seLinuxMountContext="" Feb 16 17:02:01.023994 master-0 kubenswrapper[15493]: I0216 17:02:01.024037 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024046 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024055 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024063 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024077 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024088 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024099 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024111 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024122 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024134 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 
17:02:01.024146 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024155 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024165 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024177 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" volumeName="kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-utilities" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024188 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024201 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" volumeName="kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024213 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024225 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024236 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024247 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024258 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" 
volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024269 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024282 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024293 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024305 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024317 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024328 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fe1c16d-061a-4a57-aea4-cf1d4b24d02f" volumeName="kubernetes.io/projected/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kube-api-access" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024341 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80d3b238-70c3-4e71-96a1-99405352033f" volumeName="kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024352 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config" seLinuxMountContext="" Feb 16 17:02:01.025998 master-0 kubenswrapper[15493]: I0216 17:02:01.024365 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle" seLinuxMountContext="" Feb 16 17:02:01.035221 master-0 kubenswrapper[15493]: I0216 17:02:01.035160 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:02:01.035221 master-0 kubenswrapper[15493]: I0216 17:02:01.035223 15493 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035251 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035264 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6d86b04-1d3f-4f27-a262-b732c1295997" volumeName="kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035274 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035291 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035301 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035317 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035328 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035337 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035352 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035362 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" 
volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035372 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035389 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls" seLinuxMountContext="" Feb 16 17:02:01.035420 master-0 kubenswrapper[15493]: I0216 17:02:01.035399 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035415 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035563 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035574 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c303189e-adae-4fe2-8dd7-cc9b80f73e66" volumeName="kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035590 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035600 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc29f4-557f-4a75-8b78-6ca425c81b81" volumeName="kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035615 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab6e5720-2c30-4962-9c67-89f1607d137f" volumeName="kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035627 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 
17:02:01.035637 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035654 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035665 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035680 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035691 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035700 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035717 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035728 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035741 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="035c8af0-95f3-4ab6-939c-d7fa8bda40a3" volumeName="kubernetes.io/projected/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kube-api-access" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035751 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035763 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" 
volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035779 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035789 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035803 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035814 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035825 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035840 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035851 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035862 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035875 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca" seLinuxMountContext="" Feb 16 17:02:01.035856 master-0 kubenswrapper[15493]: I0216 17:02:01.035887 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.035901 15493 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" volumeName="kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.035912 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.035936 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab6e5720-2c30-4962-9c67-89f1607d137f" volumeName="kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.035949 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.035959 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.035973 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.035984 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.035995 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" volumeName="kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-catalog-content" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036009 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036019 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036034 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5" seLinuxMountContext="" Feb 16 
17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036044 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036054 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036069 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036081 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036096 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036107 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036131 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036144 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036161 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036181 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036199 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036212 15493 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert" seLinuxMountContext="" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036222 15493 reconstruct.go:97] "Volume reconstruction finished" Feb 16 17:02:01.036632 master-0 kubenswrapper[15493]: I0216 17:02:01.036229 15493 reconciler.go:26] "Reconciler: start to sync state" Feb 16 17:02:01.051581 master-0 kubenswrapper[15493]: I0216 17:02:01.051521 15493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 17:02:01.053990 master-0 kubenswrapper[15493]: I0216 17:02:01.053963 15493 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 17:02:01.054087 master-0 kubenswrapper[15493]: I0216 17:02:01.054006 15493 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 17:02:01.054087 master-0 kubenswrapper[15493]: I0216 17:02:01.054029 15493 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 17:02:01.054188 master-0 kubenswrapper[15493]: E0216 17:02:01.054076 15493 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 17:02:01.056001 master-0 kubenswrapper[15493]: I0216 17:02:01.055971 15493 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:02:01.061436 master-0 kubenswrapper[15493]: I0216 17:02:01.061379 15493 generic.go:334] "Generic (PLEG): container finished" podID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerID="500d24f874646514d290aa65da48da18a395647cf9847d120c566c759fe02946" exitCode=0 Feb 16 17:02:01.074547 master-0 kubenswrapper[15493]: I0216 17:02:01.074494 15493 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e" exitCode=1 Feb 16 17:02:01.076768 master-0 kubenswrapper[15493]: I0216 17:02:01.076699 15493 generic.go:334] "Generic (PLEG): container finished" podID="6b3e071c-1c62-489b-91c1-aef0d197f40b" containerID="925f178f46a1d5c4c22dbeed05e4d6e9975a60d252305dcd17064d2bc8dfab6e" exitCode=0 Feb 16 17:02:01.091620 master-0 kubenswrapper[15493]: I0216 17:02:01.091575 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="b5e6e0c200ef6468da128fab1a901d498e73068beb07a54310f215479193099d" exitCode=0 Feb 16 17:02:01.091620 master-0 kubenswrapper[15493]: I0216 17:02:01.091608 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="96c8b16be41a61f78ae9a0d158764cfb3f1dc1be9541f6dde4356d45ed489d8c" exitCode=0 Feb 16 17:02:01.091620 master-0 kubenswrapper[15493]: I0216 17:02:01.091616 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="07ee05b11ab243298aba0652acab149107fdee4d056b25a8d70e009ebf722842" exitCode=0 Feb 16 17:02:01.091620 master-0 kubenswrapper[15493]: I0216 17:02:01.091624 15493 generic.go:334] "Generic (PLEG): container finished" 
podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="9b508704ca913b3676949d448345a8f778d17c4d3d7c7156e1db34b5da7a8c96" exitCode=0 Feb 16 17:02:01.091620 master-0 kubenswrapper[15493]: I0216 17:02:01.091631 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="6f850c8263f7a5fffe361664a6b474015b2a97155111509d5a8154875803d4f3" exitCode=0 Feb 16 17:02:01.091931 master-0 kubenswrapper[15493]: I0216 17:02:01.091638 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="f47270eadf232a1b51b70eb1069033d1ee831e9e2a83cf22e20d3b2db1ceb184" exitCode=0 Feb 16 17:02:01.106228 master-0 kubenswrapper[15493]: I0216 17:02:01.106168 15493 generic.go:334] "Generic (PLEG): container finished" podID="d020c902-2adb-4919-8dd9-0c2109830580" containerID="e310e36fd740b75515307293e697ecd768c9c8241ff939db071d778913f35a7a" exitCode=0 Feb 16 17:02:01.109408 master-0 kubenswrapper[15493]: I0216 17:02:01.109374 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 17:02:01.109734 master-0 kubenswrapper[15493]: I0216 17:02:01.109702 15493 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="6bb85739a7a836abfdb346023915e77abb0b10b023f88b2b7e7c9536a35657a8" exitCode=1 Feb 16 17:02:01.109734 master-0 kubenswrapper[15493]: I0216 17:02:01.109732 15493 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="aa5ecc6a98445fdbcf4dc0a764b1f3d8e109d603e5ddc36d010d08e31acfcc8f" exitCode=0 Feb 16 17:02:01.126531 master-0 kubenswrapper[15493]: I0216 17:02:01.126493 15493 generic.go:334] "Generic (PLEG): container finished" podID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerID="0c316f0475ab0d19308e3571553a8196d11f7628c2f61de84b97dea8ed48cf58" exitCode=0 Feb 16 17:02:01.132678 master-0 kubenswrapper[15493]: I0216 17:02:01.132631 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_7fe1c16d-061a-4a57-aea4-cf1d4b24d02f/installer/0.log" Feb 16 17:02:01.132744 master-0 kubenswrapper[15493]: I0216 17:02:01.132691 15493 generic.go:334] "Generic (PLEG): container finished" podID="7fe1c16d-061a-4a57-aea4-cf1d4b24d02f" containerID="90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99" exitCode=1 Feb 16 17:02:01.138031 master-0 kubenswrapper[15493]: I0216 17:02:01.134078 15493 generic.go:334] "Generic (PLEG): container finished" podID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" containerID="97d671c2a336b225236f0499e973eab6ef7683203f7b46f7e3767de75b466dd3" exitCode=0 Feb 16 17:02:01.138031 master-0 kubenswrapper[15493]: I0216 17:02:01.137380 15493 generic.go:334] "Generic (PLEG): container finished" podID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" containerID="8fa44b1ac9949e31fd12e8a885f114d1074a93f335ef9c428586ae9835e14643" exitCode=0 Feb 16 17:02:01.154261 master-0 kubenswrapper[15493]: E0216 17:02:01.154220 15493 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 17:02:01.160969 master-0 kubenswrapper[15493]: I0216 17:02:01.159981 15493 generic.go:334] "Generic (PLEG): container finished" podID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" containerID="8399cb1f8a954f603085247154bb48084a1a6283fe2b99aa8facab4cb78f381d" exitCode=0 Feb 16 
17:02:01.175302 master-0 kubenswrapper[15493]: I0216 17:02:01.175262 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9_edbaac23-11f0-4bc7-a7ce-b593c774c0fa/openshift-controller-manager-operator/0.log" Feb 16 17:02:01.175462 master-0 kubenswrapper[15493]: I0216 17:02:01.175325 15493 generic.go:334] "Generic (PLEG): container finished" podID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" containerID="58c88a445d8c10824c3855b7412ae17cbbff466b8394e38c4224ab694839c37d" exitCode=1 Feb 16 17:02:01.177342 master-0 kubenswrapper[15493]: I0216 17:02:01.177311 15493 generic.go:334] "Generic (PLEG): container finished" podID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerID="a650093628feaa4193c1b7c57ea685e55d5af706446f54a283f32836e6d703a9" exitCode=0 Feb 16 17:02:01.177342 master-0 kubenswrapper[15493]: I0216 17:02:01.177338 15493 generic.go:334] "Generic (PLEG): container finished" podID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerID="ff9b3b2992b50e55900986e351d7a1b84719ad88820b81ad374c423bd1f1a2a8" exitCode=0 Feb 16 17:02:01.178718 master-0 kubenswrapper[15493]: I0216 17:02:01.178686 15493 generic.go:334] "Generic (PLEG): container finished" podID="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" containerID="12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b" exitCode=0 Feb 16 17:02:01.201458 master-0 kubenswrapper[15493]: I0216 17:02:01.201382 15493 generic.go:334] "Generic (PLEG): container finished" podID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerID="dc9e8dbf3a74fb329eb23f61fe7acc2cbbecad6e0ad9994f107aa3c7b0c60d14" exitCode=0 Feb 16 17:02:01.208184 master-0 kubenswrapper[15493]: I0216 17:02:01.208143 15493 generic.go:334] "Generic (PLEG): container finished" podID="035c8af0-95f3-4ab6-939c-d7fa8bda40a3" containerID="8d78fa623e175273ca9fb1b430de0aa7e6c7b81ae465f33ce572879406853709" exitCode=0 Feb 16 17:02:01.214217 master-0 kubenswrapper[15493]: I0216 17:02:01.214061 15493 generic.go:334] "Generic (PLEG): container finished" podID="0393fe12-2533-4c9c-a8e4-a58003c88f36" containerID="923a71501b419dfeeea5a3bc9e6232ad282276a9f4cb4239a8c0e6dc182d5ef7" exitCode=0 Feb 16 17:02:01.216652 master-0 kubenswrapper[15493]: I0216 17:02:01.216612 15493 generic.go:334] "Generic (PLEG): container finished" podID="a6d86b04-1d3f-4f27-a262-b732c1295997" containerID="b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1" exitCode=0 Feb 16 17:02:01.219200 master-0 kubenswrapper[15493]: I0216 17:02:01.219159 15493 generic.go:334] "Generic (PLEG): container finished" podID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerID="a029a9519b0af6df58434184bb4dd337dec578276ce41db33a7f4964a78b38d1" exitCode=0 Feb 16 17:02:01.222592 master-0 kubenswrapper[15493]: I0216 17:02:01.222564 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv_4e51bba5-0ebe-4e55-a588-38b71548c605/cluster-olm-operator/0.log" Feb 16 17:02:01.223471 master-0 kubenswrapper[15493]: I0216 17:02:01.223441 15493 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="d0f7e8be40545fa33b748eaa6f879efc2d956e86b6534dcac117b6e66db8cbc2" exitCode=255 Feb 16 17:02:01.223471 master-0 kubenswrapper[15493]: I0216 17:02:01.223466 15493 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="a0dc239cad7cf5c0f46eaeb5867ad213f7711a1950bb1f960b003e867bacaff0" 
exitCode=0 Feb 16 17:02:01.223471 master-0 kubenswrapper[15493]: I0216 17:02:01.223474 15493 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="16a0cd95be2918fe98e0a8ede15fe5203c9e491ca6e96550b8c7ea95ff6081d2" exitCode=0 Feb 16 17:02:01.227615 master-0 kubenswrapper[15493]: I0216 17:02:01.227580 15493 generic.go:334] "Generic (PLEG): container finished" podID="9460ca0802075a8a6a10d7b3e6052c4d" containerID="e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098" exitCode=0 Feb 16 17:02:01.235361 master-0 kubenswrapper[15493]: I0216 17:02:01.235311 15493 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31" exitCode=0 Feb 16 17:02:01.235361 master-0 kubenswrapper[15493]: I0216 17:02:01.235357 15493 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa" exitCode=0 Feb 16 17:02:01.239635 master-0 kubenswrapper[15493]: I0216 17:02:01.239594 15493 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="6a76b7400b08797d8e5d6ecf8b5e5677ebdccdcb8c93451e24cae607d87b5dde" exitCode=0 Feb 16 17:02:01.248933 master-0 kubenswrapper[15493]: I0216 17:02:01.248886 15493 generic.go:334] "Generic (PLEG): container finished" podID="29402454-a920-471e-895e-764235d16eb4" containerID="f31ff62ede3b23583193a8479095d460885c4665f91f714a80c48601aa1a71ad" exitCode=0 Feb 16 17:02:01.251620 master-0 kubenswrapper[15493]: I0216 17:02:01.251534 15493 generic.go:334] "Generic (PLEG): container finished" podID="4549ea98-7379-49e1-8452-5efb643137ca" containerID="01bf42c6c3bf4f293fd2294a37aff703b4c469002ae6a87f7c50eefa7c6ae11b" exitCode=0 Feb 16 17:02:01.254865 master-0 kubenswrapper[15493]: I0216 17:02:01.254832 15493 generic.go:334] "Generic (PLEG): container finished" podID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerID="09c67126c6de3668502bac14b9e7abd00e8d4219a805f89a15ceed509ea0a832" exitCode=0 Feb 16 17:02:01.254865 master-0 kubenswrapper[15493]: I0216 17:02:01.254852 15493 generic.go:334] "Generic (PLEG): container finished" podID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerID="3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169" exitCode=0 Feb 16 17:02:01.254865 master-0 kubenswrapper[15493]: I0216 17:02:01.254859 15493 generic.go:334] "Generic (PLEG): container finished" podID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerID="5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2" exitCode=0 Feb 16 17:02:01.256142 master-0 kubenswrapper[15493]: I0216 17:02:01.256116 15493 generic.go:334] "Generic (PLEG): container finished" podID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" containerID="e8c4ffcf7c4ece8cb912757e2c966b100c9bb74e9a2ec208a540c26e8e9187ce" exitCode=0 Feb 16 17:02:01.299642 master-0 kubenswrapper[15493]: I0216 17:02:01.299584 15493 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 17:02:01.355142 master-0 kubenswrapper[15493]: E0216 17:02:01.355070 15493 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 17:02:01.756005 master-0 kubenswrapper[15493]: E0216 17:02:01.755948 15493 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 17:02:01.999398 master-0 
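[Editor's note] The generic.go:334 records above are PLEG relist results: containers the restarted kubelet found already exited, each reported with its exit code. A hedged Python sketch for summarizing them follows; it is not kubelet code and assumes only the podID/containerID/exitCode layout printed above. Nonzero codes (the exitCode=1 and exitCode=255 entries here) are the ones worth chasing first.

#!/usr/bin/env python3
"""Summarize PLEG "container finished" events from a kubelet journal dump."""
import re
import sys
from collections import defaultdict

# Matches: "Generic (PLEG): container finished" podID="..." containerID="..." exitCode=N
PLEG_RE = re.compile(
    r'container finished" podID="(?P<pod>[^"]+)" '
    r'containerID="(?P<cid>[0-9a-f]+)" exitCode=(?P<code>-?\d+)'
)

def summarize(lines):
    by_pod = defaultdict(list)
    for line in lines:
        m = PLEG_RE.search(line)
        if m:
            # Keep a short container-ID prefix plus the exit code per pod.
            by_pod[m.group("pod")].append((m.group("cid")[:12], int(m.group("code"))))
    return by_pod

if __name__ == "__main__":
    for pod, containers in summarize(sys.stdin).items():
        bad = [(cid, code) for cid, code in containers if code != 0]
        if bad:
            print(f"{pod}: {len(containers)} finished, nonzero exits: {bad}")

Run against this window it would flag pods 80420f2e..., b3322fd3..., 7fe1c16d..., edbaac23... (exitCode=1) and 4e51bba5... (exitCode=255). [End note]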
Feb 16 17:02:02.019003 master-0 kubenswrapper[15493]: I0216 17:02:02.018945 15493 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 17:02:02.262976 master-0 kubenswrapper[15493]: I0216 17:02:02.262858 15493 generic.go:334] "Generic (PLEG): container finished" podID="86c571b6-0f65-41f0-b1be-f63d7a974782" containerID="e607db32e1640f4a53c9cd19e2f52a26fa9cbdb5cdabb553570529d03baa71fa" exitCode=0
Feb 16 17:02:02.557029 master-0 kubenswrapper[15493]: E0216 17:02:02.556885 15493 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 16 17:02:03.353357 master-0 kubenswrapper[15493]: I0216 17:02:03.353306 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_7fe1c16d-061a-4a57-aea4-cf1d4b24d02f/installer/0.log"
Feb 16 17:02:04.157756 master-0 kubenswrapper[15493]: E0216 17:02:04.157678 15493 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 16 17:02:04.520024 master-0 kubenswrapper[15493]: I0216 17:02:04.519974 15493 manager.go:324] Recovery completed
Feb 16 17:02:04.598877 master-0 kubenswrapper[15493]: I0216 17:02:04.598778 15493 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 16 17:02:04.598877 master-0 kubenswrapper[15493]: I0216 17:02:04.598806 15493 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 16 17:02:04.598877 master-0 kubenswrapper[15493]: I0216 17:02:04.598824 15493 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 17:02:04.599389 master-0 kubenswrapper[15493]: I0216 17:02:04.598994 15493 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 16 17:02:04.599389 master-0 kubenswrapper[15493]: I0216 17:02:04.599005 15493 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 16 17:02:04.599389 master-0 kubenswrapper[15493]: I0216 17:02:04.599023 15493 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Feb 16 17:02:04.599389 master-0 kubenswrapper[15493]: I0216 17:02:04.599030 15493 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Feb 16 17:02:04.599389 master-0 kubenswrapper[15493]: I0216 17:02:04.599036 15493 policy_none.go:49] "None policy: Start"
Feb 16 17:02:04.603297 master-0 kubenswrapper[15493]: I0216 17:02:04.603235 15493 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 16 17:02:04.603297 master-0 kubenswrapper[15493]: I0216 17:02:04.603299 15493 state_mem.go:35] "Initializing new in-memory state store"
Feb 16 17:02:04.603667 master-0 kubenswrapper[15493]: I0216 17:02:04.603625 15493 state_mem.go:75] "Updated machine memory state"
Feb 16 17:02:04.603667 master-0 kubenswrapper[15493]: I0216 17:02:04.603654 15493 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Feb 16 17:02:04.623259 master-0 kubenswrapper[15493]: I0216 17:02:04.623178 15493 manager.go:334] "Starting Device Plugin manager"
Feb 16 17:02:04.623259 master-0 kubenswrapper[15493]: I0216 17:02:04.623249 15493 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 16 17:02:04.623494 master-0 kubenswrapper[15493]: I0216 17:02:04.623270 15493 server.go:79] "Starting device plugin registration server"
Feb 16 17:02:04.623857 master-0 kubenswrapper[15493]: I0216 17:02:04.623809 15493 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 16 17:02:04.623965 master-0 kubenswrapper[15493]: I0216 17:02:04.623841 15493 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 16 17:02:04.624064 master-0 kubenswrapper[15493]: I0216 17:02:04.624039 15493 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 16 17:02:04.624199 master-0 kubenswrapper[15493]: I0216 17:02:04.624161 15493 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 16 17:02:04.624199 master-0 kubenswrapper[15493]: I0216 17:02:04.624178 15493 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 16 17:02:04.630246 master-0 kubenswrapper[15493]: E0216 17:02:04.630002 15493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1894c8ca4adada44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:02:04.628253252 +0000 UTC m=+3.778426332,LastTimestamp:2026-02-16 17:02:04.628253252 +0000 UTC m=+3.778426332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 17:02:04.724396 master-0 kubenswrapper[15493]: I0216 17:02:04.724269 15493 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:02:04.727819 master-0 kubenswrapper[15493]: I0216 17:02:04.727746 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:02:04.727956 master-0 kubenswrapper[15493]: I0216 17:02:04.727877 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:02:04.727956 master-0 kubenswrapper[15493]: I0216 17:02:04.727897 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:02:04.728220 master-0 kubenswrapper[15493]: I0216 17:02:04.728179 15493 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:02:04.729587 master-0 kubenswrapper[15493]: E0216 17:02:04.729523 15493 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 17:02:04.772942 master-0 kubenswrapper[15493]: E0216 17:02:04.772863 15493 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod737fcc7d_d850_4352_9f17_383c85d5bc28.slice/crio-conmon-435ef5863cc155441a05593945ab2775001a00c9f99d0e797a375813404c36ac.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 17:02:04.929743 master-0 kubenswrapper[15493]: I0216 17:02:04.929687 15493 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:02:04.932900 master-0 kubenswrapper[15493]: I0216 17:02:04.932864 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
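[Editor's note] From here on, every failure is the same "dial tcp 192.168.32.10:6443: connect: connection refused" against api-int.sno.openstack.lab: the kubelet is up before the single node's apiserver is listening, so event writes and node registration fail and are retried. The following is a small external diagnostic sketch (Python, not part of the kubelet) that reproduces the check from outside and distinguishes a refused connect, as seen here, from a routing or firewall timeout. The host and port are taken from the log; everything else is an assumption.

#!/usr/bin/env python3
"""Probe the internal API endpoint the kubelet keeps failing to reach."""
import socket

HOST, PORT = "api-int.sno.openstack.lab", 6443  # values from the log above

def probe(host: str, port: int, timeout: float = 3.0) -> str:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"          # something is listening (likely the apiserver)
    except ConnectionRefusedError:
        return "refused"           # host reachable, nothing bound on the port yet
    except socket.timeout:
        return "timeout"           # packets dropped: routing or firewall problem
    except OSError as e:
        return f"error: {e}"       # e.g. DNS failure resolving the api-int name

if __name__ == "__main__":
    print(f"{HOST}:{PORT} -> {probe(HOST, PORT)}")

During the bootstrap window captured here it would print "refused"; once the apiserver binds :6443 it would print "open" and the retries below would stop. [End note]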
Feb 16 17:02:04.933055 master-0 kubenswrapper[15493]: I0216 17:02:04.932914 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:02:04.933055 master-0 kubenswrapper[15493]: I0216 17:02:04.932961 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:02:04.933129 master-0 kubenswrapper[15493]: I0216 17:02:04.933083 15493 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:02:04.934147 master-0 kubenswrapper[15493]: E0216 17:02:04.934086 15493 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 17:02:05.335207 master-0 kubenswrapper[15493]: I0216 17:02:05.335141 15493 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:02:05.340390 master-0 kubenswrapper[15493]: I0216 17:02:05.339850 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:02:05.340390 master-0 kubenswrapper[15493]: I0216 17:02:05.339955 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:02:05.340390 master-0 kubenswrapper[15493]: I0216 17:02:05.339977 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:02:05.340390 master-0 kubenswrapper[15493]: I0216 17:02:05.340097 15493 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:02:05.341312 master-0 kubenswrapper[15493]: E0216 17:02:05.341248 15493 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 17:02:05.365097 master-0 kubenswrapper[15493]: I0216 17:02:05.365049 15493 generic.go:334] "Generic (PLEG): container finished" podID="737fcc7d-d850-4352-9f17-383c85d5bc28" containerID="435ef5863cc155441a05593945ab2775001a00c9f99d0e797a375813404c36ac" exitCode=0
Feb 16 17:02:05.369091 master-0 kubenswrapper[15493]: I0216 17:02:05.369016 15493 generic.go:334] "Generic (PLEG): container finished" podID="5d1e91e5a1fed5cf7076a92d2830d36f" containerID="a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b" exitCode=0
Feb 16 17:02:06.141445 master-0 kubenswrapper[15493]: I0216 17:02:06.141350 15493 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:02:06.144475 master-0 kubenswrapper[15493]: I0216 17:02:06.144407 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:02:06.144475 master-0 kubenswrapper[15493]: I0216 17:02:06.144480 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:02:06.144652 master-0 kubenswrapper[15493]: I0216 17:02:06.144493 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:02:06.144652 master-0 kubenswrapper[15493]: I0216 17:02:06.144588 15493 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:02:06.145651 master-0 kubenswrapper[15493]: E0216 17:02:06.145596 15493 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 17:02:07.358490 master-0 kubenswrapper[15493]: I0216 17:02:07.358379 15493 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","kube-system/bootstrap-kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Feb 16 17:02:07.360136 master-0 kubenswrapper[15493]: I0216 17:02:07.360059 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:02:07.360329 master-0 kubenswrapper[15493]: I0216 17:02:07.360092 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:02:07.360556 master-0 kubenswrapper[15493]: I0216 17:02:07.360376 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-fc4bf7f79-tqnlw","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv","openshift-multus/multus-admission-controller-7c64d55f8-4jz2t","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9","openshift-etcd/etcd-master-0-master-0","openshift-service-ca/service-ca-676cd8b9b5-cp9rb","openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl","assisted-installer/assisted-installer-controller-thhq2","openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd","openshift-dns-operator/dns-operator-86b8869b79-nhxlp","openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6","openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6","openshift-network-diagnostics/network-check-target-vwvwx","kube-system/bootstrap-kube-scheduler-master-0","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv","openshift-kube-controller-manager/installer-1-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-marketplace/redhat-operators-lnzfx","openshift-multus/multus-6r7wj","openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb","openshift-dns/dns-default-qcgxx","openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8","openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf","openshift-kube-scheduler/installer-4-master-0","openshift-marketplace/redhat-marketplace-4kd66","openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw","openshift-multus/network-metrics-daemon-279g6","openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts","openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8","openshift-dns/node-resolver-vfxj4","openshift-kube-controller-manager/installer-2-master-0","openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx","openshift-machine-config-operator/machine-config-daemon-98q6v","openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr","openshift-insights/insights-operator-cb4f7b4cf-6qrw5","openshift-marketplace/certified-operators-z69zq","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm","openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b","openshift-kube-apiserver/installer-1-master-0","openshift-marketplace/certified-operators-8kkl7","openshift-marketplace/community-operators-7w4km","openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k","openshift-authentication-operator/authentication-operator-755d954778-lf4cb","openshift-etcd/installer-2-master-0","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk","openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-marketplace/community-operators-n7kjr","openshift-network-node-identity/network-node-identity-hhcpr","openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf","openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2","openshift-ovn-kubernetes/ovnkube-node-flr86","openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48","openshift-multus/multus-additional-cni-plugins-rjdlk","openshift-network-operator/iptables-alerter-czzz2","openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5","openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz","openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k","openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6","openshift-cluster-node-tuning-operator/tuned-l5kbz","openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp","openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx","openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9","openshift-network-operator/network-operator-6fcf4c966-6bmf9"]
Feb 16 17:02:07.360970 master-0 kubenswrapper[15493]: I0216 17:02:07.360903 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2"
Feb 16 17:02:07.361331 master-0 kubenswrapper[15493]: I0216 17:02:07.361251 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:07.363094 master-0 kubenswrapper[15493]: W0216 17:02:07.362997 15493 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.363328 master-0 kubenswrapper[15493]: E0216 17:02:07.363242 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:07.363720 master-0 kubenswrapper[15493]: E0216 17:02:07.363613 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:02:07.363720 master-0 kubenswrapper[15493]: E0216 17:02:07.363623 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.363720 master-0 kubenswrapper[15493]: E0216 17:02:07.363676 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: W0216 17:02:07.363283 15493 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: E0216 17:02:07.363768 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: E0216 17:02:07.363796 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: W0216 17:02:07.363383 15493 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection 
refused Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: W0216 17:02:07.363422 15493 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: E0216 17:02:07.363112 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: W0216 17:02:07.363490 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: E0216 17:02:07.363835 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: E0216 17:02:07.363946 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: W0216 17:02:07.363516 15493 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: W0216 17:02:07.364026 15493 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: E0216 17:02:07.364064 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: W0216 17:02:07.363604 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364123 master-0 kubenswrapper[15493]: E0216 17:02:07.363845 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: E0216 17:02:07.363858 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: W0216 17:02:07.363747 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: W0216 17:02:07.364190 15493 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: E0216 17:02:07.364256 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: E0216 17:02:07.364071 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: W0216 17:02:07.364120 15493 reflector.go:561] 
object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: W0216 17:02:07.364277 15493 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: W0216 17:02:07.364304 15493 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: E0216 17:02:07.364263 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: E0216 17:02:07.364312 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: E0216 17:02:07.364124 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: E0216 17:02:07.364441 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: E0216 17:02:07.364482 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.364605 master-0 kubenswrapper[15493]: W0216 17:02:07.364410 15493 reflector.go:561] object-"openshift-network-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.366859 master-0 kubenswrapper[15493]: E0216 17:02:07.364655 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.366859 master-0 kubenswrapper[15493]: I0216 17:02:07.366816 15493 status_manager.go:851] "Failed to get status for pod" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" pod="assisted-installer/assisted-installer-controller-thhq2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/assisted-installer/pods/assisted-installer-controller-thhq2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:07.367228 master-0 kubenswrapper[15493]: I0216 17:02:07.367175 15493 status_manager.go:851] "Failed to get status for pod" podUID="4549ea98-7379-49e1-8452-5efb643137ca" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6fcf4c966-6bmf9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:07.367297 master-0 kubenswrapper[15493]: W0216 17:02:07.367248 15493 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.367333 master-0 kubenswrapper[15493]: W0216 17:02:07.367278 15493 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.367368 master-0 kubenswrapper[15493]: W0216 17:02:07.367301 15493 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.367368 master-0 kubenswrapper[15493]: E0216 17:02:07.367341 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to 
Feb 16 17:02:07.367368 master-0 kubenswrapper[15493]: E0216 17:02:07.367298 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: E0216 17:02:07.367375 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: W0216 17:02:07.367376 15493 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: W0216 17:02:07.367391 15493 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: E0216 17:02:07.367431 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: E0216 17:02:07.367439 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: W0216 17:02:07.367376 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: W0216 17:02:07.367433 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: E0216 17:02:07.367479 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: W0216 17:02:07.367479 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: W0216 17:02:07.367520 15493 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: E0216 17:02:07.367572 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: E0216 17:02:07.367564 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.367584 master-0 kubenswrapper[15493]: W0216 17:02:07.367513 15493 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: W0216 17:02:07.367543 15493 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: W0216 17:02:07.367477 15493 reflector.go:561] object-"openshift-monitoring"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: W0216 17:02:07.367604 15493 reflector.go:561] object-"openshift-monitoring"/"cluster-monitoring-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dcluster-monitoring-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: E0216 17:02:07.367667 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dcluster-monitoring-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: E0216 17:02:07.367656 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: E0216 17:02:07.367611 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: W0216 17:02:07.367505 15493 reflector.go:561] object-"openshift-monitoring"/"telemetry-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemetry-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: E0216 17:02:07.367710 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: E0216 17:02:07.367532 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: E0216 17:02:07.367718 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"telemetry-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemetry-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: W0216 17:02:07.367669 15493 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: E0216 17:02:07.367749 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: I0216 17:02:07.367796 15493 status_manager.go:851] "Failed to get status for pod" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-756d64c8c4-ln4wm\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: W0216 17:02:07.367875 15493 reflector.go:561] object-"openshift-monitoring"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368044 master-0 kubenswrapper[15493]: E0216 17:02:07.368021 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: W0216 17:02:07.368173 15493 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: E0216 17:02:07.368212 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: W0216 17:02:07.368252 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: W0216 17:02:07.368289 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: W0216 17:02:07.368313 15493 reflector.go:561] object-"openshift-dns-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: E0216 17:02:07.368338 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: E0216 17:02:07.368300 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: W0216 17:02:07.368379 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: E0216 17:02:07.368446 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: W0216 17:02:07.368295 15493 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368545 master-0 kubenswrapper[15493]: W0216 17:02:07.368486 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dnode-tuning-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 17:02:07.368486 15493 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 17:02:07.368569 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dperformance-addon-operator-webhook-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 17:02:07.368632 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 17:02:07.368638 15493 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 17:02:07.368679 15493 reflector.go:158]
"Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 17:02:07.368505 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 17:02:07.368696 15493 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 17:02:07.368694 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: I0216 17:02:07.368730 15493 status_manager.go:851] "Failed to get status for pod" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/pods/openshift-apiserver-operator-6d4655d9cf-qhn9v\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 17:02:07.368707 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"performance-addon-operator-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dperformance-addon-operator-webhook-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 17:02:07.368705 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 
17:02:07.368569 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"node-tuning-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dnode-tuning-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 17:02:07.368498 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 17:02:07.368723 15493 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 17:02:07.368752 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 17:02:07.368740 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 17:02:07.368803 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: E0216 17:02:07.368796 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 
17:02:07.368802 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 17:02:07.368836 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.368865 master-0 kubenswrapper[15493]: W0216 17:02:07.368852 15493 reflector.go:561] object-"openshift-dns-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: E0216 17:02:07.368893 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: E0216 17:02:07.368891 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: W0216 17:02:07.368767 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: E0216 17:02:07.368894 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: E0216 17:02:07.368987 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: W0216 17:02:07.368725 15493 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: E0216 17:02:07.369035 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: W0216 17:02:07.368575 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: E0216 17:02:07.369080 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: W0216 17:02:07.368730 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: E0216 17:02:07.369129 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: W0216 17:02:07.368916 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/secrets?fieldSelector=metadata.name%3Dcluster-olm-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: E0216 17:02:07.369175 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"cluster-olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/secrets?fieldSelector=metadata.name%3Dcluster-olm-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: I0216 17:02:07.369450 15493 status_manager.go:851] "Failed to get status for pod" podUID="4549ea98-7379-49e1-8452-5efb643137ca" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6fcf4c966-6bmf9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: W0216 17:02:07.369514 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: E0216 17:02:07.369549 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.369561 master-0 kubenswrapper[15493]: W0216 17:02:07.369547 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369607 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369595 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369646 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369647 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369677 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369656 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369686 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369647 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369673 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369776 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369689 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369717 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369824 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369861 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369867 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: I0216 17:02:07.369887 15493 status_manager.go:851] "Failed to get status for pod" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/pods/cluster-image-registry-operator-96c8c64b8-zwwnk\": dial tcp 
192.168.32.10:6443: connect: connection refused" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369910 15493 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369964 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369980 15493 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369999 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.369887 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.369958 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.370017 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.370012 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.370046 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.370050 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: W0216 17:02:07.370012 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.370191 master-0 kubenswrapper[15493]: E0216 17:02:07.370076 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: I0216 17:02:07.370329 15493 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="31b75d0b-8694-4e17-995b-76e2288745c2" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: I0216 17:02:07.370392 15493 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="919d325a-e3bb-4db5-8ebc-382d41928e44" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370631 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.370669 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370700 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370698 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370761 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370754 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.370791 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370779 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.370806 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 
master-0 kubenswrapper[15493]: E0216 17:02:07.370773 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370801 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: I0216 17:02:07.370834 15493 status_manager.go:851] "Failed to get status for pod" podUID="29402454-a920-471e-895e-764235d16eb4" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-5dc4688546-pl7r5\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.370764 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.370861 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.370827 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370861 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 
17:02:07.370984 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370894 15493 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.371019 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371033 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371002 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371043 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.371037 15493 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371056 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371099 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.371081 15493 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.370909 15493 reflector.go:561] object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.371127 15493 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371152 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371167 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.371045 15493 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 
kubenswrapper[15493]: E0216 17:02:07.371160 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371198 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.371106 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.371169 15493 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371222 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371274 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: W0216 17:02:07.371106 15493 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.371368 master-0 kubenswrapper[15493]: E0216 17:02:07.371343 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.372707 master-0 kubenswrapper[15493]: W0216 17:02:07.371439 15493 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.372707 master-0 kubenswrapper[15493]: E0216 17:02:07.371503 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.373985 master-0 kubenswrapper[15493]: I0216 17:02:07.373937 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:02:07.375491 master-0 kubenswrapper[15493]: I0216 17:02:07.375447 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:02:07.381189 master-0 kubenswrapper[15493]: W0216 17:02:07.381092 15493 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.381280 master-0 kubenswrapper[15493]: E0216 17:02:07.381204 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.382315 master-0 kubenswrapper[15493]: I0216 17:02:07.382278 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:02:07.386251 master-0 kubenswrapper[15493]: I0216 17:02:07.385999 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_7fe1c16d-061a-4a57-aea4-cf1d4b24d02f/installer/0.log" Feb 16 17:02:07.386376 master-0 kubenswrapper[15493]: I0216 17:02:07.386273 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Feb 16 17:02:07.393242 master-0 kubenswrapper[15493]: I0216 17:02:07.393201 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:07.393562 master-0 kubenswrapper[15493]: I0216 17:02:07.393527 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:07.393744 master-0 kubenswrapper[15493]: I0216 17:02:07.393692 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:07.394120 master-0 kubenswrapper[15493]: I0216 17:02:07.394080 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:02:07.394394 master-0 kubenswrapper[15493]: I0216 17:02:07.394362 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:07.394636 master-0 kubenswrapper[15493]: I0216 17:02:07.394594 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78f7ca346bca4984ddbbaf801650ea12f9c20b44ed1343037c4daed41481b056" Feb 16 17:02:07.394713 master-0 kubenswrapper[15493]: I0216 17:02:07.394633 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"32282e3210e204263457f22f7fb6c9b2c61db1832f983d1236a1034b1a5140d4"} Feb 16 17:02:07.394713 master-0 kubenswrapper[15493]: I0216 17:02:07.394708 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"e33dd133299981fac9e32c9766093c6f93957b6afe0a293539ddeda20c06cf82"} Feb 16 17:02:07.394814 master-0 kubenswrapper[15493]: I0216 17:02:07.394723 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"80fdc50e531795c33b265621c0e851281169f624db10a2cc59cfc4a7fd66173e"} Feb 16 17:02:07.394814 master-0 kubenswrapper[15493]: I0216 17:02:07.394748 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"c7a678a1566dce1a83b3b33b3d0dd73aa2c7ba1c17bac97e5cf444e5f241b28a"} Feb 16 17:02:07.394814 master-0 kubenswrapper[15493]: I0216 17:02:07.394715 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:07.400626 master-0 kubenswrapper[15493]: W0216 17:02:07.400539 15493 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.400726 master-0 kubenswrapper[15493]: E0216 17:02:07.400628 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.402642 master-0 kubenswrapper[15493]: I0216 17:02:07.402582 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:07.420614 master-0 kubenswrapper[15493]: I0216 17:02:07.420539 15493 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 17:02:07.424531 master-0 kubenswrapper[15493]: W0216 17:02:07.424458 15493 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.424636 master-0 kubenswrapper[15493]: E0216 17:02:07.424534 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441164 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kube-api-access\") pod \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\" (UID: \"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f\") " Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441251 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kube-api-access\") pod \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\" (UID: \"035c8af0-95f3-4ab6-939c-d7fa8bda40a3\") " Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441484 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441510 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441535 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441561 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441584 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441605 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441627 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441674 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441696 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441720 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441742 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441763 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441785 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441807 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441831 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441855 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441875 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.442580 master-0 
kubenswrapper[15493]: I0216 17:02:07.441896 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441915 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441959 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.441982 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: W0216 17:02:07.441903 15493 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: E0216 17:02:07.442050 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442006 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442086 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " 
pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442126 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442144 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442161 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442200 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442217 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442234 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442250 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:07.442580 master-0 kubenswrapper[15493]: I0216 17:02:07.442592 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.442740 15493 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.442895 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443189 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443285 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443316 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443371 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443403 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443459 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443498 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: 
\"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443515 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443530 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443585 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443620 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.443645 master-0 kubenswrapper[15493]: I0216 17:02:07.443654 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443685 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443711 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443739 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " 
pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443765 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443787 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443812 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443836 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443860 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443885 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443893 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443911 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.443977 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444064 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444096 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-var-lock\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444117 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444135 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444149 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444158 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444229 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444253 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: 
\"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444260 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444300 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444330 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:07.444347 master-0 kubenswrapper[15493]: I0216 17:02:07.444357 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444461 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444491 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444515 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444547 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444572 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444595 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444619 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-utilities\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444705 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444711 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-utilities\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444794 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444822 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444855 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.444967 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445008 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445036 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445064 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445117 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445154 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445181 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445205 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445229 15493 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445247 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445265 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445285 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445306 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445326 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445356 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-var-lock\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445385 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445413 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: 
\"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445456 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445495 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445510 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445521 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445549 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445574 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445596 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445623 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: 
\"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445650 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445677 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445705 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445754 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445797 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445831 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445863 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445892 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " 
pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445932 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445965 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.445990 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446013 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446036 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446060 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446108 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446135 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446154 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446171 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446260 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446279 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446296 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446319 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446405 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446444 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " 
pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446471 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446494 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446520 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446547 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446574 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446599 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446623 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446648 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:07.448539 
master-0 kubenswrapper[15493]: I0216 17:02:07.446701 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446724 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446749 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446771 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446794 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446949 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.446984 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447004 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447029 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447055 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447079 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447101 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447127 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfkd9\" (UniqueName: \"kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447153 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447175 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447199 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447221 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447246 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447269 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447297 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447319 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447346 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447367 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447391 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447413 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kube-api-access" 
(OuterVolumeSpecName: "kube-api-access") pod "035c8af0-95f3-4ab6-939c-d7fa8bda40a3" (UID: "035c8af0-95f3-4ab6-939c-d7fa8bda40a3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447691 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447729 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447769 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447838 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447864 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447893 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.447953 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448017 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" 
(UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448039 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448059 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448174 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448205 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-catalog-content\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448227 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448251 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448274 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448295 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448315 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448328 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-catalog-content\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448340 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448373 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448403 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448428 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448458 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448481 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448504 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448565 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448595 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448598 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448619 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:07.448539 master-0 kubenswrapper[15493]: I0216 17:02:07.448642 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448663 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448686 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448708 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448729 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448756 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448787 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448809 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448832 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448856 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448880 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448904 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448953 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.448978 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449002 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449025 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449049 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449132 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449167 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449193 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449220 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449243 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-utilities\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449269 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449296 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449322 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449351 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449378 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449510 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: 
\"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449605 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-catalog-content\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449633 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449657 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449684 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449718 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449745 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449771 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449795 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " 
pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449820 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449846 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449870 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449897 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449940 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449968 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.449993 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450030 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450056 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450118 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhk5\" (UniqueName: \"kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450144 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450175 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450200 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450224 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450248 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450272 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " 
pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450297 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450320 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450346 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450370 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450399 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450423 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450449 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450475 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450501 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450524 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450552 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450579 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450604 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450627 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450656 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450704 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450732 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450760 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450782 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450810 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450837 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450861 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450885 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450915 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450968 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.450995 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.451020 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.451050 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.451076 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.451100 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.451123 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.451364 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-catalog-content\") pod \"community-operators-n7kjr\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " pod="openshift-marketplace/community-operators-n7kjr" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.451954 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " 
pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.451976 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.451993 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452019 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452045 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452075 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-utilities\") pod \"certified-operators-8kkl7\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452076 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452103 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-var-lock\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452131 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 
17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452158 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452219 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452252 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452250 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452287 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452306 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452326 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452345 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 
17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452443 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452225 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452611 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452642 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452663 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452684 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452709 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452735 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452754 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452771 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452796 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452820 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452853 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452891 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452939 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452967 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.452998 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: 
\"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.453026 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.453137 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.453167 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.453195 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.453220 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:07.453510 master-0 kubenswrapper[15493]: I0216 17:02:07.453545 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:07.459282 master-0 kubenswrapper[15493]: I0216 17:02:07.453663 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/035c8af0-95f3-4ab6-939c-d7fa8bda40a3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:07.459282 master-0 kubenswrapper[15493]: I0216 17:02:07.456136 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7fe1c16d-061a-4a57-aea4-cf1d4b24d02f" (UID: "7fe1c16d-061a-4a57-aea4-cf1d4b24d02f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:07.464165 master-0 kubenswrapper[15493]: W0216 17:02:07.464088 15493 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.464314 master-0 kubenswrapper[15493]: E0216 17:02:07.464178 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.481138 master-0 kubenswrapper[15493]: I0216 17:02:07.481059 15493 status_manager.go:851] "Failed to get status for pod" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-c588d8cb4-wjr7d\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:07.501112 master-0 kubenswrapper[15493]: W0216 17:02:07.501011 15493 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.501248 master-0 kubenswrapper[15493]: E0216 17:02:07.501192 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.521328 master-0 kubenswrapper[15493]: W0216 17:02:07.521243 15493 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.521475 master-0 kubenswrapper[15493]: E0216 17:02:07.521336 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.540679 master-0 kubenswrapper[15493]: W0216 17:02:07.540569 15493 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.540679 master-0 kubenswrapper[15493]: E0216 17:02:07.540648 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.554550 master-0 kubenswrapper[15493]: I0216 17:02:07.554498 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.554955 master-0 kubenswrapper[15493]: I0216 17:02:07.554584 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.554955 master-0 kubenswrapper[15493]: I0216 17:02:07.554714 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:02:07.554955 master-0 kubenswrapper[15493]: I0216 17:02:07.554821 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.554955 master-0 kubenswrapper[15493]: I0216 17:02:07.554882 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.554955 master-0 kubenswrapper[15493]: I0216 17:02:07.554914 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.555293 master-0 kubenswrapper[15493]: I0216 17:02:07.554987 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.555293 master-0 
kubenswrapper[15493]: I0216 17:02:07.555035 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:02:07.555293 master-0 kubenswrapper[15493]: I0216 17:02:07.555062 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.555293 master-0 kubenswrapper[15493]: I0216 17:02:07.555150 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.555293 master-0 kubenswrapper[15493]: I0216 17:02:07.555220 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.555293 master-0 kubenswrapper[15493]: I0216 17:02:07.555254 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.555571 master-0 kubenswrapper[15493]: I0216 17:02:07.555339 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.555571 master-0 kubenswrapper[15493]: I0216 17:02:07.555417 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:02:07.555571 master-0 kubenswrapper[15493]: I0216 17:02:07.555435 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:07.555571 master-0 kubenswrapper[15493]: I0216 17:02:07.555469 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-var-lock\") pod \"installer-2-master-0\" (UID: 
\"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:02:07.555571 master-0 kubenswrapper[15493]: I0216 17:02:07.555545 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.555571 master-0 kubenswrapper[15493]: I0216 17:02:07.555563 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.555936 master-0 kubenswrapper[15493]: I0216 17:02:07.555601 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.555936 master-0 kubenswrapper[15493]: I0216 17:02:07.555639 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.555936 master-0 kubenswrapper[15493]: I0216 17:02:07.555659 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.555936 master-0 kubenswrapper[15493]: I0216 17:02:07.555731 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.555936 master-0 kubenswrapper[15493]: I0216 17:02:07.555764 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.555936 master-0 kubenswrapper[15493]: I0216 17:02:07.555821 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:07.555936 master-0 kubenswrapper[15493]: I0216 17:02:07.555841 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.555936 master-0 kubenswrapper[15493]: I0216 17:02:07.555873 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 17:02:07.556001 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 17:02:07.556060 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 17:02:07.556094 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 17:02:07.556130 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 17:02:07.556125 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 17:02:07.556386 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 17:02:07.556421 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 
17:02:07.556453 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-var-lock\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 17:02:07.556484 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.556534 master-0 kubenswrapper[15493]: I0216 17:02:07.556514 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556613 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556687 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556721 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556780 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556798 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556817 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: 
\"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556871 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-var-lock\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556894 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556943 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.556956 master-0 kubenswrapper[15493]: I0216 17:02:07.556964 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557006 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557025 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557042 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557058 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557110 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557136 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557155 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557172 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-var-lock\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557189 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557214 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557255 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557273 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557290 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557321 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557342 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557354 master-0 kubenswrapper[15493]: I0216 17:02:07.557360 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557396 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557426 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557444 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557480 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557484 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557526 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " 
pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557542 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557555 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-var-lock\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557576 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557594 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557603 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557637 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557645 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557665 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557671 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 
17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557700 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557704 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557724 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557736 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557748 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557767 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557779 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557791 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557815 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: 
\"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557833 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557835 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.557814 master-0 kubenswrapper[15493]: I0216 17:02:07.557850 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.557900 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.557902 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.557940 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.557959 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.557981 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.557994 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558014 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558043 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558060 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558091 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558096 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558119 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558125 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558147 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558167 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod 
\"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558178 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558200 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558220 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558246 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558265 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-var-lock\") pod \"installer-1-master-0\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") " pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558272 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558305 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558309 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558333 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" 
(UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558351 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558356 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558387 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558402 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558437 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558438 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558464 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558470 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558490 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558491 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558524 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558248 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558562 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558584 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558592 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558611 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558617 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558640 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.558586 master-0 kubenswrapper[15493]: I0216 17:02:07.558651 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558669 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558680 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558692 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558709 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558745 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558768 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558795 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558812 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558815 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558840 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558849 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558858 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558880 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558564 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558911 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558975 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 
17:02:07.559027 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559030 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559069 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559091 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.558981 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559143 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559164 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559167 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559188 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:07.560242 
master-0 kubenswrapper[15493]: I0216 17:02:07.559208 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559246 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559292 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559302 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559327 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559352 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559376 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559437 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559479 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:07.560242 master-0 kubenswrapper[15493]: I0216 17:02:07.559477 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.561626 master-0 kubenswrapper[15493]: W0216 17:02:07.561268 15493 reflector.go:561] object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.561626 master-0 kubenswrapper[15493]: E0216 17:02:07.561348 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.580656 master-0 kubenswrapper[15493]: W0216 17:02:07.580573 15493 reflector.go:561] object-"openshift-multus"/"whereabouts-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dwhereabouts-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.580656 master-0 kubenswrapper[15493]: E0216 17:02:07.580632 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"whereabouts-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dwhereabouts-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.601116 master-0 kubenswrapper[15493]: W0216 17:02:07.601014 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.601116 master-0 kubenswrapper[15493]: E0216 17:02:07.601089 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.620992 master-0 kubenswrapper[15493]: W0216 17:02:07.620739 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.620992 master-0 kubenswrapper[15493]: E0216 17:02:07.620826 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.641465 master-0 kubenswrapper[15493]: W0216 17:02:07.641381 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.641465 master-0 kubenswrapper[15493]: E0216 17:02:07.641460 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.661807 master-0 kubenswrapper[15493]: W0216 17:02:07.661702 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.661807 master-0 kubenswrapper[15493]: E0216 17:02:07.661797 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.664781 master-0 kubenswrapper[15493]: I0216 17:02:07.663552 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:07.664781 master-0 kubenswrapper[15493]: I0216 17:02:07.664536 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:07.665456 master-0 kubenswrapper[15493]: I0216 17:02:07.665410 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:07.680571 master-0 kubenswrapper[15493]: W0216 17:02:07.680480 15493 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.680699 master-0 kubenswrapper[15493]: E0216 17:02:07.680576 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.703199 master-0 kubenswrapper[15493]: W0216 17:02:07.703096 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode300ec3a145c1339a627607b3c84b99d.slice/crio-55448d8bea6b7d300f8becd37c0b5654a24938ecf842378babc2a1e0bcb81d5b WatchSource:0}: Error finding container 55448d8bea6b7d300f8becd37c0b5654a24938ecf842378babc2a1e0bcb81d5b: Status 404 returned error can't find the container with id 55448d8bea6b7d300f8becd37c0b5654a24938ecf842378babc2a1e0bcb81d5b Feb 16 17:02:07.704401 master-0 kubenswrapper[15493]: W0216 17:02:07.704326 15493 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.704467 master-0 kubenswrapper[15493]: E0216 17:02:07.704398 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.726247 master-0 kubenswrapper[15493]: W0216 17:02:07.726148 15493 reflector.go:561] object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.726324 master-0 kubenswrapper[15493]: E0216 17:02:07.726265 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.743109 master-0 kubenswrapper[15493]: W0216 17:02:07.743035 15493 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.743226 master-0 kubenswrapper[15493]: E0216 17:02:07.743124 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.746369 master-0 kubenswrapper[15493]: I0216 17:02:07.746343 15493 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:02:07.752225 master-0 kubenswrapper[15493]: I0216 17:02:07.752174 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:02:07.752225 master-0 kubenswrapper[15493]: I0216 17:02:07.752223 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:02:07.752353 master-0 kubenswrapper[15493]: I0216 17:02:07.752237 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:02:07.752499 master-0 kubenswrapper[15493]: I0216 17:02:07.752483 15493 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:02:07.761307 master-0 kubenswrapper[15493]: W0216 17:02:07.761236 15493 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.761378 master-0 kubenswrapper[15493]: E0216 17:02:07.761312 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.772682 master-0 kubenswrapper[15493]: I0216 17:02:07.772590 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:02:07.780830 master-0 kubenswrapper[15493]: W0216 17:02:07.780731 15493 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.780881 master-0 kubenswrapper[15493]: E0216 17:02:07.780843 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.806365 master-0 kubenswrapper[15493]: W0216 17:02:07.806211 15493 reflector.go:561] object-"openshift-dns"/"dns-default-metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:07.806474 master-0 kubenswrapper[15493]: E0216 17:02:07.806348 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default-metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:07.812494 master-0 kubenswrapper[15493]: I0216 17:02:07.812452 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kkl7" Feb 16 17:02:07.818166 master-0 kubenswrapper[15493]: I0216 17:02:07.818128 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n7kjr"
Feb 16 17:02:07.821295 master-0 kubenswrapper[15493]: W0216 17:02:07.821207 15493 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.821359 master-0 kubenswrapper[15493]: E0216 17:02:07.821304 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.841428 master-0 kubenswrapper[15493]: W0216 17:02:07.841349 15493 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.841609 master-0 kubenswrapper[15493]: E0216 17:02:07.841447 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.861586 master-0 kubenswrapper[15493]: W0216 17:02:07.861433 15493 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.861586 master-0 kubenswrapper[15493]: E0216 17:02:07.861518 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.866658 master-0 kubenswrapper[15493]: I0216 17:02:07.866622 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-utilities\") pod \"a6d86b04-1d3f-4f27-a262-b732c1295997\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") "
Feb 16 17:02:07.866758 master-0 kubenswrapper[15493]: I0216 17:02:07.866674 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-catalog-content\") pod \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") "
Feb 16 17:02:07.866758 master-0 kubenswrapper[15493]: I0216 17:02:07.866730 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-utilities\") pod \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") "
Feb 16 17:02:07.866758 master-0 kubenswrapper[15493]: I0216 17:02:07.866754 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-kubelet-dir\") pod \"86c571b6-0f65-41f0-b1be-f63d7a974782\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") "
Feb 16 17:02:07.866876 master-0 kubenswrapper[15493]: I0216 17:02:07.866821 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-var-lock\") pod \"86c571b6-0f65-41f0-b1be-f63d7a974782\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") "
Feb 16 17:02:07.866934 master-0 kubenswrapper[15493]: I0216 17:02:07.866888 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-catalog-content\") pod \"a6d86b04-1d3f-4f27-a262-b732c1295997\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") "
Feb 16 17:02:07.867827 master-0 kubenswrapper[15493]: I0216 17:02:07.867171 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-var-lock" (OuterVolumeSpecName: "var-lock") pod "86c571b6-0f65-41f0-b1be-f63d7a974782" (UID: "86c571b6-0f65-41f0-b1be-f63d7a974782"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:02:07.867827 master-0 kubenswrapper[15493]: I0216 17:02:07.867248 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" (UID: "1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:07.867827 master-0 kubenswrapper[15493]: I0216 17:02:07.867237 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "86c571b6-0f65-41f0-b1be-f63d7a974782" (UID: "86c571b6-0f65-41f0-b1be-f63d7a974782"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:02:07.867827 master-0 kubenswrapper[15493]: I0216 17:02:07.867382 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-utilities" (OuterVolumeSpecName: "utilities") pod "a6d86b04-1d3f-4f27-a262-b732c1295997" (UID: "a6d86b04-1d3f-4f27-a262-b732c1295997"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:07.867827 master-0 kubenswrapper[15493]: I0216 17:02:07.867544 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6d86b04-1d3f-4f27-a262-b732c1295997" (UID: "a6d86b04-1d3f-4f27-a262-b732c1295997"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:07.867827 master-0 kubenswrapper[15493]: I0216 17:02:07.867788 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-utilities" (OuterVolumeSpecName: "utilities") pod "1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" (UID: "1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:02:07.869605 master-0 kubenswrapper[15493]: I0216 17:02:07.869548 15493 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-utilities\") on node \"master-0\" DevicePath \"\""
Feb 16 17:02:07.869682 master-0 kubenswrapper[15493]: I0216 17:02:07.869608 15493 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:02:07.869682 master-0 kubenswrapper[15493]: I0216 17:02:07.869630 15493 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/86c571b6-0f65-41f0-b1be-f63d7a974782-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 17:02:07.869682 master-0 kubenswrapper[15493]: I0216 17:02:07.869652 15493 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 16 17:02:07.869682 master-0 kubenswrapper[15493]: I0216 17:02:07.869671 15493 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6d86b04-1d3f-4f27-a262-b732c1295997-utilities\") on node \"master-0\" DevicePath \"\""
Feb 16 17:02:07.869849 master-0 kubenswrapper[15493]: I0216 17:02:07.869690 15493 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-catalog-content\") on node \"master-0\" DevicePath \"\""
Feb 16 17:02:07.881057 master-0 kubenswrapper[15493]: W0216 17:02:07.880915 15493 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.881157 master-0 kubenswrapper[15493]: E0216 17:02:07.881073 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.901694 master-0 kubenswrapper[15493]: W0216 17:02:07.901617 15493 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.901694 master-0 kubenswrapper[15493]: E0216 17:02:07.901686 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.921877 master-0 kubenswrapper[15493]: W0216 17:02:07.921771 15493 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.921877 master-0 kubenswrapper[15493]: E0216 17:02:07.921855 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.941406 master-0 kubenswrapper[15493]: W0216 17:02:07.941321 15493 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.941483 master-0 kubenswrapper[15493]: E0216 17:02:07.941417 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.960484 master-0 kubenswrapper[15493]: W0216 17:02:07.960390 15493 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.960484 master-0 kubenswrapper[15493]: E0216 17:02:07.960472 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:07.980682 master-0 kubenswrapper[15493]: W0216 17:02:07.980551 15493 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:07.980682 master-0 kubenswrapper[15493]: E0216 17:02:07.980622 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.001066 master-0 kubenswrapper[15493]: W0216 17:02:08.000979 15493 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.001212 master-0 kubenswrapper[15493]: E0216 17:02:08.001078 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.020703 master-0 kubenswrapper[15493]: W0216 17:02:08.020635 15493 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.020853 master-0 kubenswrapper[15493]: E0216 17:02:08.020704 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.040823 master-0 kubenswrapper[15493]: W0216 17:02:08.040735 15493 reflector.go:561] object-"openshift-catalogd"/"catalogd-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dcatalogd-trusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.040972 master-0 kubenswrapper[15493]: E0216 17:02:08.040826 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogd-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dcatalogd-trusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.060402 master-0 kubenswrapper[15493]: W0216 17:02:08.060332 15493 reflector.go:561] object-"openshift-catalogd"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.060520 master-0 kubenswrapper[15493]: E0216 17:02:08.060406 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.081174 master-0 kubenswrapper[15493]: W0216 17:02:08.081096 15493 reflector.go:561] object-"openshift-catalogd"/"catalogserver-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/secrets?fieldSelector=metadata.name%3Dcatalogserver-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.081174 master-0 kubenswrapper[15493]: E0216 17:02:08.081167 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogserver-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/secrets?fieldSelector=metadata.name%3Dcatalogserver-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.101182 master-0 kubenswrapper[15493]: W0216 17:02:08.101060 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.101347 master-0 kubenswrapper[15493]: E0216 17:02:08.101200 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.121186 master-0 kubenswrapper[15493]: W0216 17:02:08.121056 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.121296 master-0 kubenswrapper[15493]: E0216 17:02:08.121197 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.141511 master-0 kubenswrapper[15493]: W0216 17:02:08.141322 15493 reflector.go:561] object-"openshift-catalogd"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.141511 master-0 kubenswrapper[15493]: E0216 17:02:08.141500 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.160889 master-0 kubenswrapper[15493]: W0216 17:02:08.160797 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.161101 master-0 kubenswrapper[15493]: E0216 17:02:08.160890 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.180977 master-0 kubenswrapper[15493]: W0216 17:02:08.180871 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.181112 master-0 kubenswrapper[15493]: E0216 17:02:08.180984 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.201402 master-0 kubenswrapper[15493]: W0216 17:02:08.201293 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.201554 master-0 kubenswrapper[15493]: E0216 17:02:08.201407 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.221184 master-0 kubenswrapper[15493]: W0216 17:02:08.221073 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.221184 master-0 kubenswrapper[15493]: E0216 17:02:08.221198 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.241678 master-0 kubenswrapper[15493]: W0216 17:02:08.241538 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.241836 master-0 kubenswrapper[15493]: E0216 17:02:08.241696 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.260721 master-0 kubenswrapper[15493]: W0216 17:02:08.260644 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.260721 master-0 kubenswrapper[15493]: E0216 17:02:08.260717 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.281112 master-0 kubenswrapper[15493]: W0216 17:02:08.281030 15493 reflector.go:561] object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Doperator-controller-trusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.281299 master-0 kubenswrapper[15493]: E0216 17:02:08.281115 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"operator-controller-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Doperator-controller-trusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.301171 master-0 kubenswrapper[15493]: W0216 17:02:08.301087 15493 reflector.go:561] object-"openshift-operator-controller"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.301311 master-0 kubenswrapper[15493]: E0216 17:02:08.301171 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.321579 master-0 kubenswrapper[15493]: W0216 17:02:08.321336 15493 reflector.go:561] object-"openshift-operator-controller"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.321579 master-0 kubenswrapper[15493]: E0216 17:02:08.321414 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.341129 master-0 kubenswrapper[15493]: W0216 17:02:08.340980 15493 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.341129 master-0 kubenswrapper[15493]: E0216 17:02:08.341139 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.361940 master-0 kubenswrapper[15493]: W0216 17:02:08.361794 15493 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.361940 master-0 kubenswrapper[15493]: E0216 17:02:08.361903 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.381582 master-0 kubenswrapper[15493]: I0216 17:02:08.381497 15493 request.go:700] Waited for 1.007192874s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 16 17:02:08.383705 master-0 kubenswrapper[15493]: W0216 17:02:08.383582 15493 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.383812 master-0 kubenswrapper[15493]: E0216 17:02:08.383714 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.394848 master-0 kubenswrapper[15493]: I0216 17:02:08.394706 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n7kjr"
Feb 16 17:02:08.398069 master-0 kubenswrapper[15493]: I0216 17:02:08.397985 15493 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="d6a60e780e1ec46760e0dc56aa2e2446cbe16e5a5a365f586b3a5546ece26409" exitCode=0
Feb 16 17:02:08.402584 master-0 kubenswrapper[15493]: W0216 17:02:08.402426 15493 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.403059 master-0 kubenswrapper[15493]: E0216 17:02:08.402643 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.403256 master-0 kubenswrapper[15493]: I0216 17:02:08.403195 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Feb 16 17:02:08.405988 master-0 kubenswrapper[15493]: I0216 17:02:08.405884 15493 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="38ea24f9e1f52bb2d156d60e530f0e5be87fcb1b940f2e51d9ccf98bf655afc7" exitCode=0
Feb 16 17:02:08.408859 master-0 kubenswrapper[15493]: I0216 17:02:08.408797 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0"
Feb 16 17:02:08.408859 master-0 kubenswrapper[15493]: I0216 17:02:08.408852 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kkl7"
Feb 16 17:02:08.409010 master-0 kubenswrapper[15493]: I0216 17:02:08.408985 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Feb 16 17:02:08.421939 master-0 kubenswrapper[15493]: W0216 17:02:08.421631 15493 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.422068 master-0 kubenswrapper[15493]: E0216 17:02:08.421960 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.441380 master-0 kubenswrapper[15493]: W0216 17:02:08.441227 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-dockercfg-j874l&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:08.441380 master-0 kubenswrapper[15493]: E0216 17:02:08.441367 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-dockercfg-j874l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-dockercfg-j874l&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:08.442216 master-0 kubenswrapper[15493]: E0216 17:02:08.442168 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.442385 master-0 kubenswrapper[15493]: E0216 17:02:08.442346 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.942312957 +0000 UTC m=+8.092486067 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.442479 master-0 kubenswrapper[15493]: E0216 17:02:08.442445 15493 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.442533 master-0 kubenswrapper[15493]: E0216 17:02:08.442507 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.942488631 +0000 UTC m=+8.092661731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.442630 master-0 kubenswrapper[15493]: E0216 17:02:08.442592 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.442718 master-0 kubenswrapper[15493]: E0216 17:02:08.442677 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.942658576 +0000 UTC m=+8.092831676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.442780 master-0 kubenswrapper[15493]: E0216 17:02:08.442753 15493 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.442850 master-0 kubenswrapper[15493]: E0216 17:02:08.442824 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.94280466 +0000 UTC m=+8.092977770 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443044 master-0 kubenswrapper[15493]: E0216 17:02:08.443007 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443152 master-0 kubenswrapper[15493]: E0216 17:02:08.443125 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443199 master-0 kubenswrapper[15493]: E0216 17:02:08.443172 15493 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.443240 master-0 kubenswrapper[15493]: E0216 17:02:08.443206 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.443353 master-0 kubenswrapper[15493]: E0216 17:02:08.443326 15493 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443389 master-0 kubenswrapper[15493]: E0216 17:02:08.443346 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.443389 master-0 kubenswrapper[15493]: E0216 17:02:08.443328 15493 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.443448 master-0 kubenswrapper[15493]: E0216 17:02:08.443384 15493 secret.go:189] Couldn't get secret openshift-network-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.443484 master-0 kubenswrapper[15493]: E0216 17:02:08.443419 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443484 master-0 kubenswrapper[15493]: E0216 17:02:08.443323 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443543 master-0 kubenswrapper[15493]: E0216 17:02:08.443466 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443584 master-0 kubenswrapper[15493]: E0216 17:02:08.443552 15493 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.443627 master-0 kubenswrapper[15493]: E0216 17:02:08.443577 15493 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443627 master-0 kubenswrapper[15493]: E0216 17:02:08.443583 15493 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.443627 master-0 kubenswrapper[15493]: E0216 17:02:08.443555 15493 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.443751 master-0 kubenswrapper[15493]: E0216 17:02:08.443670 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443751 master-0 kubenswrapper[15493]: E0216 17:02:08.443684 15493 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.443840 master-0 kubenswrapper[15493]: E0216 17:02:08.443832 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.443937 master-0 kubenswrapper[15493]: E0216 17:02:08.443886 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.444067 master-0 kubenswrapper[15493]: E0216 17:02:08.444037 15493 configmap.go:193] Couldn't get configMap openshift-network-operator/iptables-alerter-script: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.444105 master-0 kubenswrapper[15493]: E0216 17:02:08.444070 15493 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.444177 master-0 kubenswrapper[15493]: E0216 17:02:08.444146 15493 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.444306 master-0 kubenswrapper[15493]: E0216 17:02:08.444277 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.444499 master-0 kubenswrapper[15493]: E0216 17:02:08.444467 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.943775075 +0000 UTC m=+8.093948185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.444566 master-0 kubenswrapper[15493]: E0216 17:02:08.444538 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.944513525 +0000 UTC m=+8.094686635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.444601 master-0 kubenswrapper[15493]: E0216 17:02:08.444575 15493 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.444601 master-0 kubenswrapper[15493]: E0216 17:02:08.444583 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.444667 master-0 kubenswrapper[15493]: E0216 17:02:08.444586 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.944567186 +0000 UTC m=+8.094740296 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.444777 master-0 kubenswrapper[15493]: E0216 17:02:08.444699 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.944666769 +0000 UTC m=+8.094839879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.444777 master-0 kubenswrapper[15493]: E0216 17:02:08.444734 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.94471819 +0000 UTC m=+8.094891300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.444777 master-0 kubenswrapper[15493]: E0216 17:02:08.444773 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.944758421 +0000 UTC m=+8.094931531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.444870 master-0 kubenswrapper[15493]: E0216 17:02:08.444808 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.944795582 +0000 UTC m=+8.094968692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.444870 master-0 kubenswrapper[15493]: E0216 17:02:08.444839 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.944828593 +0000 UTC m=+8.095001713 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.444968 master-0 kubenswrapper[15493]: E0216 17:02:08.444872 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.944861294 +0000 UTC m=+8.095034404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.444968 master-0 kubenswrapper[15493]: E0216 17:02:08.444907 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.944895525 +0000 UTC m=+8.095068635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445068 master-0 kubenswrapper[15493]: E0216 17:02:08.444995 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.944977537 +0000 UTC m=+8.095150717 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445068 master-0 kubenswrapper[15493]: E0216 17:02:08.445034 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945020978 +0000 UTC m=+8.095194088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445068 master-0 kubenswrapper[15493]: E0216 17:02:08.445065 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945053819 +0000 UTC m=+8.095226929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445190 master-0 kubenswrapper[15493]: E0216 17:02:08.445097 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.94508609 +0000 UTC m=+8.095259280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445190 master-0 kubenswrapper[15493]: E0216 17:02:08.445125 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945109831 +0000 UTC m=+8.095282941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445190 master-0 kubenswrapper[15493]: E0216 17:02:08.445155 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945141322 +0000 UTC m=+8.095314432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whereabouts-configmap" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445190 master-0 kubenswrapper[15493]: E0216 17:02:08.445166 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445190 master-0 kubenswrapper[15493]: E0216 17:02:08.445182 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945169902 +0000 UTC m=+8.095343012 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445389 master-0 kubenswrapper[15493]: E0216 17:02:08.445175 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445389 master-0 kubenswrapper[15493]: E0216 17:02:08.445235 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445389 master-0 kubenswrapper[15493]: E0216 17:02:08.445236 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945208353 +0000 UTC m=+8.095381463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445389 master-0 kubenswrapper[15493]: E0216 17:02:08.445354 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945316356 +0000 UTC m=+8.095489496 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445560 master-0 kubenswrapper[15493]: E0216 17:02:08.445407 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945385058 +0000 UTC m=+8.095558288 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "iptables-alerter-script" (UniqueName: "kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445560 master-0 kubenswrapper[15493]: E0216 17:02:08.445446 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945428599 +0000 UTC m=+8.095601819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445560 master-0 kubenswrapper[15493]: E0216 17:02:08.445459 15493 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445560 master-0 kubenswrapper[15493]: E0216 17:02:08.445462 15493 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445560 master-0 kubenswrapper[15493]: E0216 17:02:08.445480 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.94546281 +0000 UTC m=+8.095636010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445560 master-0 kubenswrapper[15493]: E0216 17:02:08.445510 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445560 master-0 kubenswrapper[15493]: E0216 17:02:08.445512 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945496711 +0000 UTC m=+8.095669911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445560 master-0 kubenswrapper[15493]: E0216 17:02:08.445561 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945542242 +0000 UTC m=+8.095715432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445586 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445596 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945581473 +0000 UTC m=+8.095754673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445602 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445633 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945613694 +0000 UTC m=+8.095786874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445636 15493 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445666 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945650885 +0000 UTC m=+8.095823995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445698 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945684846 +0000 UTC m=+8.095857956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445726 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945714027 +0000 UTC m=+8.095887137 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445754 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945740517 +0000 UTC m=+8.095913627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445777 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945765898 +0000 UTC m=+8.095939018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445802 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945789649 +0000 UTC m=+8.095962759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445827 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed.
No retries permitted until 2026-02-16 17:02:08.945815679 +0000 UTC m=+8.095988799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445845 15493 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.445904 master-0 kubenswrapper[15493]: E0216 17:02:08.445899 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.446557 master-0 kubenswrapper[15493]: E0216 17:02:08.446009 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.446557 master-0 kubenswrapper[15493]: E0216 17:02:08.446020 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.945904152 +0000 UTC m=+8.096077312 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.446557 master-0 kubenswrapper[15493]: E0216 17:02:08.446060 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.946041645 +0000 UTC m=+8.096214825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.446557 master-0 kubenswrapper[15493]: E0216 17:02:08.446092 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.946077076 +0000 UTC m=+8.096250256 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.447049 master-0 kubenswrapper[15493]: E0216 17:02:08.447007 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.447134 master-0 kubenswrapper[15493]: E0216 17:02:08.447105 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.947082813 +0000 UTC m=+8.097255933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.447181 master-0 kubenswrapper[15493]: E0216 17:02:08.447109 15493 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.447223 master-0 kubenswrapper[15493]: E0216 17:02:08.447189 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.947172045 +0000 UTC m=+8.097345215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448348 master-0 kubenswrapper[15493]: E0216 17:02:08.448297 15493 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448421 master-0 kubenswrapper[15493]: E0216 17:02:08.448339 15493 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448421 master-0 kubenswrapper[15493]: E0216 17:02:08.448386 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.948368357 +0000 UTC m=+8.098541437 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448512 master-0 kubenswrapper[15493]: E0216 17:02:08.448427 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.948408518 +0000 UTC m=+8.098581628 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448512 master-0 kubenswrapper[15493]: E0216 17:02:08.448440 15493 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448512 master-0 kubenswrapper[15493]: E0216 17:02:08.448456 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448512 master-0 kubenswrapper[15493]: E0216 17:02:08.448467 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448512 master-0 kubenswrapper[15493]: E0216 17:02:08.448487 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.94847407 +0000 UTC m=+8.098647260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448512 master-0 kubenswrapper[15493]: E0216 17:02:08.448503 15493 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448512 master-0 kubenswrapper[15493]: E0216 17:02:08.448438 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448509 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.94849922 +0000 UTC m=+8.098672420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448545 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448578 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448597 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448552 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.948542182 +0000 UTC m=+8.098715262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448604 15493 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448634 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448642 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.948626904 +0000 UTC m=+8.098800014 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448677 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.948663265 +0000 UTC m=+8.098836375 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448706 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.948693766 +0000 UTC m=+8.098866876 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448727 15493 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448738 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448766 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448791 15493 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448710 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.448790 master-0 kubenswrapper[15493]: E0216 17:02:08.448821 15493 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.448731 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.948720776 +0000 UTC m=+8.098893886 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.448679 15493 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.448855 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.94884271 +0000 UTC m=+8.099015820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.448871 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.448879 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.94886759 +0000 UTC m=+8.099040700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.448802 15493 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.448913 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.948902391 +0000 UTC m=+8.099075501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.448993 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.948981173 +0000 UTC m=+8.099154283 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449018 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949007234 +0000 UTC m=+8.099180344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449041 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949031465 +0000 UTC m=+8.099204575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449064 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949053715 +0000 UTC m=+8.099226825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449080 15493 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449085 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949075176 +0000 UTC m=+8.099248286 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449117 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949107187 +0000 UTC m=+8.099280297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449135 15493 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449145 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949131137 +0000 UTC m=+8.099304337 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449168 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949158338 +0000 UTC m=+8.099331448 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449166 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.449423 master-0 kubenswrapper[15493]: E0216 17:02:08.449181 15493 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.450254 master-0 kubenswrapper[15493]: E0216 17:02:08.449190 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949180539 +0000 UTC m=+8.099353649 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.450254 master-0 kubenswrapper[15493]: E0216 17:02:08.449520 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949486207 +0000 UTC m=+8.099659317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.450254 master-0 kubenswrapper[15493]: E0216 17:02:08.449201 15493 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.450254 master-0 kubenswrapper[15493]: E0216 17:02:08.449559 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949541408 +0000 UTC m=+8.099714588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.450254 master-0 kubenswrapper[15493]: E0216 17:02:08.449224 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.450254 master-0 kubenswrapper[15493]: E0216 17:02:08.449595 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949579319 +0000 UTC m=+8.099752429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.450254 master-0 kubenswrapper[15493]: E0216 17:02:08.449636 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.9496166 +0000 UTC m=+8.099789770 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.450254 master-0 kubenswrapper[15493]: E0216 17:02:08.449674 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.949657921 +0000 UTC m=+8.099831031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.451562 master-0 kubenswrapper[15493]: E0216 17:02:08.451522 15493 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.451614 master-0 kubenswrapper[15493]: E0216 17:02:08.451567 15493 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.451614 master-0 kubenswrapper[15493]: E0216 17:02:08.451601 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.451669 master-0 kubenswrapper[15493]: E0216 17:02:08.451650 15493 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.451708 master-0 kubenswrapper[15493]: E0216 17:02:08.451651 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.451708 master-0 kubenswrapper[15493]: E0216 17:02:08.451533 15493 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.451783 master-0 kubenswrapper[15493]: E0216 17:02:08.451671 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.451783 master-0 kubenswrapper[15493]: E0216 17:02:08.451691 15493 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.451783 master-0 kubenswrapper[15493]: E0216 17:02:08.451730 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.451783 master-0 kubenswrapper[15493]: E0216 17:02:08.451579 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:08.951566912 +0000 UTC m=+8.101739992 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.451783 master-0 kubenswrapper[15493]: E0216 17:02:08.451768 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.451783 master-0 kubenswrapper[15493]: E0216 17:02:08.451780 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.951771927 +0000 UTC m=+8.101945007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.451783 master-0 kubenswrapper[15493]: E0216 17:02:08.451793 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451807 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.951796958 +0000 UTC m=+8.101970038 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451823 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.951817058 +0000 UTC m=+8.101990138 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451836 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451850 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.951833909 +0000 UTC m=+8.102007019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451872 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451875 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.95186393 +0000 UTC m=+8.102037040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451901 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.95189045 +0000 UTC m=+8.102063560 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451916 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451915 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451986 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/ovnkube-identity-cm: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451951 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.951913681 +0000 UTC m=+8.102086791 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.451850 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452032 master-0 kubenswrapper[15493]: E0216 17:02:08.452040 15493 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452047 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952023454 +0000 UTC m=+8.102196564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452073 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452092 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952074575 +0000 UTC m=+8.102247775 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452129 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452184 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452212 15493 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452181 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952133777 +0000 UTC m=+8.102306847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452235 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452248 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952238839 +0000 UTC m=+8.102411919 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452300 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452401 master-0 kubenswrapper[15493]: E0216 17:02:08.452342 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952321082 +0000 UTC m=+8.102494202 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452700 master-0 kubenswrapper[15493]: E0216 17:02:08.452386 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952374123 +0000 UTC m=+8.102547233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452700 master-0 kubenswrapper[15493]: E0216 17:02:08.452458 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952445585 +0000 UTC m=+8.102618695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452700 master-0 kubenswrapper[15493]: E0216 17:02:08.452485 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952473616 +0000 UTC m=+8.102646726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452700 master-0 kubenswrapper[15493]: E0216 17:02:08.452509 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952497126 +0000 UTC m=+8.102670236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-identity-cm" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452700 master-0 kubenswrapper[15493]: E0216 17:02:08.452531 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952521097 +0000 UTC m=+8.102694207 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452700 master-0 kubenswrapper[15493]: E0216 17:02:08.452553 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952543308 +0000 UTC m=+8.102716418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452700 master-0 kubenswrapper[15493]: E0216 17:02:08.452555 15493 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452700 master-0 kubenswrapper[15493]: E0216 17:02:08.452567 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.452938 master-0 kubenswrapper[15493]: E0216 17:02:08.452574 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952565118 +0000 UTC m=+8.102738228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452938 master-0 kubenswrapper[15493]: E0216 17:02:08.452609 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.452938 master-0 kubenswrapper[15493]: E0216 17:02:08.452891 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952825445 +0000 UTC m=+8.102998625 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.453048 master-0 kubenswrapper[15493]: E0216 17:02:08.452615 15493 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.453048 master-0 kubenswrapper[15493]: E0216 17:02:08.452965 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.453048 master-0 kubenswrapper[15493]: E0216 17:02:08.452972 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.453048 master-0 kubenswrapper[15493]: E0216 17:02:08.453021 15493 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.453048 master-0 kubenswrapper[15493]: E0216 17:02:08.453024 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.453269 master-0 kubenswrapper[15493]: E0216 17:02:08.453063 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.952913927 +0000 UTC m=+8.103087287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.453269 master-0 kubenswrapper[15493]: E0216 17:02:08.453067 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.453269 master-0 kubenswrapper[15493]: E0216 17:02:08.453115 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953090382 +0000 UTC m=+8.103263492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.453269 master-0 kubenswrapper[15493]: E0216 17:02:08.453168 15493 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.453269 master-0 kubenswrapper[15493]: E0216 17:02:08.453195 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953174204 +0000 UTC m=+8.103347404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.453269 master-0 kubenswrapper[15493]: E0216 17:02:08.453233 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953216555 +0000 UTC m=+8.103389835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.453269 master-0 kubenswrapper[15493]: E0216 17:02:08.453271 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953253726 +0000 UTC m=+8.103427026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.453468 master-0 kubenswrapper[15493]: E0216 17:02:08.453316 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953298687 +0000 UTC m=+8.103471867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.453468 master-0 kubenswrapper[15493]: E0216 17:02:08.453356 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953339519 +0000 UTC m=+8.103512819 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.453877 master-0 kubenswrapper[15493]: E0216 17:02:08.453827 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953801681 +0000 UTC m=+8.103974971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.454041 master-0 kubenswrapper[15493]: E0216 17:02:08.453886 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953870313 +0000 UTC m=+8.104043423 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.454041 master-0 kubenswrapper[15493]: E0216 17:02:08.453962 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953906174 +0000 UTC m=+8.104079294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454041 master-0 kubenswrapper[15493]: E0216 17:02:08.454000 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.953983266 +0000 UTC m=+8.104156586 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.454041 master-0 kubenswrapper[15493]: E0216 17:02:08.454033 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.954018397 +0000 UTC m=+8.104191507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454175 master-0 kubenswrapper[15493]: E0216 17:02:08.454066 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.954052087 +0000 UTC m=+8.104225197 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454175 master-0 kubenswrapper[15493]: E0216 17:02:08.453357 15493 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454175 master-0 kubenswrapper[15493]: E0216 17:02:08.454138 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.954123349 +0000 UTC m=+8.104296469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454175 master-0 kubenswrapper[15493]: E0216 17:02:08.453495 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454352 master-0 kubenswrapper[15493]: E0216 17:02:08.454210 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.954197121 +0000 UTC m=+8.104370231 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454352 master-0 kubenswrapper[15493]: E0216 17:02:08.453556 15493 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454352 master-0 kubenswrapper[15493]: E0216 17:02:08.454301 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.954283114 +0000 UTC m=+8.104456224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454460 master-0 kubenswrapper[15493]: E0216 17:02:08.453547 15493 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454460 master-0 kubenswrapper[15493]: E0216 17:02:08.454421 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.954399677 +0000 UTC m=+8.104572937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454460 master-0 kubenswrapper[15493]: E0216 17:02:08.453559 15493 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.454863 master-0 kubenswrapper[15493]: E0216 17:02:08.454798 15493 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.454982 master-0 kubenswrapper[15493]: E0216 17:02:08.453581 15493 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455057 master-0 kubenswrapper[15493]: E0216 17:02:08.455000 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.954971222 +0000 UTC m=+8.105144342 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455057 master-0 kubenswrapper[15493]: E0216 17:02:08.453618 15493 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455057 master-0 kubenswrapper[15493]: E0216 17:02:08.455051 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955032533 +0000 UTC m=+8.105205673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455147 master-0 kubenswrapper[15493]: E0216 17:02:08.453634 15493 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455191 master-0 kubenswrapper[15493]: E0216 17:02:08.455092 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955074644 +0000 UTC m=+8.105247955 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455191 master-0 kubenswrapper[15493]: E0216 17:02:08.453637 15493 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455191 master-0 kubenswrapper[15493]: E0216 17:02:08.453657 15493 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455191 master-0 kubenswrapper[15493]: E0216 17:02:08.453655 15493 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455303 master-0 kubenswrapper[15493]: E0216 17:02:08.453649 15493 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455303 master-0 kubenswrapper[15493]: E0216 17:02:08.453699 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455303 master-0 kubenswrapper[15493]: E0216 17:02:08.453688 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455303 master-0 kubenswrapper[15493]: E0216 17:02:08.453716 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455303 master-0 kubenswrapper[15493]: E0216 17:02:08.453720 15493 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455303 master-0 kubenswrapper[15493]: E0216 17:02:08.453748 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455303 master-0 kubenswrapper[15493]: E0216 17:02:08.453742 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455483 master-0 kubenswrapper[15493]: E0216 17:02:08.455329 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.95529821 +0000 UTC m=+8.105471420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455483 master-0 kubenswrapper[15493]: E0216 17:02:08.455385 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955365052 +0000 UTC m=+8.105538182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455483 master-0 kubenswrapper[15493]: E0216 17:02:08.455423 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955400383 +0000 UTC m=+8.105573493 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455483 master-0 kubenswrapper[15493]: E0216 17:02:08.455465 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955446464 +0000 UTC m=+8.105619574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455599 master-0 kubenswrapper[15493]: E0216 17:02:08.455503 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955485545 +0000 UTC m=+8.105658655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455599 master-0 kubenswrapper[15493]: E0216 17:02:08.455547 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955528407 +0000 UTC m=+8.105701527 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455599 master-0 kubenswrapper[15493]: E0216 17:02:08.455592 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955577148 +0000 UTC m=+8.105750258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455689 master-0 kubenswrapper[15493]: E0216 17:02:08.455629 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955612999 +0000 UTC m=+8.105786309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455689 master-0 kubenswrapper[15493]: E0216 17:02:08.455674 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.95566406 +0000 UTC m=+8.105837170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455751 master-0 kubenswrapper[15493]: E0216 17:02:08.455714 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955698891 +0000 UTC m=+8.105872001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:08.455790 master-0 kubenswrapper[15493]: E0216 17:02:08.455760 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955744162 +0000 UTC m=+8.105917272 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.455826 master-0 kubenswrapper[15493]: E0216 17:02:08.455798 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:08.955787853 +0000 UTC m=+8.105960963 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:08.462264 master-0 kubenswrapper[15493]: W0216 17:02:08.462134 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.462546 master-0 kubenswrapper[15493]: E0216 17:02:08.462293 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.480775 master-0 kubenswrapper[15493]: W0216 17:02:08.480671 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cco-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dcco-trusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.480775 master-0 kubenswrapper[15493]: E0216 17:02:08.480766 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cco-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dcco-trusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.504385 master-0 kubenswrapper[15493]: W0216 17:02:08.504261 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.504485 master-0 kubenswrapper[15493]: E0216 17:02:08.504424 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-cloud-credential-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.520548 master-0 kubenswrapper[15493]: W0216 17:02:08.520408 15493 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.520548 master-0 kubenswrapper[15493]: E0216 17:02:08.520495 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.540552 master-0 kubenswrapper[15493]: W0216 17:02:08.540484 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.540682 master-0 kubenswrapper[15493]: E0216 17:02:08.540562 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.560588 master-0 kubenswrapper[15493]: W0216 17:02:08.560496 15493 reflector.go:561] object-"openshift-etcd"/"installer-sa-dockercfg-rxv66": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-rxv66&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.560759 master-0 kubenswrapper[15493]: E0216 17:02:08.560604 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd\"/\"installer-sa-dockercfg-rxv66\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-rxv66&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.581662 master-0 kubenswrapper[15493]: W0216 17:02:08.581584 15493 reflector.go:561] object-"openshift-etcd"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.581797 master-0 kubenswrapper[15493]: E0216 17:02:08.581670 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.600847 master-0 kubenswrapper[15493]: W0216 17:02:08.600761 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.600847 master-0 kubenswrapper[15493]: E0216 17:02:08.600845 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.621190 master-0 kubenswrapper[15493]: W0216 17:02:08.621083 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-dockercfg-b9gfw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.621280 master-0 kubenswrapper[15493]: E0216 17:02:08.621208 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-autoscaler-operator-dockercfg-b9gfw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-dockercfg-b9gfw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.641761 master-0 kubenswrapper[15493]: W0216 17:02:08.641649 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-gtxjb&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.641886 master-0 kubenswrapper[15493]: E0216 17:02:08.641785 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-gtxjb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-gtxjb&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.668402 master-0 kubenswrapper[15493]: W0216 17:02:08.668332 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.668529 master-0 kubenswrapper[15493]: E0216 17:02:08.668416 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.680820 master-0 kubenswrapper[15493]: W0216 17:02:08.680763 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-autoscaler-operator-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.680882 master-0 kubenswrapper[15493]: E0216 17:02:08.680832 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-autoscaler-operator-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.700617 master-0 kubenswrapper[15493]: W0216 17:02:08.700533 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.700617 master-0 kubenswrapper[15493]: E0216 17:02:08.700608 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.721355 master-0 kubenswrapper[15493]: W0216 17:02:08.721261 15493 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy-cluster-autoscaler-operator&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.721562 master-0 kubenswrapper[15493]: E0216 17:02:08.721352 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy-cluster-autoscaler-operator\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy-cluster-autoscaler-operator&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.740758 master-0 kubenswrapper[15493]: W0216 17:02:08.740654 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-dockercfg-mzz6s&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.740758 master-0 kubenswrapper[15493]: E0216 17:02:08.740739 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-dockercfg-mzz6s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-dockercfg-mzz6s&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.761043 master-0 kubenswrapper[15493]: W0216 17:02:08.760945 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.761043 master-0 kubenswrapper[15493]: E0216 17:02:08.761029 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.780703 master-0 kubenswrapper[15493]: W0216 17:02:08.780621 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dcluster-baremetal-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.780703 master-0 kubenswrapper[15493]: E0216 17:02:08.780691 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dcluster-baremetal-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.800952 master-0 kubenswrapper[15493]: W0216 17:02:08.800868 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-webhook-server-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.800952 master-0 kubenswrapper[15493]: E0216 17:02:08.800945 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-webhook-server-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.820260 master-0 kubenswrapper[15493]: W0216 17:02:08.820166 15493 reflector.go:561] object-"openshift-machine-api"/"baremetal-kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dbaremetal-kube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.820260 master-0 kubenswrapper[15493]: E0216 17:02:08.820242 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"baremetal-kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dbaremetal-kube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.840893 master-0 kubenswrapper[15493]: W0216 17:02:08.840802 15493 reflector.go:561] object-"openshift-insights"/"operator-dockercfg-rzjlw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Doperator-dockercfg-rzjlw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.841039 master-0 kubenswrapper[15493]: E0216 17:02:08.840939 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"operator-dockercfg-rzjlw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Doperator-dockercfg-rzjlw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.861369 master-0 kubenswrapper[15493]: W0216 17:02:08.861289 15493 reflector.go:561] object-"openshift-insights"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.862825 master-0 kubenswrapper[15493]: 
E0216 17:02:08.861380 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.881459 master-0 kubenswrapper[15493]: W0216 17:02:08.881357 15493 reflector.go:561] object-"openshift-insights"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.881459 master-0 kubenswrapper[15493]: E0216 17:02:08.881445 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.902874 master-0 kubenswrapper[15493]: W0216 17:02:08.902725 15493 reflector.go:561] object-"openshift-insights"/"openshift-insights-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.902874 master-0 kubenswrapper[15493]: E0216 17:02:08.902840 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"openshift-insights-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.921255 master-0 kubenswrapper[15493]: W0216 17:02:08.921065 15493 reflector.go:561] object-"openshift-insights"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.921255 master-0 kubenswrapper[15493]: E0216 17:02:08.921185 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.941234 master-0 kubenswrapper[15493]: W0216 17:02:08.941145 15493 reflector.go:561] object-"openshift-insights"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.941431 master-0 kubenswrapper[15493]: E0216 17:02:08.941234 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.961257 master-0 kubenswrapper[15493]: W0216 17:02:08.961169 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.961257 master-0 kubenswrapper[15493]: E0216 17:02:08.961253 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"cluster-storage-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:08.981475 master-0 kubenswrapper[15493]: W0216 17:02:08.981396 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-dockercfg-x2982&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:08.981601 master-0 kubenswrapper[15493]: E0216 17:02:08.981484 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"cluster-storage-operator-dockercfg-x2982\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-dockercfg-x2982&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.001976 master-0 kubenswrapper[15493]: W0216 17:02:09.001808 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.002144 master-0 kubenswrapper[15493]: E0216 17:02:09.001986 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection 
refused" logger="UnhandledError" Feb 16 17:02:09.003084 master-0 kubenswrapper[15493]: I0216 17:02:09.003053 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:09.003247 master-0 kubenswrapper[15493]: I0216 17:02:09.003220 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:09.003328 master-0 kubenswrapper[15493]: I0216 17:02:09.003301 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:09.003404 master-0 kubenswrapper[15493]: I0216 17:02:09.003353 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:09.003477 master-0 kubenswrapper[15493]: I0216 17:02:09.003430 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:09.003537 master-0 kubenswrapper[15493]: I0216 17:02:09.003512 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:09.003645 master-0 kubenswrapper[15493]: I0216 17:02:09.003614 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:09.003739 master-0 kubenswrapper[15493]: I0216 17:02:09.003692 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.003768 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.003806 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.003867 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.003946 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.003980 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004153 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004317 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004359 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004388 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004410 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004430 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004461 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004492 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004513 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004540 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004561 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004580 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004620 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004640 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004698 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004724 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004746 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004780 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004807 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004826 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004855 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004875 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004895 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004915 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004961 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.004988 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.005005 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.005042 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.005075 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:09.005178 master-0 kubenswrapper[15493]: I0216 17:02:09.005102 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:09.006451 master-0 kubenswrapper[15493]: I0216 17:02:09.006159 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:09.006451 master-0 kubenswrapper[15493]: I0216 17:02:09.006194 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:09.006451 master-0 kubenswrapper[15493]: I0216 17:02:09.006217 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:09.006451 master-0 kubenswrapper[15493]: I0216 17:02:09.006238 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:09.006451 master-0 kubenswrapper[15493]: I0216 17:02:09.006290 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:09.006451 master-0 kubenswrapper[15493]: I0216 17:02:09.006399 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:09.006451 master-0 kubenswrapper[15493]: I0216 17:02:09.006437 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:09.006824 master-0 kubenswrapper[15493]: I0216 17:02:09.006464 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:09.006824 master-0 kubenswrapper[15493]: I0216 17:02:09.006519 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:09.006824 master-0 kubenswrapper[15493]: I0216 17:02:09.006548 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:09.006824 master-0 kubenswrapper[15493]: I0216 17:02:09.006724 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:09.006824 master-0 kubenswrapper[15493]: I0216 17:02:09.006803 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:09.006824 master-0 kubenswrapper[15493]: I0216 17:02:09.006831 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:09.007398 master-0 kubenswrapper[15493]: I0216 17:02:09.006855 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:09.007398 master-0 kubenswrapper[15493]: I0216 17:02:09.006917 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:09.007398 master-0 kubenswrapper[15493]: I0216 17:02:09.006978 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007075 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007643 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007692 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007727 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007757 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007787 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007831 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007872 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007904 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.007951 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.008073 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.008113 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:09.008233 master-0 kubenswrapper[15493]: I0216 17:02:09.008167 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:09.008866 master-0 kubenswrapper[15493]: I0216 17:02:09.008795 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:09.009196 master-0 kubenswrapper[15493]: I0216 17:02:09.008961 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:09.009196 master-0 kubenswrapper[15493]: I0216 17:02:09.009085 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:09.009196 master-0 kubenswrapper[15493]: I0216 17:02:09.009106 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:09.009473 master-0 kubenswrapper[15493]: I0216 17:02:09.009390 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009578 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009609 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009635 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009655 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: 
\"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009674 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009702 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009721 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009740 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009761 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009778 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009795 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009812 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" 
Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009830 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009848 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009882 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009913 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009957 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.009978 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010029 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010048 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " 
pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010065 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010084 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010110 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010127 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010146 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010165 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010183 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010209 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:09.010446 master-0 
kubenswrapper[15493]: I0216 17:02:09.010227 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010248 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010266 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010289 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010307 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010324 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010344 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010368 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 
17:02:09.010446 master-0 kubenswrapper[15493]: I0216 17:02:09.010387 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:09.011734 master-0 kubenswrapper[15493]: I0216 17:02:09.011468 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:09.011734 master-0 kubenswrapper[15493]: I0216 17:02:09.011563 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:09.011734 master-0 kubenswrapper[15493]: I0216 17:02:09.011597 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:09.011734 master-0 kubenswrapper[15493]: I0216 17:02:09.011620 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:09.011734 master-0 kubenswrapper[15493]: I0216 17:02:09.011639 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:09.011734 master-0 kubenswrapper[15493]: I0216 17:02:09.011663 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:09.011907 master-0 kubenswrapper[15493]: I0216 17:02:09.011743 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:09.021345 master-0 kubenswrapper[15493]: W0216 17:02:09.021276 15493 reflector.go:561] 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-hk5sk&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.021421 master-0 kubenswrapper[15493]: E0216 17:02:09.021352 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-hk5sk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-hk5sk&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.041295 master-0 kubenswrapper[15493]: W0216 17:02:09.041186 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.041457 master-0 kubenswrapper[15493]: E0216 17:02:09.041333 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.061384 master-0 kubenswrapper[15493]: W0216 17:02:09.061314 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.061448 master-0 kubenswrapper[15493]: E0216 17:02:09.061387 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.081393 master-0 kubenswrapper[15493]: W0216 17:02:09.081307 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-7mlbn&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.081539 master-0 kubenswrapper[15493]: E0216 17:02:09.081404 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-7mlbn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-7mlbn&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.100940 master-0 kubenswrapper[15493]: W0216 17:02:09.100846 15493 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.101076 master-0 kubenswrapper[15493]: E0216 17:02:09.100945 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.121055 master-0 kubenswrapper[15493]: W0216 17:02:09.120908 15493 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.121055 master-0 kubenswrapper[15493]: E0216 17:02:09.121047 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.141166 master-0 kubenswrapper[15493]: W0216 17:02:09.141017 15493 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.141291 master-0 kubenswrapper[15493]: E0216 17:02:09.141161 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.161120 master-0 kubenswrapper[15493]: W0216 17:02:09.160980 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: 
connection refused Feb 16 17:02:09.161120 master-0 kubenswrapper[15493]: E0216 17:02:09.161089 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.180900 master-0 kubenswrapper[15493]: W0216 17:02:09.180660 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.180900 master-0 kubenswrapper[15493]: E0216 17:02:09.180773 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.201107 master-0 kubenswrapper[15493]: W0216 17:02:09.201041 15493 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.201107 master-0 kubenswrapper[15493]: E0216 17:02:09.201109 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.221431 master-0 kubenswrapper[15493]: W0216 17:02:09.221309 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.221431 master-0 kubenswrapper[15493]: E0216 17:02:09.221401 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.241481 master-0 kubenswrapper[15493]: W0216 17:02:09.241381 15493 reflector.go:561] 
object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-q2gzj&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.241481 master-0 kubenswrapper[15493]: E0216 17:02:09.241453 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-q2gzj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-q2gzj&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.261082 master-0 kubenswrapper[15493]: W0216 17:02:09.261018 15493 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.261210 master-0 kubenswrapper[15493]: E0216 17:02:09.261094 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.280369 master-0 kubenswrapper[15493]: W0216 17:02:09.280282 15493 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.280369 master-0 kubenswrapper[15493]: E0216 17:02:09.280344 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.301172 master-0 kubenswrapper[15493]: W0216 17:02:09.301102 15493 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.301276 master-0 kubenswrapper[15493]: E0216 17:02:09.301187 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.321112 master-0 kubenswrapper[15493]: W0216 17:02:09.321050 15493 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.321268 master-0 kubenswrapper[15493]: E0216 17:02:09.321150 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.341446 master-0 kubenswrapper[15493]: W0216 17:02:09.341369 15493 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.341446 master-0 kubenswrapper[15493]: E0216 17:02:09.341439 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.361199 master-0 kubenswrapper[15493]: W0216 17:02:09.361030 15493 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.361199 master-0 kubenswrapper[15493]: E0216 17:02:09.361147 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.381018 master-0 kubenswrapper[15493]: W0216 17:02:09.380882 15493 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection 
refused Feb 16 17:02:09.381534 master-0 kubenswrapper[15493]: E0216 17:02:09.381019 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.400494 master-0 kubenswrapper[15493]: I0216 17:02:09.400442 15493 request.go:700] Waited for 2.013427473s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0 Feb 16 17:02:09.401424 master-0 kubenswrapper[15493]: W0216 17:02:09.401334 15493 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.401561 master-0 kubenswrapper[15493]: E0216 17:02:09.401537 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.420753 master-0 kubenswrapper[15493]: W0216 17:02:09.420684 15493 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.421049 master-0 kubenswrapper[15493]: E0216 17:02:09.421023 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.441110 master-0 kubenswrapper[15493]: W0216 17:02:09.440959 15493 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-ztpz8&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.441110 master-0 kubenswrapper[15493]: E0216 17:02:09.441064 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-ztpz8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-ztpz8&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.446745 master-0 kubenswrapper[15493]: E0216 17:02:09.446687 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:09.446745 master-0 kubenswrapper[15493]: E0216 17:02:09.446742 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:09.446895 master-0 kubenswrapper[15493]: E0216 17:02:09.446855 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:09.94682837 +0000 UTC m=+9.097001440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:09.449082 master-0 kubenswrapper[15493]: E0216 17:02:09.449049 15493 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:09.449082 master-0 kubenswrapper[15493]: E0216 17:02:09.449076 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:09.449171 master-0 kubenswrapper[15493]: E0216 17:02:09.449120 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:09.949109621 +0000 UTC m=+9.099282691 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:09.460573 master-0 kubenswrapper[15493]: W0216 17:02:09.460525 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.460661 master-0 kubenswrapper[15493]: E0216 17:02:09.460586 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.481006 master-0 kubenswrapper[15493]: W0216 17:02:09.480950 15493 reflector.go:561] object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-qlqr4": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-qlqr4&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.481200 master-0 kubenswrapper[15493]: E0216 17:02:09.481182 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"installer-sa-dockercfg-qlqr4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-qlqr4&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.501164 master-0 kubenswrapper[15493]: W0216 17:02:09.501050 15493 reflector.go:561] object-"openshift-kube-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.501352 master-0 kubenswrapper[15493]: E0216 17:02:09.501188 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.521782 master-0 kubenswrapper[15493]: W0216 17:02:09.521639 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.521872 master-0 kubenswrapper[15493]: E0216 17:02:09.521796 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.542170 master-0 kubenswrapper[15493]: W0216 17:02:09.542054 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.542258 master-0 kubenswrapper[15493]: E0216 17:02:09.542196 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.561508 master-0 kubenswrapper[15493]: W0216 17:02:09.561420 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-kh5s4&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.561617 master-0 kubenswrapper[15493]: E0216 17:02:09.561516 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-kh5s4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-kh5s4&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.581003 master-0 kubenswrapper[15493]: W0216 17:02:09.580861 15493 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.581068 master-0 kubenswrapper[15493]: E0216 17:02:09.581018 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" 
Feb 16 17:02:09.601515 master-0 kubenswrapper[15493]: W0216 17:02:09.601423 15493 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-r5p9m&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.601515 master-0 kubenswrapper[15493]: E0216 17:02:09.601510 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-r5p9m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-r5p9m&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.620781 master-0 kubenswrapper[15493]: W0216 17:02:09.620665 15493 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-5lx84&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.620978 master-0 kubenswrapper[15493]: E0216 17:02:09.620781 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-5lx84\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-5lx84&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.641833 master-0 kubenswrapper[15493]: W0216 17:02:09.641750 15493 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.641915 master-0 kubenswrapper[15493]: E0216 17:02:09.641855 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.661667 master-0 kubenswrapper[15493]: W0216 17:02:09.661558 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-q5h8t&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.661667 master-0 kubenswrapper[15493]: E0216 17:02:09.661633 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-q5h8t\": Failed to watch *v1.Secret: failed to list 
*v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-q5h8t&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.681204 master-0 kubenswrapper[15493]: W0216 17:02:09.681097 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-wnnb7&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.681369 master-0 kubenswrapper[15493]: E0216 17:02:09.681199 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wnnb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-wnnb7&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.701328 master-0 kubenswrapper[15493]: W0216 17:02:09.701192 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.701328 master-0 kubenswrapper[15493]: E0216 17:02:09.701270 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.721654 master-0 kubenswrapper[15493]: W0216 17:02:09.721542 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.721654 master-0 kubenswrapper[15493]: E0216 17:02:09.721631 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.740645 master-0 kubenswrapper[15493]: W0216 17:02:09.740529 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.740645 master-0 kubenswrapper[15493]: E0216 17:02:09.740625 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.761348 master-0 kubenswrapper[15493]: W0216 17:02:09.761242 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.761348 master-0 kubenswrapper[15493]: E0216 17:02:09.761328 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.780641 master-0 kubenswrapper[15493]: W0216 17:02:09.780562 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.780641 master-0 kubenswrapper[15493]: E0216 17:02:09.780622 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.801153 master-0 kubenswrapper[15493]: W0216 17:02:09.801039 15493 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-6858s": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-6858s&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.801153 master-0 kubenswrapper[15493]: E0216 17:02:09.801148 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-6858s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-6858s&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.820819 master-0 kubenswrapper[15493]: W0216 17:02:09.820712 15493 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-nslxl&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.820951 master-0 kubenswrapper[15493]: E0216 17:02:09.820838 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-nslxl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-nslxl&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.840935 master-0 kubenswrapper[15493]: W0216 17:02:09.840835 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-t46bw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.841057 master-0 kubenswrapper[15493]: E0216 17:02:09.840983 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-t46bw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-t46bw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.861612 master-0 kubenswrapper[15493]: W0216 17:02:09.861517 15493 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.861612 master-0 kubenswrapper[15493]: E0216 17:02:09.861607 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.881043 master-0 kubenswrapper[15493]: W0216 17:02:09.880897 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcloud-controller-manager-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.881177 master-0 kubenswrapper[15493]: E0216 17:02:09.881045 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cloud-controller-manager-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcloud-controller-manager-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.901153 master-0 kubenswrapper[15493]: W0216 17:02:09.901025 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcluster-cloud-controller-manager-dockercfg-lc8g2&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.901283 master-0 kubenswrapper[15493]: E0216 17:02:09.901170 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cluster-cloud-controller-manager-dockercfg-lc8g2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcluster-cloud-controller-manager-dockercfg-lc8g2&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.921291 master-0 kubenswrapper[15493]: W0216 17:02:09.921205 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.921398 master-0 kubenswrapper[15493]: E0216 17:02:09.921294 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cloud-controller-manager-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.941498 master-0 kubenswrapper[15493]: W0216 17:02:09.941416 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.941685 master-0 kubenswrapper[15493]: E0216 17:02:09.941500 15493 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.961974 master-0 kubenswrapper[15493]: W0216 17:02:09.961733 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.961974 master-0 kubenswrapper[15493]: E0216 17:02:09.961837 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:09.981712 master-0 kubenswrapper[15493]: W0216 17:02:09.981612 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:09.981712 master-0 kubenswrapper[15493]: E0216 17:02:09.981688 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:10.000781 master-0 kubenswrapper[15493]: E0216 17:02:10.000687 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 17:02:10.000781 master-0 kubenswrapper[15493]: I0216 17:02:10.000756 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:10.003535 master-0 kubenswrapper[15493]: E0216 17:02:10.003461 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.003535 master-0 kubenswrapper[15493]: E0216 17:02:10.003494 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.003535 master-0 kubenswrapper[15493]: E0216 17:02:10.003528 
15493 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003552 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.003528602 +0000 UTC m=+10.153701682 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003555 15493 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003575 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.003564913 +0000 UTC m=+10.153737993 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003575 15493 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003593 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.003584673 +0000 UTC m=+10.153757763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003710 15493 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003743 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.003705837 +0000 UTC m=+10.153878967 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003741 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003798 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.003772888 +0000 UTC m=+10.153945988 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003825 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.003812889 +0000 UTC m=+10.153985999 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.003844 master-0 kubenswrapper[15493]: E0216 17:02:10.003845 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.00383566 +0000 UTC m=+10.154008770 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.004762 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.004824 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.004848 15493 configmap.go:193] Couldn't get configMap openshift-network-operator/iptables-alerter-script: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.004873 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.004875 15493 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.004830 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.004808366 +0000 UTC m=+10.154981516 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.004960 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.004915569 +0000 UTC m=+10.155088669 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.004987 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.00497779 +0000 UTC m=+10.155151020 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "iptables-alerter-script" (UniqueName: "kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.005013 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.005004841 +0000 UTC m=+10.155178021 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.004902 master-0 kubenswrapper[15493]: E0216 17:02:10.005029 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.005022051 +0000 UTC m=+10.155195131 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006088 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006120 15493 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006182 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.006160482 +0000 UTC m=+10.156333582 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006223 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.006205243 +0000 UTC m=+10.156378353 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006237 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006249 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006275 15493 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006296 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006256 15493 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.006297 master-0 kubenswrapper[15493]: E0216 17:02:10.006326 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006332 15493 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006357 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006384 15493 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006389 15493 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006410 15493 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006424 15493 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006442 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 
17:02:10.006411 15493 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006447 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006187 15493 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006494 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006501 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006530 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006543 15493 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006552 15493 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006549 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006546 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006452 15493 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006333 15493 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006536 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006475 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006831 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync 
configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006837 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006272 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.006260024 +0000 UTC m=+10.156433104 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006915 15493 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006958 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006977 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007026 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007060 15493 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006964 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.006908411 +0000 UTC m=+10.157081521 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007098 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006482 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007122 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007096136 +0000 UTC m=+10.157269206 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006507 15493 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007145 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007137207 +0000 UTC m=+10.157310277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007196 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007188299 +0000 UTC m=+10.157361499 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006513 15493 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007213 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007205599 +0000 UTC m=+10.157378799 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007233 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.00722672 +0000 UTC m=+10.157399920 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006541 15493 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007249 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.0072424 +0000 UTC m=+10.157415600 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006560 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007269 15493 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006570 15493 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006582 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006483 15493 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.006998 15493 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007267 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007260271 +0000 UTC m=+10.157433481 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007429 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007421845 +0000 UTC m=+10.157594905 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007443 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007437735 +0000 UTC m=+10.157610805 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007460 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007453426 +0000 UTC m=+10.157626496 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007475 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007467536 +0000 UTC m=+10.157640606 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007500 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007491647 +0000 UTC m=+10.157664817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007523 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007514217 +0000 UTC m=+10.157687427 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007544 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007537738 +0000 UTC m=+10.157710808 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007563 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007556228 +0000 UTC m=+10.157729298 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007582 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007576419 +0000 UTC m=+10.157749489 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007598 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007593099 +0000 UTC m=+10.157766169 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007609 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.00760429 +0000 UTC m=+10.157777360 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007621 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.00761559 +0000 UTC m=+10.157788660 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007633 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.00762733 +0000 UTC m=+10.157800400 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007645 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007640431 +0000 UTC m=+10.157813501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007656 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007661 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007653791 +0000 UTC m=+10.157826861 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007728 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007710263 +0000 UTC m=+10.157883423 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007756 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007741923 +0000 UTC m=+10.157915123 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007790 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007780884 +0000 UTC m=+10.157953984 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007814 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007802815 +0000 UTC m=+10.157975925 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007828 15493 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007841 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007824966 +0000 UTC m=+10.157998066 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007861 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007852666 +0000 UTC m=+10.158025766 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007877 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007894 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007877787 +0000 UTC m=+10.158050877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007948 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007910848 +0000 UTC m=+10.158083928 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.007977 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.007965319 +0000 UTC m=+10.158138549 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.008000 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.008013 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.008028 15493 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.008003 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.0079933 +0000 UTC m=+10.158166500 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.008052 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.008069 15493 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.008078 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008062842 +0000 UTC m=+10.158235992 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.007889 master-0 kubenswrapper[15493]: E0216 17:02:10.008093 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008102 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008090933 +0000 UTC m=+10.158264103 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008119 15493 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008125 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008115993 +0000 UTC m=+10.158289203 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008205 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008192015 +0000 UTC m=+10.158365095 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008224 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008216666 +0000 UTC m=+10.158389746 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008244 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008236546 +0000 UTC m=+10.158409626 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008259 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008252997 +0000 UTC m=+10.158426077 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008274 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008267497 +0000 UTC m=+10.158440577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008280 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008291 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008283648 +0000 UTC m=+10.158456728 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008297 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008331 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008311598 +0000 UTC m=+10.158484698 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008358 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008347219 +0000 UTC m=+10.158520329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008386 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.00837226 +0000 UTC m=+10.158545360 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008409 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008398281 +0000 UTC m=+10.158571381 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008421 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008432 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008421191 +0000 UTC m=+10.158594301 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008453 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008444432 +0000 UTC m=+10.158617542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008483 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008470213 +0000 UTC m=+10.158643313 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008525 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008510614 +0000 UTC m=+10.158683854 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008562 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008548595 +0000 UTC m=+10.158721775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008591 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008581826 +0000 UTC m=+10.158754926 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008617 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008608576 +0000 UTC m=+10.158781686 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008642 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008633967 +0000 UTC m=+10.158807067 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008669 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008660688 +0000 UTC m=+10.158833798 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008697 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008687288 +0000 UTC m=+10.158860388 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.008810 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.008796811 +0000 UTC m=+10.158969911 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.009948 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.009988 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.009979733 +0000 UTC m=+10.160152803 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.010014 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.010013 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/ovnkube-identity-cm: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.010036 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.010030444 +0000 UTC m=+10.160203514 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.010068 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.010090 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.010073815 +0000 UTC m=+10.160246915 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "ovnkube-identity-cm" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.010134 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.010117296 +0000 UTC m=+10.160290406 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011270 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011293 15493 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011299 15493 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011332 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011311538 +0000 UTC m=+10.161484648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011340 15493 secret.go:189] Couldn't get secret openshift-network-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011349 15493 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011360 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011343369 +0000 UTC m=+10.161516549 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "whereabouts-configmap" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011373 15493 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011398 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.01138028 +0000 UTC m=+10.161553500 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011415 15493 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011422 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.01141047 +0000 UTC m=+10.161583660 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011437 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011439 15493 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011450 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011438931 +0000 UTC m=+10.161612151 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011465 15493 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011476 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011465392 +0000 UTC m=+10.161638592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011485 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011500 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011503 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011526 15493 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011527 15493 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011526 15493 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011554 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011572 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011585 15493 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011596 15493 configmap.go:193] Couldn't get 
configMap openshift-cluster-node-tuning-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011599 15493 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011606 15493 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011440 15493 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011625 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011645 15493 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011667 15493 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011495 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011701 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011577 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011476 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011504 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011493083 +0000 UTC m=+10.161666173 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011815 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011421 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011835 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011814181 +0000 UTC m=+10.161987341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011536 15493 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011874 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011854372 +0000 UTC m=+10.162027482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011562 15493 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011899 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011887763 +0000 UTC m=+10.162060873 (durationBeforeRetry 1s). 
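[Annotation] Every "failed to sync configmap cache: timed out waiting for the condition" / "failed to sync secret cache: ..." record above is the same underlying wait: before mounting a ConfigMap or Secret volume, the kubelet requires its informer cache for that object type to have synced against the API server, and "timed out waiting for the condition" is the generic timeout string from the apimachinery wait helpers. A minimal sketch of that wait with client-go is below; the kubeconfig source, the 5s timeout, and the factory resync period are illustrative, not taken from the kubelet.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative client setup; the kubelet builds its client from its
	// own kubeconfig rather than ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A shared informer factory; the ConfigMap informer backs the kind of
	// cache that volume setup consults before mounting.
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	factory.Start(ctx.Done())

	// If the API server is unreachable, HasSynced never flips to true and
	// the wait gives up when the context expires; the kubelet reports that
	// as the "failed to sync configmap cache" error seen above and retries
	// the mount with backoff.
	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced) {
		fmt.Println("failed to sync configmap cache: timed out waiting for the condition")
	}
}
```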
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011913 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011974 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011911774 +0000 UTC m=+10.162085004 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011979 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012008 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.011991716 +0000 UTC m=+10.162164896 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012007 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012012 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012068 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011574 15493 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011628 15493 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012133 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011640 15493 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011664 15493 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012031 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012021387 +0000 UTC m=+10.162194577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012251 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012232962 +0000 UTC m=+10.162406062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.011572 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012277 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012266383 +0000 UTC m=+10.162439483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012303 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012292614 +0000 UTC m=+10.162465714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012341 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012329265 +0000 UTC m=+10.162502375 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012371 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012361656 +0000 UTC m=+10.162534756 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012404 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012393796 +0000 UTC m=+10.162566896 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012434 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012424277 +0000 UTC m=+10.162597377 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012464 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012454058 +0000 UTC m=+10.162627168 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012494 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012484689 +0000 UTC m=+10.162657799 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012524 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.01251544 +0000 UTC m=+10.162688540 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012554 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.01254223 +0000 UTC m=+10.162715330 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012599 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012575921 +0000 UTC m=+10.162749031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012632 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012622093 +0000 UTC m=+10.162795203 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012661 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012651473 +0000 UTC m=+10.162824583 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012688 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012678114 +0000 UTC m=+10.162851224 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012726 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012711075 +0000 UTC m=+10.162884285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync configmap cache: timed out waiting for the condition
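[Annotation] The nestedpendingoperations.go records pace the retries: after a failed MountVolume the kubelet blocks further attempts for the volume until the logged deadline, and the delay doubles on consecutive failures (500ms appears for first failures later in this capture, 1s here for second attempts). A standalone sketch of that pacing, under the assumption of a doubling policy with a cap on the order of two minutes (the exact ceiling is not shown in this log):

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff mirrors the retry pacing visible above: 500ms after the
// first failure, 1s after the second, doubling toward a ceiling. The
// 500ms/1s steps match the log; maxDelay is an assumed cap.
func nextBackoff(current time.Duration) time.Duration {
	const (
		initial  = 500 * time.Millisecond
		maxDelay = 2*time.Minute + 2*time.Second // assumed ceiling
	)
	if current == 0 {
		return initial
	}
	if next := current * 2; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	lastError := time.Now()
	var d time.Duration
	for i := 0; i < 5; i++ {
		d = nextBackoff(d)
		// Same shape as: "No retries permitted until <t> (durationBeforeRetry <d>)".
		fmt.Printf("No retries permitted until %s (durationBeforeRetry %s)\n",
			lastError.Add(d).Format("2006-01-02 15:04:05.000000000 -0700 MST"), d)
	}
}
```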
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012766 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012752786 +0000 UTC m=+10.162925986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012810 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012795367 +0000 UTC m=+10.162968597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012848 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012838208 +0000 UTC m=+10.163011308 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.013062 master-0 kubenswrapper[15493]: E0216 17:02:10.012877 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012864559 +0000 UTC m=+10.163037669 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.012970 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.01290922 +0000 UTC m=+10.163082360 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013009 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.012997602 +0000 UTC m=+10.163170702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013032 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.013023133 +0000 UTC m=+10.163196243 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013062 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.013051874 +0000 UTC m=+10.163224984 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013092 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.013082015 +0000 UTC m=+10.163255125 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013119 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.013109595 +0000 UTC m=+10.163282705 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013149 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.013140026 +0000 UTC m=+10.163313126 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013181 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.013170797 +0000 UTC m=+10.163343897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013210 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.013199918 +0000 UTC m=+10.163373018 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013239 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.013230859 +0000 UTC m=+10.163403959 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013269 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.013260339 +0000 UTC m=+10.163433449 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.018689 master-0 kubenswrapper[15493]: E0216 17:02:10.013299 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.01328937 +0000 UTC m=+10.163462470 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:10.040830 master-0 kubenswrapper[15493]: I0216 17:02:10.040764 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:02:10.041615 master-0 kubenswrapper[15493]: I0216 17:02:10.041566 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:02:10.281172 master-0 kubenswrapper[15493]: E0216 17:02:10.281101 15493 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Feb 16 17:02:10.281172 master-0 kubenswrapper[15493]: E0216 17:02:10.281162 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered]
Feb 16 17:02:10.281453 master-0 kubenswrapper[15493]: E0216 17:02:10.281253 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access podName:86c571b6-0f65-41f0-b1be-f63d7a974782 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:10.781225361 +0000 UTC m=+9.931398461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access") pod "installer-1-master-0" (UID: "86c571b6-0f65-41f0-b1be-f63d7a974782") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered]
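[Annotation] From here the failure mode shifts: projected-volume setup needs a fresh ServiceAccount token, and the POST to the internal API endpoint fails with "dial tcp 192.168.32.10:6443: connect: connection refused". That error shape means the TCP handshake to the api-int VIP was actively rejected (nothing listening on 6443 yet), as opposed to a DNS or routing failure. A small probe of the same endpoint, host and port taken from the log lines above, reproduces the distinction:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same address the kubelet is dialing in the records above.
	conn, err := net.DialTimeout("tcp", "192.168.32.10:6443", 3*time.Second)
	if err != nil {
		// While kube-apiserver is down this prints something like:
		// "dial tcp 192.168.32.10:6443: connect: connection refused"
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("TCP connect to the API endpoint succeeded")
}
```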
Feb 16 17:02:10.341432 master-0 kubenswrapper[15493]: E0216 17:02:10.341358 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:10.341628 master-0 kubenswrapper[15493]: E0216 17:02:10.341501 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:10.841468275 +0000 UTC m=+9.991641385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:10.348323 master-0 kubenswrapper[15493]: I0216 17:02:10.348251 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access\") pod \"86c571b6-0f65-41f0-b1be-f63d7a974782\" (UID: \"86c571b6-0f65-41f0-b1be-f63d7a974782\") "
Feb 16 17:02:10.352956 master-0 kubenswrapper[15493]: I0216 17:02:10.352850 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "86c571b6-0f65-41f0-b1be-f63d7a974782" (UID: "86c571b6-0f65-41f0-b1be-f63d7a974782"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:02:10.400673 master-0 kubenswrapper[15493]: I0216 17:02:10.400595 15493 request.go:700] Waited for 2.955266558s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token
Feb 16 17:02:10.455075 master-0 kubenswrapper[15493]: I0216 17:02:10.455005 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/86c571b6-0f65-41f0-b1be-f63d7a974782-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 17:02:10.761434 master-0 kubenswrapper[15493]: E0216 17:02:10.761339 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:10.761717 master-0 kubenswrapper[15493]: E0216 17:02:10.761467 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.261437409 +0000 UTC m=+10.411610509 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:10.869369 master-0 kubenswrapper[15493]: I0216 17:02:10.869285 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:02:11.021276 master-0 kubenswrapper[15493]: E0216 17:02:11.021147 15493 projected.go:288] Couldn't get configMap openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:11.041972 master-0 kubenswrapper[15493]: E0216 17:02:11.041886 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:11.041972 master-0 kubenswrapper[15493]: E0216 17:02:11.042011 15493 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:11.042256 master-0 kubenswrapper[15493]: E0216 17:02:11.042015 15493 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:11.061650 master-0 kubenswrapper[15493]: E0216 17:02:11.061594 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:11.079980 master-0 kubenswrapper[15493]: I0216 17:02:11.079879 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:02:11.079980 master-0 kubenswrapper[15493]: I0216 17:02:11.079964 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:02:11.080553 master-0 kubenswrapper[15493]: I0216 17:02:11.080173 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:11.080553 master-0 kubenswrapper[15493]: I0216 17:02:11.080222 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:02:11.080553 master-0 kubenswrapper[15493]: I0216 17:02:11.080252 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:02:11.080553 master-0 kubenswrapper[15493]: I0216 17:02:11.080283 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:02:11.080858 master-0 kubenswrapper[15493]: I0216 17:02:11.080453 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:02:11.080858 master-0 kubenswrapper[15493]: I0216 17:02:11.080647 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:02:11.080858 master-0 kubenswrapper[15493]: I0216 17:02:11.080695 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:02:11.080858 master-0 kubenswrapper[15493]: I0216 17:02:11.080723 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:02:11.080858 master-0 kubenswrapper[15493]: I0216 17:02:11.080758 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:02:11.080858 master-0 kubenswrapper[15493]: I0216 17:02:11.080784 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:02:11.080858 master-0 kubenswrapper[15493]: I0216 17:02:11.080807 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:02:11.080858 master-0 kubenswrapper[15493]: I0216 17:02:11.080833 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:02:11.080858 master-0 kubenswrapper[15493]: I0216 17:02:11.080857 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:02:11.081348 master-0 kubenswrapper[15493]: I0216 17:02:11.080977 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:02:11.081348 master-0 kubenswrapper[15493]: I0216 17:02:11.081009 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:02:11.081348 master-0 kubenswrapper[15493]: E0216 17:02:11.080980 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:11.081348 master-0 kubenswrapper[15493]: I0216 17:02:11.081033 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:02:11.081348 master-0 kubenswrapper[15493]: I0216 17:02:11.081214 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:02:11.081646 master-0 kubenswrapper[15493]: I0216 17:02:11.081367 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:02:11.081646 master-0 kubenswrapper[15493]: I0216 17:02:11.081410 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:02:11.081646 master-0 kubenswrapper[15493]: I0216 17:02:11.081441 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:02:11.081646 master-0 kubenswrapper[15493]: I0216 17:02:11.081562 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:02:11.081646 master-0 kubenswrapper[15493]: I0216 17:02:11.081621 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:11.081945 master-0 kubenswrapper[15493]: I0216 17:02:11.081689 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:02:11.081945 master-0 kubenswrapper[15493]: I0216 17:02:11.081765 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:02:11.081945 master-0 kubenswrapper[15493]: I0216 17:02:11.081820 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:02:11.081945 master-0 kubenswrapper[15493]: I0216 17:02:11.081885 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:11.082224 master-0 kubenswrapper[15493]: I0216 17:02:11.081955 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:02:11.082224 master-0 kubenswrapper[15493]: I0216 17:02:11.082022 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:02:11.082224 master-0 kubenswrapper[15493]: I0216 17:02:11.082122 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:02:11.082224 master-0 kubenswrapper[15493]: I0216 17:02:11.082165 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:11.082224 master-0 kubenswrapper[15493]: I0216 17:02:11.082203 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:02:11.082493 master-0 kubenswrapper[15493]: I0216 17:02:11.082248 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:02:11.082493 master-0 kubenswrapper[15493]: I0216 17:02:11.082314 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:02:11.082493 master-0 kubenswrapper[15493]: I0216 17:02:11.082356 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:02:11.082493 master-0 kubenswrapper[15493]: I0216 17:02:11.082415 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:02:11.082493 master-0 kubenswrapper[15493]: I0216 17:02:11.082470 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:02:11.082788 master-0 kubenswrapper[15493]: I0216 17:02:11.082514 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:02:11.082788 master-0 kubenswrapper[15493]: I0216 17:02:11.082570 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:02:11.082788 master-0 kubenswrapper[15493]: I0216 17:02:11.082609 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:02:11.082788 master-0 kubenswrapper[15493]: I0216 17:02:11.082687 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:02:11.083052 master-0 kubenswrapper[15493]: I0216 17:02:11.082847 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:02:11.083052 master-0 kubenswrapper[15493]: I0216 17:02:11.082893 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:02:11.083052 master-0 kubenswrapper[15493]: I0216 17:02:11.082983 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:02:11.083227 master-0 kubenswrapper[15493]: I0216 17:02:11.083074 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:02:11.083227 master-0 kubenswrapper[15493]: I0216 17:02:11.083114 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:02:11.083227 master-0 kubenswrapper[15493]: I0216 17:02:11.083146 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:02:11.083400
master-0 kubenswrapper[15493]: I0216 17:02:11.083243 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:11.083400 master-0 kubenswrapper[15493]: I0216 17:02:11.083281 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:11.083400 master-0 kubenswrapper[15493]: I0216 17:02:11.083320 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:11.083400 master-0 kubenswrapper[15493]: I0216 17:02:11.083394 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:11.083613 master-0 kubenswrapper[15493]: I0216 17:02:11.083428 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:11.083613 master-0 kubenswrapper[15493]: I0216 17:02:11.083542 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:11.083613 master-0 kubenswrapper[15493]: I0216 17:02:11.083573 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:11.083773 master-0 kubenswrapper[15493]: I0216 17:02:11.083647 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:11.083993 
master-0 kubenswrapper[15493]: I0216 17:02:11.083779 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:11.083993 master-0 kubenswrapper[15493]: I0216 17:02:11.083852 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:11.083993 master-0 kubenswrapper[15493]: I0216 17:02:11.083895 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:11.083993 master-0 kubenswrapper[15493]: I0216 17:02:11.083974 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:11.084228 master-0 kubenswrapper[15493]: I0216 17:02:11.084016 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:11.084228 master-0 kubenswrapper[15493]: I0216 17:02:11.084074 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:11.084228 master-0 kubenswrapper[15493]: I0216 17:02:11.084114 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:11.084228 master-0 kubenswrapper[15493]: I0216 17:02:11.084176 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:11.084228 master-0 kubenswrapper[15493]: I0216 17:02:11.084216 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:11.084503 master-0 kubenswrapper[15493]: I0216 17:02:11.084255 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:11.084503 master-0 kubenswrapper[15493]: I0216 17:02:11.084299 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:11.084503 master-0 kubenswrapper[15493]: I0216 17:02:11.084339 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:11.084503 master-0 kubenswrapper[15493]: I0216 17:02:11.084381 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:11.084503 master-0 kubenswrapper[15493]: I0216 17:02:11.084448 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:11.084761 master-0 kubenswrapper[15493]: I0216 17:02:11.084610 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:11.084761 master-0 kubenswrapper[15493]: I0216 17:02:11.084646 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:11.084761 master-0 kubenswrapper[15493]: I0216 17:02:11.084673 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:11.084956 master-0 kubenswrapper[15493]: I0216 17:02:11.084765 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:11.084956 master-0 kubenswrapper[15493]: I0216 17:02:11.084841 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:11.084956 master-0 kubenswrapper[15493]: I0216 17:02:11.084883 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:11.084956 master-0 kubenswrapper[15493]: I0216 17:02:11.084939 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:11.085193 master-0 kubenswrapper[15493]: I0216 17:02:11.085036 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:11.085193 master-0 kubenswrapper[15493]: I0216 17:02:11.085090 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:11.085193 master-0 kubenswrapper[15493]: I0216 17:02:11.085130 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:11.085193 master-0 kubenswrapper[15493]: I0216 17:02:11.085169 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:11.085431 master-0 kubenswrapper[15493]: I0216 17:02:11.085208 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:11.085431 master-0 kubenswrapper[15493]: I0216 17:02:11.085283 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:11.085431 master-0 kubenswrapper[15493]: I0216 17:02:11.085420 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:11.085597 master-0 kubenswrapper[15493]: I0216 17:02:11.085470 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:11.085597 master-0 kubenswrapper[15493]: I0216 17:02:11.085582 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:11.085708 master-0 kubenswrapper[15493]: I0216 17:02:11.085627 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:11.085948 master-0 kubenswrapper[15493]: I0216 17:02:11.085862 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:11.086023 master-0 kubenswrapper[15493]: I0216 17:02:11.085977 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: 
\"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:11.086127 master-0 kubenswrapper[15493]: I0216 17:02:11.086080 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:11.086191 master-0 kubenswrapper[15493]: I0216 17:02:11.086129 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:11.086191 master-0 kubenswrapper[15493]: I0216 17:02:11.086175 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:11.086386 master-0 kubenswrapper[15493]: I0216 17:02:11.086248 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:11.086386 master-0 kubenswrapper[15493]: I0216 17:02:11.086317 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:11.086386 master-0 kubenswrapper[15493]: I0216 17:02:11.086376 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:11.086570 master-0 kubenswrapper[15493]: I0216 17:02:11.086416 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:11.086570 master-0 kubenswrapper[15493]: I0216 17:02:11.086455 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:11.086570 master-0 kubenswrapper[15493]: I0216 17:02:11.086496 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:11.086570 master-0 kubenswrapper[15493]: I0216 17:02:11.086536 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:11.086792 master-0 kubenswrapper[15493]: I0216 17:02:11.086578 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:11.086792 master-0 kubenswrapper[15493]: I0216 17:02:11.086618 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:11.086792 master-0 kubenswrapper[15493]: I0216 17:02:11.086659 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:11.086792 master-0 kubenswrapper[15493]: I0216 17:02:11.086697 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:11.086792 master-0 kubenswrapper[15493]: I0216 17:02:11.086739 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:11.087102 master-0 kubenswrapper[15493]: I0216 17:02:11.086819 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:11.087102 master-0 kubenswrapper[15493]: I0216 17:02:11.086883 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:11.087102 master-0 kubenswrapper[15493]: I0216 17:02:11.086965 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:11.087102 master-0 kubenswrapper[15493]: I0216 17:02:11.087010 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:11.087102 master-0 kubenswrapper[15493]: I0216 17:02:11.087071 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:11.087377 master-0 kubenswrapper[15493]: I0216 17:02:11.087126 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:11.087377 master-0 kubenswrapper[15493]: I0216 17:02:11.087182 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:11.087377 master-0 kubenswrapper[15493]: I0216 17:02:11.087253 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:11.087377 master-0 kubenswrapper[15493]: I0216 17:02:11.087297 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: 
\"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:11.087377 master-0 kubenswrapper[15493]: I0216 17:02:11.087349 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:11.087656 master-0 kubenswrapper[15493]: I0216 17:02:11.087389 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:11.087656 master-0 kubenswrapper[15493]: I0216 17:02:11.087428 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:11.087656 master-0 kubenswrapper[15493]: I0216 17:02:11.087472 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:11.087656 master-0 kubenswrapper[15493]: I0216 17:02:11.087512 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:11.087656 master-0 kubenswrapper[15493]: I0216 17:02:11.087552 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:11.087966 master-0 kubenswrapper[15493]: I0216 17:02:11.087687 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:11.087966 master-0 kubenswrapper[15493]: I0216 17:02:11.087772 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:11.102598 master-0 kubenswrapper[15493]: E0216 17:02:11.102526 15493 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.122865 master-0 kubenswrapper[15493]: E0216 17:02:11.122783 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.141100 master-0 kubenswrapper[15493]: E0216 17:02:11.141014 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.162088 master-0 kubenswrapper[15493]: E0216 17:02:11.162025 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.181985 master-0 kubenswrapper[15493]: E0216 17:02:11.181905 15493 projected.go:288] Couldn't get configMap openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.201904 master-0 kubenswrapper[15493]: E0216 17:02:11.201829 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.221877 master-0 kubenswrapper[15493]: E0216 17:02:11.221819 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.242221 master-0 kubenswrapper[15493]: E0216 17:02:11.242112 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.261818 master-0 kubenswrapper[15493]: E0216 17:02:11.261741 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.291449 master-0 kubenswrapper[15493]: I0216 17:02:11.291286 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:11.301982 master-0 kubenswrapper[15493]: E0216 17:02:11.301904 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.322577 master-0 kubenswrapper[15493]: E0216 17:02:11.322502 15493 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.361966 master-0 kubenswrapper[15493]: E0216 17:02:11.361853 15493 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.361966 master-0 kubenswrapper[15493]: E0216 17:02:11.361959 15493 projected.go:194] Error preparing data for projected volume 
kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.362480 master-0 kubenswrapper[15493]: E0216 17:02:11.362068 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.862035663 +0000 UTC m=+11.012208773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.383644 master-0 kubenswrapper[15493]: E0216 17:02:11.383583 15493 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.401408 master-0 kubenswrapper[15493]: I0216 17:02:11.401332 15493 status_manager.go:851] "Failed to get status for pod" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" pod="assisted-installer/assisted-installer-controller-thhq2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/assisted-installer/pods/assisted-installer-controller-thhq2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:11.402034 master-0 kubenswrapper[15493]: E0216 17:02:11.401668 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.420323 master-0 kubenswrapper[15493]: I0216 17:02:11.420270 15493 request.go:700] Waited for 3.667668241s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/nodes Feb 16 17:02:11.420945 master-0 kubenswrapper[15493]: E0216 17:02:11.420902 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.421392 master-0 kubenswrapper[15493]: E0216 17:02:11.421332 15493 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 17:02:11.441722 master-0 kubenswrapper[15493]: W0216 17:02:11.441613 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.441865 master-0 kubenswrapper[15493]: E0216 
17:02:11.441741 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.441865 master-0 kubenswrapper[15493]: E0216 17:02:11.441824 15493 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.460689 master-0 kubenswrapper[15493]: W0216 17:02:11.460611 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.460778 master-0 kubenswrapper[15493]: E0216 17:02:11.460700 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.461737 master-0 kubenswrapper[15493]: E0216 17:02:11.461689 15493 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.481946 master-0 kubenswrapper[15493]: E0216 17:02:11.481859 15493 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.481946 master-0 kubenswrapper[15493]: E0216 17:02:11.481944 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.482222 master-0 kubenswrapper[15493]: E0216 17:02:11.482049 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:11.982018159 +0000 UTC m=+11.132191259 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.482222 master-0 kubenswrapper[15493]: W0216 17:02:11.482014 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.482222 master-0 kubenswrapper[15493]: E0216 17:02:11.482146 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.501046 master-0 kubenswrapper[15493]: W0216 17:02:11.500962 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.501046 master-0 kubenswrapper[15493]: E0216 17:02:11.501040 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.502235 master-0 kubenswrapper[15493]: E0216 17:02:11.502179 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.520749 master-0 kubenswrapper[15493]: E0216 17:02:11.520686 15493 projected.go:288] Couldn't get configMap openshift-cluster-version/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.520749 master-0 kubenswrapper[15493]: E0216 17:02:11.520733 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.520749 master-0 kubenswrapper[15493]: W0216 17:02:11.520730 15493 reflector.go:561] 
object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.521272 master-0 kubenswrapper[15493]: E0216 17:02:11.520798 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.521272 master-0 kubenswrapper[15493]: E0216 17:02:11.520828 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.020799965 +0000 UTC m=+11.170973055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.541019 master-0 kubenswrapper[15493]: E0216 17:02:11.540960 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.541758 master-0 kubenswrapper[15493]: W0216 17:02:11.541612 15493 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.541758 master-0 kubenswrapper[15493]: E0216 17:02:11.541715 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.561080 master-0 kubenswrapper[15493]: E0216 17:02:11.561008 15493 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.561903 master-0 kubenswrapper[15493]: W0216 17:02:11.561803 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&limit=500&resourceVersion=0": dial 
tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.561971 master-0 kubenswrapper[15493]: E0216 17:02:11.561948 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.581016 master-0 kubenswrapper[15493]: W0216 17:02:11.580910 15493 reflector.go:561] object-"openshift-monitoring"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.581206 master-0 kubenswrapper[15493]: E0216 17:02:11.581018 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.582433 master-0 kubenswrapper[15493]: E0216 17:02:11.582400 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.601363 master-0 kubenswrapper[15493]: W0216 17:02:11.601167 15493 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.601363 master-0 kubenswrapper[15493]: E0216 17:02:11.601272 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.602198 master-0 kubenswrapper[15493]: E0216 17:02:11.602153 15493 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.602257 master-0 kubenswrapper[15493]: E0216 17:02:11.602204 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.602324 master-0 kubenswrapper[15493]: E0216 17:02:11.602294 
15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.102266431 +0000 UTC m=+11.252439541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.621519 master-0 kubenswrapper[15493]: E0216 17:02:11.621463 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.621723 master-0 kubenswrapper[15493]: W0216 17:02:11.621618 15493 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.621796 master-0 kubenswrapper[15493]: E0216 17:02:11.621757 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.641366 master-0 kubenswrapper[15493]: E0216 17:02:11.641260 15493 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.642313 master-0 kubenswrapper[15493]: W0216 17:02:11.642185 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.642453 master-0 kubenswrapper[15493]: E0216 17:02:11.642333 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.661717 master-0 kubenswrapper[15493]: W0216 17:02:11.661580 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dperformance-addon-operator-webhook-cert&limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.661978 master-0 kubenswrapper[15493]: E0216 17:02:11.661709 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"performance-addon-operator-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dperformance-addon-operator-webhook-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.662093 master-0 kubenswrapper[15493]: E0216 17:02:11.662007 15493 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.662163 master-0 kubenswrapper[15493]: E0216 17:02:11.662085 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.662293 master-0 kubenswrapper[15493]: E0216 17:02:11.662240 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access podName:5d39ed24-4301-4cea-8a42-a08f4ba8b479 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.162206347 +0000 UTC m=+11.312379457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access") pod "installer-2-master-0" (UID: "5d39ed24-4301-4cea-8a42-a08f4ba8b479") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:11.682079 master-0 kubenswrapper[15493]: E0216 17:02:11.681998 15493 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.682548 master-0 kubenswrapper[15493]: W0216 17:02:11.682190 15493 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.682548 master-0 kubenswrapper[15493]: E0216 17:02:11.682328 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.701298 master-0 kubenswrapper[15493]: W0216 17:02:11.701172 15493 reflector.go:561] 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.701440 master-0 kubenswrapper[15493]: E0216 17:02:11.701308 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.701554 master-0 kubenswrapper[15493]: E0216 17:02:11.701453 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.721057 master-0 kubenswrapper[15493]: W0216 17:02:11.720790 15493 reflector.go:561] object-"openshift-monitoring"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.721057 master-0 kubenswrapper[15493]: E0216 17:02:11.720882 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.721978 master-0 kubenswrapper[15493]: E0216 17:02:11.721907 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.740975 master-0 kubenswrapper[15493]: W0216 17:02:11.740820 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.741174 master-0 kubenswrapper[15493]: E0216 17:02:11.740985 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.741501 master-0 kubenswrapper[15493]: E0216 17:02:11.741445 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: failed 
to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.761341 master-0 kubenswrapper[15493]: W0216 17:02:11.761257 15493 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.761536 master-0 kubenswrapper[15493]: E0216 17:02:11.761341 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.780646 master-0 kubenswrapper[15493]: W0216 17:02:11.780548 15493 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.780782 master-0 kubenswrapper[15493]: E0216 17:02:11.780645 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.781571 master-0 kubenswrapper[15493]: E0216 17:02:11.781541 15493 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.800667 master-0 kubenswrapper[15493]: W0216 17:02:11.800531 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.800667 master-0 kubenswrapper[15493]: E0216 17:02:11.800631 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.801814 master-0 kubenswrapper[15493]: E0216 17:02:11.801776 15493 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.821037 master-0 kubenswrapper[15493]: W0216 
17:02:11.820976 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.821156 master-0 kubenswrapper[15493]: E0216 17:02:11.821035 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.821260 master-0 kubenswrapper[15493]: E0216 17:02:11.821212 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.841235 master-0 kubenswrapper[15493]: W0216 17:02:11.841172 15493 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.841394 master-0 kubenswrapper[15493]: E0216 17:02:11.841238 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.842258 master-0 kubenswrapper[15493]: E0216 17:02:11.842220 15493 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.860801 master-0 kubenswrapper[15493]: W0216 17:02:11.860735 15493 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.860873 master-0 kubenswrapper[15493]: E0216 17:02:11.860801 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.862446 master-0 kubenswrapper[15493]: E0216 17:02:11.862411 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.881065 master-0 kubenswrapper[15493]: W0216 17:02:11.880904 15493 
reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.881130 master-0 kubenswrapper[15493]: E0216 17:02:11.881100 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.881212 master-0 kubenswrapper[15493]: E0216 17:02:11.881180 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.900764 master-0 kubenswrapper[15493]: E0216 17:02:11.900697 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.901501 master-0 kubenswrapper[15493]: W0216 17:02:11.901322 15493 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.901501 master-0 kubenswrapper[15493]: E0216 17:02:11.901478 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.911197 master-0 kubenswrapper[15493]: I0216 17:02:11.911120 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:11.921359 master-0 kubenswrapper[15493]: E0216 17:02:11.921241 15493 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.921684 master-0 kubenswrapper[15493]: W0216 17:02:11.921568 15493 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.921770 master-0 kubenswrapper[15493]: E0216 17:02:11.921703 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.941598 master-0 kubenswrapper[15493]: W0216 17:02:11.941418 15493 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.941598 master-0 kubenswrapper[15493]: E0216 17:02:11.941556 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.942435 master-0 kubenswrapper[15493]: E0216 17:02:11.942388 15493 projected.go:288] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.960042 master-0 kubenswrapper[15493]: E0216 17:02:11.959958 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:11.961264 master-0 kubenswrapper[15493]: E0216 17:02:11.961138 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:11.961350 master-0 kubenswrapper[15493]: W0216 17:02:11.961235 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.961385 master-0 kubenswrapper[15493]: E0216 17:02:11.961356 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.961461 master-0 kubenswrapper[15493]: E0216 17:02:11.961428 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:11.962308 master-0 kubenswrapper[15493]: E0216 17:02:11.962243 15493 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:11.963073 master-0 kubenswrapper[15493]: E0216 17:02:11.962987 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:11.964048 master-0 kubenswrapper[15493]: E0216 17:02:11.963982 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:11.964048 master-0 kubenswrapper[15493]: I0216 17:02:11.964044 15493 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 17:02:11.964829 master-0 kubenswrapper[15493]: E0216 17:02:11.964777 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 16 17:02:11.981785 master-0 kubenswrapper[15493]: W0216 17:02:11.981678 15493 reflector.go:561] object-"openshift-dns-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:11.981977 master-0 kubenswrapper[15493]: E0216 17:02:11.981791 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:11.981977 master-0 kubenswrapper[15493]: E0216 17:02:11.981866 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.001819 master-0 kubenswrapper[15493]: E0216 17:02:12.001689 15493 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.002193 master-0 kubenswrapper[15493]: W0216 17:02:12.002101 15493 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.002253 master-0 kubenswrapper[15493]: E0216 17:02:12.002208 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.015172 master-0 kubenswrapper[15493]: I0216 17:02:12.015103 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:12.021220 master-0 kubenswrapper[15493]: W0216 17:02:12.021087 15493 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.021352 master-0 kubenswrapper[15493]: E0216 17:02:12.021242 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.021352 master-0 kubenswrapper[15493]: E0216 17:02:12.021322 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.021352 master-0 kubenswrapper[15493]: E0216 17:02:12.021349 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zt8mt for pod openshift-network-operator/network-operator-6fcf4c966-6bmf9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.021548 master-0 kubenswrapper[15493]: E0216 17:02:12.021456 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.521424714 +0000 UTC m=+11.671597824 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zt8mt" (UniqueName: "kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.022148 master-0 kubenswrapper[15493]: E0216 17:02:12.022106 15493 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.041138 master-0 kubenswrapper[15493]: E0216 17:02:12.041047 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.041861 master-0 kubenswrapper[15493]: W0216 17:02:12.041784 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.041946 master-0 kubenswrapper[15493]: E0216 17:02:12.041863 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.042314 master-0 kubenswrapper[15493]: E0216 17:02:12.042045 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.042314 master-0 kubenswrapper[15493]: E0216 17:02:12.042079 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.042314 master-0 kubenswrapper[15493]: E0216 17:02:12.042096 15493 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.042314 master-0 kubenswrapper[15493]: E0216 17:02:12.042128 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.042314 master-0 kubenswrapper[15493]: E0216 17:02:12.042155 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.042132722 +0000 UTC m=+12.192305802 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.042314 master-0 kubenswrapper[15493]: E0216 17:02:12.042197 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.042177013 +0000 UTC m=+12.192350123 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.042314 master-0 kubenswrapper[15493]: E0216 17:02:12.042096 15493 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.042314 master-0 kubenswrapper[15493]: E0216 17:02:12.042224 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.042314 master-0 kubenswrapper[15493]: E0216 17:02:12.042270 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.542260045 +0000 UTC m=+11.692433245 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.061757 master-0 kubenswrapper[15493]: W0216 17:02:12.061545 15493 reflector.go:561] object-"openshift-multus"/"whereabouts-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dwhereabouts-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.061757 master-0 kubenswrapper[15493]: E0216 17:02:12.061647 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"whereabouts-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dwhereabouts-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.062312 master-0 kubenswrapper[15493]: E0216 17:02:12.062160 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.062312 master-0 kubenswrapper[15493]: E0216 17:02:12.062204 15493 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.062312 master-0 kubenswrapper[15493]: E0216 17:02:12.062259 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.062312 master-0 kubenswrapper[15493]: E0216 17:02:12.062277 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.562255074 +0000 UTC m=+11.712428184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.080197 master-0 kubenswrapper[15493]: E0216 17:02:12.080145 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.080676 master-0 kubenswrapper[15493]: E0216 17:02:12.080220 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.080205969 +0000 UTC m=+13.230379049 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.080676 master-0 kubenswrapper[15493]: E0216 17:02:12.080402 15493 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.080676 master-0 kubenswrapper[15493]: E0216 17:02:12.080594 15493 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.080676 master-0 kubenswrapper[15493]: E0216 17:02:12.080635 15493 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.080676 master-0 kubenswrapper[15493]: E0216 17:02:12.080651 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.08061554 +0000 UTC m=+13.230788680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.080676 master-0 kubenswrapper[15493]: E0216 17:02:12.080669 15493 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.080676 master-0 kubenswrapper[15493]: E0216 17:02:12.080681 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:14.080671181 +0000 UTC m=+13.230844251 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080700 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.080692672 +0000 UTC m=+13.230865742 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080728 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080729 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080764 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.080754524 +0000 UTC m=+13.230927604 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080793 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.080780494 +0000 UTC m=+13.230953634 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080790 15493 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080808 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080818 15493 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080849 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.080842136 +0000 UTC m=+13.231015216 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080868 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.080860907 +0000 UTC m=+13.231034097 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080875 15493 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080882 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.080876397 +0000 UTC m=+13.231049467 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080896 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080908 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.080900918 +0000 UTC m=+13.231073998 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080956 15493 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080970 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.080952319 +0000 UTC m=+13.231125399 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.080990 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.0809828 +0000 UTC m=+13.231155870 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.081007 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.081013 15493 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.081039 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.081030931 +0000 UTC m=+13.231204011 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.081042 15493 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.081075 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.081066152 +0000 UTC m=+13.231239232 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: W0216 17:02:12.081092 15493 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.081150 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.081126564 +0000 UTC m=+13.231299694 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.081153 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.081160 master-0 kubenswrapper[15493]: E0216 17:02:12.081189 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8r28x for pod openshift-multus/multus-6r7wj: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081149 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081243 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.581222566 +0000 UTC m=+11.731395646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8r28x" (UniqueName: "kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081279 15493 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081289 15493 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081309 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.081298738 +0000 UTC m=+13.231471818 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081325 15493 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081337 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.081323309 +0000 UTC m=+13.231496509 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081360 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081365 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.08135205 +0000 UTC m=+13.231525230 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.081402 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.081394281 +0000 UTC m=+13.231567351 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082328 15493 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082351 15493 secret.go:189] Couldn't get secret openshift-network-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082382 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082368086 +0000 UTC m=+13.232541266 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082396 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082423 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082429 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082462 15493 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082479 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082488 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082481 15493 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082508 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082517 15493 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to 
sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082491 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082538 15493 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082334 15493 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082533 15493 configmap.go:193] Couldn't get configMap openshift-network-operator/iptables-alerter-script: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082564 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082564 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082409 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082397507 +0000 UTC m=+13.232570727 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082546 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082601 15493 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082608 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082594382 +0000 UTC m=+13.232767562 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082634 15493 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.082590 master-0 kubenswrapper[15493]: E0216 17:02:12.082635 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082619323 +0000 UTC m=+13.232792513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082666 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082655674 +0000 UTC m=+13.232828864 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082689 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082677805 +0000 UTC m=+13.232851005 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082702 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082710 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082700285 +0000 UTC m=+13.232873475 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082745 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082730346 +0000 UTC m=+13.232903506 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082774 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082762407 +0000 UTC m=+13.232935617 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082796 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082784687 +0000 UTC m=+13.232957917 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082816 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082805978 +0000 UTC m=+13.232979198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082840 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082826949 +0000 UTC m=+13.233000159 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082861 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082850889 +0000 UTC m=+13.233024099 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082884 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.08287288 +0000 UTC m=+13.233046090 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082893 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082900 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082892 +0000 UTC m=+13.233065080 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "whereabouts-configmap" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082944 15493 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082953 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082917741 +0000 UTC m=+13.233090941 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082980 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082970512 +0000 UTC m=+13.233143712 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "iptables-alerter-script" (UniqueName: "kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.082998 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.082990993 +0000 UTC m=+13.233164073 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083026 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.083017104 +0000 UTC m=+13.233190314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083027 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083057 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.083043984 +0000 UTC m=+13.233217204 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083075 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.083068135 +0000 UTC m=+13.233241355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083090 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.083083275 +0000 UTC m=+13.233256355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083106 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083106 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.083097976 +0000 UTC m=+13.233271056 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083117 15493 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083142 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.083129497 +0000 UTC m=+13.233302687 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083165 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.083152837 +0000 UTC m=+13.233325917 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.083187 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.083177498 +0000 UTC m=+13.233350578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084080 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084112 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084127 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084116403 +0000 UTC m=+13.234289553 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084148 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084156 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084145423 +0000 UTC m=+13.234318513 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084196 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084187205 +0000 UTC m=+13.234360275 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084243 15493 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084279 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084269507 +0000 UTC m=+13.234442667 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084300 15493 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084338 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084323788 +0000 UTC m=+13.234496978 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084370 15493 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084397 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.08438906 +0000 UTC m=+13.234562220 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084428 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084459 15493 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084484 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084495 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084485362 +0000 UTC m=+13.234658432 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084513 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084505373 +0000 UTC m=+13.234678533 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084516 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084530 15493 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.084464 master-0 kubenswrapper[15493]: E0216 17:02:12.084545 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084561 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084531 15493 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084595 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084541 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084535554 +0000 UTC m=+13.234708624 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084624 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084612106 +0000 UTC m=+13.234785186 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084636 15493 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084658 15493 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084682 15493 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084642 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084633276 +0000 UTC m=+13.234806456 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084715 15493 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084723 15493 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084731 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084757 15493 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084767 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084771 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084801 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084737 
15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084716979 +0000 UTC m=+13.234890089 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084701 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084835 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084822651 +0000 UTC m=+13.234995831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084850 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084857 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084849312 +0000 UTC m=+13.235022512 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084868 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084878 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084868243 +0000 UTC m=+13.235041403 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084894 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084887103 +0000 UTC m=+13.235060183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084952 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084904724 +0000 UTC m=+13.235077804 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084965 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084974 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084965435 +0000 UTC m=+13.235138515 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.084998 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.084990506 +0000 UTC m=+13.235163586 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085018 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:14.085010196 +0000 UTC m=+13.235183286 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085039 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085025667 +0000 UTC m=+13.235198747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085064 15493 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085102 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085092238 +0000 UTC m=+13.235265318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085045 15493 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085120 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085112529 +0000 UTC m=+13.235285609 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085138 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085129429 +0000 UTC m=+13.235302499 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085060 15493 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085182 15493 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085168 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.08516026 +0000 UTC m=+13.235333340 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085247 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085233952 +0000 UTC m=+13.235407112 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085249 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085265 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085256933 +0000 UTC m=+13.235430143 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085284 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085275493 +0000 UTC m=+13.235448693 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085303 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085294064 +0000 UTC m=+13.235467244 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085306 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085320 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085313834 +0000 UTC m=+13.235487034 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085320 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085336 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085328545 +0000 UTC m=+13.235501815 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085352 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085346115 +0000 UTC m=+13.235519305 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085373 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085363616 +0000 UTC m=+13.235536806 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085383 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085390 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085382786 +0000 UTC m=+13.235555976 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085407 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085399757 +0000 UTC m=+13.235572957 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085420 15493 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085422 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085415527 +0000 UTC m=+13.235588727 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085445 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085435808 +0000 UTC m=+13.235608998 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085474 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.085464228 +0000 UTC m=+13.235637428 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.085985 15493 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086047 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.086029213 +0000 UTC m=+13.236202373 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086053 15493 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086081 15493 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086082 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086090 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.086081635 +0000 UTC m=+13.236254705 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086145 15493 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086160 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.086147126 +0000 UTC m=+13.236320276 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086180 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.086171227 +0000 UTC m=+13.236344307 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086202 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.086190348 +0000 UTC m=+13.236363518 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086031 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.086254 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.086245489 +0000 UTC m=+13.236418569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.087274 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.087315 15493 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.087347 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.087326038 +0000 UTC m=+13.237499148 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.087364 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.087379 15493 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.087400 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.087378329 +0000 UTC m=+13.237551469 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.087295 master-0 kubenswrapper[15493]: E0216 17:02:12.087403 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/ovnkube-identity-cm: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087424 15493 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087431 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.08741808 +0000 UTC m=+13.237591190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087450 15493 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087467 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.087449201 +0000 UTC m=+13.237622441 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087473 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087492 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087511 15493 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087513 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087529 15493 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087545 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087558 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087565 15493 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087569 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087500 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.087484942 +0000 UTC m=+13.237658162 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "ovnkube-identity-cm" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087586 15493 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087601 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087656 15493 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087657 15493 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087468 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087609 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087739 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087777 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087471 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087458 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087479 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087549 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087588 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087614 15493 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.087599395 +0000 UTC m=+13.237772565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087618 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087940 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.087909323 +0000 UTC m=+13.238082403 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087965 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.087954754 +0000 UTC m=+13.238127824 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.087985 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.087977315 +0000 UTC m=+13.238150385 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088006 15493 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088037 15493 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088043 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.087992785 +0000 UTC m=+13.238165855 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088117 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088099208 +0000 UTC m=+13.238272308 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088141 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088129109 +0000 UTC m=+13.238302219 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088176 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.0881663 +0000 UTC m=+13.238339400 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088200 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.0881892 +0000 UTC m=+13.238362300 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088227 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088213231 +0000 UTC m=+13.238386341 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088251 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088240232 +0000 UTC m=+13.238413342 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088284 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088273673 +0000 UTC m=+13.238446773 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088313 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:14.088304003 +0000 UTC m=+13.238477113 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088342 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088332914 +0000 UTC m=+13.238506014 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088373 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088361455 +0000 UTC m=+13.238534565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088408 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088398336 +0000 UTC m=+13.238571446 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088437 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088427827 +0000 UTC m=+13.238600937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088520 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:14.088506289 +0000 UTC m=+13.238679469 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088558 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.08854462 +0000 UTC m=+13.238717800 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088588 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088578461 +0000 UTC m=+13.238751561 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088617 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088607072 +0000 UTC m=+13.238780172 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088648 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088637732 +0000 UTC m=+13.238810842 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088678 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:14.088668063 +0000 UTC m=+13.238841163 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088716 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088702624 +0000 UTC m=+13.238875724 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088748 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088736845 +0000 UTC m=+13.238910015 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088780 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088769986 +0000 UTC m=+13.238943096 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:12.092115 master-0 kubenswrapper[15493]: E0216 17:02:12.088811 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:14.088799017 +0000 UTC m=+13.238972127 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.101564 master-0 kubenswrapper[15493]: E0216 17:02:12.101515 15493 projected.go:288] Couldn't get configMap openshift-monitoring/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.102068 master-0 kubenswrapper[15493]: W0216 17:02:12.101918 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.102139 master-0 kubenswrapper[15493]: E0216 17:02:12.102088 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.103102 master-0 kubenswrapper[15493]: E0216 17:02:12.103055 15493 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.103176 master-0 kubenswrapper[15493]: E0216 17:02:12.103104 15493 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.103236 master-0 kubenswrapper[15493]: E0216 17:02:12.103191 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.603167077 +0000 UTC m=+11.753340187 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.118899 master-0 kubenswrapper[15493]: I0216 17:02:12.118834 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:12.119274 master-0 kubenswrapper[15493]: I0216 17:02:12.119145 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:12.121597 master-0 kubenswrapper[15493]: E0216 17:02:12.121413 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.121597 master-0 kubenswrapper[15493]: W0216 17:02:12.121529 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.121597 master-0 kubenswrapper[15493]: E0216 17:02:12.121592 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.123871 master-0 kubenswrapper[15493]: E0216 17:02:12.123821 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.123954 master-0 kubenswrapper[15493]: E0216 17:02:12.123877 15493 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.124075 master-0 
kubenswrapper[15493]: E0216 17:02:12.124036 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.623999378 +0000 UTC m=+11.774172488 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.141270 master-0 kubenswrapper[15493]: E0216 17:02:12.141203 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.141270 master-0 kubenswrapper[15493]: E0216 17:02:12.141250 15493 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.141494 master-0 kubenswrapper[15493]: E0216 17:02:12.141337 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.641308006 +0000 UTC m=+11.791481116 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.141494 master-0 kubenswrapper[15493]: E0216 17:02:12.141387 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.141768 master-0 kubenswrapper[15493]: W0216 17:02:12.141661 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.141839 master-0 kubenswrapper[15493]: E0216 17:02:12.141798 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.160700 master-0 kubenswrapper[15493]: W0216 17:02:12.160574 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.160700 master-0 kubenswrapper[15493]: E0216 17:02:12.160691 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.160989 master-0 kubenswrapper[15493]: E0216 17:02:12.160848 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.162774 master-0 kubenswrapper[15493]: E0216 17:02:12.162740 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.162836 master-0 kubenswrapper[15493]: E0216 17:02:12.162777 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [failed to fetch token: Post 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.162873 master-0 kubenswrapper[15493]: E0216 17:02:12.162854 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.662831036 +0000 UTC m=+11.813004136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.165648 master-0 kubenswrapper[15493]: E0216 17:02:12.165585 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 16 17:02:12.180868 master-0 kubenswrapper[15493]: E0216 17:02:12.180808 15493 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.181585 master-0 kubenswrapper[15493]: W0216 17:02:12.181484 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.181672 master-0 kubenswrapper[15493]: E0216 17:02:12.181618 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.182539 master-0 kubenswrapper[15493]: E0216 17:02:12.182496 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.183046 master-0 kubenswrapper[15493]: E0216 17:02:12.182541 15493 projected.go:194] Error preparing data for projected volume kube-api-access-q46jg for pod openshift-network-operator/iptables-alerter-czzz2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": dial tcp 
192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.183046 master-0 kubenswrapper[15493]: E0216 17:02:12.182628 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.682600679 +0000 UTC m=+11.832773789 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q46jg" (UniqueName: "kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.201063 master-0 kubenswrapper[15493]: E0216 17:02:12.201000 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.201294 master-0 kubenswrapper[15493]: W0216 17:02:12.201222 15493 reflector.go:561] object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.201378 master-0 kubenswrapper[15493]: E0216 17:02:12.201315 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.202408 master-0 kubenswrapper[15493]: E0216 17:02:12.202357 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.202451 master-0 kubenswrapper[15493]: E0216 17:02:12.202421 15493 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.202575 master-0 kubenswrapper[15493]: E0216 17:02:12.202542 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.702507236 +0000 UTC m=+11.852680346 (durationBeforeRetry 500ms). 
Feb 16 17:02:12.220995 master-0 kubenswrapper[15493]: E0216 17:02:12.220863 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.221221 master-0 kubenswrapper[15493]: W0216 17:02:12.221075 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.221290 master-0 kubenswrapper[15493]: E0216 17:02:12.221221 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.222081 master-0 kubenswrapper[15493]: E0216 17:02:12.222031 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.222185 master-0 kubenswrapper[15493]: E0216 17:02:12.222076 15493 projected.go:194] Error preparing data for projected volume kube-api-access-sx92x for pod openshift-machine-config-operator/machine-config-daemon-98q6v: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.222420 master-0 kubenswrapper[15493]: E0216 17:02:12.222371 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.722345431 +0000 UTC m=+11.872518541 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sx92x" (UniqueName: "kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.223584 master-0 kubenswrapper[15493]: I0216 17:02:12.223525 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 17:02:12.241151 master-0 kubenswrapper[15493]: E0216 17:02:12.241080 15493 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.241233 master-0 kubenswrapper[15493]: E0216 17:02:12.241166 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-2-master-0: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.241328 master-0 kubenswrapper[15493]: E0216 17:02:12.241295 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access podName:b1b4fccc-6bf6-47ac-8ae1-32cad23734da nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.741266191 +0000 UTC m=+11.891439301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access") pod "installer-2-master-0" (UID: "b1b4fccc-6bf6-47ac-8ae1-32cad23734da") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.241616 master-0 kubenswrapper[15493]: W0216 17:02:12.241523 15493 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.241692 master-0 kubenswrapper[15493]: E0216 17:02:12.241646 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.242665 master-0 kubenswrapper[15493]: E0216 17:02:12.242628 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.242699 master-0 kubenswrapper[15493]: E0216 17:02:12.242672 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xmk2b for pod openshift-multus/multus-admission-controller-7c64d55f8-4jz2t: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.242764 master-0 kubenswrapper[15493]: E0216 17:02:12.242736 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.7427182 +0000 UTC m=+11.892891310 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xmk2b" (UniqueName: "kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.260854 master-0 kubenswrapper[15493]: E0216 17:02:12.260769 15493 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.261497 master-0 kubenswrapper[15493]: W0216 17:02:12.261418 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.261548 master-0 kubenswrapper[15493]: E0216 17:02:12.261513 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.262501 master-0 kubenswrapper[15493]: E0216 17:02:12.262460 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.262567 master-0 kubenswrapper[15493]: E0216 17:02:12.262507 15493 projected.go:194] Error preparing data for projected volume kube-api-access-fkwxl for pod openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-control-plane/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.262652 master-0 kubenswrapper[15493]: E0216 17:02:12.262616 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.762582646 +0000 UTC m=+11.912755756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fkwxl" (UniqueName: "kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-control-plane/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.280827 master-0 kubenswrapper[15493]: E0216 17:02:12.280779 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.281131 master-0 kubenswrapper[15493]: W0216 17:02:12.281033 15493 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.281199 master-0 kubenswrapper[15493]: E0216 17:02:12.281162 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.300853 master-0 kubenswrapper[15493]: E0216 17:02:12.300779 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.301112 master-0 kubenswrapper[15493]: W0216 17:02:12.300836 15493 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.301112 master-0 kubenswrapper[15493]: E0216 17:02:12.300957 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.302106 master-0 kubenswrapper[15493]: E0216 17:02:12.302058 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.302106 master-0 kubenswrapper[15493]: E0216 17:02:12.302106 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.302283 master-0 kubenswrapper[15493]: E0216 17:02:12.302202 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.802175573 +0000 UTC m=+11.952348673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.321576 master-0 kubenswrapper[15493]: E0216 17:02:12.321448 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.321971 master-0 kubenswrapper[15493]: W0216 17:02:12.321836 15493 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.322068 master-0 kubenswrapper[15493]: E0216 17:02:12.321995 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.322885 master-0 kubenswrapper[15493]: E0216 17:02:12.322833 15493 projected.go:288] Couldn't get configMap openshift-dns/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.322981 master-0 kubenswrapper[15493]: E0216 17:02:12.322882 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zl5w2 for pod openshift-dns/dns-default-qcgxx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.323044 master-0 kubenswrapper[15493]: E0216 17:02:12.323029 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2 podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.822995564 +0000 UTC m=+11.973168674 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zl5w2" (UniqueName: "kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.340915 master-0 kubenswrapper[15493]: W0216 17:02:12.340793 15493 reflector.go:561] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.341078 master-0 kubenswrapper[15493]: E0216 17:02:12.340976 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.341286 master-0 kubenswrapper[15493]: E0216 17:02:12.341254 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.361161 master-0 kubenswrapper[15493]: E0216 17:02:12.361002 15493 projected.go:288] Couldn't get configMap openshift-network-node-identity/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.361442 master-0 kubenswrapper[15493]: W0216 17:02:12.361257 15493 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.361442 master-0 kubenswrapper[15493]: E0216 17:02:12.361349 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.381658 master-0 kubenswrapper[15493]: W0216 17:02:12.381433 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.381658 master-0 kubenswrapper[15493]: E0216 17:02:12.381521 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.381658 master-0 kubenswrapper[15493]: E0216 17:02:12.381525 15493 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.385084 master-0 kubenswrapper[15493]: E0216 17:02:12.384636 15493 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.385084 master-0 kubenswrapper[15493]: E0216 17:02:12.384681 15493 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.385084 master-0 kubenswrapper[15493]: E0216 17:02:12.384760 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.884734808 +0000 UTC m=+12.034907878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.401903 master-0 kubenswrapper[15493]: W0216 17:02:12.401791 15493 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.402645 master-0 kubenswrapper[15493]: E0216 17:02:12.401850 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.402645 master-0 kubenswrapper[15493]: E0216 17:02:12.402095 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.402645 master-0 kubenswrapper[15493]: E0216 17:02:12.401912 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.402645 master-0 kubenswrapper[15493]: E0216 17:02:12.402227 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.90218948 +0000 UTC m=+12.052362590 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.421318 master-0 kubenswrapper[15493]: E0216 17:02:12.421153 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.421318 master-0 kubenswrapper[15493]: E0216 17:02:12.421301 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.421318 master-0 kubenswrapper[15493]: W0216 17:02:12.421170 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.421697 master-0 kubenswrapper[15493]: E0216 17:02:12.421373 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.421697 master-0 kubenswrapper[15493]: E0216 17:02:12.421422 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.921381938 +0000 UTC m=+12.071555058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.438363 master-0 kubenswrapper[15493]: I0216 17:02:12.438312 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_b1b4fccc-6bf6-47ac-8ae1-32cad23734da/installer/0.log"
Feb 16 17:02:12.438615 master-0 kubenswrapper[15493]: I0216 17:02:12.438371 15493 generic.go:334] "Generic (PLEG): container finished" podID="b1b4fccc-6bf6-47ac-8ae1-32cad23734da" containerID="a6f2cc640b5de57d7f65239e3dfae00a6c9cda6decad3cf4c15c3e87bd2e0a2d" exitCode=1
Feb 16 17:02:12.440476 master-0 kubenswrapper[15493]: I0216 17:02:12.440409 15493 request.go:700] Waited for 3.851743552s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Feb 16 17:02:12.441417 master-0 kubenswrapper[15493]: W0216 17:02:12.441339 15493 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.441417 master-0 kubenswrapper[15493]: E0216 17:02:12.441405 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.442601 master-0 kubenswrapper[15493]: E0216 17:02:12.442559 15493 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.442601 master-0 kubenswrapper[15493]: E0216 17:02:12.442584 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.442745 master-0 kubenswrapper[15493]: E0216 17:02:12.442649 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.94263268 +0000 UTC m=+12.092805750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.461441 master-0 kubenswrapper[15493]: W0216 17:02:12.461321 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.461441 master-0 kubenswrapper[15493]: E0216 17:02:12.461438 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.462550 master-0 kubenswrapper[15493]: E0216 17:02:12.462466 15493 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.462550 master-0 kubenswrapper[15493]: E0216 17:02:12.462522 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.462762 master-0 kubenswrapper[15493]: E0216 17:02:12.462649 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:12.962610649 +0000 UTC m=+12.112783759 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.480975 master-0 kubenswrapper[15493]: W0216 17:02:12.480818 15493 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.480975 master-0 kubenswrapper[15493]: E0216 17:02:12.480905 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.501316 master-0 kubenswrapper[15493]: W0216 17:02:12.501161 15493 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.501535 master-0 kubenswrapper[15493]: E0216 17:02:12.501316 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.503096 master-0 kubenswrapper[15493]: E0216 17:02:12.503025 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.503096 master-0 kubenswrapper[15493]: E0216 17:02:12.503070 15493 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.503296 master-0 kubenswrapper[15493]: E0216 17:02:12.503139 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.003121271 +0000 UTC m=+12.153294341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.520594 master-0 kubenswrapper[15493]: W0216 17:02:12.520503 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.520735 master-0 kubenswrapper[15493]: E0216 17:02:12.520610 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.539089 master-0 kubenswrapper[15493]: I0216 17:02:12.539015 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:02:12.541594 master-0 kubenswrapper[15493]: E0216 17:02:12.541113 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.541594 master-0 kubenswrapper[15493]: W0216 17:02:12.541085 15493 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.541594 master-0 kubenswrapper[15493]: E0216 17:02:12.541186 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.541594 master-0 kubenswrapper[15493]: E0216 17:02:12.541140 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.541594 master-0 kubenswrapper[15493]: E0216 17:02:12.541314 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.041284751 +0000 UTC m=+12.191457861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.561682 master-0 kubenswrapper[15493]: E0216 17:02:12.561333 15493 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.561682 master-0 kubenswrapper[15493]: E0216 17:02:12.561391 15493 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.561682 master-0 kubenswrapper[15493]: E0216 17:02:12.561538 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.061462445 +0000 UTC m=+12.211635595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.562103 master-0 kubenswrapper[15493]: W0216 17:02:12.561700 15493 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.562103 master-0 kubenswrapper[15493]: E0216 17:02:12.561806 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.567546 master-0 kubenswrapper[15493]: E0216 17:02:12.567480 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 16 17:02:12.581957 master-0 kubenswrapper[15493]: W0216 17:02:12.581653 15493 reflector.go:561] object-"openshift-dns-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.581957 master-0 kubenswrapper[15493]: E0216 17:02:12.581779 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.582756 master-0 kubenswrapper[15493]: E0216 17:02:12.582683 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.582756 master-0 kubenswrapper[15493]: E0216 17:02:12.582751 15493 projected.go:194] Error preparing data for projected volume kube-api-access-bnnc5 for pod openshift-multus/network-metrics-daemon-279g6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.582897 master-0 kubenswrapper[15493]: E0216 17:02:12.582853 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5 podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.082826181 +0000 UTC m=+12.232999281 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bnnc5" (UniqueName: "kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.601161 master-0 kubenswrapper[15493]: W0216 17:02:12.601047 15493 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.601381 master-0 kubenswrapper[15493]: E0216 17:02:12.601170 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.621385 master-0 kubenswrapper[15493]: W0216 17:02:12.621272 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.621597 master-0 kubenswrapper[15493]: E0216 17:02:12.621386 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.621712 master-0 kubenswrapper[15493]: E0216 17:02:12.621593 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.621712 master-0 kubenswrapper[15493]: E0216 17:02:12.621692 15493 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.621880 master-0 kubenswrapper[15493]: E0216 17:02:12.621774 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.121752111 +0000 UTC m=+12.271925191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.640516 master-0 kubenswrapper[15493]: I0216 17:02:12.640448 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:02:12.640675 master-0 kubenswrapper[15493]: I0216 17:02:12.640534 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:02:12.640675 master-0 kubenswrapper[15493]: I0216 17:02:12.640619 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:02:12.640809 master-0 kubenswrapper[15493]: W0216 17:02:12.640677 15493 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.640809 master-0 kubenswrapper[15493]: E0216 17:02:12.640756 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:12.640809 master-0 kubenswrapper[15493]: I0216 17:02:12.640776 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:02:12.641071 master-0 kubenswrapper[15493]: I0216 17:02:12.641017 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:02:12.642088 master-0 kubenswrapper[15493]: E0216 17:02:12.642040 15493 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:12.642088 master-0 kubenswrapper[15493]: E0216 17:02:12.642084 15493 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.642297 master-0 kubenswrapper[15493]: E0216 17:02:12.642267 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.142237503 +0000 UTC m=+12.292410583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:12.661111 master-0 kubenswrapper[15493]: W0216 17:02:12.661006 15493 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:12.661111 master-0 kubenswrapper[15493]: E0216 17:02:12.661105 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dnode-tuning-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.681434 master-0 kubenswrapper[15493]: E0216 17:02:12.681277 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"node-tuning-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dnode-tuning-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.682271 master-0 kubenswrapper[15493]: E0216 17:02:12.682207 15493 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.682271 master-0 kubenswrapper[15493]: E0216 17:02:12.682262 15493 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.682614 master-0 kubenswrapper[15493]: E0216 17:02:12.682363 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.182326594 +0000 UTC m=+12.332499704 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.701648 master-0 kubenswrapper[15493]: E0216 17:02:12.701574 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.701648 master-0 kubenswrapper[15493]: W0216 17:02:12.701587 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.701974 master-0 kubenswrapper[15493]: E0216 17:02:12.701700 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.701974 master-0 kubenswrapper[15493]: E0216 17:02:12.701638 15493 projected.go:194] Error preparing data for projected volume kube-api-access-6ftld for pod openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.701974 master-0 kubenswrapper[15493]: E0216 17:02:12.701849 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.201781309 +0000 UTC m=+12.351954399 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6ftld" (UniqueName: "kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.721330 master-0 kubenswrapper[15493]: W0216 17:02:12.721215 15493 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.721592 master-0 kubenswrapper[15493]: E0216 17:02:12.721337 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.722418 master-0 kubenswrapper[15493]: E0216 17:02:12.722363 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.722480 master-0 kubenswrapper[15493]: E0216 17:02:12.722426 15493 projected.go:194] Error preparing data for projected volume kube-api-access-qfkd9 for pod openshift-marketplace/community-operators-n7kjr: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.722578 master-0 kubenswrapper[15493]: E0216 17:02:12.722543 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9 podName:1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.222513857 +0000 UTC m=+12.372686957 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qfkd9" (UniqueName: "kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9") pod "community-operators-n7kjr" (UID: "1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.741670 master-0 kubenswrapper[15493]: E0216 17:02:12.741599 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.741670 master-0 kubenswrapper[15493]: E0216 17:02:12.741659 15493 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.742046 master-0 kubenswrapper[15493]: W0216 17:02:12.741613 15493 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.742046 master-0 kubenswrapper[15493]: E0216 17:02:12.741736 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.742046 master-0 kubenswrapper[15493]: E0216 17:02:12.741762 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.241732096 +0000 UTC m=+12.391905196 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.744282 master-0 kubenswrapper[15493]: I0216 17:02:12.744230 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfkd9\" (UniqueName: \"kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9\") pod \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\" (UID: \"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa\") " Feb 16 17:02:12.744817 master-0 kubenswrapper[15493]: I0216 17:02:12.744770 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:12.745847 master-0 kubenswrapper[15493]: I0216 17:02:12.745795 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:12.746057 master-0 kubenswrapper[15493]: I0216 17:02:12.746013 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:12.746197 master-0 kubenswrapper[15493]: I0216 17:02:12.746152 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:12.746554 master-0 kubenswrapper[15493]: I0216 17:02:12.746495 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:02:12.746638 master-0 kubenswrapper[15493]: I0216 17:02:12.746571 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " 
pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:12.746704 master-0 kubenswrapper[15493]: I0216 17:02:12.746662 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:12.748645 master-0 kubenswrapper[15493]: I0216 17:02:12.748578 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9" (OuterVolumeSpecName: "kube-api-access-qfkd9") pod "1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" (UID: "1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa"). InnerVolumeSpecName "kube-api-access-qfkd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:12.749131 master-0 kubenswrapper[15493]: I0216 17:02:12.749088 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qfkd9\" (UniqueName: \"kubernetes.io/projected/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa-kube-api-access-qfkd9\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:12.761894 master-0 kubenswrapper[15493]: W0216 17:02:12.761793 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.762073 master-0 kubenswrapper[15493]: E0216 17:02:12.761964 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.781669 master-0 kubenswrapper[15493]: E0216 17:02:12.781619 15493 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.781669 master-0 kubenswrapper[15493]: E0216 17:02:12.781667 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.781952 master-0 kubenswrapper[15493]: E0216 17:02:12.781768 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.281741315 +0000 UTC m=+12.431914415 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.782209 master-0 kubenswrapper[15493]: W0216 17:02:12.782008 15493 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.782333 master-0 kubenswrapper[15493]: E0216 17:02:12.782283 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.801758 master-0 kubenswrapper[15493]: W0216 17:02:12.801608 15493 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.802058 master-0 kubenswrapper[15493]: E0216 17:02:12.801764 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.802835 master-0 kubenswrapper[15493]: E0216 17:02:12.802781 15493 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.802901 master-0 kubenswrapper[15493]: E0216 17:02:12.802835 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/serviceaccounts/cluster-olm-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.802980 master-0 kubenswrapper[15493]: E0216 17:02:12.802953 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" 
failed. No retries permitted until 2026-02-16 17:02:13.302904585 +0000 UTC m=+12.453077695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/serviceaccounts/cluster-olm-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.821320 master-0 kubenswrapper[15493]: W0216 17:02:12.821224 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.821409 master-0 kubenswrapper[15493]: E0216 17:02:12.821341 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.821569 master-0 kubenswrapper[15493]: E0216 17:02:12.821475 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.821622 master-0 kubenswrapper[15493]: E0216 17:02:12.821591 15493 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.821758 master-0 kubenswrapper[15493]: E0216 17:02:12.821720 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.321688232 +0000 UTC m=+12.471861332 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.842378 master-0 kubenswrapper[15493]: W0216 17:02:12.842170 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.842378 master-0 kubenswrapper[15493]: E0216 17:02:12.842334 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.842590 master-0 kubenswrapper[15493]: E0216 17:02:12.842424 15493 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.842590 master-0 kubenswrapper[15493]: E0216 17:02:12.842460 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.842681 master-0 kubenswrapper[15493]: E0216 17:02:12.842588 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.342552814 +0000 UTC m=+12.492725924 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.855991 master-0 kubenswrapper[15493]: I0216 17:02:12.855837 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:12.855991 master-0 kubenswrapper[15493]: I0216 17:02:12.855960 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:02:12.856429 master-0 kubenswrapper[15493]: I0216 17:02:12.856359 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:12.862033 master-0 kubenswrapper[15493]: W0216 17:02:12.861887 15493 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.862154 master-0 kubenswrapper[15493]: E0216 17:02:12.862070 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.863070 master-0 kubenswrapper[15493]: E0216 17:02:12.862875 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.863070 master-0 kubenswrapper[15493]: E0216 17:02:12.862909 15493 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.863070 master-0 
kubenswrapper[15493]: E0216 17:02:12.862996 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.362974574 +0000 UTC m=+12.513147664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.881105 master-0 kubenswrapper[15493]: W0216 17:02:12.880992 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.881204 master-0 kubenswrapper[15493]: E0216 17:02:12.881106 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.882044 master-0 kubenswrapper[15493]: E0216 17:02:12.882011 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.882044 master-0 kubenswrapper[15493]: E0216 17:02:12.882042 15493 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.882163 master-0 kubenswrapper[15493]: E0216 17:02:12.882095 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.3820832 +0000 UTC m=+12.532256260 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.900904 master-0 kubenswrapper[15493]: W0216 17:02:12.900836 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.901035 master-0 kubenswrapper[15493]: E0216 17:02:12.900913 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.901270 master-0 kubenswrapper[15493]: E0216 17:02:12.901240 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.901308 master-0 kubenswrapper[15493]: E0216 17:02:12.901272 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.901355 master-0 kubenswrapper[15493]: E0216 17:02:12.901342 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.401323159 +0000 UTC m=+12.551496239 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.920761 master-0 kubenswrapper[15493]: W0216 17:02:12.920702 15493 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.920834 master-0 kubenswrapper[15493]: E0216 17:02:12.920756 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.922224 master-0 kubenswrapper[15493]: E0216 17:02:12.922170 15493 projected.go:288] Couldn't get configMap openshift-dns/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.922308 master-0 kubenswrapper[15493]: E0216 17:02:12.922227 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8m29g for pod openshift-dns/node-resolver-vfxj4: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/node-resolver/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.922376 master-0 kubenswrapper[15493]: E0216 17:02:12.922349 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g podName:a6fe41b0-1a42-4f07-8220-d9aaa50788ad nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.422327675 +0000 UTC m=+12.572500805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8m29g" (UniqueName: "kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g") pod "node-resolver-vfxj4" (UID: "a6fe41b0-1a42-4f07-8220-d9aaa50788ad") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/node-resolver/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.941594 master-0 kubenswrapper[15493]: W0216 17:02:12.941508 15493 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.941685 master-0 kubenswrapper[15493]: E0216 17:02:12.941596 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.942661 master-0 kubenswrapper[15493]: E0216 17:02:12.942634 15493 projected.go:288] Couldn't get configMap openshift-cloud-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.942738 master-0 kubenswrapper[15493]: E0216 17:02:12.942662 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r87zw for pod openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/serviceaccounts/cluster-cloud-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.942784 master-0 kubenswrapper[15493]: E0216 17:02:12.942743 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.442717435 +0000 UTC m=+12.592890615 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r87zw" (UniqueName: "kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/serviceaccounts/cluster-cloud-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.961236 master-0 kubenswrapper[15493]: W0216 17:02:12.961148 15493 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.961384 master-0 kubenswrapper[15493]: E0216 17:02:12.961245 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.962255 master-0 kubenswrapper[15493]: E0216 17:02:12.962211 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.962255 master-0 kubenswrapper[15493]: E0216 17:02:12.962249 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2gq8x for pod openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.962686 master-0 kubenswrapper[15493]: E0216 17:02:12.962620 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.46257938 +0000 UTC m=+12.612752500 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2gq8x" (UniqueName: "kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.964337 master-0 kubenswrapper[15493]: I0216 17:02:12.964286 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:12.964481 master-0 kubenswrapper[15493]: I0216 17:02:12.964445 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:02:12.964954 master-0 kubenswrapper[15493]: I0216 17:02:12.964605 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:12.964954 master-0 kubenswrapper[15493]: I0216 17:02:12.964705 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:12.964954 master-0 kubenswrapper[15493]: I0216 17:02:12.964784 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:12.980936 master-0 kubenswrapper[15493]: W0216 17:02:12.980835 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:12.981048 master-0 kubenswrapper[15493]: E0216 17:02:12.981009 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:12.982021 master-0 kubenswrapper[15493]: E0216 17:02:12.981995 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:12.982060 master-0 kubenswrapper[15493]: E0216 17:02:12.982024 15493 projected.go:194] Error preparing data for projected volume kube-api-access-j5qxm for pod openshift-multus/multus-additional-cni-plugins-rjdlk: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:12.982099 master-0 kubenswrapper[15493]: E0216 17:02:12.982077 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.482063766 +0000 UTC m=+12.632236846 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-j5qxm" (UniqueName: "kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.002682 master-0 kubenswrapper[15493]: W0216 17:02:13.001462 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.002682 master-0 kubenswrapper[15493]: E0216 17:02:13.001555 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.002682 master-0 kubenswrapper[15493]: E0216 17:02:13.002548 15493 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.002682 master-0 kubenswrapper[15493]: E0216 17:02:13.002573 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [failed to fetch token: Post 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/serviceaccounts/operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.002682 master-0 kubenswrapper[15493]: E0216 17:02:13.002638 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.50262113 +0000 UTC m=+12.652794210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/serviceaccounts/operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.021359 master-0 kubenswrapper[15493]: W0216 17:02:13.021294 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.021434 master-0 kubenswrapper[15493]: E0216 17:02:13.021367 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.022535 master-0 kubenswrapper[15493]: E0216 17:02:13.022414 15493 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.022614 master-0 kubenswrapper[15493]: E0216 17:02:13.022549 15493 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.022674 master-0 kubenswrapper[15493]: E0216 17:02:13.022624 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.522604799 +0000 UTC m=+12.672777869 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.041220 master-0 kubenswrapper[15493]: E0216 17:02:13.041157 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.041220 master-0 kubenswrapper[15493]: E0216 17:02:13.041214 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.041357 master-0 kubenswrapper[15493]: W0216 17:02:13.041118 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.041357 master-0 kubenswrapper[15493]: E0216 17:02:13.041296 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.541275463 +0000 UTC m=+12.691448523 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.041357 master-0 kubenswrapper[15493]: E0216 17:02:13.041288 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.061476 master-0 kubenswrapper[15493]: W0216 17:02:13.061333 15493 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.061476 master-0 kubenswrapper[15493]: E0216 17:02:13.061449 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.062517 master-0 kubenswrapper[15493]: E0216 17:02:13.062445 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.062517 master-0 kubenswrapper[15493]: E0216 17:02:13.062511 15493 projected.go:194] Error preparing data for projected volume kube-api-access-lxhk5 for pod openshift-marketplace/certified-operators-8kkl7: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.062794 master-0 kubenswrapper[15493]: E0216 17:02:13.062644 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5 podName:a6d86b04-1d3f-4f27-a262-b732c1295997 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.562617817 +0000 UTC m=+12.712790917 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lxhk5" (UniqueName: "kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5") pod "certified-operators-8kkl7" (UID: "a6d86b04-1d3f-4f27-a262-b732c1295997") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.066139 master-0 kubenswrapper[15493]: I0216 17:02:13.066084 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxhk5\" (UniqueName: \"kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5\") pod \"a6d86b04-1d3f-4f27-a262-b732c1295997\" (UID: \"a6d86b04-1d3f-4f27-a262-b732c1295997\") " Feb 16 17:02:13.066526 master-0 kubenswrapper[15493]: I0216 17:02:13.066480 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:13.068711 master-0 kubenswrapper[15493]: I0216 17:02:13.068607 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:13.069248 master-0 kubenswrapper[15493]: I0216 17:02:13.069005 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:13.069248 master-0 kubenswrapper[15493]: I0216 17:02:13.069113 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:13.069248 master-0 kubenswrapper[15493]: I0216 17:02:13.069157 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:13.072303 master-0 kubenswrapper[15493]: I0216 17:02:13.072237 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5" (OuterVolumeSpecName: "kube-api-access-lxhk5") pod "a6d86b04-1d3f-4f27-a262-b732c1295997" (UID: "a6d86b04-1d3f-4f27-a262-b732c1295997"). 
InnerVolumeSpecName "kube-api-access-lxhk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:13.080705 master-0 kubenswrapper[15493]: W0216 17:02:13.080616 15493 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.080883 master-0 kubenswrapper[15493]: E0216 17:02:13.080710 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.081719 master-0 kubenswrapper[15493]: E0216 17:02:13.081685 15493 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.081719 master-0 kubenswrapper[15493]: E0216 17:02:13.081712 15493 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.081817 master-0 kubenswrapper[15493]: E0216 17:02:13.081788 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.581768104 +0000 UTC m=+12.731941174 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.101673 master-0 kubenswrapper[15493]: E0216 17:02:13.101626 15493 projected.go:288] Couldn't get configMap openshift-monitoring/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.101673 master-0 kubenswrapper[15493]: E0216 17:02:13.101668 15493 projected.go:194] Error preparing data for projected volume kube-api-access-j7w67 for pod openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.101890 master-0 kubenswrapper[15493]: W0216 17:02:13.101689 15493 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.101890 master-0 kubenswrapper[15493]: E0216 17:02:13.101801 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.101890 master-0 kubenswrapper[15493]: E0216 17:02:13.101749 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67 podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.601730002 +0000 UTC m=+12.751903072 (durationBeforeRetry 500ms). 
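Annotation: the I-level reconciler_common.go:218 lines threaded through this section are the kubelet's volume manager re-queuing operationExecutor.MountVolume for every pending pod on each sync loop; that is why the identical SetUp failure repeats per pod every few hundred milliseconds. By contrast, the UnmountVolume.TearDown for the deleted certified-operators pod succeeds immediately, because tearing down a projected volume only removes local state and needs no API round-trip.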
Error: MountVolume.SetUp failed for volume "kube-api-access-j7w67" (UniqueName: "kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.121364 master-0 kubenswrapper[15493]: W0216 17:02:13.121281 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.121364 master-0 kubenswrapper[15493]: E0216 17:02:13.121365 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.122033 master-0 kubenswrapper[15493]: E0216 17:02:13.122004 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.122092 master-0 kubenswrapper[15493]: E0216 17:02:13.122035 15493 projected.go:194] Error preparing data for projected volume kube-api-access-9xrw2 for pod openshift-ovn-kubernetes/ovnkube-node-flr86: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.122126 master-0 kubenswrapper[15493]: E0216 17:02:13.122109 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2 podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.622089241 +0000 UTC m=+12.772262311 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9xrw2" (UniqueName: "kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.140765 master-0 kubenswrapper[15493]: W0216 17:02:13.140690 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.140900 master-0 kubenswrapper[15493]: E0216 17:02:13.140768 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.141701 master-0 kubenswrapper[15493]: E0216 17:02:13.141665 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.141760 master-0 kubenswrapper[15493]: E0216 17:02:13.141703 15493 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.141809 master-0 kubenswrapper[15493]: E0216 17:02:13.141774 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.641754531 +0000 UTC m=+12.791927611 (durationBeforeRetry 500ms). 
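Annotation: the kube-api-access-* volumes failing here are projected service-account-token volumes. To materialize one, the kubelet must POST a TokenRequest to /api/v1/namespaces/<ns>/serviceaccounts/<name>/token, which is exactly the URL in each "failed to fetch token" message. A minimal client-go sketch of that request follows, under stated assumptions: the kubeconfig path is hypothetical, the namespace and service-account names are copied from one record above, and the kubelet itself authenticates with its own node credentials rather than an admin kubeconfig.

// tokenrequest_sketch.go: issue the same kind of TokenRequest POST the
// kubelet performs when it materializes a kube-api-access-* volume.
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for the sketch only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	expiry := int64(3600) // one hour, for illustration
	tr, err := client.CoreV1().
		ServiceAccounts("openshift-service-ca"). // namespace from the log
		CreateToken(context.TODO(), "service-ca", // service account from the log
			&authenticationv1.TokenRequest{
				Spec: authenticationv1.TokenRequestSpec{
					Audiences:         []string{"https://kubernetes.default.svc"},
					ExpirationSeconds: &expiry,
				},
			}, metav1.CreateOptions{})
	if err != nil {
		// With the API server down this surfaces as the log's failure mode:
		// Post ".../serviceaccounts/service-ca/token": dial tcp ...: connection refused
		fmt.Println("token fetch failed:", err)
		return
	}
	fmt.Println("token expires:", tr.Status.ExpirationTimestamp)
}

Once the API server comes back, the same call succeeds and the volume manager's next retry mounts the volume without further intervention.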
Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.161003 master-0 kubenswrapper[15493]: E0216 17:02:13.160946 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.161003 master-0 kubenswrapper[15493]: E0216 17:02:13.160991 15493 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.161171 master-0 kubenswrapper[15493]: E0216 17:02:13.161052 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.661037441 +0000 UTC m=+12.811210531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.161171 master-0 kubenswrapper[15493]: W0216 17:02:13.161103 15493 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.161323 master-0 kubenswrapper[15493]: E0216 17:02:13.161170 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.173126 master-0 kubenswrapper[15493]: I0216 17:02:13.173052 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: 
\"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:13.173472 master-0 kubenswrapper[15493]: I0216 17:02:13.173398 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:13.173877 master-0 kubenswrapper[15493]: I0216 17:02:13.173831 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:13.174659 master-0 kubenswrapper[15493]: I0216 17:02:13.174607 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxhk5\" (UniqueName: \"kubernetes.io/projected/a6d86b04-1d3f-4f27-a262-b732c1295997-kube-api-access-lxhk5\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:13.181075 master-0 kubenswrapper[15493]: E0216 17:02:13.181023 15493 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.181145 master-0 kubenswrapper[15493]: E0216 17:02:13.181078 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/serviceaccounts/cloud-credential-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.181179 master-0 kubenswrapper[15493]: E0216 17:02:13.181166 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.681141053 +0000 UTC m=+12.831314163 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/serviceaccounts/cloud-credential-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.182018 master-0 kubenswrapper[15493]: W0216 17:02:13.181886 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/secrets?fieldSelector=metadata.name%3Dcluster-olm-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.182082 master-0 kubenswrapper[15493]: E0216 17:02:13.182052 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"cluster-olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/secrets?fieldSelector=metadata.name%3Dcluster-olm-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.201165 master-0 kubenswrapper[15493]: E0216 17:02:13.201108 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.201165 master-0 kubenswrapper[15493]: E0216 17:02:13.201149 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8p2jz for pod openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.201463 master-0 kubenswrapper[15493]: W0216 17:02:13.201143 15493 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.201463 master-0 kubenswrapper[15493]: E0216 17:02:13.201227 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.701206714 +0000 UTC m=+12.851379814 (durationBeforeRetry 500ms). 
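Annotation: the reflector.go:561 warnings interleaved above are the other half of the story. For every ConfigMap and Secret a pod references, the kubelet runs a narrowly scoped reflector whose initial List is the logged GET with fieldSelector=metadata.name%3D<object>&limit=500; until one List succeeds, the local cache never syncs, which is where "failed to sync configmap cache: timed out waiting for the condition" comes from. A sketch of such a single-object reflector, assuming the same hypothetical kubeconfig path as the previous sketch; namespace and object name are copied from a record above.

// reflector_sketch.go: list/watch exactly one ConfigMap by name, the way
// the kubelet's object cache scopes its reflectors.
package main

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// This issues the same GET with
	// fieldSelector=metadata.name%3Dkube-root-ca.crt that the
	// reflector.go warnings above show failing.
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(),
		"configmaps",
		"openshift-operator-lifecycle-manager", // namespace from the log
		fields.OneTermEqualSelector("metadata.name", "kube-root-ca.crt"),
	)
	store := cache.NewStore(cache.MetaNamespaceKeyFunc)
	r := cache.NewReflector(lw, &corev1.ConfigMap{}, store, 10*time.Minute)

	stop := make(chan struct{})
	go r.Run(stop)
	// While the List keeps failing the store never syncs; dependent callers
	// report "failed to sync configmap cache: timed out waiting for the condition".
	time.Sleep(30 * time.Second)
	close(stop)
}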
Error: MountVolume.SetUp failed for volume "kube-api-access-8p2jz" (UniqueName: "kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.201463 master-0 kubenswrapper[15493]: E0216 17:02:13.201231 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.221173 master-0 kubenswrapper[15493]: E0216 17:02:13.220989 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.221173 master-0 kubenswrapper[15493]: W0216 17:02:13.220963 15493 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.221173 master-0 kubenswrapper[15493]: E0216 17:02:13.221030 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.221173 master-0 kubenswrapper[15493]: E0216 17:02:13.221064 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.221173 master-0 kubenswrapper[15493]: E0216 17:02:13.221106 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.721090061 +0000 UTC m=+12.871263131 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.241430 master-0 kubenswrapper[15493]: W0216 17:02:13.241351 15493 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.241430 master-0 kubenswrapper[15493]: E0216 17:02:13.241422 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.261036 master-0 kubenswrapper[15493]: E0216 17:02:13.260955 15493 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.261036 master-0 kubenswrapper[15493]: E0216 17:02:13.261029 15493 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.261240 master-0 kubenswrapper[15493]: E0216 17:02:13.261228 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.761204312 +0000 UTC m=+12.911377392 (durationBeforeRetry 500ms). 
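Annotation: "durationBeforeRetry 500ms" is the first step of an exponential backoff: each consecutive failure of the same volume operation roughly doubles the delay before nestedpendingoperations permits a retry, so the log is noisy early in the outage and quiets down if it persists. A sketch of that doubling schedule with apimachinery's wait helpers; the 500ms base matches the log, while the two-minute cap is an assumption for illustration.

// backoff_sketch.go: the retry schedule implied by "durationBeforeRetry 500ms".
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	b := wait.Backoff{
		Duration: 500 * time.Millisecond, // matches "durationBeforeRetry 500ms"
		Factor:   2.0,
		Steps:    8,
		Cap:      2*time.Minute + 2*time.Second, // assumed cap, illustration only
	}
	for i := 1; i <= 8; i++ {
		fmt.Printf("retry %d delayed by %v\n", i, b.Step())
	}
	// Prints 500ms, 1s, 2s, 4s, ... so a persistent outage produces
	// progressively sparser retries rather than a hot loop.
}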
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.261585 master-0 kubenswrapper[15493]: W0216 17:02:13.261506 15493 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.261669 master-0 kubenswrapper[15493]: E0216 17:02:13.261587 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.275551 master-0 kubenswrapper[15493]: I0216 17:02:13.275478 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:13.275551 master-0 kubenswrapper[15493]: I0216 17:02:13.275556 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:13.275880 master-0 kubenswrapper[15493]: I0216 17:02:13.275804 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:13.280966 master-0 kubenswrapper[15493]: E0216 17:02:13.280916 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.280966 master-0 kubenswrapper[15493]: E0216 17:02:13.280959 15493 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: 
connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.281196 master-0 kubenswrapper[15493]: E0216 17:02:13.281014 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.780998906 +0000 UTC m=+12.931171966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.281894 master-0 kubenswrapper[15493]: W0216 17:02:13.281760 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.282097 master-0 kubenswrapper[15493]: E0216 17:02:13.281895 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.300864 master-0 kubenswrapper[15493]: W0216 17:02:13.300774 15493 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.300864 master-0 kubenswrapper[15493]: E0216 17:02:13.300861 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.301131 master-0 kubenswrapper[15493]: E0216 17:02:13.300903 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.301131 master-0 kubenswrapper[15493]: E0216 17:02:13.300935 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hmj52 for pod openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed 
to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.301131 master-0 kubenswrapper[15493]: E0216 17:02:13.301000 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52 podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.800984615 +0000 UTC m=+12.951157685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hmj52" (UniqueName: "kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.321112 master-0 kubenswrapper[15493]: W0216 17:02:13.321016 15493 reflector.go:561] object-"openshift-network-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.321112 master-0 kubenswrapper[15493]: E0216 17:02:13.321108 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.322071 master-0 kubenswrapper[15493]: E0216 17:02:13.322032 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.322071 master-0 kubenswrapper[15493]: E0216 17:02:13.322065 15493 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.322313 master-0 kubenswrapper[15493]: E0216 17:02:13.322142 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.822122164 +0000 UTC m=+12.972295244 (durationBeforeRetry 500ms). 
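Annotation: note that every SetUp failure carries two bracketed errors: the token POST refused by the network, and the cache sync that timed out waiting on the reflectors shown earlier. The bracketed rendering matches the shape produced by apimachinery's error aggregate; the sketch below shows how two such branches combine. The error strings are copied from the log, and the aggregation call is illustrative rather than a claim about the kubelet's exact code path.

// aggregate_sketch.go: two failure branches rendered as one bracketed error.
package main

import (
	"errors"
	"fmt"

	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

func main() {
	// Both strings are copied from a record above.
	tokenErr := errors.New(`failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused`)
	cacheErr := errors.New("failed to sync configmap cache: timed out waiting for the condition")

	agg := utilerrors.NewAggregate([]error{tokenErr, cacheErr})
	fmt.Println(agg) // renders as "[<first>, <second>]", the bracketed shape in the log
}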
Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.341177 master-0 kubenswrapper[15493]: W0216 17:02:13.341024 15493 reflector.go:561] object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.341177 master-0 kubenswrapper[15493]: E0216 17:02:13.341083 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.342086 master-0 kubenswrapper[15493]: E0216 17:02:13.342036 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.342086 master-0 kubenswrapper[15493]: E0216 17:02:13.342074 15493 projected.go:194] Error preparing data for projected volume kube-api-access-wn82n for pod openshift-cluster-node-tuning-operator/tuned-l5kbz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/tuned/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.342242 master-0 kubenswrapper[15493]: E0216 17:02:13.342140 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n podName:c45ce0e5-c50b-4210-b7bb-82db2b2bc1db nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.842123394 +0000 UTC m=+12.992296464 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wn82n" (UniqueName: "kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n") pod "tuned-l5kbz" (UID: "c45ce0e5-c50b-4210-b7bb-82db2b2bc1db") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/tuned/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.361121 master-0 kubenswrapper[15493]: W0216 17:02:13.360956 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.361121 master-0 kubenswrapper[15493]: E0216 17:02:13.361049 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.361351 master-0 kubenswrapper[15493]: E0216 17:02:13.361230 15493 projected.go:288] Couldn't get configMap openshift-network-node-identity/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.361351 master-0 kubenswrapper[15493]: E0216 17:02:13.361271 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vk7xl for pod openshift-network-node-identity/network-node-identity-hhcpr: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.361488 master-0 kubenswrapper[15493]: E0216 17:02:13.361363 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.861334582 +0000 UTC m=+13.011507692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vk7xl" (UniqueName: "kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.369263 master-0 kubenswrapper[15493]: E0216 17:02:13.369177 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 16 17:02:13.380820 master-0 kubenswrapper[15493]: W0216 17:02:13.380725 15493 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.380820 master-0 kubenswrapper[15493]: E0216 17:02:13.380812 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.381086 master-0 kubenswrapper[15493]: I0216 17:02:13.380842 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:13.381289 master-0 kubenswrapper[15493]: I0216 17:02:13.381221 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:13.381398 master-0 kubenswrapper[15493]: I0216 17:02:13.381296 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:13.381398 master-0 kubenswrapper[15493]: I0216 17:02:13.381378 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " 
pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:13.381775 master-0 kubenswrapper[15493]: E0216 17:02:13.381720 15493 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:13.381861 master-0 kubenswrapper[15493]: E0216 17:02:13.381780 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.381861 master-0 kubenswrapper[15493]: I0216 17:02:13.381706 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:13.382053 master-0 kubenswrapper[15493]: E0216 17:02:13.381864 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:13.881836795 +0000 UTC m=+13.032009905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:13.401812 master-0 kubenswrapper[15493]: W0216 17:02:13.401706 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.402050 master-0 kubenswrapper[15493]: E0216 17:02:13.401821 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.421695 master-0 kubenswrapper[15493]: W0216 17:02:13.421605 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.421817 master-0 kubenswrapper[15493]: E0216 17:02:13.421695 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.440958 master-0 kubenswrapper[15493]: W0216 17:02:13.440614 15493 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.440958 master-0 kubenswrapper[15493]: E0216 17:02:13.440698 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.460749 master-0 kubenswrapper[15493]: I0216 17:02:13.460684 15493 request.go:700] Waited for 4.509617641s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 16 17:02:13.462143 master-0 kubenswrapper[15493]: W0216 17:02:13.462014 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.462238 master-0 kubenswrapper[15493]: E0216 17:02:13.462174 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.482758 master-0 kubenswrapper[15493]: W0216 17:02:13.482650 15493 reflector.go:561] object-"openshift-monitoring"/"telemetry-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemetry-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.482982 master-0 
kubenswrapper[15493]: E0216 17:02:13.482769 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"telemetry-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemetry-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.487616 master-0 kubenswrapper[15493]: I0216 17:02:13.487552 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:13.488033 master-0 kubenswrapper[15493]: I0216 17:02:13.487967 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:02:13.488124 master-0 kubenswrapper[15493]: I0216 17:02:13.488056 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:13.488124 master-0 kubenswrapper[15493]: I0216 17:02:13.488101 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:13.488257 master-0 kubenswrapper[15493]: I0216 17:02:13.488142 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:13.488581 master-0 kubenswrapper[15493]: I0216 17:02:13.488501 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:13.501495 master-0 kubenswrapper[15493]: W0216 17:02:13.501390 15493 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 
16 17:02:13.501495 master-0 kubenswrapper[15493]: E0216 17:02:13.501485 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.521348 master-0 kubenswrapper[15493]: W0216 17:02:13.521226 15493 reflector.go:561] object-"openshift-monitoring"/"cluster-monitoring-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dcluster-monitoring-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.521479 master-0 kubenswrapper[15493]: E0216 17:02:13.521352 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dcluster-monitoring-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.541061 master-0 kubenswrapper[15493]: W0216 17:02:13.540906 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.541197 master-0 kubenswrapper[15493]: E0216 17:02:13.541071 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.561578 master-0 kubenswrapper[15493]: W0216 17:02:13.561486 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.561578 master-0 kubenswrapper[15493]: E0216 17:02:13.561566 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.580827 master-0 kubenswrapper[15493]: W0216 17:02:13.580727 15493 reflector.go:561] 
object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.580827 master-0 kubenswrapper[15493]: E0216 17:02:13.580817 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.595091 master-0 kubenswrapper[15493]: I0216 17:02:13.595006 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:13.595317 master-0 kubenswrapper[15493]: I0216 17:02:13.595138 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:13.595677 master-0 kubenswrapper[15493]: I0216 17:02:13.595424 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:13.595677 master-0 kubenswrapper[15493]: I0216 17:02:13.595514 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:13.601662 master-0 kubenswrapper[15493]: W0216 17:02:13.601550 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.601805 master-0 kubenswrapper[15493]: E0216 17:02:13.601680 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.611354 
master-0 kubenswrapper[15493]: E0216 17:02:13.611127 15493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1894c8ca4adada44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:02:04.628253252 +0000 UTC m=+3.778426332,LastTimestamp:2026-02-16 17:02:04.628253252 +0000 UTC m=+3.778426332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 17:02:13.620524 master-0 kubenswrapper[15493]: W0216 17:02:13.620441 15493 reflector.go:561] object-"openshift-catalogd"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.620665 master-0 kubenswrapper[15493]: E0216 17:02:13.620540 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.641471 master-0 kubenswrapper[15493]: W0216 17:02:13.641311 15493 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.641471 master-0 kubenswrapper[15493]: E0216 17:02:13.641435 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.661183 master-0 kubenswrapper[15493]: W0216 17:02:13.661024 15493 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.661183 master-0 kubenswrapper[15493]: E0216 17:02:13.661149 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: 
connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.681881 master-0 kubenswrapper[15493]: W0216 17:02:13.681769 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.682092 master-0 kubenswrapper[15493]: E0216 17:02:13.681892 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.698357 master-0 kubenswrapper[15493]: I0216 17:02:13.698279 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:13.698617 master-0 kubenswrapper[15493]: I0216 17:02:13.698590 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:13.698671 master-0 kubenswrapper[15493]: I0216 17:02:13.698629 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:13.699664 master-0 kubenswrapper[15493]: I0216 17:02:13.699589 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:13.699756 master-0 kubenswrapper[15493]: I0216 17:02:13.699698 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:13.701116 master-0 kubenswrapper[15493]: W0216 17:02:13.701035 15493 reflector.go:561] object-"openshift-dns"/"dns-default-metrics-tls": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.701193 master-0 kubenswrapper[15493]: E0216 17:02:13.701141 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default-metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.721941 master-0 kubenswrapper[15493]: W0216 17:02:13.721835 15493 reflector.go:561] object-"openshift-catalogd"/"catalogserver-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/secrets?fieldSelector=metadata.name%3Dcatalogserver-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.722150 master-0 kubenswrapper[15493]: E0216 17:02:13.721966 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogserver-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/secrets?fieldSelector=metadata.name%3Dcatalogserver-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.741469 master-0 kubenswrapper[15493]: W0216 17:02:13.741360 15493 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.741677 master-0 kubenswrapper[15493]: E0216 17:02:13.741480 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.761150 master-0 kubenswrapper[15493]: W0216 17:02:13.761051 15493 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.761378 master-0 kubenswrapper[15493]: E0216 17:02:13.761164 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.781528 master-0 kubenswrapper[15493]: W0216 17:02:13.781421 15493 reflector.go:561] object-"openshift-operator-controller"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.781751 master-0 kubenswrapper[15493]: E0216 17:02:13.781545 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.801760 master-0 kubenswrapper[15493]: W0216 17:02:13.801574 15493 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.801760 master-0 kubenswrapper[15493]: E0216 17:02:13.801744 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.804279 master-0 kubenswrapper[15493]: I0216 17:02:13.803961 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:13.804479 master-0 kubenswrapper[15493]: I0216 17:02:13.804419 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:02:13.804766 master-0 kubenswrapper[15493]: I0216 17:02:13.804724 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:13.804871 master-0 kubenswrapper[15493]: I0216 17:02:13.804847 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:13.805193 master-0 kubenswrapper[15493]: I0216 
17:02:13.805163 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:13.821531 master-0 kubenswrapper[15493]: W0216 17:02:13.821440 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.821650 master-0 kubenswrapper[15493]: E0216 17:02:13.821545 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.841533 master-0 kubenswrapper[15493]: W0216 17:02:13.841423 15493 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.841722 master-0 kubenswrapper[15493]: E0216 17:02:13.841557 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.860994 master-0 kubenswrapper[15493]: W0216 17:02:13.860862 15493 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.861089 master-0 kubenswrapper[15493]: E0216 17:02:13.861043 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.881742 master-0 kubenswrapper[15493]: W0216 17:02:13.881570 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.881742 master-0 kubenswrapper[15493]: E0216 
17:02:13.881698 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.900891 master-0 kubenswrapper[15493]: W0216 17:02:13.900798 15493 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.901090 master-0 kubenswrapper[15493]: E0216 17:02:13.900903 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.909272 master-0 kubenswrapper[15493]: I0216 17:02:13.909216 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:13.909538 master-0 kubenswrapper[15493]: I0216 17:02:13.909522 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:13.909658 master-0 kubenswrapper[15493]: I0216 17:02:13.909643 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:13.909755 master-0 kubenswrapper[15493]: I0216 17:02:13.909742 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:13.921558 master-0 kubenswrapper[15493]: W0216 17:02:13.921475 15493 reflector.go:561] object-"openshift-etcd"/"installer-sa-dockercfg-rxv66": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-rxv66&limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.921558 master-0 kubenswrapper[15493]: E0216 17:02:13.921561 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd\"/\"installer-sa-dockercfg-rxv66\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-rxv66&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.941580 master-0 kubenswrapper[15493]: W0216 17:02:13.941471 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.941790 master-0 kubenswrapper[15493]: E0216 17:02:13.941595 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.961143 master-0 kubenswrapper[15493]: W0216 17:02:13.961033 15493 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.961143 master-0 kubenswrapper[15493]: E0216 17:02:13.961141 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:13.981613 master-0 kubenswrapper[15493]: W0216 17:02:13.981530 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-dockercfg-b9gfw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:13.981613 master-0 kubenswrapper[15493]: E0216 17:02:13.981601 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-autoscaler-operator-dockercfg-b9gfw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-dockercfg-b9gfw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.001309 master-0 kubenswrapper[15493]: W0216 17:02:14.001207 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cco-trusted-ca": failed to list 
*v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dcco-trusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.001555 master-0 kubenswrapper[15493]: E0216 17:02:14.001302 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cco-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dcco-trusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.021493 master-0 kubenswrapper[15493]: W0216 17:02:14.021367 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.021493 master-0 kubenswrapper[15493]: E0216 17:02:14.021451 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.041623 master-0 kubenswrapper[15493]: W0216 17:02:14.041498 15493 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.041623 master-0 kubenswrapper[15493]: E0216 17:02:14.041585 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.062084 master-0 kubenswrapper[15493]: W0216 17:02:14.061898 15493 reflector.go:561] object-"openshift-catalogd"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.062220 master-0 kubenswrapper[15493]: E0216 17:02:14.062101 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 
192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.067613 master-0 kubenswrapper[15493]: E0216 17:02:14.067548 15493 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:14.069807 master-0 kubenswrapper[15493]: E0216 17:02:14.069759 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:14.080981 master-0 kubenswrapper[15493]: W0216 17:02:14.080835 15493 reflector.go:561] object-"openshift-catalogd"/"catalogd-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dcatalogd-trusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.080981 master-0 kubenswrapper[15493]: E0216 17:02:14.080966 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogd-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dcatalogd-trusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.101370 master-0 kubenswrapper[15493]: W0216 17:02:14.101237 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.101609 master-0 kubenswrapper[15493]: E0216 17:02:14.101369 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.115264 master-0 kubenswrapper[15493]: I0216 17:02:14.115190 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:14.115445 master-0 kubenswrapper[15493]: I0216 17:02:14.115390 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:14.115521 master-0 kubenswrapper[15493]: I0216 17:02:14.115446 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:14.115710 master-0 kubenswrapper[15493]: I0216 17:02:14.115658 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:14.115863 master-0 kubenswrapper[15493]: I0216 17:02:14.115824 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:14.115977 master-0 kubenswrapper[15493]: I0216 17:02:14.115896 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:14.116071 master-0 kubenswrapper[15493]: I0216 17:02:14.116050 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:14.116141 master-0 kubenswrapper[15493]: I0216 17:02:14.116097 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:14.116206 master-0 kubenswrapper[15493]: I0216 17:02:14.116155 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:14.116270 master-0 kubenswrapper[15493]: I0216 17:02:14.116212 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:14.116336 master-0 kubenswrapper[15493]: I0216 17:02:14.116277 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:14.116402 master-0 kubenswrapper[15493]: I0216 17:02:14.116332 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:14.116699 master-0 kubenswrapper[15493]: I0216 17:02:14.116642 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:14.116783 master-0 kubenswrapper[15493]: I0216 17:02:14.116751 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:14.116849 master-0 kubenswrapper[15493]: I0216 17:02:14.116828 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:14.116955 master-0 kubenswrapper[15493]: I0216 17:02:14.116880 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:14.117098 master-0 kubenswrapper[15493]: I0216 17:02:14.117045 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:14.117177 master-0 kubenswrapper[15493]: I0216 17:02:14.117119 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:14.117243 master-0 kubenswrapper[15493]: I0216 17:02:14.117176 15493 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:14.117243 master-0 kubenswrapper[15493]: I0216 17:02:14.117227 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:14.117374 master-0 kubenswrapper[15493]: I0216 17:02:14.117308 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:14.117438 master-0 kubenswrapper[15493]: I0216 17:02:14.117388 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:14.117497 master-0 kubenswrapper[15493]: I0216 17:02:14.117451 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:14.117566 master-0 kubenswrapper[15493]: I0216 17:02:14.117506 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:14.117631 master-0 kubenswrapper[15493]: I0216 17:02:14.117560 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:14.117631 master-0 kubenswrapper[15493]: I0216 17:02:14.117599 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:14.117750 master-0 kubenswrapper[15493]: I0216 17:02:14.117637 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:14.117750 master-0 kubenswrapper[15493]: I0216 17:02:14.117683 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:14.117750 master-0 kubenswrapper[15493]: I0216 17:02:14.117721 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:14.117972 master-0 kubenswrapper[15493]: I0216 17:02:14.117761 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:14.117972 master-0 kubenswrapper[15493]: I0216 17:02:14.117798 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:14.117972 master-0 kubenswrapper[15493]: I0216 17:02:14.117906 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:14.118188 master-0 kubenswrapper[15493]: I0216 17:02:14.118003 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:14.118188 master-0 kubenswrapper[15493]: I0216 17:02:14.118068 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:14.118188 master-0 kubenswrapper[15493]: I0216 17:02:14.118099 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod 
\"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:14.118188 master-0 kubenswrapper[15493]: I0216 17:02:14.118128 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:14.118188 master-0 kubenswrapper[15493]: I0216 17:02:14.118156 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:14.118485 master-0 kubenswrapper[15493]: I0216 17:02:14.118235 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:14.118485 master-0 kubenswrapper[15493]: I0216 17:02:14.118286 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:14.118485 master-0 kubenswrapper[15493]: I0216 17:02:14.118342 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:14.118485 master-0 kubenswrapper[15493]: I0216 17:02:14.118395 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:14.118485 master-0 kubenswrapper[15493]: I0216 17:02:14.118439 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:14.118485 master-0 kubenswrapper[15493]: I0216 17:02:14.118478 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " 
pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:14.118822 master-0 kubenswrapper[15493]: I0216 17:02:14.118519 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:14.118822 master-0 kubenswrapper[15493]: I0216 17:02:14.118561 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:14.118822 master-0 kubenswrapper[15493]: I0216 17:02:14.118597 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:14.118822 master-0 kubenswrapper[15493]: I0216 17:02:14.118771 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:14.119113 master-0 kubenswrapper[15493]: I0216 17:02:14.118831 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:14.119113 master-0 kubenswrapper[15493]: I0216 17:02:14.118874 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:14.119113 master-0 kubenswrapper[15493]: I0216 17:02:14.118916 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:14.119113 master-0 kubenswrapper[15493]: I0216 17:02:14.119054 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:14.119113 master-0 kubenswrapper[15493]: I0216 17:02:14.119095 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:14.119476 master-0 kubenswrapper[15493]: I0216 17:02:14.119131 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:14.119476 master-0 kubenswrapper[15493]: I0216 17:02:14.119272 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:14.119476 master-0 kubenswrapper[15493]: I0216 17:02:14.119351 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:14.119476 master-0 kubenswrapper[15493]: I0216 17:02:14.119396 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:14.119476 master-0 kubenswrapper[15493]: I0216 17:02:14.119440 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:14.119782 master-0 kubenswrapper[15493]: I0216 17:02:14.119517 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:14.119782 master-0 kubenswrapper[15493]: I0216 17:02:14.119563 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: 
\"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:14.119782 master-0 kubenswrapper[15493]: I0216 17:02:14.119602 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:14.119782 master-0 kubenswrapper[15493]: I0216 17:02:14.119633 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:14.119782 master-0 kubenswrapper[15493]: I0216 17:02:14.119721 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:14.119782 master-0 kubenswrapper[15493]: I0216 17:02:14.119749 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:14.119782 master-0 kubenswrapper[15493]: I0216 17:02:14.119776 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:14.120233 master-0 kubenswrapper[15493]: I0216 17:02:14.119802 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:14.120233 master-0 kubenswrapper[15493]: I0216 17:02:14.119829 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:14.120233 master-0 kubenswrapper[15493]: I0216 17:02:14.119861 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod 
\"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:14.120233 master-0 kubenswrapper[15493]: I0216 17:02:14.119896 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:14.120233 master-0 kubenswrapper[15493]: I0216 17:02:14.119955 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:14.120233 master-0 kubenswrapper[15493]: I0216 17:02:14.120006 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:14.120233 master-0 kubenswrapper[15493]: I0216 17:02:14.120044 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:14.120233 master-0 kubenswrapper[15493]: I0216 17:02:14.120170 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:14.120233 master-0 kubenswrapper[15493]: I0216 17:02:14.120224 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: I0216 17:02:14.120268 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: I0216 17:02:14.120337 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: I0216 17:02:14.120406 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: I0216 17:02:14.120519 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: I0216 17:02:14.120628 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: I0216 17:02:14.120666 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: I0216 17:02:14.120709 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: W0216 17:02:14.120766 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: E0216 17:02:14.120863 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: I0216 17:02:14.120806 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:14.120978 master-0 kubenswrapper[15493]: I0216 17:02:14.120983 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:14.121648 master-0 kubenswrapper[15493]: I0216 17:02:14.121062 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:14.121648 master-0 kubenswrapper[15493]: I0216 17:02:14.121105 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:14.121648 master-0 kubenswrapper[15493]: I0216 17:02:14.121226 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:14.121648 master-0 kubenswrapper[15493]: I0216 17:02:14.121346 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:14.121648 master-0 kubenswrapper[15493]: I0216 17:02:14.121396 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:14.121648 master-0 kubenswrapper[15493]: I0216 17:02:14.121561 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:14.121648 master-0 kubenswrapper[15493]: I0216 17:02:14.121605 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:14.122104 master-0 kubenswrapper[15493]: I0216 17:02:14.121684 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:14.122104 master-0 kubenswrapper[15493]: I0216 17:02:14.121754 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:14.122104 master-0 kubenswrapper[15493]: I0216 17:02:14.121884 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:14.122104 master-0 kubenswrapper[15493]: I0216 17:02:14.121971 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:14.122104 master-0 kubenswrapper[15493]: I0216 17:02:14.122015 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:14.122104 master-0 kubenswrapper[15493]: I0216 17:02:14.122050 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:14.122104 master-0 kubenswrapper[15493]: I0216 17:02:14.122100 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:14.122505 master-0 kubenswrapper[15493]: I0216 17:02:14.122254 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:14.122505 master-0 kubenswrapper[15493]: I0216 17:02:14.122313 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:14.122505 master-0 kubenswrapper[15493]: I0216 17:02:14.122371 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:14.122505 master-0 kubenswrapper[15493]: I0216 17:02:14.122416 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:14.122505 master-0 kubenswrapper[15493]: I0216 17:02:14.122458 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:14.122802 master-0 kubenswrapper[15493]: I0216 17:02:14.122544 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:14.122802 master-0 kubenswrapper[15493]: I0216 17:02:14.122723 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:14.122989 master-0 kubenswrapper[15493]: I0216 17:02:14.122856 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:14.123062 master-0 kubenswrapper[15493]: I0216 17:02:14.123034 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:14.123130 master-0 kubenswrapper[15493]: I0216 17:02:14.123080 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:14.123263 master-0 kubenswrapper[15493]: I0216 17:02:14.123211 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:14.123344 master-0 kubenswrapper[15493]: I0216 17:02:14.123282 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:14.123344 master-0 kubenswrapper[15493]: I0216 17:02:14.123323 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:14.123463 master-0 kubenswrapper[15493]: I0216 17:02:14.123377 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:14.123463 master-0 kubenswrapper[15493]: I0216 17:02:14.123417 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:14.123584 master-0 kubenswrapper[15493]: I0216 17:02:14.123508 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:14.123584 master-0 kubenswrapper[15493]: I0216 17:02:14.123557 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:14.123720 master-0 kubenswrapper[15493]: I0216 17:02:14.123592 15493 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:14.123900 master-0 kubenswrapper[15493]: I0216 17:02:14.123831 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:14.124034 master-0 kubenswrapper[15493]: I0216 17:02:14.123910 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:14.124034 master-0 kubenswrapper[15493]: I0216 17:02:14.124022 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:14.124183 master-0 kubenswrapper[15493]: I0216 17:02:14.124105 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:14.124310 master-0 kubenswrapper[15493]: I0216 17:02:14.124274 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:14.124405 master-0 kubenswrapper[15493]: I0216 17:02:14.124365 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:14.124483 master-0 kubenswrapper[15493]: I0216 17:02:14.124429 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:14.141963 master-0 kubenswrapper[15493]: W0216 17:02:14.141752 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l": failed to list 
*v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-dockercfg-j874l&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.141963 master-0 kubenswrapper[15493]: E0216 17:02:14.141885 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-dockercfg-j874l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-dockercfg-j874l&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.160837 master-0 kubenswrapper[15493]: W0216 17:02:14.160728 15493 reflector.go:561] object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Doperator-controller-trusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.160837 master-0 kubenswrapper[15493]: E0216 17:02:14.160815 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"operator-controller-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Doperator-controller-trusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.181163 master-0 kubenswrapper[15493]: W0216 17:02:14.181053 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.181163 master-0 kubenswrapper[15493]: E0216 17:02:14.181156 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.200830 master-0 kubenswrapper[15493]: W0216 17:02:14.200742 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.200830 master-0 kubenswrapper[15493]: E0216 17:02:14.200818 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.220777 master-0 kubenswrapper[15493]: W0216 17:02:14.220662 15493 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.220777 master-0 kubenswrapper[15493]: E0216 17:02:14.220759 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.241226 master-0 kubenswrapper[15493]: W0216 17:02:14.241098 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.241226 master-0 kubenswrapper[15493]: E0216 17:02:14.241187 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.261450 master-0 kubenswrapper[15493]: W0216 17:02:14.261368 15493 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.261450 master-0 kubenswrapper[15493]: E0216 17:02:14.261439 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.281363 master-0 kubenswrapper[15493]: W0216 17:02:14.281259 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.281582 master-0 
kubenswrapper[15493]: E0216 17:02:14.281371 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.301352 master-0 kubenswrapper[15493]: W0216 17:02:14.301269 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-dockercfg-x2982&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.301460 master-0 kubenswrapper[15493]: E0216 17:02:14.301395 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"cluster-storage-operator-dockercfg-x2982\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-dockercfg-x2982&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.321562 master-0 kubenswrapper[15493]: W0216 17:02:14.321433 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.321741 master-0 kubenswrapper[15493]: E0216 17:02:14.321581 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.341720 master-0 kubenswrapper[15493]: W0216 17:02:14.341621 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.341720 master-0 kubenswrapper[15493]: E0216 17:02:14.341711 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.361608 master-0 kubenswrapper[15493]: W0216 17:02:14.361479 15493 reflector.go:561] object-"openshift-etcd"/"kube-root-ca.crt": failed to list 
*v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.361608 master-0 kubenswrapper[15493]: E0216 17:02:14.361580 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.380669 master-0 kubenswrapper[15493]: W0216 17:02:14.380570 15493 reflector.go:561] object-"openshift-operator-controller"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.380805 master-0 kubenswrapper[15493]: E0216 17:02:14.380676 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.400744 master-0 kubenswrapper[15493]: W0216 17:02:14.400641 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.400744 master-0 kubenswrapper[15493]: E0216 17:02:14.400698 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.421328 master-0 kubenswrapper[15493]: W0216 17:02:14.421267 15493 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.421848 master-0 kubenswrapper[15493]: E0216 17:02:14.421326 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection 
refused" logger="UnhandledError" Feb 16 17:02:14.441623 master-0 kubenswrapper[15493]: W0216 17:02:14.441525 15493 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy-cluster-autoscaler-operator&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.441623 master-0 kubenswrapper[15493]: E0216 17:02:14.441618 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy-cluster-autoscaler-operator\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy-cluster-autoscaler-operator&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.460825 master-0 kubenswrapper[15493]: W0216 17:02:14.460716 15493 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.460825 master-0 kubenswrapper[15493]: E0216 17:02:14.460821 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.480314 master-0 kubenswrapper[15493]: I0216 17:02:14.480262 15493 request.go:700] Waited for 4.544585348s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 16 17:02:14.481122 master-0 kubenswrapper[15493]: W0216 17:02:14.481029 15493 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.481194 master-0 kubenswrapper[15493]: E0216 17:02:14.481138 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.500798 master-0 kubenswrapper[15493]: W0216 17:02:14.500703 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-images": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dcluster-baremetal-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.501026 master-0 kubenswrapper[15493]: E0216 17:02:14.500797 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dcluster-baremetal-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.520843 master-0 kubenswrapper[15493]: W0216 17:02:14.520765 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-webhook-server-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.521000 master-0 kubenswrapper[15493]: E0216 17:02:14.520850 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-webhook-server-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.542829 master-0 kubenswrapper[15493]: W0216 17:02:14.542734 15493 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.542829 master-0 kubenswrapper[15493]: E0216 17:02:14.542828 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.561422 master-0 kubenswrapper[15493]: W0216 17:02:14.561303 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-7mlbn&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.561593 master-0 kubenswrapper[15493]: E0216 17:02:14.561433 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-7mlbn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-7mlbn&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.581423 master-0 kubenswrapper[15493]: W0216 17:02:14.581285 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-autoscaler-operator-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.581815 master-0 kubenswrapper[15493]: E0216 17:02:14.581771 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-autoscaler-operator-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.601904 master-0 kubenswrapper[15493]: E0216 17:02:14.601848 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/bootstrap-kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:02:14.602433 master-0 kubenswrapper[15493]: I0216 17:02:14.602319 15493 kubelet_pods.go:1320] "Clean up containers for orphaned pod we had not seen before" podUID="9460ca0802075a8a6a10d7b3e6052c4d" killPodOptions="" Feb 16 17:02:14.602632 master-0 kubenswrapper[15493]: I0216 17:02:14.602594 15493 kubelet_pods.go:1320] "Clean up containers for orphaned pod we had not seen before" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" killPodOptions="" Feb 16 17:02:14.604468 master-0 kubenswrapper[15493]: E0216 17:02:14.604392 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.21s" Feb 16 17:02:14.618090 master-0 kubenswrapper[15493]: I0216 17:02:14.618016 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" path="/var/lib/kubelet/pods/5d1e91e5a1fed5cf7076a92d2830d36f/volumes" Feb 16 17:02:14.619059 master-0 kubenswrapper[15493]: I0216 17:02:14.619008 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9460ca0802075a8a6a10d7b3e6052c4d" path="/var/lib/kubelet/pods/9460ca0802075a8a6a10d7b3e6052c4d/volumes" Feb 16 17:02:14.619482 master-0 kubenswrapper[15493]: I0216 17:02:14.619439 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:14.621003 master-0 kubenswrapper[15493]: W0216 17:02:14.620858 15493 reflector.go:561] object-"openshift-insights"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.621003 master-0 kubenswrapper[15493]: E0216 17:02:14.620965 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-insights\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.622366 master-0 kubenswrapper[15493]: I0216 17:02:14.622324 15493 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:02:14.624692 master-0 kubenswrapper[15493]: I0216 17:02:14.624626 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:02:14.624692 master-0 kubenswrapper[15493]: I0216 17:02:14.624694 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:02:14.624875 master-0 kubenswrapper[15493]: I0216 17:02:14.624716 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:02:14.625177 master-0 kubenswrapper[15493]: I0216 17:02:14.625131 15493 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:02:14.640760 master-0 kubenswrapper[15493]: W0216 17:02:14.640667 15493 reflector.go:561] object-"openshift-insights"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.640951 master-0 kubenswrapper[15493]: E0216 17:02:14.640761 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.660769 master-0 kubenswrapper[15493]: W0216 17:02:14.660604 15493 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.660769 master-0 kubenswrapper[15493]: E0216 17:02:14.660682 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.680915 master-0 kubenswrapper[15493]: W0216 17:02:14.680850 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-gtxjb&limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.681097 master-0 kubenswrapper[15493]: E0216 17:02:14.680939 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-gtxjb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-gtxjb&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.702220 master-0 kubenswrapper[15493]: W0216 17:02:14.702133 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.702376 master-0 kubenswrapper[15493]: E0216 17:02:14.702222 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"cluster-storage-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.721092 master-0 kubenswrapper[15493]: W0216 17:02:14.720987 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.721092 master-0 kubenswrapper[15493]: E0216 17:02:14.721066 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.747605 master-0 kubenswrapper[15493]: W0216 17:02:14.747499 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-hk5sk&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.747873 master-0 kubenswrapper[15493]: E0216 17:02:14.747610 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-hk5sk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-hk5sk&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.760968 master-0 kubenswrapper[15493]: W0216 17:02:14.760860 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-q2gzj&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.761164 master-0 kubenswrapper[15493]: E0216 17:02:14.760986 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-q2gzj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-q2gzj&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.780841 master-0 kubenswrapper[15493]: W0216 17:02:14.780766 15493 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.780841 master-0 kubenswrapper[15493]: E0216 17:02:14.780836 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.801078 master-0 kubenswrapper[15493]: W0216 17:02:14.800967 15493 reflector.go:561] object-"openshift-machine-api"/"baremetal-kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dbaremetal-kube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.801078 master-0 kubenswrapper[15493]: E0216 17:02:14.801055 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"baremetal-kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dbaremetal-kube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.820689 master-0 kubenswrapper[15493]: W0216 17:02:14.820563 15493 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial 
tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.820689 master-0 kubenswrapper[15493]: E0216 17:02:14.820676 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.840513 master-0 kubenswrapper[15493]: W0216 17:02:14.840404 15493 reflector.go:561] object-"openshift-insights"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.840513 master-0 kubenswrapper[15493]: E0216 17:02:14.840502 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.860296 master-0 kubenswrapper[15493]: W0216 17:02:14.860223 15493 reflector.go:561] object-"openshift-insights"/"operator-dockercfg-rzjlw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Doperator-dockercfg-rzjlw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.860296 master-0 kubenswrapper[15493]: E0216 17:02:14.860288 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"operator-dockercfg-rzjlw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Doperator-dockercfg-rzjlw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.880942 master-0 kubenswrapper[15493]: W0216 17:02:14.880859 15493 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.881069 master-0 kubenswrapper[15493]: E0216 17:02:14.880944 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.901637 master-0 kubenswrapper[15493]: W0216 17:02:14.901555 15493 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to 
list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.901637 master-0 kubenswrapper[15493]: E0216 17:02:14.901626 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.921490 master-0 kubenswrapper[15493]: W0216 17:02:14.921321 15493 reflector.go:561] object-"openshift-insights"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.921490 master-0 kubenswrapper[15493]: E0216 17:02:14.921414 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.941409 master-0 kubenswrapper[15493]: W0216 17:02:14.941343 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-dockercfg-mzz6s&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.941478 master-0 kubenswrapper[15493]: E0216 17:02:14.941419 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-dockercfg-mzz6s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-dockercfg-mzz6s&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:14.960635 master-0 kubenswrapper[15493]: W0216 17:02:14.960543 15493 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.960731 master-0 kubenswrapper[15493]: E0216 17:02:14.960631 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" logger="UnhandledError" Feb 16 17:02:14.971018 master-0 kubenswrapper[15493]: E0216 17:02:14.970943 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 17:02:14.980948 master-0 kubenswrapper[15493]: W0216 17:02:14.980829 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:14.980948 master-0 kubenswrapper[15493]: E0216 17:02:14.980883 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.001576 master-0 kubenswrapper[15493]: W0216 17:02:15.001500 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.001576 master-0 kubenswrapper[15493]: E0216 17:02:15.001585 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.021152 master-0 kubenswrapper[15493]: W0216 17:02:15.021046 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.021152 master-0 kubenswrapper[15493]: E0216 17:02:15.021134 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.041198 master-0 kubenswrapper[15493]: W0216 17:02:15.041053 15493 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.041198 master-0 kubenswrapper[15493]: E0216 17:02:15.041178 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.061261 master-0 kubenswrapper[15493]: W0216 17:02:15.061115 15493 reflector.go:561] object-"openshift-insights"/"openshift-insights-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.061261 master-0 kubenswrapper[15493]: E0216 17:02:15.061245 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"openshift-insights-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.067748 master-0 kubenswrapper[15493]: E0216 17:02:15.067704 15493 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.067831 master-0 kubenswrapper[15493]: E0216 17:02:15.067749 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.067880 master-0 kubenswrapper[15493]: E0216 17:02:15.067865 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.067818902 +0000 UTC m=+16.217992012 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.070084 master-0 kubenswrapper[15493]: E0216 17:02:15.070028 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.070174 master-0 kubenswrapper[15493]: E0216 17:02:15.070090 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.070174 master-0 kubenswrapper[15493]: E0216 17:02:15.070165 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.070145084 +0000 UTC m=+16.220318194 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.080976 master-0 kubenswrapper[15493]: W0216 17:02:15.080618 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-q5h8t&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.080976 master-0 kubenswrapper[15493]: E0216 17:02:15.080743 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-q5h8t\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-q5h8t&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.100735 master-0 kubenswrapper[15493]: W0216 17:02:15.100635 15493 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.100735 master-0 kubenswrapper[15493]: E0216 17:02:15.100722 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.115491 master-0 kubenswrapper[15493]: E0216 
17:02:15.115390 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.115634 master-0 kubenswrapper[15493]: E0216 17:02:15.115534 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.115507595 +0000 UTC m=+18.265680705 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.116574 master-0 kubenswrapper[15493]: E0216 17:02:15.116527 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.116709 master-0 kubenswrapper[15493]: E0216 17:02:15.116602 15493 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.116709 master-0 kubenswrapper[15493]: E0216 17:02:15.116604 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.116709 master-0 kubenswrapper[15493]: E0216 17:02:15.116609 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.116588883 +0000 UTC m=+18.266761993 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.116709 master-0 kubenswrapper[15493]: E0216 17:02:15.116688 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116704 15493 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116718 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116751 15493 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116784 15493 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116792 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116812 15493 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116827 15493 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116697 15493 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116708 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.116683016 +0000 UTC m=+18.266856136 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116896 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116953 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.116896261 +0000 UTC m=+18.267069371 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.116991 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.116973683 +0000 UTC m=+18.267146783 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.117031 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117015094 +0000 UTC m=+18.267188194 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.117062 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117047255 +0000 UTC m=+18.267220365 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.117084 15493 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.117097 master-0 kubenswrapper[15493]: E0216 17:02:15.117105 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117090086 +0000 UTC m=+18.267263196 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117144 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117127727 +0000 UTC m=+18.267300827 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117165 15493 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117172 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117160788 +0000 UTC m=+18.267333888 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117199 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117189389 +0000 UTC m=+18.267362489 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117226 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.11721312 +0000 UTC m=+18.267386230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117228 15493 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117250 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.11723886 +0000 UTC m=+18.267411960 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117304 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117311 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117287382 +0000 UTC m=+18.267460492 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117344 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117332713 +0000 UTC m=+18.267505813 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117341 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117366 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117355403 +0000 UTC m=+18.267528503 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117395 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117383454 +0000 UTC m=+18.267556554 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117406 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117435 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117406575 +0000 UTC m=+18.267579685 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117469 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117454446 +0000 UTC m=+18.267627546 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.117493 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.117483767 +0000 UTC m=+18.267656867 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118123 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118190 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.118170425 +0000 UTC m=+18.268343535 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118376 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118409 15493 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118469 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.118444672 +0000 UTC m=+18.268617782 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118505 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118519 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118550 15493 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118586 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118605 15493 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118621 15493 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118564 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118613 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118517 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.118495994 +0000 UTC m=+18.268669114 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118673 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118686 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.118669998 +0000 UTC m=+18.268843108 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118706 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118708 15493 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118765 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/ovnkube-identity-cm: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118784 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118794 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118816 15493 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118504 15493 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118851 15493 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118893 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118628 15493 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118917 15493 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118979 15493 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119014 15493 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118714 15493 configmap.go:193] Couldn't get configMap 
openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119108 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118718 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.118701229 +0000 UTC m=+18.268874339 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118731 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119178 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119150451 +0000 UTC m=+18.269323561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118751 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119215 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119200022 +0000 UTC m=+18.269373122 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.118845 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119246 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:19.119232123 +0000 UTC m=+18.269405233 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119282 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119282 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119261804 +0000 UTC m=+18.269434904 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119306 15493 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119338 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119319715 +0000 UTC m=+18.269492815 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119363 15493 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119371 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119357136 +0000 UTC m=+18.269530246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119407 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:19.119392017 +0000 UTC m=+18.269565127 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119435 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119421558 +0000 UTC m=+18.269594668 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119469 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119452769 +0000 UTC m=+18.269625879 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119512 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.11949775 +0000 UTC m=+18.269670850 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ovnkube-identity-cm" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119543 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119529221 +0000 UTC m=+18.269702331 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119575 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:19.119561032 +0000 UTC m=+18.269734142 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119589 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119603 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119590593 +0000 UTC m=+18.269763703 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119615 15493 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119638 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119623793 +0000 UTC m=+18.269796903 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119590 15493 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119648 15493 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119668 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119652004 +0000 UTC m=+18.269825114 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119682 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119707 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119691135 +0000 UTC m=+18.269864245 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119752 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119725856 +0000 UTC m=+18.269898966 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119772 15493 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119792 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119775007 +0000 UTC m=+18.269948117 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119821 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119807388 +0000 UTC m=+18.269980498 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119974 15493 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119994 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120020 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119982 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.119950762 +0000 UTC m=+18.270123872 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119861 15493 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120066 15493 secret.go:189] Couldn't get secret openshift-network-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119905 15493 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120094 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120065065 +0000 UTC m=+18.270238175 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119893 15493 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.119858 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120032 15493 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120130 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120109406 +0000 UTC m=+18.270282516 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120167 15493 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120174 15493 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120198 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120179358 +0000 UTC m=+18.270352418 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120212 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120205669 +0000 UTC m=+18.270378739 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120226 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120220059 +0000 UTC m=+18.270393129 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120230 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120288 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120244 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.12023751 +0000 UTC m=+18.270410580 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120348 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120324932 +0000 UTC m=+18.270498092 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120381 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120365333 +0000 UTC m=+18.270538543 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120407 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120413 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120396684 +0000 UTC m=+18.270569904 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120490 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120469846 +0000 UTC m=+18.270642996 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120516 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120503697 +0000 UTC m=+18.270676797 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120539 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120527717 +0000 UTC m=+18.270700927 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120555 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120595 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120560 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120550948 +0000 UTC m=+18.270724058 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120641 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.12062612 +0000 UTC m=+18.270799350 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120648 15493 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120673 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120658111 +0000 UTC m=+18.270831321 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.120550 master-0 kubenswrapper[15493]: E0216 17:02:15.120705 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120690582 +0000 UTC m=+18.270863812 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120736 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120718342 +0000 UTC m=+18.270891572 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120781 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120766224 +0000 UTC m=+18.270939424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120820 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120805775 +0000 UTC m=+18.270978985 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120854 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120838486 +0000 UTC m=+18.271011686 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120873 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120885 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120870426 +0000 UTC m=+18.271043616 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: W0216 17:02:15.120867 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120961 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.120909327 +0000 UTC m=+18.271082557 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120972 15493 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121009 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.12099361 +0000 UTC m=+18.271166720 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121009 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120951 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121044 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121025931 +0000 UTC m=+18.271199161 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.120984 15493 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121074 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121059881 +0000 UTC m=+18.271233111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "whereabouts-configmap" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121100 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121087082 +0000 UTC m=+18.271260302 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121133 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121123013 +0000 UTC m=+18.271296123 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121157 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121146074 +0000 UTC m=+18.271319174 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121163 15493 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121177 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121169004 +0000 UTC m=+18.271342104 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121206 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121195635 +0000 UTC m=+18.271368745 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121229 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121216326 +0000 UTC m=+18.271389426 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121240 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121254 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121240526 +0000 UTC m=+18.271413746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121259 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121283 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121268097 +0000 UTC m=+18.271441207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121308 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121292418 +0000 UTC m=+18.271465528 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121331 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121320018 +0000 UTC m=+18.271493128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121332 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121351 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121340189 +0000 UTC m=+18.271513289 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121381 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.12137186 +0000 UTC m=+18.271544970 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121403 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.12139372 +0000 UTC m=+18.271566820 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121422 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:19.121413121 +0000 UTC m=+18.271586221 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121447 15493 configmap.go:193] Couldn't get configMap openshift-network-operator/iptables-alerter-script: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121534 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121512863 +0000 UTC m=+18.271686013 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "iptables-alerter-script" (UniqueName: "kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121573 15493 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121627 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121609836 +0000 UTC m=+18.271783076 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121647 15493 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121723 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121702168 +0000 UTC m=+18.271875328 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121766 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121783 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121805 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121795821 +0000 UTC m=+18.271968891 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121826 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121812861 +0000 UTC m=+18.271985971 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121837 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.121862 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.121853712 +0000 UTC m=+18.272026782 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123032 15493 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123085 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123110 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123117 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123094905 +0000 UTC m=+18.273268045 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123158 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123148397 +0000 UTC m=+18.273321467 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123160 15493 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123185 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123191 15493 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123219 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123195 15493 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123235 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123262 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123264 15493 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123292 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123293 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123318 15493 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123383 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123399 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: 
E0216 17:02:15.123222 15493 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123550 15493 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123173 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123574 15493 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123171 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123165977 +0000 UTC m=+18.273339047 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123619 15493 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123659 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.1236331 +0000 UTC m=+18.273806230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123712 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123691041 +0000 UTC m=+18.273864251 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123727 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123760 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123740852 +0000 UTC m=+18.273914052 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123789 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123774283 +0000 UTC m=+18.273947503 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123819 15493 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123825 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123808684 +0000 UTC m=+18.273981894 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123859 15493 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123859 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123844495 +0000 UTC m=+18.274017705 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123910 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123889406 +0000 UTC m=+18.274062536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.123989 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123970138 +0000 UTC m=+18.274143368 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124047 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124062 15493 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124088 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124067301 +0000 UTC m=+18.274240491 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124182 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124163454 +0000 UTC m=+18.274336684 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124244 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124233745 +0000 UTC m=+18.274406845 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124268 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124258056 +0000 UTC m=+18.274431246 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124361 15493 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124407 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.12439476 +0000 UTC m=+18.274567860 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124431 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124478 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124487 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124456811 +0000 UTC m=+18.274629971 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124532 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124514543 +0000 UTC m=+18.274687763 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124550 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124579 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124563854 +0000 UTC m=+18.274737074 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.124495 master-0 kubenswrapper[15493]: E0216 17:02:15.124623 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124602735 +0000 UTC m=+18.274775915 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124656 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124641226 +0000 UTC m=+18.274814436 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124685 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124670817 +0000 UTC m=+18.274844027 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124714 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124700338 +0000 UTC m=+18.274873558 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124742 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124727919 +0000 UTC m=+18.274901139 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124749 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124776 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124759289 +0000 UTC m=+18.274932489 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124808 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.12479534 +0000 UTC m=+18.274968570 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124838 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124824901 +0000 UTC m=+18.274998111 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124869 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124856192 +0000 UTC m=+18.275029412 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124902 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124887213 +0000 UTC m=+18.275060423 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.124970 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124954194 +0000 UTC m=+18.275127324 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.127778 master-0 kubenswrapper[15493]: E0216 17:02:15.125009 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.124996006 +0000 UTC m=+18.275169216 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:15.141519 master-0 kubenswrapper[15493]: W0216 17:02:15.141395 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.141519 master-0 kubenswrapper[15493]: E0216 17:02:15.141481 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.163824 master-0 kubenswrapper[15493]: W0216 17:02:15.163713 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.163824 master-0 kubenswrapper[15493]: E0216 17:02:15.163823 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.181490 master-0 kubenswrapper[15493]: W0216 17:02:15.181314 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.181490 master-0 kubenswrapper[15493]: E0216 17:02:15.181397 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.203280 master-0 kubenswrapper[15493]: W0216 17:02:15.203134 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.203463 master-0 kubenswrapper[15493]: E0216 17:02:15.203303 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.221474 master-0 kubenswrapper[15493]: W0216 17:02:15.221359 15493 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.221474 master-0 kubenswrapper[15493]: E0216 17:02:15.221457 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.241276 master-0 kubenswrapper[15493]: W0216 17:02:15.241171 15493 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.241500 master-0 kubenswrapper[15493]: E0216 17:02:15.241281 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.261530 master-0 kubenswrapper[15493]: W0216 17:02:15.261428 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-t46bw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.261530 master-0 kubenswrapper[15493]: E0216 17:02:15.261531 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-t46bw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-t46bw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.281397 master-0 kubenswrapper[15493]: W0216 17:02:15.281181 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcloud-controller-manager-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.281397 master-0 kubenswrapper[15493]: E0216 17:02:15.281287 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cloud-controller-manager-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcloud-controller-manager-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.301209 master-0 kubenswrapper[15493]: W0216 17:02:15.301083 15493 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-6858s": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-6858s&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.301508 master-0 kubenswrapper[15493]: E0216 17:02:15.301227 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-6858s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-6858s&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.320980 master-0 kubenswrapper[15493]: W0216 17:02:15.320816 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.320980 master-0 kubenswrapper[15493]: E0216 17:02:15.320955 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.343008 master-0 kubenswrapper[15493]: W0216 17:02:15.341657 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.343008 master-0 kubenswrapper[15493]: E0216 17:02:15.341799 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.361466 master-0 kubenswrapper[15493]: W0216 17:02:15.361356 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcluster-cloud-controller-manager-dockercfg-lc8g2&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.361667 master-0 kubenswrapper[15493]: E0216 17:02:15.361465 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cluster-cloud-controller-manager-dockercfg-lc8g2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcluster-cloud-controller-manager-dockercfg-lc8g2&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.381354 master-0 kubenswrapper[15493]: W0216 17:02:15.381235 15493 reflector.go:561] object-"openshift-kube-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.381582 master-0 kubenswrapper[15493]: E0216 17:02:15.381357 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.401881 master-0 kubenswrapper[15493]: W0216 17:02:15.401786 15493 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.401881 master-0 kubenswrapper[15493]: E0216 17:02:15.401885 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.420911 master-0 kubenswrapper[15493]: W0216 17:02:15.420816 15493 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.421104 master-0 kubenswrapper[15493]: E0216 17:02:15.420912 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.441264 master-0 kubenswrapper[15493]: W0216 17:02:15.441088 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-kh5s4&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.441264 master-0 kubenswrapper[15493]: E0216 17:02:15.441182 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-kh5s4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-kh5s4&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.460796 master-0 kubenswrapper[15493]: W0216 17:02:15.460708 15493 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-ztpz8&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.460796 master-0 kubenswrapper[15493]: E0216 17:02:15.460792 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-ztpz8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-ztpz8&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.481341 master-0 kubenswrapper[15493]: W0216 17:02:15.481217 15493 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-r5p9m&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.481341 master-0 kubenswrapper[15493]: E0216 17:02:15.481327 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-r5p9m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-r5p9m&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.500003 master-0 kubenswrapper[15493]: I0216 17:02:15.499918 15493 request.go:700] Waited for 4.644812289s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0 Feb 16 17:02:15.501417 master-0 kubenswrapper[15493]: W0216 17:02:15.501294 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.501514 master-0 kubenswrapper[15493]: E0216 17:02:15.501456 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cloud-controller-manager-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.521610 master-0 kubenswrapper[15493]: W0216 17:02:15.521497 15493 reflector.go:561] object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-qlqr4": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-qlqr4&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.521610 master-0 kubenswrapper[15493]: E0216 17:02:15.521598 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"installer-sa-dockercfg-qlqr4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-qlqr4&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.541510 master-0 kubenswrapper[15493]: W0216 17:02:15.541396 15493 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-5lx84&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.541510 master-0 kubenswrapper[15493]: E0216 17:02:15.541491 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-5lx84\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-5lx84&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.561367 master-0 kubenswrapper[15493]: E0216 17:02:15.561279 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.561574 master-0 kubenswrapper[15493]: E0216 17:02:15.561394 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:16.561370154 +0000 UTC m=+15.711543234 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.581480 master-0 kubenswrapper[15493]: W0216 17:02:15.581387 15493 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.581718 master-0 kubenswrapper[15493]: E0216 17:02:15.581510 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.601292 master-0 kubenswrapper[15493]: W0216 17:02:15.601186 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.601292 master-0 kubenswrapper[15493]: E0216 17:02:15.601291 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.622035 master-0 kubenswrapper[15493]: W0216 17:02:15.621943 15493 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.622035 master-0 kubenswrapper[15493]: E0216 17:02:15.622041 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.641157 master-0 kubenswrapper[15493]: W0216 17:02:15.641048 15493 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.641393 master-0 kubenswrapper[15493]: E0216 17:02:15.641168 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.661806 master-0 kubenswrapper[15493]: W0216 17:02:15.661696 15493 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-nslxl&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.662068 master-0 kubenswrapper[15493]: E0216 17:02:15.661808 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-nslxl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-nslxl&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.682087 master-0 kubenswrapper[15493]: W0216 17:02:15.681633 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.682435 master-0 kubenswrapper[15493]: E0216 17:02:15.682393 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.701565 master-0 kubenswrapper[15493]: W0216 17:02:15.701383 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.701565 master-0 kubenswrapper[15493]: E0216 17:02:15.701492 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.721885 master-0 kubenswrapper[15493]: W0216 17:02:15.721790 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.721885 master-0 kubenswrapper[15493]: E0216 17:02:15.721879 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.741395 master-0 kubenswrapper[15493]: W0216 17:02:15.741275 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-wnnb7&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.741617 master-0 kubenswrapper[15493]: E0216 17:02:15.741392 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wnnb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-wnnb7&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.761120 master-0 kubenswrapper[15493]: W0216 17:02:15.760985 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.761120 master-0 kubenswrapper[15493]: E0216 17:02:15.761104 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.780849 master-0 kubenswrapper[15493]: W0216 17:02:15.780749 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.780849 master-0 kubenswrapper[15493]: E0216 17:02:15.780849 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.801407 master-0 kubenswrapper[15493]: W0216 17:02:15.801308 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.801407 master-0 kubenswrapper[15493]: E0216 17:02:15.801398 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:15.820955 master-0 kubenswrapper[15493]: E0216 17:02:15.820884 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.821139 master-0 kubenswrapper[15493]: E0216 17:02:15.821030 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:16.821010925 +0000 UTC m=+15.971184015 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:15.840736 master-0 kubenswrapper[15493]: I0216 17:02:15.840628 15493 status_manager.go:851] "Failed to get status for pod" podUID="d020c902-2adb-4919-8dd9-0c2109830580" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-54984b6678-gp8gv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:16.469676 master-0 kubenswrapper[15493]: I0216 17:02:16.469498 15493 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26" exitCode=1 Feb 16 17:02:16.500311 master-0 kubenswrapper[15493]: I0216 17:02:16.500222 15493 request.go:700] Waited for 3.229636359s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&limit=500&resourceVersion=0 Feb 16 17:02:16.501460 master-0 kubenswrapper[15493]: W0216 17:02:16.501338 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:16.501538 master-0 kubenswrapper[15493]: E0216 17:02:16.501486 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:16.581640 master-0 kubenswrapper[15493]: W0216 17:02:16.581520 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:16.581640 master-0 kubenswrapper[15493]: E0216 17:02:16.581640 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0\": dial tcp 
Feb 16 17:02:16.581640 master-0 kubenswrapper[15493]: E0216 17:02:16.581640 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:16.600973 master-0 kubenswrapper[15493]: W0216 17:02:16.600818 15493 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:16.601225 master-0 kubenswrapper[15493]: E0216 17:02:16.600978 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:16.610790 master-0 kubenswrapper[15493]: I0216 17:02:16.610705 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:02:16.723207 master-0 kubenswrapper[15493]: W0216 17:02:16.722969 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:16.723207 master-0 kubenswrapper[15493]: E0216 17:02:16.723081 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:16.862517 master-0 kubenswrapper[15493]: E0216 17:02:16.862392 15493 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:16.862517 master-0 kubenswrapper[15493]: E0216 17:02:16.862458 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:16.862883 master-0 kubenswrapper[15493]: E0216 17:02:16.862547 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.862521267 +0000 UTC m=+17.012694367 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:16.882130 master-0 kubenswrapper[15493]: E0216 17:02:16.882049 15493 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:16.882130 master-0 kubenswrapper[15493]: E0216 17:02:16.882100 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:16.882130 master-0 kubenswrapper[15493]: E0216 17:02:16.882202 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.882144266 +0000 UTC m=+17.032317336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:16.901369 master-0 kubenswrapper[15493]: E0216 17:02:16.901285 15493 projected.go:288] Couldn't get configMap openshift-cluster-version/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:16.901369 master-0 kubenswrapper[15493]: E0216 17:02:16.901380 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:16.901646 master-0 kubenswrapper[15493]: E0216 17:02:16.901453 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.901430456 +0000 UTC m=+17.051603516 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:16.920564 master-0 kubenswrapper[15493]: E0216 17:02:16.920464 15493 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:16.920564 master-0 kubenswrapper[15493]: E0216 17:02:16.920529 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:16.920918 master-0 kubenswrapper[15493]: E0216 17:02:16.920601 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.920578003 +0000 UTC m=+17.070751073 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:16.921766 master-0 kubenswrapper[15493]: I0216 17:02:16.921663 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:02:16.940904 master-0 kubenswrapper[15493]: E0216 17:02:16.940805 15493 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:16.940904 master-0 kubenswrapper[15493]: E0216 17:02:16.940858 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
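Each projected.go:194 failure above bundles two sub-errors: the bound-token fetch (a POST to the ServiceAccount's token subresource) and the kube-root-ca.crt ConfigMap cache sync, both blocked by the same refused connection. A sketch of the token fetch the "failed to fetch token: Post .../token" messages refer to, assuming client-go; the namespace and ServiceAccount mirror the ingress-operator example from the log, and the kubeconfig path and lifetime are illustrative:

```go
// Sketch of the TokenRequest the kubelet issues for a projected
// serviceAccountToken volume. Not the kubelet's actual code path.
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	exp := int64(3600) // requested bound-token lifetime (illustrative)
	tr, err := cs.CoreV1().ServiceAccounts("openshift-ingress-operator").CreateToken(
		context.TODO(), "ingress-operator",
		&authenticationv1.TokenRequest{
			Spec: authenticationv1.TokenRequestSpec{ExpirationSeconds: &exp},
		}, metav1.CreateOptions{})
	if err != nil {
		// Surfaces exactly as in the journal:
		// failed to fetch token: Post ".../serviceaccounts/ingress-operator/token": ...
		fmt.Println("token request failed:", err)
		return
	}
	fmt.Println("token expires:", tr.Status.ExpirationTimestamp)
}
```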
Feb 16 17:02:16.940904 master-0 kubenswrapper[15493]: E0216 17:02:16.940950 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access podName:5d39ed24-4301-4cea-8a42-a08f4ba8b479 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:17.940906731 +0000 UTC m=+17.091079801 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access") pod "installer-2-master-0" (UID: "5d39ed24-4301-4cea-8a42-a08f4ba8b479") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:16.941414 master-0 kubenswrapper[15493]: W0216 17:02:16.941099 15493 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:16.941414 master-0 kubenswrapper[15493]: E0216 17:02:16.941171 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:16.962104 master-0 kubenswrapper[15493]: E0216 17:02:16.961905 15493 projected.go:288] Couldn't get configMap openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:16.982037 master-0 kubenswrapper[15493]: E0216 17:02:16.981770 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.002315 master-0 kubenswrapper[15493]: E0216 17:02:17.002237 15493 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.022031 master-0 kubenswrapper[15493]: E0216 17:02:17.021970 15493 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.041656 master-0 kubenswrapper[15493]: E0216 17:02:17.041576 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.061431 master-0 kubenswrapper[15493]: E0216 17:02:17.061353 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.061431 master-0 kubenswrapper[15493]: W0216 17:02:17.061392 15493 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.061713 master-0 kubenswrapper[15493]: E0216 17:02:17.061467 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.082160 master-0 kubenswrapper[15493]: E0216 17:02:17.082080 15493 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.082160 master-0 kubenswrapper[15493]: E0216 17:02:17.082132 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-2-master-0: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:17.082421 master-0 kubenswrapper[15493]: E0216 17:02:17.082196 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access podName:b1b4fccc-6bf6-47ac-8ae1-32cad23734da nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.08218051 +0000 UTC m=+17.232353580 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access") pod "installer-2-master-0" (UID: "b1b4fccc-6bf6-47ac-8ae1-32cad23734da") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:17.107362 master-0 kubenswrapper[15493]: E0216 17:02:17.102413 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.122518 master-0 kubenswrapper[15493]: E0216 17:02:17.122443 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.130908 master-0 kubenswrapper[15493]: I0216 17:02:17.130774 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:02:17.131498 master-0 kubenswrapper[15493]: I0216 17:02:17.131389 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
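The nestedpendingoperations.go:348 entries show the volume manager's per-operation retry gate: each failed mount records a "No retries permitted until" deadline (starting at durationBeforeRetry 1s) keyed by volume, and further attempts inside that window are rejected outright. A plain-Go sketch of that gating pattern, not the kubelet's actual implementation; all names below are illustrative:

```go
// Illustrative backoff gate in the spirit of nestedpendingoperations.
package main

import (
	"errors"
	"fmt"
	"time"
)

type opState struct {
	lastErr   error
	notBefore time.Time     // "No retries permitted until ..."
	backoff   time.Duration // grows on each consecutive failure
}

type pendingOps struct{ ops map[string]*opState }

func (p *pendingOps) run(key string, op func() error) error {
	st, ok := p.ops[key]
	if !ok {
		st = &opState{backoff: time.Second} // durationBeforeRetry starts at 1s
		p.ops[key] = st
	}
	if time.Now().Before(st.notBefore) {
		return fmt.Errorf("no retries permitted until %s: %w", st.notBefore, st.lastErr)
	}
	if err := op(); err != nil {
		st.lastErr = err
		st.notBefore = time.Now().Add(st.backoff)
		st.backoff *= 2 // exponential growth between attempts (assumption)
		return err
	}
	delete(p.ops, key) // success clears the pending operation
	return nil
}

func main() {
	p := &pendingOps{ops: map[string]*opState{}}
	mount := func() error {
		return errors.New("failed to fetch token: dial tcp 192.168.32.10:6443: connect: connection refused")
	}
	key := "kubernetes.io/projected/<pod-uid>-kube-api-access" // volume key, as in the log
	fmt.Println(p.run(key, mount)) // first attempt fails, arms the 1s window
	fmt.Println(p.run(key, mount)) // second attempt inside the window is gated
}
```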
Feb 16 17:02:17.141651 master-0 kubenswrapper[15493]: E0216 17:02:17.141566 15493 projected.go:288] Couldn't get configMap openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.161353 master-0 kubenswrapper[15493]: E0216 17:02:17.161264 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.181133 master-0 kubenswrapper[15493]: W0216 17:02:17.181028 15493 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.181331 master-0 kubenswrapper[15493]: E0216 17:02:17.181134 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.182278 master-0 kubenswrapper[15493]: E0216 17:02:17.182208 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.202293 master-0 kubenswrapper[15493]: E0216 17:02:17.202215 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.222608 master-0 kubenswrapper[15493]: E0216 17:02:17.221944 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.241712 master-0 kubenswrapper[15493]: E0216 17:02:17.241607 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.261377 master-0 kubenswrapper[15493]: E0216 17:02:17.261297 15493 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.281231 master-0 kubenswrapper[15493]: W0216 17:02:17.281050 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.281231 master-0 kubenswrapper[15493]: E0216 17:02:17.281156 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.281975 master-0 kubenswrapper[15493]: E0216 17:02:17.281903 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.301080 master-0 kubenswrapper[15493]: W0216 17:02:17.300992 15493 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.301080 master-0 kubenswrapper[15493]: E0216 17:02:17.301069 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.301666 master-0 kubenswrapper[15493]: E0216 17:02:17.301622 15493 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.321215 master-0 kubenswrapper[15493]: E0216 17:02:17.321069 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.321667 master-0 kubenswrapper[15493]: W0216 17:02:17.321559 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.321667 master-0 kubenswrapper[15493]: E0216 17:02:17.321656 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.374600 master-0 kubenswrapper[15493]: E0216 17:02:17.374421 15493 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.374600 master-0 kubenswrapper[15493]: E0216 17:02:17.374551 15493 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.375253 master-0 kubenswrapper[15493]: W0216 17:02:17.375204 15493 reflector.go:561] object-"openshift-multus"/"whereabouts-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dwhereabouts-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.375316 master-0 kubenswrapper[15493]: W0216 17:02:17.375215 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.375316 master-0 kubenswrapper[15493]: E0216 17:02:17.375270 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"whereabouts-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dwhereabouts-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.375316 master-0 kubenswrapper[15493]: E0216 17:02:17.375298 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.380513 master-0 kubenswrapper[15493]: W0216 17:02:17.380452 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.380513 master-0 kubenswrapper[15493]: E0216 17:02:17.380508 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.382787 master-0 kubenswrapper[15493]: E0216 17:02:17.382731 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.401056 master-0 kubenswrapper[15493]: W0216 17:02:17.400998 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dperformance-addon-operator-webhook-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.401117 master-0 kubenswrapper[15493]: E0216 17:02:17.401058 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"performance-addon-operator-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dperformance-addon-operator-webhook-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.402240 master-0 kubenswrapper[15493]: E0216 17:02:17.402202 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.421559 master-0 kubenswrapper[15493]: W0216 17:02:17.421452 15493 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.421720 master-0 kubenswrapper[15493]: E0216 17:02:17.421555 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.422530 master-0 kubenswrapper[15493]: E0216 17:02:17.422488 15493 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.441429 master-0 kubenswrapper[15493]: W0216 17:02:17.441336 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.441429 master-0 kubenswrapper[15493]: E0216 17:02:17.441424 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.441568 master-0 kubenswrapper[15493]: E0216 17:02:17.441535 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.461437 master-0 kubenswrapper[15493]: E0216 17:02:17.461370 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.461636 master-0 kubenswrapper[15493]: W0216 17:02:17.461535 15493 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.461685 master-0 kubenswrapper[15493]: E0216 17:02:17.461634 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.481541 master-0 kubenswrapper[15493]: W0216 17:02:17.481457 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.481541 master-0 kubenswrapper[15493]: E0216 17:02:17.481540 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.482226 master-0 kubenswrapper[15493]: E0216 17:02:17.482168 15493 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.500387 master-0 kubenswrapper[15493]: I0216 17:02:17.500300 15493 request.go:700] Waited for 3.267285985s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Feb 16 17:02:17.501339 master-0 kubenswrapper[15493]: W0216 17:02:17.501227 15493 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.501522 master-0 kubenswrapper[15493]: E0216 17:02:17.501342 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.521327 master-0 kubenswrapper[15493]: E0216 17:02:17.521230 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
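The request.go:700 "Waited ... due to client-side throttling, not priority and fairness" entries come from client-go's token-bucket rate limiter, sized by the QPS and Burst fields of rest.Config; with dozens of reflectors relisting simultaneously, requests queue in the limiter for seconds and client-go logs the wait. A sketch of where that limiter is configured, assuming client-go; the numbers are illustrative, not the kubelet's settings:

```go
// Sketch: the client-side limiter behind the "Waited for ...s" messages.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 5    // steady-state client-side request rate (illustrative)
	cfg.Burst = 10 // short-term burst allowance (illustrative)
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Every request issued through cs blocks in the token bucket first;
	// once a wait grows long enough, client-go logs it via request.go.
	_ = cs
	fmt.Println("clientset ready")
}
```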
Feb 16 17:02:17.521830 master-0 kubenswrapper[15493]: W0216 17:02:17.521353 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.521830 master-0 kubenswrapper[15493]: E0216 17:02:17.521651 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.540793 master-0 kubenswrapper[15493]: W0216 17:02:17.540693 15493 reflector.go:561] object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.540793 master-0 kubenswrapper[15493]: E0216 17:02:17.540764 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.542031 master-0 kubenswrapper[15493]: E0216 17:02:17.541985 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.560903 master-0 kubenswrapper[15493]: W0216 17:02:17.560801 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.560903 master-0 kubenswrapper[15493]: E0216 17:02:17.560875 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.562004 master-0 kubenswrapper[15493]: E0216 17:02:17.561963 15493 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.581247 master-0 kubenswrapper[15493]: W0216 17:02:17.581131 15493 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.581247 master-0 kubenswrapper[15493]: E0216 17:02:17.581235 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.601154 master-0 kubenswrapper[15493]: W0216 17:02:17.601024 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.602101 master-0 kubenswrapper[15493]: E0216 17:02:17.602003 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.621167 master-0 kubenswrapper[15493]: W0216 17:02:17.621103 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.621167 master-0 kubenswrapper[15493]: E0216 17:02:17.621151 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.622528 master-0 kubenswrapper[15493]: E0216 17:02:17.622285 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.640578 master-0 kubenswrapper[15493]: W0216 17:02:17.640471 15493 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.640578 master-0 kubenswrapper[15493]: E0216 17:02:17.640544 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.641630 master-0 kubenswrapper[15493]: E0216 17:02:17.641596 15493 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.660764 master-0 kubenswrapper[15493]: W0216 17:02:17.660693 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.660764 master-0 kubenswrapper[15493]: E0216 17:02:17.660767 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.661757 master-0 kubenswrapper[15493]: E0216 17:02:17.661731 15493 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.681349 master-0 kubenswrapper[15493]: W0216 17:02:17.681122 15493 reflector.go:561] object-"openshift-monitoring"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.681349 master-0 kubenswrapper[15493]: E0216 17:02:17.681260 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.682328 master-0 kubenswrapper[15493]: E0216 17:02:17.682278 15493 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.701572 master-0 kubenswrapper[15493]: E0216 17:02:17.701503 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.701852 master-0 kubenswrapper[15493]: W0216 17:02:17.701577 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.701852 master-0 kubenswrapper[15493]: E0216 17:02:17.701653 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.721072 master-0 kubenswrapper[15493]: W0216 17:02:17.720977 15493 reflector.go:561] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.721146 master-0 kubenswrapper[15493]: E0216 17:02:17.721082 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.741147 master-0 kubenswrapper[15493]: W0216 17:02:17.741046 15493 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.741147 master-0 kubenswrapper[15493]: E0216 17:02:17.741139 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.742555 master-0 kubenswrapper[15493]: E0216 17:02:17.742474 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.761045 master-0 kubenswrapper[15493]: W0216 17:02:17.760878 15493 reflector.go:561] object-"openshift-monitoring"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.761195 master-0 kubenswrapper[15493]: E0216 17:02:17.760987 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.762098 master-0 kubenswrapper[15493]: E0216 17:02:17.762050 15493 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.781438 master-0 kubenswrapper[15493]: E0216 17:02:17.781343 15493 projected.go:288] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.782515 master-0 kubenswrapper[15493]: W0216 17:02:17.782337 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.782515 master-0 kubenswrapper[15493]: E0216 17:02:17.782438 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.801686 master-0 kubenswrapper[15493]: W0216 17:02:17.801558 15493 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.801960 master-0 kubenswrapper[15493]: E0216 17:02:17.801689 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.801960 master-0 kubenswrapper[15493]: E0216 17:02:17.801799 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.821260 master-0 kubenswrapper[15493]: E0216 17:02:17.821182 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.821411 master-0 kubenswrapper[15493]: W0216 17:02:17.821282 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.821411 master-0 kubenswrapper[15493]: E0216 17:02:17.821376 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.841710 master-0 kubenswrapper[15493]: W0216 17:02:17.841561 15493 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.841710 master-0 kubenswrapper[15493]: E0216 17:02:17.841681 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.842279 master-0 kubenswrapper[15493]: E0216 17:02:17.842177 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.861028 master-0 kubenswrapper[15493]: W0216 17:02:17.860943 15493 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.861361 master-0 kubenswrapper[15493]: E0216 17:02:17.861328 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.862042 master-0 kubenswrapper[15493]: E0216 17:02:17.861988 15493 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.881022 master-0 kubenswrapper[15493]: E0216 17:02:17.880904 15493 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.881178 master-0 kubenswrapper[15493]: W0216 17:02:17.881056 15493 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.881310 master-0 kubenswrapper[15493]: E0216 17:02:17.881163 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.891816 master-0 kubenswrapper[15493]: I0216 17:02:17.891728 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:02:17.892146 master-0 kubenswrapper[15493]: I0216 17:02:17.892095 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:02:17.901443 master-0 kubenswrapper[15493]: W0216 17:02:17.901305 15493 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.901443 master-0 kubenswrapper[15493]: E0216 17:02:17.901412 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.902270 master-0 kubenswrapper[15493]: E0216 17:02:17.902178 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.921125 master-0 kubenswrapper[15493]: W0216 17:02:17.920970 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.921407 master-0 kubenswrapper[15493]: E0216 17:02:17.921120 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.921407 master-0 kubenswrapper[15493]: E0216 17:02:17.921214 15493 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/bootstrap-kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:02:17.941433 master-0 kubenswrapper[15493]: I0216 17:02:17.941378 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:17.960892 master-0 kubenswrapper[15493]: E0216 17:02:17.960790 15493 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:17.961558 master-0 kubenswrapper[15493]: E0216 17:02:17.961476 15493 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 17:02:17.962687 master-0 kubenswrapper[15493]: E0216 17:02:17.962635 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:17.962687 master-0 kubenswrapper[15493]: E0216 17:02:17.962681 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zt8mt for pod openshift-network-operator/network-operator-6fcf4c966-6bmf9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:17.962838 master-0 kubenswrapper[15493]: E0216 17:02:17.962804 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.962760093 +0000 UTC m=+18.112933203 (durationBeforeRetry 1s). 
Feb 16 17:02:17.980831 master-0 kubenswrapper[15493]: E0216 17:02:17.980748 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.981406 master-0 kubenswrapper[15493]: W0216 17:02:17.981271 15493 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:17.981489 master-0 kubenswrapper[15493]: E0216 17:02:17.981430 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:17.982027 master-0 kubenswrapper[15493]: E0216 17:02:17.981990 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:17.982070 master-0 kubenswrapper[15493]: E0216 17:02:17.982046 15493 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:17.982205 master-0 kubenswrapper[15493]: E0216 17:02:17.982174 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:18.982137616 +0000 UTC m=+18.132310736 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:17.993443 master-0 kubenswrapper[15493]: I0216 17:02:17.993356 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:02:17.993778 master-0 kubenswrapper[15493]: I0216 17:02:17.993732 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:02:17.994222 master-0 kubenswrapper[15493]: I0216 17:02:17.994169 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 17:02:18.000786 master-0 kubenswrapper[15493]: W0216 17:02:18.000700 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.000898 master-0 kubenswrapper[15493]: E0216 17:02:18.000802 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.001870 master-0 kubenswrapper[15493]: E0216 17:02:18.001831 15493 projected.go:288] Couldn't get configMap openshift-monitoring/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.003067 master-0 kubenswrapper[15493]: E0216 17:02:18.003020 15493 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.003067 master-0 kubenswrapper[15493]: E0216 17:02:18.003054 15493 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.003174 master-0 kubenswrapper[15493]: E0216 17:02:18.003118 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.003099801 +0000 UTC m=+18.153272881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.021217 master-0 kubenswrapper[15493]: E0216 17:02:18.021122 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.021387 master-0 kubenswrapper[15493]: W0216 17:02:18.021260 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.021490 master-0 kubenswrapper[15493]: E0216 17:02:18.021419 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.022329 master-0 kubenswrapper[15493]: E0216 17:02:18.022294 15493 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.022329 master-0 kubenswrapper[15493]: E0216 17:02:18.022328 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.022467 master-0 kubenswrapper[15493]: E0216 17:02:18.022388 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.022371101 +0000 UTC m=+18.172544181 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.040782 master-0 kubenswrapper[15493]: W0216 17:02:18.040648 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.040782 master-0 kubenswrapper[15493]: E0216 17:02:18.040748 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.041817 master-0 kubenswrapper[15493]: E0216 17:02:18.041774 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.041817 master-0 kubenswrapper[15493]: E0216 17:02:18.041806 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8r28x for pod openshift-multus/multus-6r7wj: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.041817 master-0 kubenswrapper[15493]: E0216 17:02:18.041818 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.042044 master-0 kubenswrapper[15493]: E0216 17:02:18.041884 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.041856887 +0000 UTC m=+18.192029987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8r28x" (UniqueName: "kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.061513 master-0 kubenswrapper[15493]: E0216 17:02:18.061440 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.061513 master-0 kubenswrapper[15493]: E0216 17:02:18.061472 15493 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.061718 master-0 kubenswrapper[15493]: E0216 17:02:18.061522 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.061507967 +0000 UTC m=+18.211681127 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.061971 master-0 kubenswrapper[15493]: W0216 17:02:18.061870 15493 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.061971 master-0 kubenswrapper[15493]: E0216 17:02:18.061941 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.081100 master-0 kubenswrapper[15493]: W0216 17:02:18.080909 15493 reflector.go:561] object-"openshift-dns-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.081213 master-0 kubenswrapper[15493]: E0216 17:02:18.081112 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.081264 master-0 kubenswrapper[15493]: E0216 17:02:18.081211 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.099589 master-0 kubenswrapper[15493]: I0216 17:02:18.099516 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0"
Feb 16 17:02:18.101020 master-0 kubenswrapper[15493]: W0216 17:02:18.100908 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dnode-tuning-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.101112 master-0 kubenswrapper[15493]: E0216 17:02:18.101035 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"node-tuning-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dnode-tuning-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.101355 master-0 kubenswrapper[15493]: E0216 17:02:18.101323 15493 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.103271 master-0 kubenswrapper[15493]: E0216 17:02:18.103201 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.103271 master-0 kubenswrapper[15493]: E0216 17:02:18.103254 15493 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.103436 master-0 kubenswrapper[15493]: E0216 17:02:18.103347 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.103317363 +0000 UTC m=+18.253490453 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.121267 master-0 kubenswrapper[15493]: E0216 17:02:18.121194 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.121267 master-0 kubenswrapper[15493]: W0216 17:02:18.121223 15493 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.121491 master-0 kubenswrapper[15493]: E0216 17:02:18.121337 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.123599 master-0 kubenswrapper[15493]: E0216 17:02:18.123557 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.123708 master-0 kubenswrapper[15493]: E0216 17:02:18.123607 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.123766 master-0 kubenswrapper[15493]: E0216 17:02:18.123716 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.123685692 +0000 UTC m=+18.273858802 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.131140 master-0 kubenswrapper[15493]: E0216 17:02:18.131075 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.132304 master-0 kubenswrapper[15493]: E0216 17:02:18.132257 15493 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.141256 master-0 kubenswrapper[15493]: W0216 17:02:18.141165 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.141378 master-0 kubenswrapper[15493]: E0216 17:02:18.141274 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.141729 master-0 kubenswrapper[15493]: E0216 17:02:18.141658 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.141820 master-0 kubenswrapper[15493]: E0216 17:02:18.141763 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.141820 master-0 kubenswrapper[15493]: E0216 17:02:18.141794 15493 projected.go:194] Error preparing data for projected volume kube-api-access-q46jg for pod openshift-network-operator/iptables-alerter-czzz2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.141983 master-0 kubenswrapper[15493]: E0216 17:02:18.141874 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.141849123 +0000 UTC m=+18.292022213 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-q46jg" (UniqueName: "kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.161241 master-0 kubenswrapper[15493]: W0216 17:02:18.161134 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.161479 master-0 kubenswrapper[15493]: E0216 17:02:18.161245 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.161479 master-0 kubenswrapper[15493]: E0216 17:02:18.161311 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.162285 master-0 kubenswrapper[15493]: E0216 17:02:18.162253 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.162285 master-0 kubenswrapper[15493]: E0216 17:02:18.162285 15493 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.162506 master-0 kubenswrapper[15493]: E0216 17:02:18.162369 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.162343865 +0000 UTC m=+18.312516975 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.172246 master-0 kubenswrapper[15493]: E0216 17:02:18.172168 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Feb 16 17:02:18.181842 master-0 kubenswrapper[15493]: W0216 17:02:18.181640 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.181842 master-0 kubenswrapper[15493]: E0216 17:02:18.181761 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.182804 master-0 kubenswrapper[15493]: E0216 17:02:18.182742 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.182804 master-0 kubenswrapper[15493]: E0216 17:02:18.182783 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xmk2b for pod openshift-multus/multus-admission-controller-7c64d55f8-4jz2t: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.183142 master-0 kubenswrapper[15493]: E0216 17:02:18.182872 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.182844678 +0000 UTC m=+18.333017758 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xmk2b" (UniqueName: "kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
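The retry deadlines above also carry the kubelet's monotonic clock: wall-clock deadline minus the m=+ offset is the same instant in every entry (17:02:19.003099801 - 18.153272881 and 17:02:19.182844678 - 18.333017758 both give about 17:02:00.8498), i.e. this kubelet process has been up for roughly 18 seconds while the apiserver stays unreachable. A small cross-check sketch; the two sample strings are copied verbatim from entries above:

import re
from datetime import datetime, timedelta

SAMPLES = [
    "2026-02-16 17:02:19.003099801 +0000 UTC m=+18.153272881",
    "2026-02-16 17:02:19.182844678 +0000 UTC m=+18.333017758",
]

PAT = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.(\d+) \+0000 UTC m=\+(\d+\.\d+)")

for s in SAMPLES:
    m = PAT.search(s)
    wall = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
    # Go prints nanoseconds; datetime only holds microseconds.
    wall += timedelta(microseconds=int(m.group(2)[:6]))
    start = wall - timedelta(seconds=float(m.group(3)))
    print(s, "->", start.time())  # both print 17:02:00.849826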
Feb 16 17:02:18.201099 master-0 kubenswrapper[15493]: E0216 17:02:18.201021 15493 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.201282 master-0 kubenswrapper[15493]: W0216 17:02:18.201123 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.201282 master-0 kubenswrapper[15493]: E0216 17:02:18.201215 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.203343 master-0 kubenswrapper[15493]: E0216 17:02:18.203301 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.203343 master-0 kubenswrapper[15493]: E0216 17:02:18.203334 15493 projected.go:194] Error preparing data for projected volume kube-api-access-sx92x for pod openshift-machine-config-operator/machine-config-daemon-98q6v: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.203554 master-0 kubenswrapper[15493]: E0216 17:02:18.203518 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.203489574 +0000 UTC m=+18.353662664 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sx92x" (UniqueName: "kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.220975 master-0 kubenswrapper[15493]: E0216 17:02:18.220877 15493 projected.go:288] Couldn't get configMap openshift-network-node-identity/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.220975 master-0 kubenswrapper[15493]: W0216 17:02:18.220848 15493 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.221203 master-0 kubenswrapper[15493]: E0216 17:02:18.220988 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.222064 master-0 kubenswrapper[15493]: E0216 17:02:18.222033 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.222064 master-0 kubenswrapper[15493]: E0216 17:02:18.222058 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.222188 master-0 kubenswrapper[15493]: E0216 17:02:18.222124 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.222107087 +0000 UTC m=+18.372280147 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.240800 master-0 kubenswrapper[15493]: E0216 17:02:18.240701 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.241373 master-0 kubenswrapper[15493]: W0216 17:02:18.241303 15493 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.241425 master-0 kubenswrapper[15493]: E0216 17:02:18.241381 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.242443 master-0 kubenswrapper[15493]: E0216 17:02:18.242410 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.242482 master-0 kubenswrapper[15493]: E0216 17:02:18.242447 15493 projected.go:194] Error preparing data for projected volume kube-api-access-fkwxl for pod openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-control-plane/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.242549 master-0 kubenswrapper[15493]: E0216 17:02:18.242525 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.242505127 +0000 UTC m=+18.392678207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fkwxl" (UniqueName: "kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-control-plane/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.261469 master-0 kubenswrapper[15493]: E0216 17:02:18.261394 15493 projected.go:288] Couldn't get configMap openshift-dns/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.261469 master-0 kubenswrapper[15493]: E0216 17:02:18.261422 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.261469 master-0 kubenswrapper[15493]: E0216 17:02:18.261443 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zl5w2 for pod openshift-dns/dns-default-qcgxx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.261773 master-0 kubenswrapper[15493]: W0216 17:02:18.261454 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.261773 master-0 kubenswrapper[15493]: E0216 17:02:18.261522 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2 podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.261498809 +0000 UTC m=+18.411671889 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zl5w2" (UniqueName: "kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.261773 master-0 kubenswrapper[15493]: E0216 17:02:18.261541 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.280730 master-0 kubenswrapper[15493]: W0216 17:02:18.280572 15493 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.280730 master-0 kubenswrapper[15493]: E0216 17:02:18.280652 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.282454 master-0 kubenswrapper[15493]: E0216 17:02:18.282415 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.282454 master-0 kubenswrapper[15493]: E0216 17:02:18.282450 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.282554 master-0 kubenswrapper[15493]: E0216 17:02:18.282517 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.282499105 +0000 UTC m=+18.432672185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.301068 master-0 kubenswrapper[15493]: W0216 17:02:18.300979 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.301256 master-0 kubenswrapper[15493]: E0216 17:02:18.301077 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.302271 master-0 kubenswrapper[15493]: E0216 17:02:18.302234 15493 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.302271 master-0 kubenswrapper[15493]: E0216 17:02:18.302269 15493 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.302362 master-0 kubenswrapper[15493]: E0216 17:02:18.302347 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.30232602 +0000 UTC m=+18.452499090 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.321308 master-0 kubenswrapper[15493]: E0216 17:02:18.321255 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.321308 master-0 kubenswrapper[15493]: E0216 17:02:18.321293 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.321498 master-0 kubenswrapper[15493]: E0216 17:02:18.321353 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.321338133 +0000 UTC m=+18.471511273 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.321498 master-0 kubenswrapper[15493]: W0216 17:02:18.321374 15493 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.321498 master-0 kubenswrapper[15493]: E0216 17:02:18.321469 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.341582 master-0 kubenswrapper[15493]: W0216 17:02:18.341489 15493 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.341582 master-0 kubenswrapper[15493]: E0216 17:02:18.341575 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.361727 master-0 kubenswrapper[15493]: W0216 17:02:18.361615 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.361817 master-0 kubenswrapper[15493]: E0216 17:02:18.361732 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.375505 master-0 kubenswrapper[15493]: E0216 17:02:18.375427 15493 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.375505 master-0 kubenswrapper[15493]: E0216 17:02:18.375502 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.375709 master-0 kubenswrapper[15493]: E0216 17:02:18.375441 15493 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.375709 master-0 kubenswrapper[15493]: E0216 17:02:18.375607 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.375709 master-0 kubenswrapper[15493]: E0216 17:02:18.375590 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.375567988 +0000 UTC m=+18.525741058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.375709 master-0 kubenswrapper[15493]: E0216 17:02:18.375677 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.37565894 +0000 UTC m=+18.525832020 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.381744 master-0 kubenswrapper[15493]: W0216 17:02:18.381630 15493 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.381744 master-0 kubenswrapper[15493]: E0216 17:02:18.381736 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.383281 master-0 kubenswrapper[15493]: E0216 17:02:18.383243 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.383281 master-0 kubenswrapper[15493]: E0216 17:02:18.383283 15493 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.383485 master-0 kubenswrapper[15493]: E0216 17:02:18.383360 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht
podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.383337624 +0000 UTC m=+18.533510734 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.401400 master-0 kubenswrapper[15493]: W0216 17:02:18.401298 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.401400 master-0 kubenswrapper[15493]: E0216 17:02:18.401387 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.402353 master-0 kubenswrapper[15493]: E0216 17:02:18.402301 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.402353 master-0 kubenswrapper[15493]: E0216 17:02:18.402335 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.402553 master-0 kubenswrapper[15493]: E0216 17:02:18.402396 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.402374487 +0000 UTC m=+18.552547567 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.422016 master-0 kubenswrapper[15493]: W0216 17:02:18.421863 15493 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.422016 master-0 kubenswrapper[15493]: E0216 17:02:18.421997 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.422825 master-0 kubenswrapper[15493]: E0216 17:02:18.422765 15493 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.422825 master-0 kubenswrapper[15493]: E0216 17:02:18.422820 15493 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.423045 master-0 kubenswrapper[15493]: E0216 17:02:18.423013 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.4228979 +0000 UTC m=+18.573071010 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.441245 master-0 kubenswrapper[15493]: W0216 17:02:18.441144 15493 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.441359 master-0 kubenswrapper[15493]: E0216 17:02:18.441244 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.442323 master-0 kubenswrapper[15493]: E0216 17:02:18.442257 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.442323 master-0 kubenswrapper[15493]: E0216 17:02:18.442319 15493 projected.go:194] Error preparing data for projected volume kube-api-access-bnnc5 for pod openshift-multus/network-metrics-daemon-279g6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.442468 master-0 kubenswrapper[15493]: E0216 17:02:18.442411 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5 podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.442388406 +0000 UTC m=+18.592561526 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bnnc5" (UniqueName: "kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.461358 master-0 kubenswrapper[15493]: W0216 17:02:18.461264 15493 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.461358 master-0 kubenswrapper[15493]: E0216 17:02:18.461345 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.461542 master-0 kubenswrapper[15493]: E0216 17:02:18.461507 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.461542 master-0 kubenswrapper[15493]: E0216 17:02:18.461533 15493 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.461676 master-0 kubenswrapper[15493]: E0216 17:02:18.461609 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.461589444 +0000 UTC m=+18.611762544 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.481103 master-0 kubenswrapper[15493]: W0216 17:02:18.480965 15493 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.481252 master-0 kubenswrapper[15493]: E0216 17:02:18.481105 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.483408 master-0 kubenswrapper[15493]: E0216 17:02:18.483357 15493 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.484077 master-0 kubenswrapper[15493]: E0216 17:02:18.483430 15493 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.484077 master-0 kubenswrapper[15493]: E0216 17:02:18.483502 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.483479094 +0000 UTC m=+18.633652204 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.500518 master-0 kubenswrapper[15493]: I0216 17:02:18.500444 15493 request.go:700] Waited for 3.424227538s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 17:02:18.501578 master-0 kubenswrapper[15493]: W0216 17:02:18.501458 15493 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.501578 master-0 kubenswrapper[15493]: E0216 17:02:18.501559 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.521632 master-0 kubenswrapper[15493]: E0216 17:02:18.521542 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.521632 master-0 kubenswrapper[15493]: W0216 17:02:18.521554 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.521632 master-0 kubenswrapper[15493]: E0216 17:02:18.521615 15493 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.521632 master-0 kubenswrapper[15493]: E0216 17:02:18.521636 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.522146 master-0 kubenswrapper[15493]: E0216 17:02:18.521697 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.521666984 +0000 UTC m=+18.671840064 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.541637 master-0 kubenswrapper[15493]: W0216 17:02:18.541426 15493 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.541637 master-0 kubenswrapper[15493]: E0216 17:02:18.541555 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.542432 master-0 kubenswrapper[15493]: E0216 17:02:18.542387 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.542432 master-0 kubenswrapper[15493]: E0216 17:02:18.542425 15493 projected.go:194] Error preparing data for projected volume kube-api-access-6ftld for pod openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.542548 master-0 kubenswrapper[15493]: E0216 17:02:18.542510 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.542484325 +0000 UTC m=+18.692657405 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6ftld" (UniqueName: "kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.561250 master-0 kubenswrapper[15493]: W0216 17:02:18.561158 15493 reflector.go:561] object-"openshift-dns-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.561379 master-0 kubenswrapper[15493]: E0216 17:02:18.561244 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.562321 master-0 kubenswrapper[15493]: E0216 17:02:18.562270 15493 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.562321 master-0 kubenswrapper[15493]: E0216 17:02:18.562318 15493 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.562437 master-0 kubenswrapper[15493]: E0216 17:02:18.562404 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.562380992 +0000 UTC m=+18.712554102 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.581799 master-0 kubenswrapper[15493]: W0216 17:02:18.581668 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.581799 master-0 kubenswrapper[15493]: E0216 17:02:18.581761 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.601768 master-0 kubenswrapper[15493]: W0216 17:02:18.601642 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.601768 master-0 kubenswrapper[15493]: E0216 17:02:18.601754 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.621530 master-0 kubenswrapper[15493]: W0216 17:02:18.621405 15493 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.621759 master-0 kubenswrapper[15493]: E0216 17:02:18.621528 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.623679 master-0 kubenswrapper[15493]: E0216 17:02:18.623619 15493 projected.go:288] Couldn't get configMap 
openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.623679 master-0 kubenswrapper[15493]: E0216 17:02:18.623671 15493 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.623836 master-0 kubenswrapper[15493]: E0216 17:02:18.623756 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.623735375 +0000 UTC m=+18.773908445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.641070 master-0 kubenswrapper[15493]: W0216 17:02:18.640965 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.641225 master-0 kubenswrapper[15493]: E0216 17:02:18.641078 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.642140 master-0 kubenswrapper[15493]: E0216 17:02:18.642092 15493 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.642140 master-0 kubenswrapper[15493]: E0216 17:02:18.642131 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/serviceaccounts/cluster-olm-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.642279 master-0 
kubenswrapper[15493]: E0216 17:02:18.642204 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.642181824 +0000 UTC m=+18.792354894 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/serviceaccounts/cluster-olm-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.660859 master-0 kubenswrapper[15493]: W0216 17:02:18.660781 15493 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.660859 master-0 kubenswrapper[15493]: E0216 17:02:18.660851 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.661904 master-0 kubenswrapper[15493]: E0216 17:02:18.661855 15493 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.661904 master-0 kubenswrapper[15493]: E0216 17:02:18.661900 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.662039 master-0 kubenswrapper[15493]: E0216 17:02:18.661988 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.661968447 +0000 UTC m=+18.812141527 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.680588 master-0 kubenswrapper[15493]: W0216 17:02:18.680502 15493 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.680588 master-0 kubenswrapper[15493]: E0216 17:02:18.680565 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.682781 master-0 kubenswrapper[15493]: E0216 17:02:18.682719 15493 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.682781 master-0 kubenswrapper[15493]: E0216 17:02:18.682775 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.682956 master-0 kubenswrapper[15493]: E0216 17:02:18.682883 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.68285464 +0000 UTC m=+18.833027730 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.701669 master-0 kubenswrapper[15493]: W0216 17:02:18.701545 15493 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.701669 master-0 kubenswrapper[15493]: E0216 17:02:18.701657 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.701669 master-0 kubenswrapper[15493]: E0216 17:02:18.701594 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.702204 master-0 kubenswrapper[15493]: E0216 17:02:18.701706 15493 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.702204 master-0 kubenswrapper[15493]: E0216 17:02:18.701795 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.701769461 +0000 UTC m=+18.851942561 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.721749 master-0 kubenswrapper[15493]: W0216 17:02:18.721636 15493 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.721749 master-0 kubenswrapper[15493]: E0216 17:02:18.721714 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.740782 master-0 kubenswrapper[15493]: W0216 17:02:18.740655 15493 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.740782 master-0 kubenswrapper[15493]: E0216 17:02:18.740773 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.742865 master-0 kubenswrapper[15493]: E0216 17:02:18.742814 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.742865 master-0 kubenswrapper[15493]: E0216 17:02:18.742862 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2gq8x for pod openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.743025 master-0 kubenswrapper[15493]: E0216 17:02:18.742979 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 
nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.742955941 +0000 UTC m=+18.893129051 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2gq8x" (UniqueName: "kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.761739 master-0 kubenswrapper[15493]: W0216 17:02:18.761638 15493 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.761739 master-0 kubenswrapper[15493]: E0216 17:02:18.761724 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.762807 master-0 kubenswrapper[15493]: E0216 17:02:18.762760 15493 projected.go:288] Couldn't get configMap openshift-dns/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.762807 master-0 kubenswrapper[15493]: E0216 17:02:18.762801 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8m29g for pod openshift-dns/node-resolver-vfxj4: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/node-resolver/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.762976 master-0 kubenswrapper[15493]: E0216 17:02:18.762882 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g podName:a6fe41b0-1a42-4f07-8220-d9aaa50788ad nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.762860167 +0000 UTC m=+18.913033287 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8m29g" (UniqueName: "kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g") pod "node-resolver-vfxj4" (UID: "a6fe41b0-1a42-4f07-8220-d9aaa50788ad") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/node-resolver/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.781373 master-0 kubenswrapper[15493]: W0216 17:02:18.781270 15493 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.781373 master-0 kubenswrapper[15493]: E0216 17:02:18.781342 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.781690 master-0 kubenswrapper[15493]: E0216 17:02:18.781579 15493 projected.go:288] Couldn't get configMap openshift-cloud-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.781690 master-0 kubenswrapper[15493]: E0216 17:02:18.781618 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r87zw for pod openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/serviceaccounts/cluster-cloud-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.781690 master-0 kubenswrapper[15493]: E0216 17:02:18.781688 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.781667285 +0000 UTC m=+18.931840395 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r87zw" (UniqueName: "kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/serviceaccounts/cluster-cloud-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.801574 master-0 kubenswrapper[15493]: W0216 17:02:18.801377 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/secrets?fieldSelector=metadata.name%3Dcluster-olm-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.801574 master-0 kubenswrapper[15493]: E0216 17:02:18.801483 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"cluster-olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/secrets?fieldSelector=metadata.name%3Dcluster-olm-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.802627 master-0 kubenswrapper[15493]: E0216 17:02:18.802573 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.802627 master-0 kubenswrapper[15493]: E0216 17:02:18.802618 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.802751 master-0 kubenswrapper[15493]: E0216 17:02:18.802693 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.802672391 +0000 UTC m=+18.952845501 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.821484 master-0 kubenswrapper[15493]: E0216 17:02:18.821401 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:18.821484 master-0 kubenswrapper[15493]: E0216 17:02:18.821457 15493 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.821848 master-0 kubenswrapper[15493]: E0216 17:02:18.821569 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.82153393 +0000 UTC m=+18.971707100 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:18.821848 master-0 kubenswrapper[15493]: W0216 17:02:18.821541 15493 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:18.821848 master-0 kubenswrapper[15493]: E0216 17:02:18.821643 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:18.842216 master-0 kubenswrapper[15493]: W0216 17:02:18.842075 15493 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused 
Feb 16 17:02:18.842216 master-0 kubenswrapper[15493]: E0216 17:02:18.842170 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.843640 master-0 kubenswrapper[15493]: E0216 17:02:18.843559 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.843640 master-0 kubenswrapper[15493]: E0216 17:02:18.843604 15493 projected.go:194] Error preparing data for projected volume kube-api-access-j5qxm for pod openshift-multus/multus-additional-cni-plugins-rjdlk: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.843884 master-0 kubenswrapper[15493]: E0216 17:02:18.843687 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.843666626 +0000 UTC m=+18.993839696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-j5qxm" (UniqueName: "kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.861435 master-0 kubenswrapper[15493]: W0216 17:02:18.861311 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.861593 master-0 kubenswrapper[15493]: E0216 17:02:18.861461 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.862507 master-0 kubenswrapper[15493]: E0216 17:02:18.862459 15493 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.862507 master-0 kubenswrapper[15493]: E0216 17:02:18.862503 15493 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.862625 master-0 kubenswrapper[15493]: E0216 17:02:18.862595 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.862571246 +0000 UTC m=+19.012744316 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.881081 master-0 kubenswrapper[15493]: W0216 17:02:18.880977 15493 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.881081 master-0 kubenswrapper[15493]: E0216 17:02:18.881068 15493 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.881299 master-0 kubenswrapper[15493]: E0216 17:02:18.881102 15493 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.881299 master-0 kubenswrapper[15493]: E0216 17:02:18.881097 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.881299 master-0 kubenswrapper[15493]: E0216 17:02:18.881156 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.881142747 +0000 UTC m=+19.031315817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.900803 master-0 kubenswrapper[15493]: W0216 17:02:18.900701 15493 reflector.go:561] object-"openshift-catalogd"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.900803 master-0 kubenswrapper[15493]: E0216 17:02:18.900785 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.903169 master-0 kubenswrapper[15493]: E0216 17:02:18.903118 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.903169 master-0 kubenswrapper[15493]: E0216 17:02:18.903168 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.903341 master-0 kubenswrapper[15493]: E0216 17:02:18.903270 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.903244422 +0000 UTC m=+19.053417502 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.920779 master-0 kubenswrapper[15493]: W0216 17:02:18.920682 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.920779 master-0 kubenswrapper[15493]: E0216 17:02:18.920757 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.921822 master-0 kubenswrapper[15493]: E0216 17:02:18.921780 15493 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.921822 master-0 kubenswrapper[15493]: E0216 17:02:18.921802 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/serviceaccounts/operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.922025 master-0 kubenswrapper[15493]: E0216 17:02:18.921844 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.921833294 +0000 UTC m=+19.072006364 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/serviceaccounts/operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.941541 master-0 kubenswrapper[15493]: W0216 17:02:18.941440 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.941541 master-0 kubenswrapper[15493]: E0216 17:02:18.941533 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.961018 master-0 kubenswrapper[15493]: E0216 17:02:18.960948 15493 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.961018 master-0 kubenswrapper[15493]: E0216 17:02:18.960997 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/serviceaccounts/cloud-credential-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.961265 master-0 kubenswrapper[15493]: W0216 17:02:18.960998 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.961265 master-0 kubenswrapper[15493]: E0216 17:02:18.961067 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:18.961265 master-0 kubenswrapper[15493]: E0216 17:02:18.961079 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.961055842 +0000 UTC m=+19.111228922 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/serviceaccounts/cloud-credential-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.980956 master-0 kubenswrapper[15493]: E0216 17:02:18.980875 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:18.980956 master-0 kubenswrapper[15493]: E0216 17:02:18.980907 15493 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.981196 master-0 kubenswrapper[15493]: E0216 17:02:18.980994 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:19.98097516 +0000 UTC m=+19.131148230 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:18.981196 master-0 kubenswrapper[15493]: W0216 17:02:18.981036 15493 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:18.981196 master-0 kubenswrapper[15493]: E0216 17:02:18.981135 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:19.001650 master-0 kubenswrapper[15493]: W0216 17:02:19.001511 15493 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:19.001814 master-0 kubenswrapper[15493]: E0216 17:02:19.001652 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:19.002747 master-0 kubenswrapper[15493]: E0216 17:02:19.002616 15493 projected.go:288] Couldn't get configMap openshift-monitoring/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.002747 master-0 kubenswrapper[15493]: E0216 17:02:19.002677 15493 projected.go:194] Error preparing data for projected volume kube-api-access-j7w67 for pod openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.002999 master-0 kubenswrapper[15493]: E0216 17:02:19.002800 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67 podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.002767616 +0000 UTC m=+19.152940716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7w67" (UniqueName: "kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.021287 master-0 kubenswrapper[15493]: E0216 17:02:19.021217 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.021287 master-0 kubenswrapper[15493]: E0216 17:02:19.021266 15493 projected.go:194] Error preparing data for projected volume kube-api-access-9xrw2 for pod openshift-ovn-kubernetes/ovnkube-node-flr86: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.021517 master-0 kubenswrapper[15493]: E0216 17:02:19.021353 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2 podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.021325357 +0000 UTC m=+19.171498427 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-9xrw2" (UniqueName: "kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.022256 master-0 kubenswrapper[15493]: W0216 17:02:19.022161 15493 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:19.022321 master-0 kubenswrapper[15493]: E0216 17:02:19.022277 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:19.038867 master-0 kubenswrapper[15493]: I0216 17:02:19.038807 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:02:19.038970 master-0 kubenswrapper[15493]: I0216 17:02:19.038910 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:02:19.039148 master-0 kubenswrapper[15493]: I0216 17:02:19.039110 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:02:19.039248 master-0 kubenswrapper[15493]: I0216 17:02:19.039215 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:02:19.040981 master-0 kubenswrapper[15493]: W0216 17:02:19.040886 15493 reflector.go:561] object-"openshift-monitoring"/"cluster-monitoring-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dcluster-monitoring-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:19.041031 master-0 kubenswrapper[15493]: E0216 17:02:19.040989 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dcluster-monitoring-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:19.042222 master-0 kubenswrapper[15493]: E0216 17:02:19.042184 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.042267 master-0 kubenswrapper[15493]: E0216 17:02:19.042226 15493 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.042325 master-0 kubenswrapper[15493]: E0216 17:02:19.042305 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.042285142 +0000 UTC m=+19.192458222 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.061750 master-0 kubenswrapper[15493]: W0216 17:02:19.061534 15493 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:19.061750 master-0 kubenswrapper[15493]: E0216 17:02:19.061665 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:19.080970 master-0 kubenswrapper[15493]: W0216 17:02:19.080849 15493 reflector.go:561] object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:19.081202 master-0 kubenswrapper[15493]: E0216 17:02:19.080986 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:19.082112 master-0 kubenswrapper[15493]: E0216 17:02:19.082057 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.082164 master-0 kubenswrapper[15493]: E0216 17:02:19.082127 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.082280 master-0 kubenswrapper[15493]: E0216 17:02:19.082250 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.082220289 +0000 UTC m=+19.232393399 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.101558 master-0 kubenswrapper[15493]: E0216 17:02:19.101456 15493 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.101558 master-0 kubenswrapper[15493]: E0216 17:02:19.101506 15493 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.101988 master-0 kubenswrapper[15493]: E0216 17:02:19.101591 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.101569651 +0000 UTC m=+19.251742731 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.101988 master-0 kubenswrapper[15493]: W0216 17:02:19.101415 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:19.101988 master-0 kubenswrapper[15493]: E0216 17:02:19.101663 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:19.121437 master-0 kubenswrapper[15493]: E0216 17:02:19.121369 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.121437 master-0 kubenswrapper[15493]: E0216 17:02:19.121415 15493 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.121729 master-0 kubenswrapper[15493]: E0216 17:02:19.121501 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.121480128 +0000 UTC m=+19.271653228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.121729 master-0 kubenswrapper[15493]: W0216 17:02:19.121485 15493 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:19.121729 master-0 kubenswrapper[15493]: E0216 17:02:19.121596 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:19.132222 master-0 kubenswrapper[15493]: E0216 17:02:19.132148 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.132222 master-0 kubenswrapper[15493]: E0216 17:02:19.132204 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.132546 master-0 kubenswrapper[15493]: E0216 17:02:19.132284 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.132261603 +0000 UTC m=+22.282434713 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.133418 master-0 kubenswrapper[15493]: E0216 17:02:19.133331 15493 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.133513 master-0 kubenswrapper[15493]: E0216 17:02:19.133415 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.133577 master-0 kubenswrapper[15493]: E0216 17:02:19.133533 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:23.133506856 +0000 UTC m=+22.283680026 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.140364 master-0 kubenswrapper[15493]: I0216 17:02:19.140287 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:02:19.140530 master-0 kubenswrapper[15493]: I0216 17:02:19.140477 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:02:19.140605 master-0 kubenswrapper[15493]: I0216 17:02:19.140558 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:02:19.148186 master-0 kubenswrapper[15493]: I0216 17:02:19.148120 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:02:19.148186 master-0 kubenswrapper[15493]: W0216 17:02:19.141493 15493 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: E0216 17:02:19.148199 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: E0216 17:02:19.142543 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: E0216 17:02:19.148258 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hmj52 for pod openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.148327 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: E0216 17:02:19.148397 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52 podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.148340489 +0000 UTC m=+19.298513599 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hmj52" (UniqueName: "kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.148498 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.148631 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.148699 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.148744 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.148846 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.148890 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.149101 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.149218 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:02:19.149309 master-0 kubenswrapper[15493]: I0216 17:02:19.149285 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.149403 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.149494 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.149556 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.149637 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.149820 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.149893 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.149993 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.150059 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.150159 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.150217 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.150333 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.150448 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:02:19.150580 master-0 kubenswrapper[15493]: I0216 17:02:19.150537 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.150601 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.150659 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.150719 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.150778 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.150844 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.150910 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.151027 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.151087 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.151243 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.151302 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.151371 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.151452 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.151513 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:02:19.151718 master-0 kubenswrapper[15493]: I0216 17:02:19.151573 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:02:19.152843 master-0 kubenswrapper[15493]: I0216 17:02:19.151788 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:02:19.152843 master-0 kubenswrapper[15493]: I0216 17:02:19.151886 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:19.152843 master-0 kubenswrapper[15493]: I0216 17:02:19.152016 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:02:19.152843 master-0 kubenswrapper[15493]: I0216 17:02:19.152082 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:02:19.152843 master-0 kubenswrapper[15493]: I0216 17:02:19.152140 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:02:19.152843 master-0 kubenswrapper[15493]: I0216 17:02:19.152365 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:02:19.152843 master-0 kubenswrapper[15493]: I0216 17:02:19.152433 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:19.152843 master-0 kubenswrapper[15493]: I0216 17:02:19.152491 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:19.153727 master-0 kubenswrapper[15493]: I0216 17:02:19.153056 15493
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:19.153727 master-0 kubenswrapper[15493]: I0216 17:02:19.153210 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:19.153727 master-0 kubenswrapper[15493]: I0216 17:02:19.153463 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:19.153727 master-0 kubenswrapper[15493]: I0216 17:02:19.153523 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:19.153727 master-0 kubenswrapper[15493]: I0216 17:02:19.153720 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:19.154228 master-0 kubenswrapper[15493]: I0216 17:02:19.153789 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:19.154228 master-0 kubenswrapper[15493]: I0216 17:02:19.153850 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:19.154228 master-0 kubenswrapper[15493]: I0216 17:02:19.154027 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:19.154228 master-0 kubenswrapper[15493]: I0216 17:02:19.154109 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:19.154228 master-0 kubenswrapper[15493]: I0216 17:02:19.154172 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:19.154775 master-0 kubenswrapper[15493]: I0216 17:02:19.154241 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:19.154775 master-0 kubenswrapper[15493]: I0216 17:02:19.154437 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:19.154775 master-0 kubenswrapper[15493]: I0216 17:02:19.154578 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:19.154775 master-0 kubenswrapper[15493]: I0216 17:02:19.154685 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:19.155180 master-0 kubenswrapper[15493]: I0216 17:02:19.154776 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:19.155180 master-0 kubenswrapper[15493]: I0216 17:02:19.154843 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:19.155180 master-0 kubenswrapper[15493]: I0216 17:02:19.155136 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod 
\"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:19.155441 master-0 kubenswrapper[15493]: I0216 17:02:19.155205 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:19.155441 master-0 kubenswrapper[15493]: I0216 17:02:19.155242 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:19.155441 master-0 kubenswrapper[15493]: I0216 17:02:19.155363 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:19.155710 master-0 kubenswrapper[15493]: I0216 17:02:19.155439 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:19.155710 master-0 kubenswrapper[15493]: I0216 17:02:19.155500 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:19.155710 master-0 kubenswrapper[15493]: I0216 17:02:19.155585 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:19.155710 master-0 kubenswrapper[15493]: I0216 17:02:19.155662 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:19.156186 master-0 kubenswrapper[15493]: I0216 17:02:19.155721 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:19.156186 master-0 kubenswrapper[15493]: I0216 17:02:19.155911 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:19.156378 master-0 kubenswrapper[15493]: I0216 17:02:19.156208 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:19.156378 master-0 kubenswrapper[15493]: I0216 17:02:19.156348 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:19.156573 master-0 kubenswrapper[15493]: I0216 17:02:19.156416 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:19.156573 master-0 kubenswrapper[15493]: I0216 17:02:19.156482 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:19.156751 master-0 kubenswrapper[15493]: I0216 17:02:19.156693 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:19.156843 master-0 kubenswrapper[15493]: I0216 17:02:19.156764 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:19.156843 master-0 kubenswrapper[15493]: I0216 17:02:19.156818 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: 
\"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:19.157055 master-0 kubenswrapper[15493]: I0216 17:02:19.156888 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:19.157055 master-0 kubenswrapper[15493]: I0216 17:02:19.157025 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:19.157322 master-0 kubenswrapper[15493]: I0216 17:02:19.157079 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:19.157322 master-0 kubenswrapper[15493]: I0216 17:02:19.157134 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:19.157322 master-0 kubenswrapper[15493]: I0216 17:02:19.157199 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:19.157322 master-0 kubenswrapper[15493]: I0216 17:02:19.157259 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:19.157322 master-0 kubenswrapper[15493]: I0216 17:02:19.157328 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:19.157687 master-0 kubenswrapper[15493]: I0216 17:02:19.157371 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:19.157687 master-0 kubenswrapper[15493]: I0216 17:02:19.157411 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:19.157687 master-0 kubenswrapper[15493]: I0216 17:02:19.157589 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:19.157687 master-0 kubenswrapper[15493]: I0216 17:02:19.157634 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:19.157687 master-0 kubenswrapper[15493]: I0216 17:02:19.157673 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:19.158144 master-0 kubenswrapper[15493]: I0216 17:02:19.157713 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:19.158144 master-0 kubenswrapper[15493]: I0216 17:02:19.157756 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:19.158144 master-0 kubenswrapper[15493]: I0216 17:02:19.157905 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:19.158144 master-0 kubenswrapper[15493]: I0216 17:02:19.158013 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: 
\"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:19.158144 master-0 kubenswrapper[15493]: I0216 17:02:19.158068 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158172 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158234 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158280 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158321 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158360 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158566 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158610 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158641 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158675 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.158706 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.159029 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.159315 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.159356 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.159385 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.159419 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:19.159411 master-0 kubenswrapper[15493]: I0216 17:02:19.159448 15493 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.159481 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.159883 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.160010 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.160108 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.160196 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.160277 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.160455 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.160526 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.160601 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.160689 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: I0216 17:02:19.160795 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: W0216 17:02:19.160822 15493 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.160999 master-0 kubenswrapper[15493]: E0216 17:02:19.160892 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.162088 master-0 kubenswrapper[15493]: E0216 17:02:19.161909 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:19.162088 master-0 kubenswrapper[15493]: E0216 17:02:19.162067 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8p2jz for pod openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.162287 master-0 kubenswrapper[15493]: E0216 17:02:19.162151 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz 
podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.162128253 +0000 UTC m=+19.312301434 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8p2jz" (UniqueName: "kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.181026 master-0 kubenswrapper[15493]: W0216 17:02:19.180887 15493 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.181026 master-0 kubenswrapper[15493]: E0216 17:02:19.181013 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.201304 master-0 kubenswrapper[15493]: E0216 17:02:19.201200 15493 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:19.201304 master-0 kubenswrapper[15493]: E0216 17:02:19.201277 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.201774 master-0 kubenswrapper[15493]: E0216 17:02:19.201403 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.201355462 +0000 UTC m=+19.351528572 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.201774 master-0 kubenswrapper[15493]: I0216 17:02:19.201685 15493 status_manager.go:851] "Failed to get status for pod" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/marketplace-operator-6cc5b65c6b-s4gp2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:19.221283 master-0 kubenswrapper[15493]: E0216 17:02:19.221166 15493 projected.go:288] Couldn't get configMap openshift-network-node-identity/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:19.221283 master-0 kubenswrapper[15493]: E0216 17:02:19.221214 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vk7xl for pod openshift-network-node-identity/network-node-identity-hhcpr: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.221784 master-0 kubenswrapper[15493]: E0216 17:02:19.221319 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.221289609 +0000 UTC m=+19.371462719 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vk7xl" (UniqueName: "kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.221784 master-0 kubenswrapper[15493]: W0216 17:02:19.221695 15493 reflector.go:561] object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.221863 master-0 kubenswrapper[15493]: E0216 17:02:19.221811 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.241065 master-0 kubenswrapper[15493]: E0216 17:02:19.240956 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:19.241065 master-0 kubenswrapper[15493]: E0216 17:02:19.241029 15493 projected.go:194] Error preparing data for projected volume kube-api-access-wn82n for pod openshift-cluster-node-tuning-operator/tuned-l5kbz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/tuned/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.241570 master-0 kubenswrapper[15493]: E0216 17:02:19.241139 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n podName:c45ce0e5-c50b-4210-b7bb-82db2b2bc1db nodeName:}" failed. No retries permitted until 2026-02-16 17:02:20.241112264 +0000 UTC m=+19.391285384 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wn82n" (UniqueName: "kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n") pod "tuned-l5kbz" (UID: "c45ce0e5-c50b-4210-b7bb-82db2b2bc1db") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/tuned/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.242094 master-0 kubenswrapper[15493]: W0216 17:02:19.241908 15493 reflector.go:561] object-"openshift-monitoring"/"telemetry-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemetry-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.242094 master-0 kubenswrapper[15493]: E0216 17:02:19.242013 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"telemetry-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemetry-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.261256 master-0 kubenswrapper[15493]: W0216 17:02:19.261122 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-dockercfg-j874l&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.261498 master-0 kubenswrapper[15493]: E0216 17:02:19.261256 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-dockercfg-j874l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-dockercfg-j874l&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.262231 master-0 kubenswrapper[15493]: E0216 17:02:19.262173 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:19.262231 master-0 kubenswrapper[15493]: E0216 17:02:19.262229 15493 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.262462 master-0 kubenswrapper[15493]: E0216 17:02:19.262429 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:20.262405307 +0000 UTC m=+19.412578387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:19.262898 master-0 kubenswrapper[15493]: I0216 17:02:19.262857 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:02:19.263018 master-0 kubenswrapper[15493]: I0216 17:02:19.262911 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:19.263018 master-0 kubenswrapper[15493]: I0216 17:02:19.262984 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:19.263138 master-0 kubenswrapper[15493]: I0216 17:02:19.263113 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:19.263596 master-0 kubenswrapper[15493]: I0216 17:02:19.263538 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:02:19.263677 master-0 kubenswrapper[15493]: I0216 17:02:19.263645 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:19.281391 master-0 kubenswrapper[15493]: W0216 17:02:19.281257 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.281391 master-0 kubenswrapper[15493]: E0216 17:02:19.281380 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.300821 master-0 kubenswrapper[15493]: W0216 17:02:19.300716 15493 reflector.go:561] object-"openshift-dns"/"dns-default-metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.300821 master-0 kubenswrapper[15493]: E0216 17:02:19.300824 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default-metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.321517 master-0 kubenswrapper[15493]: W0216 17:02:19.321353 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.321517 master-0 kubenswrapper[15493]: E0216 17:02:19.321467 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.341231 master-0 kubenswrapper[15493]: W0216 17:02:19.341143 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.341360 master-0 kubenswrapper[15493]: E0216 17:02:19.341228 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.361023 master-0 kubenswrapper[15493]: W0216 17:02:19.360942 15493 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.361023 master-0 kubenswrapper[15493]: E0216 17:02:19.361023 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.366418 master-0 kubenswrapper[15493]: I0216 17:02:19.366382 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:19.366479 master-0 kubenswrapper[15493]: I0216 17:02:19.366438 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:19.366479 master-0 kubenswrapper[15493]: I0216 17:02:19.366460 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:02:19.381474 master-0 kubenswrapper[15493]: W0216 17:02:19.381367 15493 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.381543 master-0 kubenswrapper[15493]: E0216 17:02:19.381483 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.403011 master-0 kubenswrapper[15493]: W0216 17:02:19.402760 15493 reflector.go:561] 
object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.403011 master-0 kubenswrapper[15493]: E0216 17:02:19.403006 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.421380 master-0 kubenswrapper[15493]: W0216 17:02:19.421258 15493 reflector.go:561] object-"openshift-catalogd"/"catalogd-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dcatalogd-trusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.421550 master-0 kubenswrapper[15493]: E0216 17:02:19.421393 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogd-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dcatalogd-trusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.441593 master-0 kubenswrapper[15493]: W0216 17:02:19.441460 15493 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.441593 master-0 kubenswrapper[15493]: E0216 17:02:19.441578 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.461067 master-0 kubenswrapper[15493]: W0216 17:02:19.460987 15493 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.461067 master-0 kubenswrapper[15493]: E0216 17:02:19.461064 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" 
Feb 16 17:02:19.469264 master-0 kubenswrapper[15493]: I0216 17:02:19.469218 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:19.469360 master-0 kubenswrapper[15493]: I0216 17:02:19.469282 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:19.469519 master-0 kubenswrapper[15493]: I0216 17:02:19.469484 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:19.469581 master-0 kubenswrapper[15493]: I0216 17:02:19.469544 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:19.469581 master-0 kubenswrapper[15493]: I0216 17:02:19.469575 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:19.469653 master-0 kubenswrapper[15493]: I0216 17:02:19.469610 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:19.469911 master-0 kubenswrapper[15493]: I0216 17:02:19.469819 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:19.480862 master-0 kubenswrapper[15493]: W0216 17:02:19.480773 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 
17:02:19.481193 master-0 kubenswrapper[15493]: E0216 17:02:19.480879 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.501469 master-0 kubenswrapper[15493]: W0216 17:02:19.501353 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.502023 master-0 kubenswrapper[15493]: E0216 17:02:19.501486 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.519839 master-0 kubenswrapper[15493]: I0216 17:02:19.519760 15493 request.go:700] Waited for 3.338393186s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 17:02:19.521567 master-0 kubenswrapper[15493]: W0216 17:02:19.521474 15493 reflector.go:561] object-"openshift-network-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.521641 master-0 kubenswrapper[15493]: E0216 17:02:19.521581 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.541572 master-0 kubenswrapper[15493]: W0216 17:02:19.541443 15493 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.541572 master-0 kubenswrapper[15493]: E0216 17:02:19.541544 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.561098 master-0 kubenswrapper[15493]: W0216 17:02:19.561000 15493 reflector.go:561] object-"openshift-insights"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.561098 master-0 kubenswrapper[15493]: E0216 17:02:19.561092 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.573542 master-0 kubenswrapper[15493]: I0216 17:02:19.573416 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:19.573542 master-0 kubenswrapper[15493]: I0216 17:02:19.573488 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:19.573542 master-0 kubenswrapper[15493]: I0216 17:02:19.573517 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:19.574222 master-0 kubenswrapper[15493]: I0216 17:02:19.573616 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:19.580751 master-0 kubenswrapper[15493]: W0216 17:02:19.580684 15493 reflector.go:561] object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Doperator-controller-trusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.580829 master-0 
kubenswrapper[15493]: E0216 17:02:19.580745 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"operator-controller-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Doperator-controller-trusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.601506 master-0 kubenswrapper[15493]: W0216 17:02:19.601424 15493 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.601506 master-0 kubenswrapper[15493]: E0216 17:02:19.601503 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.620839 master-0 kubenswrapper[15493]: W0216 17:02:19.620734 15493 reflector.go:561] object-"openshift-insights"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.620839 master-0 kubenswrapper[15493]: E0216 17:02:19.620822 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.641227 master-0 kubenswrapper[15493]: W0216 17:02:19.641133 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.641227 master-0 kubenswrapper[15493]: E0216 17:02:19.641210 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.661246 master-0 kubenswrapper[15493]: W0216 17:02:19.661185 15493 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.661246 master-0 kubenswrapper[15493]: E0216 17:02:19.661237 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.675038 master-0 kubenswrapper[15493]: I0216 17:02:19.674958 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:19.675235 master-0 kubenswrapper[15493]: I0216 17:02:19.675104 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:19.675524 master-0 kubenswrapper[15493]: I0216 17:02:19.675480 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:19.681153 master-0 kubenswrapper[15493]: W0216 17:02:19.681063 15493 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.681153 master-0 kubenswrapper[15493]: E0216 17:02:19.681138 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.701046 master-0 kubenswrapper[15493]: W0216 17:02:19.700954 15493 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.701046 master-0 kubenswrapper[15493]: E0216 17:02:19.701034 
15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.720593 master-0 kubenswrapper[15493]: W0216 17:02:19.720527 15493 reflector.go:561] object-"openshift-operator-controller"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.720593 master-0 kubenswrapper[15493]: E0216 17:02:19.720588 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.740822 master-0 kubenswrapper[15493]: W0216 17:02:19.740749 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.741021 master-0 kubenswrapper[15493]: E0216 17:02:19.740827 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.761099 master-0 kubenswrapper[15493]: W0216 17:02:19.761024 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.761190 master-0 kubenswrapper[15493]: E0216 17:02:19.761092 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.778851 master-0 kubenswrapper[15493]: I0216 17:02:19.778775 15493 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:19.779274 master-0 kubenswrapper[15493]: I0216 17:02:19.779223 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:19.779435 master-0 kubenswrapper[15493]: I0216 17:02:19.779407 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:19.779587 master-0 kubenswrapper[15493]: I0216 17:02:19.779561 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:02:19.780689 master-0 kubenswrapper[15493]: W0216 17:02:19.780621 15493 reflector.go:561] object-"openshift-operator-controller"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.780741 master-0 kubenswrapper[15493]: E0216 17:02:19.780704 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.801608 master-0 kubenswrapper[15493]: W0216 17:02:19.801531 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cco-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dcco-trusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.801724 master-0 kubenswrapper[15493]: E0216 17:02:19.801607 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cco-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dcco-trusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 
17:02:19.820682 master-0 kubenswrapper[15493]: W0216 17:02:19.820589 15493 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.820682 master-0 kubenswrapper[15493]: E0216 17:02:19.820676 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.840733 master-0 kubenswrapper[15493]: W0216 17:02:19.840587 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.840846 master-0 kubenswrapper[15493]: E0216 17:02:19.840797 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.861904 master-0 kubenswrapper[15493]: W0216 17:02:19.861783 15493 reflector.go:561] object-"openshift-catalogd"/"catalogserver-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/secrets?fieldSelector=metadata.name%3Dcatalogserver-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.861904 master-0 kubenswrapper[15493]: E0216 17:02:19.861879 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogserver-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/secrets?fieldSelector=metadata.name%3Dcatalogserver-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.880461 master-0 kubenswrapper[15493]: W0216 17:02:19.880333 15493 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.880461 master-0 kubenswrapper[15493]: E0216 17:02:19.880432 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Feb 16 17:02:19.880874 master-0 kubenswrapper[15493]: I0216 17:02:19.880815 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:19.880874 master-0 kubenswrapper[15493]: I0216 17:02:19.880868 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:19.881043 master-0 kubenswrapper[15493]: I0216 17:02:19.880891 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:19.881169 master-0 kubenswrapper[15493]: I0216 17:02:19.881133 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:19.881500 master-0 kubenswrapper[15493]: I0216 17:02:19.881446 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:19.881782 master-0 kubenswrapper[15493]: I0216 17:02:19.881745 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:19.900786 master-0 kubenswrapper[15493]: W0216 17:02:19.900694 15493 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.900786 master-0 kubenswrapper[15493]: E0216 17:02:19.900775 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.920793 master-0 kubenswrapper[15493]: W0216 17:02:19.920720 15493 reflector.go:561] object-"openshift-insights"/"operator-dockercfg-rzjlw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Doperator-dockercfg-rzjlw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.920793 master-0 kubenswrapper[15493]: E0216 17:02:19.920790 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"operator-dockercfg-rzjlw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Doperator-dockercfg-rzjlw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.941717 master-0 kubenswrapper[15493]: W0216 17:02:19.941484 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.941717 master-0 kubenswrapper[15493]: E0216 17:02:19.941627 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.961484 master-0 kubenswrapper[15493]: E0216 17:02:19.961394 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.961692 master-0 kubenswrapper[15493]: E0216 17:02:19.961518 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:21.961488648 +0000 UTC m=+21.111661748 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.981041 master-0 kubenswrapper[15493]: W0216 17:02:19.980964 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:19.981254 master-0 kubenswrapper[15493]: E0216 17:02:19.981051 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:19.985588 master-0 kubenswrapper[15493]: I0216 17:02:19.985532 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:19.985855 master-0 kubenswrapper[15493]: I0216 17:02:19.985805 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:19.986019 master-0 kubenswrapper[15493]: I0216 17:02:19.985975 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:19.986096 master-0 kubenswrapper[15493]: I0216 17:02:19.986067 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:20.001234 master-0 kubenswrapper[15493]: W0216 17:02:20.001108 15493 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.001234 master-0 kubenswrapper[15493]: E0216 17:02:20.001228 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.021106 master-0 kubenswrapper[15493]: W0216 17:02:20.020972 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.021106 master-0 kubenswrapper[15493]: E0216 17:02:20.021092 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.040793 master-0 kubenswrapper[15493]: W0216 17:02:20.040670 15493 reflector.go:561] object-"openshift-insights"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.040793 master-0 kubenswrapper[15493]: E0216 17:02:20.040770 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.060658 master-0 kubenswrapper[15493]: W0216 17:02:20.060592 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.060658 master-0 kubenswrapper[15493]: E0216 17:02:20.060657 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 
17:02:20.080399 master-0 kubenswrapper[15493]: W0216 17:02:20.080288 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.080399 master-0 kubenswrapper[15493]: E0216 17:02:20.080385 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.088333 master-0 kubenswrapper[15493]: I0216 17:02:20.088274 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:20.088605 master-0 kubenswrapper[15493]: I0216 17:02:20.088536 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:20.088893 master-0 kubenswrapper[15493]: I0216 17:02:20.088862 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:20.088893 master-0 kubenswrapper[15493]: I0216 17:02:20.088890 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:20.101062 master-0 kubenswrapper[15493]: W0216 17:02:20.100860 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.101062 master-0 kubenswrapper[15493]: E0216 17:02:20.100943 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&limit=500&resourceVersion=0\": 
dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.120469 master-0 kubenswrapper[15493]: W0216 17:02:20.120371 15493 reflector.go:561] object-"openshift-etcd"/"installer-sa-dockercfg-rxv66": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-rxv66&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.120597 master-0 kubenswrapper[15493]: E0216 17:02:20.120470 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd\"/\"installer-sa-dockercfg-rxv66\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-rxv66&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.140637 master-0 kubenswrapper[15493]: E0216 17:02:20.140580 15493 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.140839 master-0 kubenswrapper[15493]: E0216 17:02:20.140675 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.140649739 +0000 UTC m=+27.290822829 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.140839 master-0 kubenswrapper[15493]: E0216 17:02:20.140718 15493 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.140839 master-0 kubenswrapper[15493]: E0216 17:02:20.140762 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.140748722 +0000 UTC m=+27.290921812 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.140839 master-0 kubenswrapper[15493]: W0216 17:02:20.140764 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.141128 master-0 kubenswrapper[15493]: E0216 17:02:20.140865 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.141128 master-0 kubenswrapper[15493]: E0216 17:02:20.140891 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.141128 master-0 kubenswrapper[15493]: E0216 17:02:20.140971 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.140957067 +0000 UTC m=+27.291130157 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149055 master-0 kubenswrapper[15493]: E0216 17:02:20.148995 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149055 master-0 kubenswrapper[15493]: E0216 17:02:20.149046 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149055 master-0 kubenswrapper[15493]: E0216 17:02:20.149060 15493 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149082 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149124 15493 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149124 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149131 15493 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149067 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149049622 +0000 UTC m=+27.299222692 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149205 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149227 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149206106 +0000 UTC m=+27.299379196 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149250 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149238236 +0000 UTC m=+27.299411317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149322 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149281 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149383 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149332469 +0000 UTC m=+27.299505559 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149411 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149399791 +0000 UTC m=+27.299572881 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149432 15493 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.149454 master-0 kubenswrapper[15493]: E0216 17:02:20.149433 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149421491 +0000 UTC m=+27.299594571 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.149509 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149491513 +0000 UTC m=+27.299664603 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.149534 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149522444 +0000 UTC m=+27.299695534 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.149569 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149559535 +0000 UTC m=+27.299732615 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.149589 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149580376 +0000 UTC m=+27.299753466 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.149609 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149599406 +0000 UTC m=+27.299772496 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.149619 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.149645 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.149674 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149660068 +0000 UTC m=+27.299833158 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.149701 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.149685388 +0000 UTC m=+27.299858478 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150270 15493 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150289 15493 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150314 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.150303505 +0000 UTC m=+27.300476585 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150335 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.150324105 +0000 UTC m=+27.300497195 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150354 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150361 15493 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150374 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150388 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.150375177 +0000 UTC m=+27.300548247 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150414 15493 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150417 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.150402647 +0000 UTC m=+27.300575737 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150437 15493 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150447 15493 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150447 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.150439448 +0000 UTC m=+27.300612518 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150476 15493 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150498 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.150483339 +0000 UTC m=+27.300656429 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150534 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15051349 +0000 UTC m=+27.300686580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150560 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.150549551 +0000 UTC m=+27.300722641 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.150584 master-0 kubenswrapper[15493]: E0216 17:02:20.150588 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.150569212 +0000 UTC m=+27.300742292 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.150739 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.150907 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15086964 +0000 UTC m=+27.301042750 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.150965 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.150983 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.150994 15493 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.150984 15493 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.150968 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151025 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:28.151008503 +0000 UTC m=+27.301181583 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151082 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151137 15493 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151252 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151102 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151084685 +0000 UTC m=+27.301257775 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151173 15493 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151293 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151282841 +0000 UTC m=+27.301455921 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151310 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151302971 +0000 UTC m=+27.301476061 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151351 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151324092 +0000 UTC m=+27.301497192 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151393 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151379493 +0000 UTC m=+27.301552603 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151400 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151421 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151410594 +0000 UTC m=+27.301583704 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151431 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151451 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151436875 +0000 UTC m=+27.301609975 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151480 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151466375 +0000 UTC m=+27.301639525 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151521 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151506557 +0000 UTC m=+27.301679717 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151536 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151556 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151537827 +0000 UTC m=+27.301710967 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151584 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151571108 +0000 UTC m=+27.301744248 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151598 15493 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151658 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15164239 +0000 UTC m=+27.301815510 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151850 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151875 15493 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151911 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151895017 +0000 UTC m=+27.302068137 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.151964 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.151950118 +0000 UTC m=+27.302123238 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.152138 15493 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.152160 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.152216 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.152201575 +0000 UTC m=+27.302374695 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.152676 master-0 kubenswrapper[15493]: E0216 17:02:20.152243 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.152230296 +0000 UTC m=+27.302403466 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153284 15493 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153347 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.153331285 +0000 UTC m=+27.303504375 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153346 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153364 15493 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153396 15493 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153410 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.153394796 +0000 UTC m=+27.303567886 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153408 15493 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153434 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.153424377 +0000 UTC m=+27.303597537 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153459 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153469 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.153454148 +0000 UTC m=+27.303627238 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.153558 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154056 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154039 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154029303 +0000 UTC m=+27.304202363 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154094 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154094 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154079 15493 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154111 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154092555 +0000 UTC m=+27.304265635 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154138 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154127246 +0000 UTC m=+27.304300336 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154145 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154159 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154149666 +0000 UTC m=+27.304322746 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154180 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154170867 +0000 UTC m=+27.304343957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154177 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154191 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154208 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154198878 +0000 UTC m=+27.304371968 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154230 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/ovnkube-identity-cm: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154234 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154224558 +0000 UTC m=+27.304397648 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154259 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154250629 +0000 UTC m=+27.304423709 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154275 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15426924 +0000 UTC m=+27.304442330 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154293 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15428302 +0000 UTC m=+27.304456100 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "ovnkube-identity-cm" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154526 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154496396 +0000 UTC m=+27.304669506 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154570 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154601 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154635 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154619799 +0000 UTC m=+27.304792909 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154648 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154659 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15464843 +0000 UTC m=+27.304821580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.155261 master-0 kubenswrapper[15493]: E0216 17:02:20.154719 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.154699491 +0000 UTC m=+27.304872631 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155784 15493 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155828 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.155817101 +0000 UTC m=+27.305990181 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155857 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155867 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155872 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155945 15493 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155953 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155889 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.155880232 +0000 UTC m=+27.306053302 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155953 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155981 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155985 15493 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155998 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156006 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.155982435 +0000 UTC m=+27.306155585 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155965 15493 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156044 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156029686 +0000 UTC m=+27.306202856 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156070 15493 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
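The stretch of entries above is one failure signature repeated across pods: the kubelet serves configMap and secret volumes from its local informer caches, each MountVolume.SetUp waits for the relevant cache to sync before it can materialize the volume, and while that cache cannot be filled the wait ends in "timed out waiting for the condition". Tallying the "Couldn't get" lines shows at a glance which objects and namespaces are affected. A minimal triage sketch, assuming this journal excerpt has been saved to a file (the kubelet.log name is hypothetical):

```python
import re
from collections import Counter

# Matches the cache-sync failures above, e.g.
#   configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca:
#   failed to sync configmap cache: timed out waiting for the condition
PATTERN = re.compile(r"Couldn't get (configMap|secret) (\S+?): failed to sync \w+ cache")

def tally_cache_failures(log_text: str) -> Counter:
    """Count cache-sync failures per (kind, namespace/name) pair."""
    return Counter(PATTERN.findall(log_text))

if __name__ == "__main__":
    with open("kubelet.log") as f:  # hypothetical file holding this excerpt
        for (kind, ref), n in tally_cache_failures(f.read()).most_common(10):
            print(f"{n:4d}  {kind:9s}  {ref}")
```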
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156087 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156065307 +0000 UTC m=+27.306238477 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.155947 15493 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156119 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156103998 +0000 UTC m=+27.306277168 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156149 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156135649 +0000 UTC m=+27.306308819 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156160 15493 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156181 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.1561666 +0000 UTC m=+27.306339750 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156218 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156203321 +0000 UTC m=+27.306376471 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156260 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156245382 +0000 UTC m=+27.306418532 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156299 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156284973 +0000 UTC m=+27.306458123 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156340 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156325744 +0000 UTC m=+27.306498874 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156368 15493 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156374 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156359045 +0000 UTC m=+27.306532205 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156417 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156404096 +0000 UTC m=+27.306577236 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156444 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156431787 +0000 UTC m=+27.306604967 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156735 15493 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156768 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156759076 +0000 UTC m=+27.306932146 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156768 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156787 15493 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156807 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156800967 +0000 UTC m=+27.306974037 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156820 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156815337 +0000 UTC m=+27.306988407 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156963 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156974 15493 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.156984 15493 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157011 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.156997052 +0000 UTC m=+27.307170192 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157034 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.157023793 +0000 UTC m=+27.307196953 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync configmap cache: timed out waiting for the condition
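Each nestedpendingoperations.go:348 entry records both the retry deadline and the current penalty, uniformly "(durationBeforeRetry 8s)" here, about 27 seconds (m=+27) after this kubelet process started. The kubelet backs off failed volume operations exponentially per volume, which is why a freshly failing operation later in this log (the bound-sa-token mount) shows only 2s while these long-failing mounts have climbed to 8s. A sketch of such a doubling schedule follows; the 500ms initial value, factor 2, and ~2-minute cap are assumptions based on upstream kubelet defaults, not values printed in this log:

```python
from datetime import timedelta

def backoff_schedule(initial=timedelta(milliseconds=500),
                     factor=2,
                     cap=timedelta(minutes=2, seconds=2)):
    """Yield successive durationBeforeRetry values: 0.5s, 1s, 2s, 4s, 8s, ..."""
    d = initial
    while True:
        yield min(d, cap)
        d = min(d * factor, cap)  # double the penalty, never exceeding the cap

for failure, delay in enumerate(backoff_schedule(), start=1):
    if failure > 6:
        break
    print(f"failure {failure}: next retry in {delay.total_seconds():g}s")
# The fifth consecutive failure yields 8s, matching the
# "(durationBeforeRetry 8s)" entries above.
```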
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157055 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.157045563 +0000 UTC m=+27.307218733 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157163 15493 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157241 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157255 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.157231708 +0000 UTC m=+27.307404818 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157275 15493 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157298 15493 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157305 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15728941 +0000 UTC m=+27.307462550 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157336 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15732466 +0000 UTC m=+27.307497740 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157400 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157415 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.157394612 +0000 UTC m=+27.307567722 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157446 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157488 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.157468294 +0000 UTC m=+27.307641414 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.157494 master-0 kubenswrapper[15493]: E0216 17:02:20.157530 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.157514896 +0000 UTC m=+27.307688036 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158280 15493 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158312 15493 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158333 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158316447 +0000 UTC m=+27.308489567 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "whereabouts-configmap" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158362 15493 secret.go:189] Couldn't get secret openshift-network-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158371 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158355178 +0000 UTC m=+27.308528298 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158394 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158423 15493 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158445 15493 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158507 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158404 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158390109 +0000 UTC m=+27.308563239 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158401 15493 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158594 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158576424 +0000 UTC m=+27.308749514 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158444 15493 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158616 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158608604 +0000 UTC m=+27.308781684 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158639 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158630335 +0000 UTC m=+27.308803425 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158446 15493 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158656 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158646535 +0000 UTC m=+27.308819625 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158673 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158666616 +0000 UTC m=+27.308839696 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158477 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158690 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158682276 +0000 UTC m=+27.308855366 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158708 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158701687 +0000 UTC m=+27.308874767 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158479 15493 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158724 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158716297 +0000 UTC m=+27.308889377 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158494 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158494 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158768 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158746118 +0000 UTC m=+27.308919268 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync secret cache: timed out waiting for the condition
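Every "MountVolume.SetUp failed" message carries the same structured fields: the volume name, a UniqueName of the form kubernetes.io/&lt;plugin&gt;/&lt;pod-uid&gt;-&lt;volume&gt;, the pod name, and the pod UID (the UID also prefixes the UniqueName). Grouping by pod makes clear that entire pods, not isolated objects, are blocked on mounts. A hedged sketch of that grouping over the same log text:

```python
import re
from collections import defaultdict

# Field layout taken from the entries above, e.g.
#   MountVolume.SetUp failed for volume "metrics-tls" (UniqueName:
#   "kubernetes.io/secret/<uid>-metrics-tls") pod "ingress-operator-..." (UID: "<uid>")
ENTRY = re.compile(
    r'MountVolume\.SetUp failed for volume "(?P<volume>[^"]+)" '
    r'\(UniqueName: "kubernetes\.io/(?P<plugin>[^/]+)/[^"]+"\) '
    r'pod "(?P<pod>[^"]+)" \(UID: "(?P<uid>[^"]+)"\)'
)

def failing_volumes_by_pod(log_text: str) -> dict:
    """Map pod name -> set of (plugin, volume) pairs still failing to mount."""
    out = defaultdict(set)
    for m in ENTRY.finditer(log_text):
        out[m.group("pod")].add((m.group("plugin"), m.group("volume")))
    return dict(out)
```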
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158790 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158782639 +0000 UTC m=+27.308955719 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158557 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158804 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.158798099 +0000 UTC m=+27.308971189 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.158827 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15881891 +0000 UTC m=+27.308991980 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159682 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159755 15493 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159773 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.159757105 +0000 UTC m=+27.309930235 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159828 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.159807906 +0000 UTC m=+27.309981026 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159762 15493 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159866 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.159858878 +0000 UTC m=+27.310031938 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159896 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159953 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159978 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.15996091 +0000 UTC m=+27.310134070 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160001 15493 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.159899 15493 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160011 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.159993191 +0000 UTC m=+27.310166371 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160056 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160045282 +0000 UTC m=+27.310218432 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160057 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160091 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160071 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160063083 +0000 UTC m=+27.310236143 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160118 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160110484 +0000 UTC m=+27.310283644 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160134 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160127035 +0000 UTC m=+27.310300195 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160176 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160233 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160216467 +0000 UTC m=+27.310389597 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160257 15493 configmap.go:193] Couldn't get configMap openshift-network-operator/iptables-alerter-script: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160304 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160293089 +0000 UTC m=+27.310466169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "iptables-alerter-script" (UniqueName: "kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160469 15493 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160478 15493 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160517 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160508805 +0000 UTC m=+27.310681895 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160549 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160527765 +0000 UTC m=+27.310700885 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: W0216 17:02:20.160648 15493 reflector.go:561] object-"openshift-catalogd"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160703 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160719 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160730 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
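The reflector warnings that start to appear here expose the root cause behind all of the cache-sync timeouts: every list/watch request to https://api-int.sno.openstack.lab:6443 dies with "dial tcp 192.168.32.10:6443: connect: connection refused", meaning nothing is accepting connections on the API endpoint yet, consistent with a single-node cluster whose kube-apiserver is still coming up roughly 27 seconds after this kubelet restart. Until that endpoint answers, the informer caches stay empty and the configMap/secret mounts above keep timing out. A minimal reachability probe sketch, with the host and port taken from these log lines:

```python
import socket
import time

def wait_for_api(host="api-int.sno.openstack.lab", port=6443, timeout=300):
    """Poll until the apiserver endpoint accepts TCP connections."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True  # something is listening; TLS/auth not checked
        except OSError as exc:  # e.g. [Errno 111] Connection refused
            print(f"still down: {exc}")
            time.sleep(2)
    return False
```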
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160787 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160796 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160777322 +0000 UTC m=+27.310950482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160801 15493 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160826 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160815673 +0000 UTC m=+27.310988813 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.161016 master-0 kubenswrapper[15493]: E0216 17:02:20.160871 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:28.160834053 +0000 UTC m=+27.311007183 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:20.165169 master-0 kubenswrapper[15493]: E0216 17:02:20.161640 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.165169 master-0 kubenswrapper[15493]: E0216 17:02:20.161670 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:28.161662255 +0000 UTC m=+27.311835325 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:20.181337 master-0 kubenswrapper[15493]: W0216 17:02:20.181206 15493 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.181337 master-0 kubenswrapper[15493]: E0216 17:02:20.181326 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.192210 master-0 kubenswrapper[15493]: I0216 17:02:20.192152 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:02:20.192451 master-0 kubenswrapper[15493]: I0216 17:02:20.192403 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:20.192571 master-0 kubenswrapper[15493]: I0216 17:02:20.192457 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:20.192646 master-0 kubenswrapper[15493]: I0216 17:02:20.192599 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:20.201460 master-0 kubenswrapper[15493]: W0216 17:02:20.201341 15493 reflector.go:561] object-"openshift-etcd"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.201460 
Feb 16 17:02:20.201460 master-0 kubenswrapper[15493]: E0216 17:02:20.201436 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:20.221081 master-0 kubenswrapper[15493]: E0216 17:02:20.221011 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:20.221217 master-0 kubenswrapper[15493]: E0216 17:02:20.221134 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:22.221114138 +0000 UTC m=+21.371287228 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:20.242001 master-0 kubenswrapper[15493]: W0216 17:02:20.241862 15493 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:20.242101 master-0 kubenswrapper[15493]: E0216 17:02:20.242019 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:20.261812 master-0 kubenswrapper[15493]: W0216 17:02:20.261680 15493 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:20.261812 master-0 kubenswrapper[15493]: E0216 17:02:20.261807 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
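The bound-sa-token failure just above is the projected service-account-token volume: the kubelet cannot populate it without minting a fresh token, which it does by POSTing to the token subresource of the pod's service account, exactly the URL in the error. A sketch of that call through client-go (the kubeconfig path, client construction, and the one-hour expiry are illustrative assumptions):

    package main

    import (
        "context"
        "fmt"

        authenticationv1 "k8s.io/api/authentication/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        expiry := int64(3600) // assumed; bound tokens are short-lived and rotated by the kubelet
        // Equivalent to the logged request:
        // POST /api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token
        tok, err := client.CoreV1().ServiceAccounts("openshift-ingress-operator").CreateToken(
            context.TODO(), "ingress-operator",
            &authenticationv1.TokenRequest{
                Spec: authenticationv1.TokenRequestSpec{ExpirationSeconds: &expiry},
            },
            metav1.CreateOptions{})
        if err != nil {
            panic(err) // with api-int unreachable: connect: connection refused
        }
        fmt.Println("minted token of length", len(tok.Status.Token))
    }

Note the shorter 2s backoff here versus the 8s for the configmap volumes above: under the doubling scheme sketched earlier, this operation had failed fewer consecutive times.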
connection refused" logger="UnhandledError" Feb 16 17:02:20.281400 master-0 kubenswrapper[15493]: W0216 17:02:20.281290 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.281576 master-0 kubenswrapper[15493]: E0216 17:02:20.281404 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.295105 master-0 kubenswrapper[15493]: I0216 17:02:20.295046 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:20.295241 master-0 kubenswrapper[15493]: I0216 17:02:20.295129 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:20.295518 master-0 kubenswrapper[15493]: I0216 17:02:20.295489 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:20.295738 master-0 kubenswrapper[15493]: I0216 17:02:20.295685 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:20.301934 master-0 kubenswrapper[15493]: W0216 17:02:20.301835 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-gtxjb&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.302113 master-0 kubenswrapper[15493]: E0216 17:02:20.301944 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-gtxjb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-gtxjb&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.321553 master-0 kubenswrapper[15493]: W0216 17:02:20.321433 15493 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.321617 master-0 kubenswrapper[15493]: E0216 17:02:20.321561 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.341476 master-0 kubenswrapper[15493]: W0216 17:02:20.341373 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.341538 master-0 kubenswrapper[15493]: E0216 17:02:20.341483 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.361184 master-0 kubenswrapper[15493]: W0216 17:02:20.360982 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.361184 master-0 kubenswrapper[15493]: E0216 17:02:20.361123 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.381884 master-0 kubenswrapper[15493]: W0216 17:02:20.381758 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-dockercfg-x2982&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection 
refused Feb 16 17:02:20.382058 master-0 kubenswrapper[15493]: E0216 17:02:20.381904 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"cluster-storage-operator-dockercfg-x2982\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-dockercfg-x2982&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.401175 master-0 kubenswrapper[15493]: W0216 17:02:20.401042 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.401175 master-0 kubenswrapper[15493]: E0216 17:02:20.401169 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.421416 master-0 kubenswrapper[15493]: W0216 17:02:20.421312 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.421632 master-0 kubenswrapper[15493]: E0216 17:02:20.421436 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.441661 master-0 kubenswrapper[15493]: W0216 17:02:20.441423 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-t46bw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.441661 master-0 kubenswrapper[15493]: E0216 17:02:20.441513 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-t46bw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-t46bw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Feb 16 17:02:20.460992 master-0 kubenswrapper[15493]: W0216 17:02:20.460855 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.460992 master-0 kubenswrapper[15493]: E0216 17:02:20.460966 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.481570 master-0 kubenswrapper[15493]: W0216 17:02:20.481461 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-dockercfg-b9gfw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.481570 master-0 kubenswrapper[15493]: E0216 17:02:20.481546 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-autoscaler-operator-dockercfg-b9gfw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-dockercfg-b9gfw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.501578 master-0 kubenswrapper[15493]: W0216 17:02:20.501437 15493 reflector.go:561] object-"openshift-machine-api"/"baremetal-kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dbaremetal-kube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.501578 master-0 kubenswrapper[15493]: E0216 17:02:20.501565 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"baremetal-kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dbaremetal-kube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.520270 master-0 kubenswrapper[15493]: I0216 17:02:20.520166 15493 request.go:700] Waited for 3.343332667s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-cert&limit=500&resourceVersion=0 Feb 16 17:02:20.521031 master-0 kubenswrapper[15493]: W0216 17:02:20.520968 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-autoscaler-operator-cert": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.521113 master-0 kubenswrapper[15493]: E0216 17:02:20.521030 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-autoscaler-operator-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.541400 master-0 kubenswrapper[15493]: W0216 17:02:20.541331 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-webhook-server-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.541400 master-0 kubenswrapper[15493]: E0216 17:02:20.541393 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-webhook-server-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.560584 master-0 kubenswrapper[15493]: W0216 17:02:20.560469 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-hk5sk&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.560584 master-0 kubenswrapper[15493]: E0216 17:02:20.560547 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-hk5sk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-hk5sk&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.581447 master-0 kubenswrapper[15493]: W0216 17:02:20.581335 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.581447 master-0 kubenswrapper[15493]: E0216 17:02:20.581437 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.601314 master-0 kubenswrapper[15493]: W0216 17:02:20.601169 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.601481 master-0 kubenswrapper[15493]: E0216 17:02:20.601328 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.621379 master-0 kubenswrapper[15493]: W0216 17:02:20.621121 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.621379 master-0 kubenswrapper[15493]: E0216 17:02:20.621225 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.641633 master-0 kubenswrapper[15493]: W0216 17:02:20.641520 15493 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.641633 master-0 kubenswrapper[15493]: E0216 17:02:20.641623 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.660837 master-0 kubenswrapper[15493]: W0216 17:02:20.660737 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.660837 master-0 kubenswrapper[15493]: E0216 17:02:20.660831 15493 
reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.681491 master-0 kubenswrapper[15493]: W0216 17:02:20.681363 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-7mlbn&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.681563 master-0 kubenswrapper[15493]: E0216 17:02:20.681504 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-7mlbn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-7mlbn&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.701683 master-0 kubenswrapper[15493]: W0216 17:02:20.701581 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dcluster-baremetal-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.701813 master-0 kubenswrapper[15493]: E0216 17:02:20.701688 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dcluster-baremetal-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.721754 master-0 kubenswrapper[15493]: W0216 17:02:20.721594 15493 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.721754 master-0 kubenswrapper[15493]: E0216 17:02:20.721752 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.741789 master-0 kubenswrapper[15493]: W0216 17:02:20.741645 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.742037 master-0 kubenswrapper[15493]: E0216 17:02:20.741795 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.761429 master-0 kubenswrapper[15493]: W0216 17:02:20.761319 15493 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy-cluster-autoscaler-operator&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.761606 master-0 kubenswrapper[15493]: E0216 17:02:20.761436 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy-cluster-autoscaler-operator\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy-cluster-autoscaler-operator&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.781199 master-0 kubenswrapper[15493]: W0216 17:02:20.781087 15493 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.781199 master-0 kubenswrapper[15493]: E0216 17:02:20.781189 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.801525 master-0 kubenswrapper[15493]: W0216 17:02:20.801370 15493 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.801525 master-0 kubenswrapper[15493]: E0216 17:02:20.801468 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" logger="UnhandledError" Feb 16 17:02:20.821443 master-0 kubenswrapper[15493]: W0216 17:02:20.821325 15493 reflector.go:561] object-"openshift-insights"/"openshift-insights-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.821443 master-0 kubenswrapper[15493]: E0216 17:02:20.821427 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"openshift-insights-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.840759 master-0 kubenswrapper[15493]: W0216 17:02:20.840651 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-kh5s4&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.840759 master-0 kubenswrapper[15493]: E0216 17:02:20.840748 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-kh5s4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-kh5s4&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.861121 master-0 kubenswrapper[15493]: W0216 17:02:20.861027 15493 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.861301 master-0 kubenswrapper[15493]: E0216 17:02:20.861130 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.880886 master-0 kubenswrapper[15493]: W0216 17:02:20.880703 15493 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.880886 master-0 kubenswrapper[15493]: E0216 17:02:20.880811 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.901731 master-0 kubenswrapper[15493]: W0216 17:02:20.901565 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-wnnb7&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.901830 master-0 kubenswrapper[15493]: E0216 17:02:20.901733 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wnnb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-wnnb7&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.921331 master-0 kubenswrapper[15493]: W0216 17:02:20.921157 15493 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.921331 master-0 kubenswrapper[15493]: E0216 17:02:20.921301 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.941726 master-0 kubenswrapper[15493]: W0216 17:02:20.941568 15493 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.941726 master-0 kubenswrapper[15493]: E0216 17:02:20.941720 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.961469 master-0 kubenswrapper[15493]: W0216 17:02:20.961366 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.961469 master-0 kubenswrapper[15493]: E0216 17:02:20.961470 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cloud-controller-manager-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:20.981186 master-0 kubenswrapper[15493]: W0216 17:02:20.981071 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:20.981352 master-0 kubenswrapper[15493]: E0216 17:02:20.981205 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"cluster-storage-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.001265 master-0 kubenswrapper[15493]: W0216 17:02:21.001105 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcluster-cloud-controller-manager-dockercfg-lc8g2&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.001265 master-0 kubenswrapper[15493]: E0216 17:02:21.001249 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cluster-cloud-controller-manager-dockercfg-lc8g2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcluster-cloud-controller-manager-dockercfg-lc8g2&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.020975 master-0 kubenswrapper[15493]: W0216 17:02:21.020772 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.020975 master-0 kubenswrapper[15493]: E0216 17:02:21.020874 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-cloud-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.041345 master-0 kubenswrapper[15493]: W0216 17:02:21.041246 15493 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-6858s": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-6858s&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.041345 master-0 kubenswrapper[15493]: E0216 17:02:21.041336 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-6858s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-6858s&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.061174 master-0 kubenswrapper[15493]: W0216 17:02:21.061055 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-q5h8t&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.061385 master-0 kubenswrapper[15493]: E0216 17:02:21.061191 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-q5h8t\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-q5h8t&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.081636 master-0 kubenswrapper[15493]: W0216 17:02:21.081525 15493 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.081737 master-0 kubenswrapper[15493]: E0216 17:02:21.081659 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.101494 master-0 kubenswrapper[15493]: W0216 17:02:21.101350 15493 reflector.go:561] object-"openshift-insights"/"service-ca-bundle": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.101494 master-0 kubenswrapper[15493]: E0216 17:02:21.101466 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.121354 master-0 kubenswrapper[15493]: W0216 17:02:21.121228 15493 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-5lx84&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.121495 master-0 kubenswrapper[15493]: E0216 17:02:21.121351 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-5lx84\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-5lx84&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.141533 master-0 kubenswrapper[15493]: W0216 17:02:21.141342 15493 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.141533 master-0 kubenswrapper[15493]: E0216 17:02:21.141456 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.161305 master-0 kubenswrapper[15493]: W0216 17:02:21.161155 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.161305 master-0 kubenswrapper[15493]: E0216 17:02:21.161270 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: 
connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.181236 master-0 kubenswrapper[15493]: W0216 17:02:21.181078 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-q2gzj&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.181236 master-0 kubenswrapper[15493]: E0216 17:02:21.181207 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-q2gzj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-q2gzj&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.201919 master-0 kubenswrapper[15493]: W0216 17:02:21.201689 15493 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.202234 master-0 kubenswrapper[15493]: E0216 17:02:21.201964 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.221016 master-0 kubenswrapper[15493]: W0216 17:02:21.220819 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.221016 master-0 kubenswrapper[15493]: E0216 17:02:21.220914 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.241239 master-0 kubenswrapper[15493]: W0216 17:02:21.241169 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.241239 master-0 kubenswrapper[15493]: E0216 17:02:21.241239 15493 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.301111 master-0 kubenswrapper[15493]: W0216 17:02:21.301008 15493 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.301338 master-0 kubenswrapper[15493]: E0216 17:02:21.301126 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.321594 master-0 kubenswrapper[15493]: W0216 17:02:21.321518 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcloud-controller-manager-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.321782 master-0 kubenswrapper[15493]: E0216 17:02:21.321611 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cloud-controller-manager-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcloud-controller-manager-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.341303 master-0 kubenswrapper[15493]: W0216 17:02:21.341216 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.341303 master-0 kubenswrapper[15493]: E0216 17:02:21.341276 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.361011 master-0 kubenswrapper[15493]: E0216 17:02:21.360653 
15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 17:02:21.361011 master-0 kubenswrapper[15493]: E0216 17:02:21.360956 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.756s"
Feb 16 17:02:21.361011 master-0 kubenswrapper[15493]: I0216 17:02:21.361000 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:02:21.361754 master-0 kubenswrapper[15493]: I0216 17:02:21.361706 15493 scope.go:117] "RemoveContainer" containerID="e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26"
Feb 16 17:02:21.373232 master-0 kubenswrapper[15493]: I0216 17:02:21.373157 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Feb 16 17:02:21.381972 master-0 kubenswrapper[15493]: W0216 17:02:21.381692 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:21.381972 master-0 kubenswrapper[15493]: E0216 17:02:21.381775 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:21.461726 master-0 kubenswrapper[15493]: W0216 17:02:21.461499 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-dockercfg-mzz6s&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:21.461726 master-0 kubenswrapper[15493]: E0216 17:02:21.461571 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-dockercfg-mzz6s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-dockercfg-mzz6s&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:21.482067 master-0 kubenswrapper[15493]: W0216 17:02:21.481956 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
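"Deleting a mirror pod" above concerns static pods: bootstrap-kube-scheduler-master-0 runs from an on-disk manifest, and the kubelet maintains a matching "mirror" pod object in the API purely so the static pod is visible there. Cleaning up a stale mirror is an ordinary pod DELETE against the API server, which is why it fails with the same connection refused while api-int is down. A sketch of the equivalent call, with the same assumed client setup as the earlier examples:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Equivalent to the logged request:
        // DELETE /api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0
        err = client.CoreV1().Pods("kube-system").Delete(
            context.TODO(), "bootstrap-kube-scheduler-master-0", metav1.DeleteOptions{})
        if err != nil {
            panic(err) // while the API server is down: connect: connection refused
        }
        fmt.Println("mirror pod deleted")
    }

The adjacent "Housekeeping took longer than expected" warning (6.756s against a 1s target) is consistent with the same storm: the sync loop stalling behind these blocked API calls.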
kubenswrapper[15493]: E0216 17:02:21.482074 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.501035 master-0 kubenswrapper[15493]: W0216 17:02:21.500915 15493 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-ztpz8&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.501171 master-0 kubenswrapper[15493]: E0216 17:02:21.501048 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-ztpz8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-ztpz8&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.540196 master-0 kubenswrapper[15493]: I0216 17:02:21.540005 15493 request.go:700] Waited for 3.285532558s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-r5p9m&limit=500&resourceVersion=0 Feb 16 17:02:21.540853 master-0 kubenswrapper[15493]: W0216 17:02:21.540755 15493 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-r5p9m&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.540963 master-0 kubenswrapper[15493]: E0216 17:02:21.540842 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-r5p9m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-r5p9m&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.561701 master-0 kubenswrapper[15493]: W0216 17:02:21.561593 15493 reflector.go:561] object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-qlqr4": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-qlqr4&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.561701 master-0 kubenswrapper[15493]: E0216 17:02:21.561693 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"installer-sa-dockercfg-qlqr4\": Failed to watch *v1.Secret: 
failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-qlqr4&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.581464 master-0 kubenswrapper[15493]: W0216 17:02:21.581375 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.581614 master-0 kubenswrapper[15493]: E0216 17:02:21.581491 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.601532 master-0 kubenswrapper[15493]: W0216 17:02:21.601458 15493 reflector.go:561] object-"openshift-kube-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.601677 master-0 kubenswrapper[15493]: E0216 17:02:21.601535 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.621571 master-0 kubenswrapper[15493]: W0216 17:02:21.621262 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.621571 master-0 kubenswrapper[15493]: E0216 17:02:21.621375 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.640756 master-0 kubenswrapper[15493]: W0216 17:02:21.640632 15493 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.640914 master-0 kubenswrapper[15493]: E0216 17:02:21.640756 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.661175 master-0 kubenswrapper[15493]: W0216 17:02:21.660988 15493 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-nslxl&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.661175 master-0 kubenswrapper[15493]: E0216 17:02:21.661141 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-nslxl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-nslxl&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.682359 master-0 kubenswrapper[15493]: W0216 17:02:21.682181 15493 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.682618 master-0 kubenswrapper[15493]: E0216 17:02:21.682371 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.701598 master-0 kubenswrapper[15493]: W0216 17:02:21.701466 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:21.701870 master-0 kubenswrapper[15493]: E0216 17:02:21.701598 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:21.900903 master-0 kubenswrapper[15493]: I0216 17:02:21.900834 15493 status_manager.go:851] "Failed to get status for pod" podUID="86c571b6-0f65-41f0-b1be-f63d7a974782" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:22.059633 master-0 kubenswrapper[15493]: I0216 17:02:22.059567 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:22.262149 master-0 kubenswrapper[15493]: E0216 17:02:22.262085 15493 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.262149 master-0 kubenswrapper[15493]: E0216 17:02:22.262138 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.262473 master-0 kubenswrapper[15493]: E0216 17:02:22.262203 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.262184003 +0000 UTC m=+23.412357093 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.267503 master-0 kubenswrapper[15493]: I0216 17:02:22.267430 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:22.282311 master-0 kubenswrapper[15493]: E0216 17:02:22.282240 15493 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.282311 master-0 kubenswrapper[15493]: E0216 17:02:22.282282 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.282553 master-0 kubenswrapper[15493]: E0216 17:02:22.282354 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.282333946 +0000 UTC m=+23.432507086 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.404220 master-0 kubenswrapper[15493]: E0216 17:02:22.404149 15493 projected.go:288] Couldn't get configMap openshift-cluster-version/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.404220 master-0 kubenswrapper[15493]: E0216 17:02:22.404209 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.404540 master-0 kubenswrapper[15493]: E0216 17:02:22.404292 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.404266863 +0000 UTC m=+23.554439953 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.421390 master-0 kubenswrapper[15493]: E0216 17:02:22.421338 15493 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.421486 master-0 kubenswrapper[15493]: E0216 17:02:22.421397 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.421524 master-0 kubenswrapper[15493]: E0216 17:02:22.421486 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.421465168 +0000 UTC m=+23.571638248 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.441519 master-0 kubenswrapper[15493]: E0216 17:02:22.441469 15493 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.441519 master-0 kubenswrapper[15493]: E0216 17:02:22.441517 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.441749 master-0 kubenswrapper[15493]: E0216 17:02:22.441593 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access podName:5d39ed24-4301-4cea-8a42-a08f4ba8b479 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:24.44157261 +0000 UTC m=+23.591745680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access") pod "installer-2-master-0" (UID: "5d39ed24-4301-4cea-8a42-a08f4ba8b479") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.509852 master-0 kubenswrapper[15493]: I0216 17:02:22.509796 15493 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f" exitCode=1 Feb 16 17:02:22.522133 master-0 kubenswrapper[15493]: E0216 17:02:22.522045 15493 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.522133 master-0 kubenswrapper[15493]: E0216 17:02:22.522082 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-2-master-0: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.522351 master-0 kubenswrapper[15493]: E0216 17:02:22.522150 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access podName:b1b4fccc-6bf6-47ac-8ae1-32cad23734da nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:24.522131502 +0000 UTC m=+23.672304572 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access") pod "installer-2-master-0" (UID: "b1b4fccc-6bf6-47ac-8ae1-32cad23734da") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:22.540078 master-0 kubenswrapper[15493]: I0216 17:02:22.540031 15493 request.go:700] Waited for 2.658418002s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token Feb 16 17:02:22.720894 master-0 kubenswrapper[15493]: E0216 17:02:22.720794 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.742023 master-0 kubenswrapper[15493]: E0216 17:02:22.741915 15493 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.760833 master-0 kubenswrapper[15493]: E0216 17:02:22.760765 15493 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.781103 master-0 kubenswrapper[15493]: E0216 17:02:22.780877 15493 projected.go:288] Couldn't get configMap openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.802306 master-0 kubenswrapper[15493]: E0216 17:02:22.802203 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.821584 master-0 kubenswrapper[15493]: E0216 17:02:22.821529 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.842477 master-0 kubenswrapper[15493]: E0216 17:02:22.842376 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.862664 master-0 kubenswrapper[15493]: E0216 17:02:22.862601 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.881279 master-0 kubenswrapper[15493]: E0216 17:02:22.881177 15493 projected.go:288] Couldn't get configMap openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.901336 master-0 kubenswrapper[15493]: W0216 17:02:22.901225 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 
17:02:22.901640 master-0 kubenswrapper[15493]: E0216 17:02:22.901610 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:22.920985 master-0 kubenswrapper[15493]: W0216 17:02:22.920794 15493 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:22.920985 master-0 kubenswrapper[15493]: E0216 17:02:22.920962 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:22.921582 master-0 kubenswrapper[15493]: E0216 17:02:22.921237 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.940766 master-0 kubenswrapper[15493]: E0216 17:02:22.940697 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:22.941540 master-0 kubenswrapper[15493]: W0216 17:02:22.941460 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:22.941651 master-0 kubenswrapper[15493]: E0216 17:02:22.941600 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:22.961460 master-0 kubenswrapper[15493]: W0216 17:02:22.961360 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:22.961645 master-0 kubenswrapper[15493]: E0216 17:02:22.961442 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out 
waiting for the condition Feb 16 17:02:22.961645 master-0 kubenswrapper[15493]: E0216 17:02:22.961473 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:22.981292 master-0 kubenswrapper[15493]: W0216 17:02:22.981201 15493 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:22.981435 master-0 kubenswrapper[15493]: E0216 17:02:22.981295 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:22.982493 master-0 kubenswrapper[15493]: E0216 17:02:22.982450 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.001068 master-0 kubenswrapper[15493]: W0216 17:02:23.000984 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.001184 master-0 kubenswrapper[15493]: E0216 17:02:23.001071 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.002082 master-0 kubenswrapper[15493]: E0216 17:02:23.002017 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.021169 master-0 kubenswrapper[15493]: E0216 17:02:23.021070 15493 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.022273 master-0 kubenswrapper[15493]: W0216 17:02:23.022133 15493 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.022435 master-0 kubenswrapper[15493]: E0216 17:02:23.022291 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.041128 master-0 kubenswrapper[15493]: E0216 17:02:23.040965 15493 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.041550 master-0 kubenswrapper[15493]: W0216 17:02:23.041419 15493 reflector.go:561] object-"openshift-multus"/"whereabouts-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dwhereabouts-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.041624 master-0 kubenswrapper[15493]: E0216 17:02:23.041571 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"whereabouts-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dwhereabouts-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.060837 master-0 kubenswrapper[15493]: E0216 17:02:23.060776 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.061772 master-0 kubenswrapper[15493]: W0216 17:02:23.061681 15493 reflector.go:561] object-"openshift-dns-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.061844 master-0 kubenswrapper[15493]: E0216 17:02:23.061787 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.081331 master-0 kubenswrapper[15493]: E0216 17:02:23.081263 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.081785 master-0 kubenswrapper[15493]: W0216 17:02:23.081680 15493 reflector.go:561] object-"openshift-monitoring"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.081867 master-0 kubenswrapper[15493]: E0216 17:02:23.081800 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.100884 master-0 kubenswrapper[15493]: E0216 17:02:23.100803 15493 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.101345 master-0 kubenswrapper[15493]: E0216 17:02:23.101299 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 17:02:23.101417 master-0 kubenswrapper[15493]: I0216 17:02:23.101342 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:23.120780 master-0 kubenswrapper[15493]: E0216 17:02:23.120723 15493 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.121328 master-0 kubenswrapper[15493]: W0216 17:02:23.121253 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.121402 master-0 kubenswrapper[15493]: E0216 17:02:23.121338 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.141075 master-0 kubenswrapper[15493]: E0216 17:02:23.141014 15493 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.141521 master-0 kubenswrapper[15493]: W0216 17:02:23.141431 15493 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.141574 master-0 kubenswrapper[15493]: E0216 17:02:23.141541 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: 
failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.161177 master-0 kubenswrapper[15493]: E0216 17:02:23.161096 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.161733 master-0 kubenswrapper[15493]: W0216 17:02:23.161650 15493 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.161782 master-0 kubenswrapper[15493]: E0216 17:02:23.161751 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.180961 master-0 kubenswrapper[15493]: W0216 17:02:23.180883 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.181071 master-0 kubenswrapper[15493]: E0216 17:02:23.180962 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.181071 master-0 kubenswrapper[15493]: E0216 17:02:23.181038 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.200983 master-0 kubenswrapper[15493]: W0216 17:02:23.200916 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.201064 master-0 kubenswrapper[15493]: E0216 17:02:23.200978 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Feb 16 17:02:23.201199 master-0 kubenswrapper[15493]: E0216 17:02:23.201156 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.202620 master-0 kubenswrapper[15493]: I0216 17:02:23.202559 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:23.203773 master-0 kubenswrapper[15493]: I0216 17:02:23.203727 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:23.220862 master-0 kubenswrapper[15493]: E0216 17:02:23.220817 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.221212 master-0 kubenswrapper[15493]: W0216 17:02:23.221127 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.221253 master-0 kubenswrapper[15493]: E0216 17:02:23.221228 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.241498 master-0 kubenswrapper[15493]: W0216 17:02:23.241385 15493 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.241498 master-0 kubenswrapper[15493]: E0216 17:02:23.241488 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.242464 master-0 kubenswrapper[15493]: E0216 17:02:23.242394 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.261104 master-0 kubenswrapper[15493]: E0216 
17:02:23.261042 15493 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.261333 master-0 kubenswrapper[15493]: W0216 17:02:23.261228 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.261427 master-0 kubenswrapper[15493]: E0216 17:02:23.261361 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.281409 master-0 kubenswrapper[15493]: E0216 17:02:23.281304 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.282436 master-0 kubenswrapper[15493]: W0216 17:02:23.282323 15493 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.282544 master-0 kubenswrapper[15493]: E0216 17:02:23.282451 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.302009 master-0 kubenswrapper[15493]: E0216 17:02:23.301422 15493 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.302009 master-0 kubenswrapper[15493]: W0216 17:02:23.301593 15493 reflector.go:561] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.302009 master-0 kubenswrapper[15493]: E0216 17:02:23.301675 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
logger="UnhandledError" Feb 16 17:02:23.321214 master-0 kubenswrapper[15493]: E0216 17:02:23.321157 15493 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.321866 master-0 kubenswrapper[15493]: W0216 17:02:23.321784 15493 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.322158 master-0 kubenswrapper[15493]: E0216 17:02:23.322104 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.341063 master-0 kubenswrapper[15493]: W0216 17:02:23.340990 15493 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.341303 master-0 kubenswrapper[15493]: E0216 17:02:23.341268 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.341600 master-0 kubenswrapper[15493]: E0216 17:02:23.341565 15493 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.361357 master-0 kubenswrapper[15493]: E0216 17:02:23.361288 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.361560 master-0 kubenswrapper[15493]: I0216 17:02:23.361399 15493 status_manager.go:851] "Failed to get status for pod" podUID="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" pod="openshift-marketplace/community-operators-n7kjr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-n7kjr\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:23.381468 master-0 kubenswrapper[15493]: E0216 17:02:23.381397 15493 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.381651 master-0 kubenswrapper[15493]: W0216 17:02:23.381563 15493 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list 
*v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.381734 master-0 kubenswrapper[15493]: E0216 17:02:23.381674 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.400962 master-0 kubenswrapper[15493]: E0216 17:02:23.400861 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.401373 master-0 kubenswrapper[15493]: W0216 17:02:23.401254 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.401483 master-0 kubenswrapper[15493]: E0216 17:02:23.401382 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.421831 master-0 kubenswrapper[15493]: E0216 17:02:23.421759 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.422068 master-0 kubenswrapper[15493]: W0216 17:02:23.421845 15493 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.422068 master-0 kubenswrapper[15493]: E0216 17:02:23.421982 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.441200 master-0 kubenswrapper[15493]: W0216 17:02:23.441075 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 
17:02:23.441200 master-0 kubenswrapper[15493]: E0216 17:02:23.441184 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.441645 master-0 kubenswrapper[15493]: E0216 17:02:23.441536 15493 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.461268 master-0 kubenswrapper[15493]: E0216 17:02:23.461167 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.461591 master-0 kubenswrapper[15493]: E0216 17:02:23.461315 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.461280216 +0000 UTC m=+26.611453326 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.462417 master-0 kubenswrapper[15493]: E0216 17:02:23.462361 15493 projected.go:288] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.481754 master-0 kubenswrapper[15493]: E0216 17:02:23.481666 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.481997 master-0 kubenswrapper[15493]: W0216 17:02:23.481746 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.481997 master-0 kubenswrapper[15493]: E0216 17:02:23.481857 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.501281 master-0 kubenswrapper[15493]: 
E0216 17:02:23.501212 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.501639 master-0 kubenswrapper[15493]: W0216 17:02:23.501534 15493 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.501734 master-0 kubenswrapper[15493]: E0216 17:02:23.501650 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.521087 master-0 kubenswrapper[15493]: E0216 17:02:23.520984 15493 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.521825 master-0 kubenswrapper[15493]: W0216 17:02:23.521704 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.522022 master-0 kubenswrapper[15493]: E0216 17:02:23.521819 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.540219 master-0 kubenswrapper[15493]: I0216 17:02:23.540078 15493 request.go:700] Waited for 1.338981905s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0 Feb 16 17:02:23.540692 master-0 kubenswrapper[15493]: E0216 17:02:23.540644 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.541435 master-0 kubenswrapper[15493]: W0216 17:02:23.541316 15493 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.541509 master-0 kubenswrapper[15493]: E0216 17:02:23.541454 15493 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.560957 master-0 kubenswrapper[15493]: E0216 17:02:23.560768 15493 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.561188 master-0 kubenswrapper[15493]: W0216 17:02:23.561109 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.561251 master-0 kubenswrapper[15493]: E0216 17:02:23.561211 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.581709 master-0 kubenswrapper[15493]: W0216 17:02:23.581560 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.581709 master-0 kubenswrapper[15493]: E0216 17:02:23.581666 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.582090 master-0 kubenswrapper[15493]: E0216 17:02:23.581730 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.601190 master-0 kubenswrapper[15493]: W0216 17:02:23.601078 15493 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.601396 master-0 kubenswrapper[15493]: E0216 17:02:23.601197 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial 
tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.602188 master-0 kubenswrapper[15493]: E0216 17:02:23.602151 15493 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.612882 master-0 kubenswrapper[15493]: E0216 17:02:23.612634 15493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1894c8ca4adada44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:02:04.628253252 +0000 UTC m=+3.778426332,LastTimestamp:2026-02-16 17:02:04.628253252 +0000 UTC m=+3.778426332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 17:02:23.621648 master-0 kubenswrapper[15493]: E0216 17:02:23.621579 15493 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.621807 master-0 kubenswrapper[15493]: W0216 17:02:23.621652 15493 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.621807 master-0 kubenswrapper[15493]: E0216 17:02:23.621755 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.641410 master-0 kubenswrapper[15493]: E0216 17:02:23.641346 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.641684 master-0 kubenswrapper[15493]: E0216 17:02:23.641642 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.641776 master-0 kubenswrapper[15493]: E0216 17:02:23.641746 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:27.641718651 +0000 UTC m=+26.791891751 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.661424 master-0 kubenswrapper[15493]: E0216 17:02:23.661145 15493 projected.go:288] Couldn't get configMap openshift-monitoring/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.661424 master-0 kubenswrapper[15493]: W0216 17:02:23.661249 15493 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.661424 master-0 kubenswrapper[15493]: E0216 17:02:23.661354 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.681839 master-0 kubenswrapper[15493]: E0216 17:02:23.681631 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.682104 master-0 kubenswrapper[15493]: W0216 17:02:23.681984 15493 reflector.go:561] object-"openshift-dns-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.682104 master-0 kubenswrapper[15493]: E0216 17:02:23.682068 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.701364 master-0 kubenswrapper[15493]: W0216 17:02:23.701223 15493 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.701364 master-0 kubenswrapper[15493]: E0216 17:02:23.701321 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.702348 master-0 kubenswrapper[15493]: E0216 17:02:23.702299 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.721069 master-0 kubenswrapper[15493]: E0216 17:02:23.720975 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.721069 master-0 kubenswrapper[15493]: E0216 17:02:23.721040 15493 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.721679 master-0 kubenswrapper[15493]: E0216 17:02:23.721130 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.721105882 +0000 UTC m=+24.871278952 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.721679 master-0 kubenswrapper[15493]: E0216 17:02:23.721144 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.722086 master-0 kubenswrapper[15493]: W0216 17:02:23.721984 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.722155 master-0 kubenswrapper[15493]: E0216 17:02:23.722121 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.740729 master-0 kubenswrapper[15493]: E0216 17:02:23.740647 15493 
projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.741068 master-0 kubenswrapper[15493]: W0216 17:02:23.741000 15493 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.741132 master-0 kubenswrapper[15493]: E0216 17:02:23.741076 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.742111 master-0 kubenswrapper[15493]: E0216 17:02:23.742079 15493 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.742171 master-0 kubenswrapper[15493]: E0216 17:02:23.742112 15493 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.742202 master-0 kubenswrapper[15493]: E0216 17:02:23.742189 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.742168129 +0000 UTC m=+24.892341199 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.760941 master-0 kubenswrapper[15493]: E0216 17:02:23.760872 15493 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.761104 master-0 kubenswrapper[15493]: E0216 17:02:23.761055 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.761181 master-0 kubenswrapper[15493]: E0216 17:02:23.761132 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.76110681 +0000 UTC m=+24.911279880 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.761555 master-0 kubenswrapper[15493]: W0216 17:02:23.761497 15493 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.761619 master-0 kubenswrapper[15493]: E0216 17:02:23.761560 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.762658 master-0 kubenswrapper[15493]: E0216 17:02:23.762624 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.781164 master-0 kubenswrapper[15493]: W0216 17:02:23.780990 15493 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to 
list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.781481 master-0 kubenswrapper[15493]: E0216 17:02:23.781162 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.781481 master-0 kubenswrapper[15493]: E0216 17:02:23.781309 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.781481 master-0 kubenswrapper[15493]: E0216 17:02:23.781343 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zt8mt for pod openshift-network-operator/network-operator-6fcf4c966-6bmf9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.781481 master-0 kubenswrapper[15493]: E0216 17:02:23.781444 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.781407306 +0000 UTC m=+24.931580416 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zt8mt" (UniqueName: "kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.782278 master-0 kubenswrapper[15493]: E0216 17:02:23.782230 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.801106 master-0 kubenswrapper[15493]: E0216 17:02:23.801008 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.801762 master-0 kubenswrapper[15493]: W0216 17:02:23.801674 15493 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.801909 master-0 kubenswrapper[15493]: E0216 17:02:23.801766 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.802897 master-0 kubenswrapper[15493]: E0216 17:02:23.802810 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.802897 master-0 kubenswrapper[15493]: E0216 17:02:23.802879 15493 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.803107 master-0 kubenswrapper[15493]: E0216 17:02:23.803011 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.802984007 +0000 UTC m=+24.953157117 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.821235 master-0 kubenswrapper[15493]: W0216 17:02:23.821116 15493 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.821235 master-0 kubenswrapper[15493]: E0216 17:02:23.821190 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.822236 master-0 kubenswrapper[15493]: E0216 17:02:23.822185 15493 projected.go:288] Couldn't get configMap openshift-network-node-identity/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.822318 master-0 kubenswrapper[15493]: E0216 17:02:23.822268 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.822318 master-0 kubenswrapper[15493]: E0216 17:02:23.822312 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8r28x for pod openshift-multus/multus-6r7wj: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.822581 master-0 kubenswrapper[15493]: E0216 17:02:23.822386 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.82235925 +0000 UTC m=+24.972532320 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8r28x" (UniqueName: "kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.840951 master-0 kubenswrapper[15493]: W0216 17:02:23.840841 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.841143 master-0 kubenswrapper[15493]: E0216 17:02:23.840965 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.841186 master-0 kubenswrapper[15493]: E0216 17:02:23.841152 15493 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.843220 master-0 kubenswrapper[15493]: E0216 17:02:23.843188 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.843303 master-0 kubenswrapper[15493]: E0216 17:02:23.843218 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.843303 master-0 kubenswrapper[15493]: E0216 17:02:23.843287 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.843265603 +0000 UTC m=+24.993438683 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.861356 master-0 kubenswrapper[15493]: W0216 17:02:23.861254 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.861356 master-0 kubenswrapper[15493]: E0216 17:02:23.861351 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.861356 master-0 kubenswrapper[15493]: E0216 17:02:23.861293 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.863486 master-0 kubenswrapper[15493]: E0216 17:02:23.863443 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.863486 master-0 kubenswrapper[15493]: E0216 17:02:23.863481 15493 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.863592 master-0 kubenswrapper[15493]: E0216 17:02:23.863557 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.86353633 +0000 UTC m=+25.013709400 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.881356 master-0 kubenswrapper[15493]: E0216 17:02:23.881295 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.881356 master-0 kubenswrapper[15493]: E0216 17:02:23.881344 15493 projected.go:194] Error preparing data for projected volume kube-api-access-q46jg for pod openshift-network-operator/iptables-alerter-czzz2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.881472 master-0 kubenswrapper[15493]: E0216 17:02:23.881417 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.881394942 +0000 UTC m=+25.031568012 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-q46jg" (UniqueName: "kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.881758 master-0 kubenswrapper[15493]: W0216 17:02:23.881638 15493 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.881758 master-0 kubenswrapper[15493]: E0216 17:02:23.881722 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.881758 master-0 kubenswrapper[15493]: E0216 17:02:23.881752 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.901435 master-0 kubenswrapper[15493]: W0216 17:02:23.901314 15493 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to 
list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.901435 master-0 kubenswrapper[15493]: E0216 17:02:23.901428 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.921562 master-0 kubenswrapper[15493]: E0216 17:02:23.921483 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.921562 master-0 kubenswrapper[15493]: E0216 17:02:23.921530 15493 projected.go:194] Error preparing data for projected volume kube-api-access-sx92x for pod openshift-machine-config-operator/machine-config-daemon-98q6v: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.921562 master-0 kubenswrapper[15493]: W0216 17:02:23.921507 15493 reflector.go:561] object-"openshift-monitoring"/"cluster-monitoring-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dcluster-monitoring-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.921837 master-0 kubenswrapper[15493]: E0216 17:02:23.921576 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"cluster-monitoring-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/secrets?fieldSelector=metadata.name%3Dcluster-monitoring-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.921837 master-0 kubenswrapper[15493]: E0216 17:02:23.921608 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.921588356 +0000 UTC m=+25.071761436 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sx92x" (UniqueName: "kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.940863 master-0 kubenswrapper[15493]: E0216 17:02:23.940790 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.940863 master-0 kubenswrapper[15493]: E0216 17:02:23.940832 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xmk2b for pod openshift-multus/multus-admission-controller-7c64d55f8-4jz2t: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.940863 master-0 kubenswrapper[15493]: W0216 17:02:23.940807 15493 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.940863 master-0 kubenswrapper[15493]: E0216 17:02:23.940861 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.941216 master-0 kubenswrapper[15493]: E0216 17:02:23.940907 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.940886097 +0000 UTC m=+25.091059247 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xmk2b" (UniqueName: "kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.961126 master-0 kubenswrapper[15493]: W0216 17:02:23.961025 15493 reflector.go:561] object-"openshift-dns"/"dns-default-metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.961126 master-0 kubenswrapper[15493]: E0216 17:02:23.961115 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default-metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.962296 master-0 kubenswrapper[15493]: E0216 17:02:23.962253 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.962296 master-0 kubenswrapper[15493]: E0216 17:02:23.962282 15493 projected.go:194] Error preparing data for projected volume kube-api-access-fkwxl for pod openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-control-plane/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.962399 master-0 kubenswrapper[15493]: E0216 17:02:23.962336 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.962323084 +0000 UTC m=+25.112496154 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fkwxl" (UniqueName: "kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-control-plane/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.980764 master-0 kubenswrapper[15493]: W0216 17:02:23.980662 15493 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:23.980884 master-0 kubenswrapper[15493]: E0216 17:02:23.980760 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:23.982796 master-0 kubenswrapper[15493]: E0216 17:02:23.982754 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:23.982796 master-0 kubenswrapper[15493]: E0216 17:02:23.982787 15493 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:23.982970 master-0 kubenswrapper[15493]: E0216 17:02:23.982844 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:25.982830937 +0000 UTC m=+25.133004007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.000858 master-0 kubenswrapper[15493]: W0216 17:02:24.000779 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.000975 master-0 kubenswrapper[15493]: E0216 17:02:24.000861 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.002980 master-0 kubenswrapper[15493]: E0216 17:02:24.002943 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.002980 master-0 kubenswrapper[15493]: E0216 17:02:24.002977 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.003074 master-0 kubenswrapper[15493]: E0216 17:02:24.003023 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.003011721 +0000 UTC m=+25.153184791 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.020854 master-0 kubenswrapper[15493]: W0216 17:02:24.020780 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.020854 master-0 kubenswrapper[15493]: E0216 17:02:24.020844 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.021876 master-0 kubenswrapper[15493]: E0216 17:02:24.021837 15493 projected.go:288] Couldn't get configMap openshift-dns/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.021876 master-0 kubenswrapper[15493]: E0216 17:02:24.021864 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zl5w2 for pod openshift-dns/dns-default-qcgxx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.022008 master-0 kubenswrapper[15493]: E0216 17:02:24.021940 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2 podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.021903051 +0000 UTC m=+25.172076121 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zl5w2" (UniqueName: "kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.041093 master-0 kubenswrapper[15493]: W0216 17:02:24.041005 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.041093 master-0 kubenswrapper[15493]: E0216 17:02:24.041082 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.041093 master-0 kubenswrapper[15493]: E0216 17:02:24.041090 15493 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.041391 master-0 kubenswrapper[15493]: E0216 17:02:24.041114 15493 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.041391 master-0 kubenswrapper[15493]: E0216 17:02:24.041193 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.0411466 +0000 UTC m=+25.191319670 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.060678 master-0 kubenswrapper[15493]: W0216 17:02:24.060577 15493 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.060807 master-0 kubenswrapper[15493]: E0216 17:02:24.060682 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.061828 master-0 kubenswrapper[15493]: E0216 17:02:24.061790 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.061828 master-0 kubenswrapper[15493]: E0216 17:02:24.061815 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.062026 master-0 kubenswrapper[15493]: E0216 17:02:24.061867 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.061854998 +0000 UTC m=+25.212028068 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.080709 master-0 kubenswrapper[15493]: W0216 17:02:24.080579 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.080709 master-0 kubenswrapper[15493]: E0216 17:02:24.080681 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.081817 master-0 kubenswrapper[15493]: E0216 17:02:24.081780 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.081817 master-0 kubenswrapper[15493]: E0216 17:02:24.081803 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.081991 master-0 kubenswrapper[15493]: E0216 17:02:24.081861 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.081844267 +0000 UTC m=+25.232017337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.100943 master-0 kubenswrapper[15493]: W0216 17:02:24.100837 15493 reflector.go:561] object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.100943 master-0 kubenswrapper[15493]: E0216 17:02:24.100898 15493 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.101228 master-0 kubenswrapper[15493]: E0216 17:02:24.100974 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.101228 master-0 kubenswrapper[15493]: E0216 17:02:24.100978 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.101228 master-0 kubenswrapper[15493]: E0216 17:02:24.101067 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.101042425 +0000 UTC m=+25.251215525 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.121089 master-0 kubenswrapper[15493]: E0216 17:02:24.121022 15493 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.121089 master-0 kubenswrapper[15493]: E0216 17:02:24.121090 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.121303 master-0 kubenswrapper[15493]: E0216 17:02:24.121212 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.121179278 +0000 UTC m=+25.271352388 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.121351 master-0 kubenswrapper[15493]: W0216 17:02:24.121302 15493 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.121398 master-0 kubenswrapper[15493]: E0216 17:02:24.121368 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.141178 master-0 kubenswrapper[15493]: E0216 17:02:24.141110 15493 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.141583 master-0 kubenswrapper[15493]: E0216 17:02:24.141517 15493 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: 
[failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.141648 master-0 kubenswrapper[15493]: E0216 17:02:24.141638 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.141618119 +0000 UTC m=+25.291791189 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.142120 master-0 kubenswrapper[15493]: W0216 17:02:24.142001 15493 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.142216 master-0 kubenswrapper[15493]: E0216 17:02:24.142119 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.161299 master-0 kubenswrapper[15493]: E0216 17:02:24.161216 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.161299 master-0 kubenswrapper[15493]: W0216 17:02:24.161179 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.161299 master-0 kubenswrapper[15493]: E0216 17:02:24.161283 15493 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.161779 master-0 kubenswrapper[15493]: E0216 17:02:24.161327 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.161779 master-0 kubenswrapper[15493]: E0216 17:02:24.161403 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.161375382 +0000 UTC m=+25.311548482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.181197 master-0 kubenswrapper[15493]: E0216 17:02:24.181126 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.181197 master-0 kubenswrapper[15493]: E0216 17:02:24.181166 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.181454 master-0 kubenswrapper[15493]: E0216 17:02:24.181225 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.181207567 +0000 UTC m=+25.331380637 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.181454 master-0 kubenswrapper[15493]: W0216 17:02:24.181254 15493 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.181454 master-0 kubenswrapper[15493]: E0216 17:02:24.181358 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.201320 master-0 kubenswrapper[15493]: E0216 17:02:24.201247 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.201320 master-0 kubenswrapper[15493]: E0216 17:02:24.201297 15493 projected.go:194] Error preparing data for projected volume kube-api-access-bnnc5 for pod openshift-multus/network-metrics-daemon-279g6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.201601 master-0 kubenswrapper[15493]: E0216 17:02:24.201399 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5 podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.20135034 +0000 UTC m=+25.351523450 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bnnc5" (UniqueName: "kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.202114 master-0 kubenswrapper[15493]: W0216 17:02:24.201998 15493 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.202212 master-0 kubenswrapper[15493]: E0216 17:02:24.202132 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.203232 master-0 kubenswrapper[15493]: E0216 17:02:24.203188 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.204191 master-0 kubenswrapper[15493]: E0216 17:02:24.204142 15493 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.221001 master-0 kubenswrapper[15493]: E0216 17:02:24.220899 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.221001 master-0 kubenswrapper[15493]: E0216 17:02:24.220973 15493 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.221290 master-0 kubenswrapper[15493]: E0216 17:02:24.221052 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.22102892 +0000 UTC m=+25.371202030 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.221413 master-0 kubenswrapper[15493]: W0216 17:02:24.221305 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.221483 master-0 kubenswrapper[15493]: E0216 17:02:24.221429 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.241668 master-0 kubenswrapper[15493]: W0216 17:02:24.241535 15493 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.241668 master-0 kubenswrapper[15493]: E0216 17:02:24.241619 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.242781 master-0 kubenswrapper[15493]: E0216 17:02:24.242710 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.242781 master-0 kubenswrapper[15493]: E0216 17:02:24.242765 15493 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.243095 master-0 kubenswrapper[15493]: E0216 17:02:24.242874 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr 
podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.242846808 +0000 UTC m=+25.393019908 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.261243 master-0 kubenswrapper[15493]: E0216 17:02:24.261183 15493 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.261243 master-0 kubenswrapper[15493]: E0216 17:02:24.261221 15493 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.261423 master-0 kubenswrapper[15493]: E0216 17:02:24.261362 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.261338817 +0000 UTC m=+25.411511937 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.261820 master-0 kubenswrapper[15493]: W0216 17:02:24.261745 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.261872 master-0 kubenswrapper[15493]: E0216 17:02:24.261831 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.281364 master-0 kubenswrapper[15493]: E0216 17:02:24.281313 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/bootstrap-kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:02:24.281557 master-0 kubenswrapper[15493]: E0216 17:02:24.281524 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.92s" Feb 16 17:02:24.281623 master-0 kubenswrapper[15493]: I0216 17:02:24.281559 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:24.281623 master-0 kubenswrapper[15493]: I0216 17:02:24.281584 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:24.281719 master-0 kubenswrapper[15493]: I0216 17:02:24.281627 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:24.281719 master-0 kubenswrapper[15493]: I0216 17:02:24.281700 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:24.281845 master-0 kubenswrapper[15493]: E0216 17:02:24.281706 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.281845 master-0 kubenswrapper[15493]: I0216 17:02:24.281740 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:24.281845 master-0 kubenswrapper[15493]: E0216 17:02:24.281759 15493 
projected.go:194] Error preparing data for projected volume kube-api-access-6ftld for pod openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.281845 master-0 kubenswrapper[15493]: I0216 17:02:24.281776 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"836a8b0540247df6e45c7363dec062ae3f1c759c61215fa36d1a8c35a0e755fb"} Feb 16 17:02:24.282128 master-0 kubenswrapper[15493]: I0216 17:02:24.281854 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:24.282128 master-0 kubenswrapper[15493]: E0216 17:02:24.281870 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.28183446 +0000 UTC m=+25.432007630 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6ftld" (UniqueName: "kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.283722 master-0 kubenswrapper[15493]: I0216 17:02:24.283661 15493 scope.go:117] "RemoveContainer" containerID="9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f" Feb 16 17:02:24.284146 master-0 kubenswrapper[15493]: E0216 17:02:24.284089 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:02:24.292999 master-0 kubenswrapper[15493]: I0216 17:02:24.292903 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:24.301350 master-0 kubenswrapper[15493]: W0216 17:02:24.301247 15493 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.301588 master-0 kubenswrapper[15493]: E0216 17:02:24.301357 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.301672 master-0 kubenswrapper[15493]: E0216 17:02:24.301585 15493 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.301672 master-0 kubenswrapper[15493]: E0216 17:02:24.301614 15493 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.301796 master-0 kubenswrapper[15493]: E0216 17:02:24.301705 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.301680885 +0000 UTC m=+25.451854005 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.320873 master-0 kubenswrapper[15493]: W0216 17:02:24.320733 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.321085 master-0 kubenswrapper[15493]: E0216 17:02:24.320873 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.321817 master-0 kubenswrapper[15493]: E0216 17:02:24.321719 15493 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.321817 master-0 kubenswrapper[15493]: E0216 17:02:24.321754 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [failed to fetch token: Post 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.322108 master-0 kubenswrapper[15493]: E0216 17:02:24.321830 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.321805858 +0000 UTC m=+25.471978938 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.340280 master-0 kubenswrapper[15493]: I0216 17:02:24.340064 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:24.340955 master-0 kubenswrapper[15493]: I0216 17:02:24.340849 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:24.341168 master-0 kubenswrapper[15493]: W0216 17:02:24.340872 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dperformance-addon-operator-webhook-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.341478 master-0 kubenswrapper[15493]: E0216 17:02:24.341167 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"performance-addon-operator-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dperformance-addon-operator-webhook-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.341996 master-0 kubenswrapper[15493]: E0216 17:02:24.341882 15493 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.341996 master-0 kubenswrapper[15493]: E0216 17:02:24.341917 15493 projected.go:194] Error preparing 
data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/serviceaccounts/cluster-olm-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.342230 master-0 kubenswrapper[15493]: E0216 17:02:24.342152 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.342120165 +0000 UTC m=+25.492293245 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/serviceaccounts/cluster-olm-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.361516 master-0 kubenswrapper[15493]: E0216 17:02:24.361462 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.361516 master-0 kubenswrapper[15493]: E0216 17:02:24.361494 15493 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.361516 master-0 kubenswrapper[15493]: W0216 17:02:24.361437 15493 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.361842 master-0 kubenswrapper[15493]: E0216 17:02:24.361546 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.361532299 +0000 UTC m=+25.511705379 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.361842 master-0 kubenswrapper[15493]: E0216 17:02:24.361553 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.361842 master-0 kubenswrapper[15493]: I0216 17:02:24.361577 15493 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:02:24.364531 master-0 kubenswrapper[15493]: I0216 17:02:24.364469 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:02:24.364647 master-0 kubenswrapper[15493]: I0216 17:02:24.364580 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:02:24.364647 master-0 kubenswrapper[15493]: I0216 17:02:24.364602 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:02:24.365246 master-0 kubenswrapper[15493]: I0216 17:02:24.365209 15493 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:02:24.381523 master-0 kubenswrapper[15493]: W0216 17:02:24.381384 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.381523 master-0 kubenswrapper[15493]: E0216 17:02:24.381513 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.382231 master-0 kubenswrapper[15493]: E0216 17:02:24.381586 15493 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.382318 master-0 kubenswrapper[15493]: E0216 17:02:24.382231 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 
192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.382381 master-0 kubenswrapper[15493]: E0216 17:02:24.382331 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.382304959 +0000 UTC m=+25.532478069 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.401124 master-0 kubenswrapper[15493]: W0216 17:02:24.400972 15493 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.401314 master-0 kubenswrapper[15493]: E0216 17:02:24.401133 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.401314 master-0 kubenswrapper[15493]: E0216 17:02:24.401282 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.401391 master-0 kubenswrapper[15493]: E0216 17:02:24.401331 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2gq8x for pod openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.401481 master-0 kubenswrapper[15493]: E0216 17:02:24.401440 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.401412744 +0000 UTC m=+25.551585864 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2gq8x" (UniqueName: "kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.422665 master-0 kubenswrapper[15493]: E0216 17:02:24.422600 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.422665 master-0 kubenswrapper[15493]: E0216 17:02:24.422648 15493 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.422900 master-0 kubenswrapper[15493]: E0216 17:02:24.422742 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.422719378 +0000 UTC m=+25.572892458 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.423337 master-0 kubenswrapper[15493]: W0216 17:02:24.423182 15493 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.423446 master-0 kubenswrapper[15493]: E0216 17:02:24.423353 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.441975 master-0 kubenswrapper[15493]: E0216 17:02:24.441868 15493 projected.go:288] Couldn't get configMap openshift-dns/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 
16 17:02:24.441975 master-0 kubenswrapper[15493]: E0216 17:02:24.441930 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8m29g for pod openshift-dns/node-resolver-vfxj4: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/node-resolver/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.442166 master-0 kubenswrapper[15493]: E0216 17:02:24.442066 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g podName:a6fe41b0-1a42-4f07-8220-d9aaa50788ad nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.442035669 +0000 UTC m=+25.592208739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8m29g" (UniqueName: "kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g") pod "node-resolver-vfxj4" (UID: "a6fe41b0-1a42-4f07-8220-d9aaa50788ad") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/node-resolver/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.442412 master-0 kubenswrapper[15493]: W0216 17:02:24.442303 15493 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.442510 master-0 kubenswrapper[15493]: E0216 17:02:24.442425 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.445044 master-0 kubenswrapper[15493]: I0216 17:02:24.444995 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:24.445358 master-0 kubenswrapper[15493]: I0216 17:02:24.445315 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:02:24.447591 master-0 kubenswrapper[15493]: I0216 17:02:24.447543 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " 
pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:24.461805 master-0 kubenswrapper[15493]: W0216 17:02:24.461704 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.461961 master-0 kubenswrapper[15493]: E0216 17:02:24.461802 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.462806 master-0 kubenswrapper[15493]: E0216 17:02:24.462755 15493 projected.go:288] Couldn't get configMap openshift-cloud-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.462806 master-0 kubenswrapper[15493]: E0216 17:02:24.462804 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r87zw for pod openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/serviceaccounts/cluster-cloud-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.463122 master-0 kubenswrapper[15493]: E0216 17:02:24.462893 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.462867691 +0000 UTC m=+25.613040781 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r87zw" (UniqueName: "kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/serviceaccounts/cluster-cloud-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.481700 master-0 kubenswrapper[15493]: W0216 17:02:24.481588 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dnode-tuning-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.481888 master-0 kubenswrapper[15493]: E0216 17:02:24.481696 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"node-tuning-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/secrets?fieldSelector=metadata.name%3Dnode-tuning-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.482058 master-0 kubenswrapper[15493]: E0216 17:02:24.481899 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.482058 master-0 kubenswrapper[15493]: E0216 17:02:24.481939 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.482058 master-0 kubenswrapper[15493]: E0216 17:02:24.482024 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.482000367 +0000 UTC m=+25.632173647 (durationBeforeRetry 2s). 
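The reflector.go:561 warnings paired with the "Unhandled Error" lines are kubelet's informers performing an initial LIST, filtered to a single object by metadata.name with limit=500 and resourceVersion=0, before they can start a WATCH on a Secret or ConfigMap a pod consumes. A sketch of the equivalent request for one of the objects named above (same assumed kubeconfig path as before):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    // Mirrors the logged GET .../secrets?fieldSelector=metadata.name%3Dnode-tuning-operator-tls&limit=500&resourceVersion=0
    secrets, err := client.CoreV1().Secrets("openshift-cluster-node-tuning-operator").List(context.TODO(),
        metav1.ListOptions{
            FieldSelector:   "metadata.name=node-tuning-operator-tls",
            Limit:           500,
            ResourceVersion: "0", // allow serving from the API server cache
        })
    if err != nil {
        fmt.Println("list failed:", err) // connection refused while the API server is down
        return
    }
    fmt.Println("items:", len(secrets.Items))
}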
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.501339 master-0 kubenswrapper[15493]: I0216 17:02:24.501253 15493 status_manager.go:851] "Failed to get status for pod" podUID="80d3b238-70c3-4e71-96a1-99405352033f" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-74b6595c6d-pfzq2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:24.501480 master-0 kubenswrapper[15493]: E0216 17:02:24.501394 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.501480 master-0 kubenswrapper[15493]: E0216 17:02:24.501426 15493 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.501609 master-0 kubenswrapper[15493]: E0216 17:02:24.501534 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.501504143 +0000 UTC m=+25.651677273 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.520813 master-0 kubenswrapper[15493]: W0216 17:02:24.520718 15493 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.520813 master-0 kubenswrapper[15493]: E0216 17:02:24.520803 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.521787 master-0 kubenswrapper[15493]: E0216 17:02:24.521737 15493 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.521787 master-0 kubenswrapper[15493]: E0216 17:02:24.521782 15493 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.521900 master-0 kubenswrapper[15493]: E0216 17:02:24.521861 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.521842621 +0000 UTC m=+25.672015691 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.523695 master-0 kubenswrapper[15493]: I0216 17:02:24.523651 15493 scope.go:117] "RemoveContainer" containerID="9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f" Feb 16 17:02:24.524001 master-0 kubenswrapper[15493]: E0216 17:02:24.523973 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:02:24.540844 master-0 kubenswrapper[15493]: I0216 17:02:24.540772 15493 request.go:700] Waited for 1.130472656s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 16 17:02:24.541053 master-0 kubenswrapper[15493]: E0216 17:02:24.540795 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.541053 master-0 kubenswrapper[15493]: E0216 17:02:24.540955 15493 projected.go:194] Error preparing data for projected volume kube-api-access-j5qxm for pod openshift-multus/multus-additional-cni-plugins-rjdlk: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.541053 master-0 kubenswrapper[15493]: E0216 17:02:24.541032 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.541011349 +0000 UTC m=+25.691184419 (durationBeforeRetry 2s). 
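The request.go:700 entry ("Waited for 1.130472656s due to client-side throttling, not priority and fairness") comes from client-go's own token-bucket rate limiter, which delays outgoing requests before server-side API Priority and Fairness is ever involved; with this many informers retrying at once the bucket drains quickly. A sketch of the mechanism with illustrative QPS/Burst values (not the kubelet's actual settings):

package main

import (
    "fmt"

    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/flowcontrol"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
    if err != nil {
        panic(err)
    }
    // Client-side throttle: a token bucket refilled at QPS with capacity Burst.
    cfg.QPS = 5
    cfg.Burst = 10
    // The limiter client-go constructs from those fields when QPS > 0:
    limiter := flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
    limiter.Accept() // blocks until a token is free; long waits are logged as above
    fmt.Println("request would be sent now")
}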
Error: MountVolume.SetUp failed for volume "kube-api-access-j5qxm" (UniqueName: "kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.541716 master-0 kubenswrapper[15493]: W0216 17:02:24.541656 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.541790 master-0 kubenswrapper[15493]: E0216 17:02:24.541719 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.549599 master-0 kubenswrapper[15493]: I0216 17:02:24.549551 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:24.560513 master-0 kubenswrapper[15493]: W0216 17:02:24.560437 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.560513 master-0 kubenswrapper[15493]: E0216 17:02:24.560512 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.561447 master-0 kubenswrapper[15493]: E0216 17:02:24.561424 15493 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.561447 master-0 kubenswrapper[15493]: E0216 17:02:24.561446 15493 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the 
condition] Feb 16 17:02:24.561539 master-0 kubenswrapper[15493]: E0216 17:02:24.561521 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.561501531 +0000 UTC m=+25.711674601 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.573168 master-0 kubenswrapper[15493]: E0216 17:02:24.573103 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Feb 16 17:02:24.581046 master-0 kubenswrapper[15493]: W0216 17:02:24.580974 15493 reflector.go:561] object-"openshift-monitoring"/"telemetry-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemetry-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.581046 master-0 kubenswrapper[15493]: E0216 17:02:24.581059 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"telemetry-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dtelemetry-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.581995 master-0 kubenswrapper[15493]: E0216 17:02:24.581966 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.582035 master-0 kubenswrapper[15493]: E0216 17:02:24.581999 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.582085 master-0 kubenswrapper[15493]: E0216 17:02:24.582074 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.582054245 +0000 UTC m=+25.732227325 (durationBeforeRetry 2s). 
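The controller.go:145 entry is the kubelet's node-lease controller: the coordination.k8s.io/v1 Lease named after the node in kube-node-lease serves as its heartbeat, and while the API server is unreachable it retries on the logged 7s interval. A sketch of the read it keeps attempting (kubeconfig path again assumed):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    lease, err := client.CoordinationV1().Leases("kube-node-lease").Get(
        context.TODO(), "master-0", metav1.GetOptions{})
    if err != nil {
        fmt.Println("lease lookup failed:", err) // "Failed to ensure lease exists, will retry"
        return
    }
    fmt.Println("last renew:", lease.Spec.RenewTime)
}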
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.601797 master-0 kubenswrapper[15493]: W0216 17:02:24.601692 15493 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.601797 master-0 kubenswrapper[15493]: E0216 17:02:24.601773 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.602874 master-0 kubenswrapper[15493]: E0216 17:02:24.602843 15493 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.602935 master-0 kubenswrapper[15493]: E0216 17:02:24.602874 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/serviceaccounts/operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.602968 master-0 kubenswrapper[15493]: E0216 17:02:24.602960 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.602940767 +0000 UTC m=+25.753113847 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/serviceaccounts/operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.621149 master-0 kubenswrapper[15493]: W0216 17:02:24.621061 15493 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.621149 master-0 kubenswrapper[15493]: E0216 17:02:24.621141 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.622174 master-0 kubenswrapper[15493]: E0216 17:02:24.622134 15493 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.622174 master-0 kubenswrapper[15493]: E0216 17:02:24.622165 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/serviceaccounts/cloud-credential-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.622355 master-0 kubenswrapper[15493]: E0216 17:02:24.622256 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.622234098 +0000 UTC m=+25.772407218 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/serviceaccounts/cloud-credential-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.641434 master-0 kubenswrapper[15493]: W0216 17:02:24.641087 15493 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.641434 master-0 kubenswrapper[15493]: E0216 17:02:24.641187 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.641889 master-0 kubenswrapper[15493]: E0216 17:02:24.641481 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.641889 master-0 kubenswrapper[15493]: E0216 17:02:24.641530 15493 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.641889 master-0 kubenswrapper[15493]: E0216 17:02:24.641615 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.6415929 +0000 UTC m=+25.791766010 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.660601 master-0 kubenswrapper[15493]: W0216 17:02:24.660500 15493 reflector.go:561] object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.660769 master-0 kubenswrapper[15493]: E0216 17:02:24.660599 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.661385 master-0 kubenswrapper[15493]: E0216 17:02:24.661295 15493 projected.go:288] Couldn't get configMap openshift-monitoring/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.661385 master-0 kubenswrapper[15493]: E0216 17:02:24.661327 15493 projected.go:194] Error preparing data for projected volume kube-api-access-j7w67 for pod openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.661630 master-0 kubenswrapper[15493]: E0216 17:02:24.661400 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67 podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.661381284 +0000 UTC m=+25.811554404 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-j7w67" (UniqueName: "kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.681361 master-0 kubenswrapper[15493]: W0216 17:02:24.681214 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.681361 master-0 kubenswrapper[15493]: E0216 17:02:24.681322 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.682230 master-0 kubenswrapper[15493]: E0216 17:02:24.682188 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.682230 master-0 kubenswrapper[15493]: E0216 17:02:24.682222 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.682412 master-0 kubenswrapper[15493]: E0216 17:02:24.682303 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.682276187 +0000 UTC m=+25.832449297 (durationBeforeRetry 2s). 
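Each nestedpendingoperations.go:348 entry schedules the failed MountVolume.SetUp for a retry after durationBeforeRetry, 2s here; on repeated failures kubelet's volume manager lengthens that delay exponentially up to a cap. Roughly that retry shape, sketched with apimachinery's wait helpers and a hypothetical mountVolume stand-in:

package main

import (
    "errors"
    "fmt"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

// mountVolume is a hypothetical stand-in for MountVolume.SetUp.
func mountVolume() error {
    return errors.New("connect: connection refused")
}

func main() {
    // 2s, 4s, 8s, ... approximates the growth behind durationBeforeRetry.
    backoff := wait.Backoff{Duration: 2 * time.Second, Factor: 2.0, Steps: 5}
    err := wait.ExponentialBackoff(backoff, func() (bool, error) {
        if err := mountVolume(); err != nil {
            fmt.Println("mount failed, will retry:", err)
            return false, nil // not done; wait out the next, longer delay
        }
        return true, nil
    })
    fmt.Println("gave up:", err) // wait.ErrWaitTimeout once Steps are exhausted
}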
Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.701419 master-0 kubenswrapper[15493]: W0216 17:02:24.701169 15493 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.701419 master-0 kubenswrapper[15493]: E0216 17:02:24.701243 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.703463 master-0 kubenswrapper[15493]: E0216 17:02:24.703430 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.703463 master-0 kubenswrapper[15493]: E0216 17:02:24.703458 15493 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.703580 master-0 kubenswrapper[15493]: E0216 17:02:24.703528 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.703510419 +0000 UTC m=+25.853683489 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.720999 master-0 kubenswrapper[15493]: W0216 17:02:24.720865 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.720999 master-0 kubenswrapper[15493]: E0216 17:02:24.720997 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.721859 master-0 kubenswrapper[15493]: E0216 17:02:24.721820 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.721859 master-0 kubenswrapper[15493]: E0216 17:02:24.721859 15493 projected.go:194] Error preparing data for projected volume kube-api-access-9xrw2 for pod openshift-ovn-kubernetes/ovnkube-node-flr86: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.722024 master-0 kubenswrapper[15493]: E0216 17:02:24.721951 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2 podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.721905736 +0000 UTC m=+25.872078846 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-9xrw2" (UniqueName: "kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.740875 master-0 kubenswrapper[15493]: E0216 17:02:24.740786 15493 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.740875 master-0 kubenswrapper[15493]: E0216 17:02:24.740822 15493 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.741088 master-0 kubenswrapper[15493]: E0216 17:02:24.740942 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.740900278 +0000 UTC m=+25.891073388 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.741088 master-0 kubenswrapper[15493]: W0216 17:02:24.741022 15493 reflector.go:561] object-"openshift-operator-controller"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.741205 master-0 kubenswrapper[15493]: E0216 17:02:24.741118 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.761459 master-0 kubenswrapper[15493]: W0216 17:02:24.761378 15493 reflector.go:561] object-"openshift-etcd"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.761554 
master-0 kubenswrapper[15493]: E0216 17:02:24.761464 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.763541 master-0 kubenswrapper[15493]: E0216 17:02:24.763494 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.763621 master-0 kubenswrapper[15493]: E0216 17:02:24.763542 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hmj52 for pod openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.763667 master-0 kubenswrapper[15493]: E0216 17:02:24.763626 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52 podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.763602629 +0000 UTC m=+25.913775739 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hmj52" (UniqueName: "kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:24.781104 master-0 kubenswrapper[15493]: W0216 17:02:24.781021 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.781184 master-0 kubenswrapper[15493]: E0216 17:02:24.781109 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.783203 master-0 kubenswrapper[15493]: E0216 17:02:24.783173 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:24.783203 master-0 kubenswrapper[15493]: E0216 17:02:24.783204 15493 projected.go:194] Error preparing data for projected volume 
kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.783326 master-0 kubenswrapper[15493]: E0216 17:02:24.783258 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.783244469 +0000 UTC m=+25.933417539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.801051 master-0 kubenswrapper[15493]: W0216 17:02:24.800904 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:24.801051 master-0 kubenswrapper[15493]: E0216 17:02:24.801040 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:24.801870 master-0 kubenswrapper[15493]: E0216 17:02:24.801829 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:24.801870 master-0 kubenswrapper[15493]: E0216 17:02:24.801852 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8p2jz for pod openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.802043 master-0 kubenswrapper[15493]: E0216 17:02:24.801888 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.801879312 +0000 UTC m=+25.952052382 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8p2jz" (UniqueName: "kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.821670 master-0 kubenswrapper[15493]: W0216 17:02:24.821580 15493 reflector.go:561] object-"openshift-catalogd"/"catalogd-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dcatalogd-trusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:24.821670 master-0 kubenswrapper[15493]: E0216 17:02:24.821662 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogd-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dcatalogd-trusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:24.822619 master-0 kubenswrapper[15493]: E0216 17:02:24.822576 15493 projected.go:288] Couldn't get configMap openshift-network-node-identity/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:24.822619 master-0 kubenswrapper[15493]: E0216 17:02:24.822612 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vk7xl for pod openshift-network-node-identity/network-node-identity-hhcpr: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.822746 master-0 kubenswrapper[15493]: E0216 17:02:24.822681 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.822663612 +0000 UTC m=+25.972836712 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vk7xl" (UniqueName: "kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.841357 master-0 kubenswrapper[15493]: E0216 17:02:24.841294 15493 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:24.841357 master-0 kubenswrapper[15493]: E0216 17:02:24.841328 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.841581 master-0 kubenswrapper[15493]: E0216 17:02:24.841390 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.841373277 +0000 UTC m=+25.991546377 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.841718 master-0 kubenswrapper[15493]: W0216 17:02:24.841648 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:24.841765 master-0 kubenswrapper[15493]: E0216 17:02:24.841725 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:24.861607 master-0 kubenswrapper[15493]: E0216 17:02:24.861477 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:24.861607 master-0 kubenswrapper[15493]: W0216 17:02:24.861443 15493 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:24.861607 master-0 kubenswrapper[15493]: E0216 17:02:24.861515 15493 projected.go:194] Error preparing data for projected volume kube-api-access-wn82n for pod openshift-cluster-node-tuning-operator/tuned-l5kbz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/tuned/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.861607 master-0 kubenswrapper[15493]: E0216 17:02:24.861555 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:24.861978 master-0 kubenswrapper[15493]: E0216 17:02:24.861690 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n podName:c45ce0e5-c50b-4210-b7bb-82db2b2bc1db nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.861662684 +0000 UTC m=+26.011835774 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wn82n" (UniqueName: "kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n") pod "tuned-l5kbz" (UID: "c45ce0e5-c50b-4210-b7bb-82db2b2bc1db") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/tuned/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.880989 master-0 kubenswrapper[15493]: W0216 17:02:24.880822 15493 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:24.880989 master-0 kubenswrapper[15493]: E0216 17:02:24.880906 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:24.882989 master-0 kubenswrapper[15493]: E0216 17:02:24.882944 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:24.883039 master-0 kubenswrapper[15493]: E0216 17:02:24.882991 15493 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.883094 master-0 kubenswrapper[15493]: E0216 17:02:24.883067 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:26.88304827 +0000 UTC m=+26.033221350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:24.901308 master-0 kubenswrapper[15493]: W0216 17:02:24.901148 15493 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:24.901566 master-0 kubenswrapper[15493]: E0216 17:02:24.901358 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:24.920935 master-0 kubenswrapper[15493]: W0216 17:02:24.920829 15493 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:24.921177 master-0 kubenswrapper[15493]: E0216 17:02:24.920913 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:24.941454 master-0 kubenswrapper[15493]: W0216 17:02:24.941324 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:24.941653 master-0 kubenswrapper[15493]: E0216 17:02:24.941452 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
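[Editor's note: the projected.go and nestedpendingoperations.go records above are the kubelet setting up projected service-account token volumes. For each kube-api-access-* volume it must POST a TokenRequest to the API server, and with 192.168.32.10:6443 refusing connections every attempt fails with the errors shown. As a rough illustration of the call being made, and not the kubelet's actual client code, here is a minimal Go sketch: the token URL is taken verbatim from the log, the request body follows the authentication.k8s.io/v1 TokenRequest shape, and the audience/expiry values plus the skipped TLS verification are placeholder assumptions.]

package main

import (
	"bytes"
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint taken from the failing log line; the kubelet issues one
	// TokenRequest like this per kube-api-access-* projected volume.
	url := "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/" +
		"openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token"

	// Shape of an authentication.k8s.io/v1 TokenRequest; audience and
	// expirationSeconds here are illustrative, not values from the log.
	body := []byte(`{"apiVersion":"authentication.k8s.io/v1","kind":"TokenRequest",` +
		`"spec":{"audiences":["https://kubernetes.default.svc"],"expirationSeconds":3600}}`)

	// A real caller authenticates (client cert or bearer token) and verifies
	// the cluster CA; both are deliberately omitted in this sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	resp, err := client.Post(url, "application/json", bytes.NewReader(body))
	if err != nil {
		// With nothing listening on 6443 this reproduces the same failure
		// mode as the log: "dial tcp ... connect: connection refused".
		fmt.Println("token request failed:", err)
		return
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(out))
}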
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dcluster-baremetal-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.961293 master-0 kubenswrapper[15493]: E0216 17:02:24.961106 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dcluster-baremetal-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:24.980677 master-0 kubenswrapper[15493]: W0216 17:02:24.980577 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-autoscaler-operator-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:24.980891 master-0 kubenswrapper[15493]: E0216 17:02:24.980677 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-autoscaler-operator-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.001730 master-0 kubenswrapper[15493]: W0216 17:02:25.001608 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-dockercfg-j874l&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.001730 master-0 kubenswrapper[15493]: E0216 17:02:25.001697 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-dockercfg-j874l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-dockercfg-j874l&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.021006 master-0 kubenswrapper[15493]: W0216 17:02:25.020891 15493 reflector.go:561] object-"openshift-monitoring"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.021006 master-0 kubenswrapper[15493]: E0216 17:02:25.020992 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-monitoring\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.040882 master-0 kubenswrapper[15493]: W0216 17:02:25.040752 15493 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.041131 master-0 kubenswrapper[15493]: E0216 17:02:25.040898 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.061058 master-0 kubenswrapper[15493]: W0216 17:02:25.060976 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.061058 master-0 kubenswrapper[15493]: E0216 17:02:25.061051 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.081531 master-0 kubenswrapper[15493]: W0216 17:02:25.081425 15493 reflector.go:561] object-"openshift-insights"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.081531 master-0 kubenswrapper[15493]: E0216 17:02:25.081528 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.100900 master-0 kubenswrapper[15493]: W0216 17:02:25.100803 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-gtxjb&limit=500&resourceVersion=0": 
dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.100900 master-0 kubenswrapper[15493]: E0216 17:02:25.100897 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-gtxjb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-gtxjb&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.121910 master-0 kubenswrapper[15493]: W0216 17:02:25.121725 15493 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.121910 master-0 kubenswrapper[15493]: E0216 17:02:25.121869 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.141863 master-0 kubenswrapper[15493]: W0216 17:02:25.141733 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.142222 master-0 kubenswrapper[15493]: E0216 17:02:25.141866 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.161543 master-0 kubenswrapper[15493]: W0216 17:02:25.161421 15493 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.161724 master-0 kubenswrapper[15493]: E0216 17:02:25.161529 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.181194 master-0 kubenswrapper[15493]: W0216 17:02:25.181041 15493 
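[Editor's note: the W/E pairs above come from the kubelet's reflectors, one failure per watched ConfigMap or Secret, which makes the raw journal very noisy. A small, illustrative way to see which namespace/object pairs are affected is to tally the object-"ns"/"name" prefixes from the journal on stdin; this is a sketch for triage, not part of any OpenShift tooling.]

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

func main() {
	// Matches the object-"<namespace>"/"<name>" prefix emitted by the
	// kubelet's reflector warnings in the journal above.
	re := regexp.MustCompile(`object-"([^"]+)"/"([^"]+)"`)

	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+"/"+m[2]]++
		}
	}

	keys := make([]string, 0, len(counts))
	for k := range counts {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Printf("%6d  %s\n", counts[k], k)
	}
}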
reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.181194 master-0 kubenswrapper[15493]: E0216 17:02:25.181137 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.201388 master-0 kubenswrapper[15493]: W0216 17:02:25.201286 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.201512 master-0 kubenswrapper[15493]: E0216 17:02:25.201392 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.203819 master-0 kubenswrapper[15493]: E0216 17:02:25.203749 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:25.203819 master-0 kubenswrapper[15493]: E0216 17:02:25.203817 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:25.204009 master-0 kubenswrapper[15493]: E0216 17:02:25.203908 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.203881691 +0000 UTC m=+32.354054801 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:25.204905 master-0 kubenswrapper[15493]: E0216 17:02:25.204861 15493 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:25.204905 master-0 kubenswrapper[15493]: E0216 17:02:25.204899 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:25.205128 master-0 kubenswrapper[15493]: E0216 17:02:25.205018 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.20499152 +0000 UTC m=+32.355164590 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:25.221544 master-0 kubenswrapper[15493]: W0216 17:02:25.221432 15493 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.221544 master-0 kubenswrapper[15493]: E0216 17:02:25.221538 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.241183 master-0 kubenswrapper[15493]: W0216 17:02:25.241093 15493 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.241475 master-0 kubenswrapper[15493]: E0216 17:02:25.241228 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.261242 master-0 kubenswrapper[15493]: E0216 17:02:25.261133 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/bootstrap-kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:02:25.261242 master-0 kubenswrapper[15493]: I0216 17:02:25.261199 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:25.281407 master-0 kubenswrapper[15493]: W0216 17:02:25.281298 15493 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.281407 master-0 kubenswrapper[15493]: E0216 17:02:25.281398 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.301588 master-0 kubenswrapper[15493]: W0216 17:02:25.301445 15493 reflector.go:561] object-"openshift-insights"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.301588 master-0 kubenswrapper[15493]: E0216 17:02:25.301567 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.321356 master-0 kubenswrapper[15493]: W0216 17:02:25.321183 15493 reflector.go:561] object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.321356 master-0 kubenswrapper[15493]: E0216 17:02:25.321328 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-node-tuning-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.340792 master-0 kubenswrapper[15493]: W0216 17:02:25.340700 15493 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.340995 master-0 kubenswrapper[15493]: E0216 17:02:25.340789 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.401601 master-0 kubenswrapper[15493]: W0216 17:02:25.401301 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-q5h8t&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.401601 master-0 kubenswrapper[15493]: E0216 17:02:25.401520 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-q5h8t\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-q5h8t&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.421311 master-0 kubenswrapper[15493]: E0216 17:02:25.421235 15493 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 17:02:25.501611 master-0 kubenswrapper[15493]: W0216 17:02:25.501322 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-dockercfg-b9gfw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.501611 master-0 kubenswrapper[15493]: E0216 17:02:25.501457 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-autoscaler-operator-dockercfg-b9gfw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-autoscaler-operator-dockercfg-b9gfw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:25.520950 master-0 kubenswrapper[15493]: W0216 17:02:25.520828 15493 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:25.520950 master-0 kubenswrapper[15493]: E0216 
Feb 16 17:02:25.520950 master-0 kubenswrapper[15493]: E0216 17:02:25.520953 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.540762 master-0 kubenswrapper[15493]: W0216 17:02:25.540675 15493 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.540762 master-0 kubenswrapper[15493]: E0216 17:02:25.540767 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.560564 master-0 kubenswrapper[15493]: I0216 17:02:25.560484 15493 request.go:700] Waited for 1.059084708s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/pods/catalogd-controller-manager-67bc7c997f-mn6cr
Feb 16 17:02:25.561735 master-0 kubenswrapper[15493]: I0216 17:02:25.561632 15493 status_manager.go:851] "Failed to get status for pod" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/pods/catalogd-controller-manager-67bc7c997f-mn6cr\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:02:25.582566 master-0 kubenswrapper[15493]: W0216 17:02:25.582383 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.582566 master-0 kubenswrapper[15493]: E0216 17:02:25.582543 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.600707 master-0 kubenswrapper[15493]: W0216 17:02:25.600601 15493 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.600707 master-0 kubenswrapper[15493]: E0216 17:02:25.600682 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.621377 master-0 kubenswrapper[15493]: W0216 17:02:25.621297 15493 reflector.go:561] object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/secrets?fieldSelector=metadata.name%3Dcluster-olm-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.621377 master-0 kubenswrapper[15493]: E0216 17:02:25.621370 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-olm-operator\"/\"cluster-olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/secrets?fieldSelector=metadata.name%3Dcluster-olm-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.660652 master-0 kubenswrapper[15493]: W0216 17:02:25.660485 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.660652 master-0 kubenswrapper[15493]: E0216 17:02:25.660580 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.681012 master-0 kubenswrapper[15493]: W0216 17:02:25.680843 15493 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.681223 master-0 kubenswrapper[15493]: E0216 17:02:25.681030 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
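[Editor's note: the request.go:700 line above ("Waited for 1.059084708s due to client-side throttling, not priority and fairness") is the kubelet's own client-side rate limiter, not the API server pushing back: with every informer and volume mount retrying at once, requests queue behind a token-bucket QPS/burst limit. The Go sketch below shows that token-bucket behaviour in isolation; the QPS and burst values are illustrative placeholders, not the kubelet's configured kubeAPIQPS/kubeAPIBurst, which vary by version and config.]

package main

import (
	"fmt"
	"time"
)

// tokenBucket refills at qps tokens per second up to burst; a caller that
// finds the bucket empty must wait, which is exactly the "Waited for ..."
// message the kubelet logs.
type tokenBucket struct {
	tokens float64
	qps    float64
	burst  float64
	last   time.Time
}

func (b *tokenBucket) wait() time.Duration {
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.qps
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return 0
	}
	// Sleep just long enough to accumulate the missing fraction of a token,
	// then consume it.
	d := time.Duration((1 - b.tokens) / b.qps * float64(time.Second))
	time.Sleep(d)
	b.last = time.Now()
	b.tokens = 0
	return d
}

func main() {
	// Illustrative values only.
	b := &tokenBucket{tokens: 10, qps: 5, burst: 10, last: time.Now()}
	for i := 0; i < 15; i++ {
		if d := b.wait(); d > 0 {
			fmt.Printf("request %d waited %v due to client-side throttling\n", i, d)
		}
	}
}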
Feb 16 17:02:25.701572 master-0 kubenswrapper[15493]: W0216 17:02:25.701465 15493 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.701684 master-0 kubenswrapper[15493]: E0216 17:02:25.701575 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.721126 master-0 kubenswrapper[15493]: W0216 17:02:25.721049 15493 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.721126 master-0 kubenswrapper[15493]: E0216 17:02:25.721126 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.740802 master-0 kubenswrapper[15493]: W0216 17:02:25.740704 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.740914 master-0 kubenswrapper[15493]: E0216 17:02:25.740815 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.761225 master-0 kubenswrapper[15493]: W0216 17:02:25.761107 15493 reflector.go:561] object-"openshift-operator-controller"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.761316 master-0 kubenswrapper[15493]: E0216 17:02:25.761245 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.780695 master-0 kubenswrapper[15493]: W0216 17:02:25.780477 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.780817 master-0 kubenswrapper[15493]: E0216 17:02:25.780696 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"cluster-storage-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.793851 master-0 kubenswrapper[15493]: I0216 17:02:25.793799 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:02:25.794013 master-0 kubenswrapper[15493]: I0216 17:02:25.793972 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:02:25.794259 master-0 kubenswrapper[15493]: I0216 17:02:25.794217 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:02:25.794370 master-0 kubenswrapper[15493]: I0216 17:02:25.794342 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:02:25.801289 master-0 kubenswrapper[15493]: W0216 17:02:25.801191 15493 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.801402 master-0 kubenswrapper[15493]: E0216 17:02:25.801298 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.821665 master-0 kubenswrapper[15493]: W0216 17:02:25.821536 15493 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-5lx84&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.821842 master-0 kubenswrapper[15493]: E0216 17:02:25.821669 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-5lx84\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-5lx84&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.841411 master-0 kubenswrapper[15493]: W0216 17:02:25.841297 15493 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.841645 master-0 kubenswrapper[15493]: E0216 17:02:25.841426 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.861066 master-0 kubenswrapper[15493]: W0216 17:02:25.860969 15493 reflector.go:561] object-"openshift-catalogd"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.861206 master-0 kubenswrapper[15493]: E0216 17:02:25.861089 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.882162 master-0 kubenswrapper[15493]: W0216 17:02:25.882032 15493 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.882323 master-0 kubenswrapper[15493]: E0216 17:02:25.882186 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.898277 master-0 kubenswrapper[15493]: I0216 17:02:25.898189 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:02:25.898277 master-0 kubenswrapper[15493]: I0216 17:02:25.898263 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:02:25.898509 master-0 kubenswrapper[15493]: I0216 17:02:25.898311 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:02:25.898509 master-0 kubenswrapper[15493]: I0216 17:02:25.898334 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:02:25.898509 master-0 kubenswrapper[15493]: I0216 17:02:25.898393 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:02:25.901682 master-0 kubenswrapper[15493]: W0216 17:02:25.901574 15493 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.901753 master-0 kubenswrapper[15493]: E0216 17:02:25.901712 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.921558 master-0 kubenswrapper[15493]: W0216 17:02:25.921352 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.921558 master-0 kubenswrapper[15493]: E0216 17:02:25.921493 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.941522 master-0 kubenswrapper[15493]: W0216 17:02:25.941404 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cco-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dcco-trusted-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.941716 master-0 kubenswrapper[15493]: E0216 17:02:25.941607 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cco-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dcco-trusted-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.960823 master-0 kubenswrapper[15493]: W0216 17:02:25.960718 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.961075 master-0 kubenswrapper[15493]: E0216 17:02:25.960831 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:25.981012 master-0 kubenswrapper[15493]: W0216 17:02:25.980955 15493 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:25.981012 master-0 kubenswrapper[15493]: E0216 17:02:25.981008 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:26.001093 master-0 kubenswrapper[15493]: W0216 17:02:26.001008 15493 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:26.001344 master-0 kubenswrapper[15493]: E0216 17:02:26.001320 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:26.002326 master-0 kubenswrapper[15493]: I0216 17:02:26.002255 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:02:26.002526 master-0 kubenswrapper[15493]: I0216 17:02:26.002506 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:02:26.002662 master-0 kubenswrapper[15493]: I0216 17:02:26.002646 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:02:26.002791 master-0 kubenswrapper[15493]: I0216 17:02:26.002773 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"
Feb 16 17:02:26.021359 master-0 kubenswrapper[15493]: W0216 17:02:26.021268 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-hk5sk&limit=500&resourceVersion=0":
dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.021532 master-0 kubenswrapper[15493]: E0216 17:02:26.021367 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-hk5sk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-hk5sk&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.041511 master-0 kubenswrapper[15493]: W0216 17:02:26.041431 15493 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.041854 master-0 kubenswrapper[15493]: E0216 17:02:26.041816 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.061072 master-0 kubenswrapper[15493]: W0216 17:02:26.061010 15493 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.061286 master-0 kubenswrapper[15493]: E0216 17:02:26.061269 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.081041 master-0 kubenswrapper[15493]: W0216 17:02:26.080914 15493 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.081244 master-0 kubenswrapper[15493]: E0216 17:02:26.081054 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.101718 master-0 kubenswrapper[15493]: W0216 17:02:26.101604 15493 reflector.go:561] object-"openshift-insights"/"trusted-ca-bundle": failed to list *v1.ConfigMap: 
Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.101983 master-0 kubenswrapper[15493]: E0216 17:02:26.101730 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.106572 master-0 kubenswrapper[15493]: I0216 17:02:26.106504 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:02:26.106688 master-0 kubenswrapper[15493]: I0216 17:02:26.106653 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:26.106730 master-0 kubenswrapper[15493]: I0216 17:02:26.106706 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:26.107171 master-0 kubenswrapper[15493]: I0216 17:02:26.107142 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:02:26.107303 master-0 kubenswrapper[15493]: I0216 17:02:26.107290 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:26.107413 master-0 kubenswrapper[15493]: I0216 17:02:26.107399 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:26.122002 master-0 kubenswrapper[15493]: W0216 17:02:26.121882 15493 reflector.go:561] 
object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.122175 master-0 kubenswrapper[15493]: E0216 17:02:26.122018 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.140939 master-0 kubenswrapper[15493]: W0216 17:02:26.140826 15493 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-6858s": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-6858s&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.141070 master-0 kubenswrapper[15493]: E0216 17:02:26.140993 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-6858s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-6858s&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.161692 master-0 kubenswrapper[15493]: W0216 17:02:26.161612 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.161692 master-0 kubenswrapper[15493]: E0216 17:02:26.161690 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.182052 master-0 kubenswrapper[15493]: W0216 17:02:26.181838 15493 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.182052 master-0 kubenswrapper[15493]: E0216 17:02:26.181993 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": 
dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.201243 master-0 kubenswrapper[15493]: W0216 17:02:26.201158 15493 reflector.go:561] object-"openshift-catalogd"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.201636 master-0 kubenswrapper[15493]: E0216 17:02:26.201593 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.211241 master-0 kubenswrapper[15493]: I0216 17:02:26.211191 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:26.211474 master-0 kubenswrapper[15493]: I0216 17:02:26.211327 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:26.211474 master-0 kubenswrapper[15493]: I0216 17:02:26.211405 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:26.211474 master-0 kubenswrapper[15493]: I0216 17:02:26.211445 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:26.211865 master-0 kubenswrapper[15493]: I0216 17:02:26.211495 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:26.221644 master-0 kubenswrapper[15493]: W0216 17:02:26.221421 15493 reflector.go:561] object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.221644 master-0 kubenswrapper[15493]: E0216 17:02:26.221513 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.241588 master-0 kubenswrapper[15493]: W0216 17:02:26.241480 15493 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.241777 master-0 kubenswrapper[15493]: E0216 17:02:26.241598 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.262016 master-0 kubenswrapper[15493]: W0216 17:02:26.261747 15493 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.262016 master-0 kubenswrapper[15493]: E0216 17:02:26.261869 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.281793 master-0 kubenswrapper[15493]: W0216 17:02:26.281698 15493 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.282013 master-0 kubenswrapper[15493]: E0216 17:02:26.281799 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.301262 master-0 kubenswrapper[15493]: W0216 17:02:26.301167 15493 reflector.go:561] 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-ztpz8&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.301262 master-0 kubenswrapper[15493]: E0216 17:02:26.301251 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-ztpz8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-ztpz8&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.314891 master-0 kubenswrapper[15493]: I0216 17:02:26.314819 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:26.315103 master-0 kubenswrapper[15493]: I0216 17:02:26.314908 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:26.315433 master-0 kubenswrapper[15493]: I0216 17:02:26.315312 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:26.315433 master-0 kubenswrapper[15493]: I0216 17:02:26.315414 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:26.315970 master-0 kubenswrapper[15493]: I0216 17:02:26.315872 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:26.321586 master-0 kubenswrapper[15493]: W0216 17:02:26.321507 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.321692 master-0 kubenswrapper[15493]: E0216 17:02:26.321587 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.344556 master-0 kubenswrapper[15493]: W0216 17:02:26.342124 15493 reflector.go:561] object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-dockercfg-x2982&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.344556 master-0 kubenswrapper[15493]: E0216 17:02:26.342258 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-storage-operator\"/\"cluster-storage-operator-dockercfg-x2982\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/secrets?fieldSelector=metadata.name%3Dcluster-storage-operator-dockercfg-x2982&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.361056 master-0 kubenswrapper[15493]: E0216 17:02:26.360974 15493 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:26.361056 master-0 kubenswrapper[15493]: E0216 17:02:26.361049 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.361332 master-0 kubenswrapper[15493]: E0216 17:02:26.361159 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:30.361125616 +0000 UTC m=+29.511298726 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/serviceaccounts/openshift-kube-scheduler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.362127 master-0 kubenswrapper[15493]: W0216 17:02:26.362011 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.362251 master-0 kubenswrapper[15493]: E0216 17:02:26.362134 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.381132 master-0 kubenswrapper[15493]: W0216 17:02:26.380995 15493 reflector.go:561] object-"openshift-insights"/"operator-dockercfg-rzjlw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Doperator-dockercfg-rzjlw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.381132 master-0 kubenswrapper[15493]: E0216 17:02:26.381109 15493 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:26.381355 master-0 kubenswrapper[15493]: E0216 17:02:26.381166 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.381355 master-0 kubenswrapper[15493]: E0216 17:02:26.381285 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:30.381244259 +0000 UTC m=+29.531417369 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/serviceaccounts/kube-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.381355 master-0 kubenswrapper[15493]: E0216 17:02:26.381151 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"operator-dockercfg-rzjlw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Doperator-dockercfg-rzjlw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.401406 master-0 kubenswrapper[15493]: W0216 17:02:26.401254 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-webhook-server-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.401406 master-0 kubenswrapper[15493]: E0216 17:02:26.401392 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-webhook-server-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.421153 master-0 kubenswrapper[15493]: I0216 17:02:26.421060 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:26.421402 master-0 kubenswrapper[15493]: E0216 17:02:26.421149 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 17:02:26.421437 master-0 kubenswrapper[15493]: E0216 17:02:26.421420 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.14s" Feb 16 17:02:26.421580 master-0 kubenswrapper[15493]: I0216 17:02:26.421418 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:26.421727 master-0 
kubenswrapper[15493]: I0216 17:02:26.421686 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:26.421825 master-0 kubenswrapper[15493]: I0216 17:02:26.421780 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:26.422304 master-0 kubenswrapper[15493]: I0216 17:02:26.422254 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:26.439467 master-0 kubenswrapper[15493]: I0216 17:02:26.439322 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:26.441275 master-0 kubenswrapper[15493]: E0216 17:02:26.441230 15493 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:26.441345 master-0 kubenswrapper[15493]: E0216 17:02:26.441294 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.441457 master-0 kubenswrapper[15493]: W0216 17:02:26.441356 15493 reflector.go:561] object-"openshift-insights"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.441517 master-0 kubenswrapper[15493]: E0216 17:02:26.441461 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.441517 master-0 kubenswrapper[15493]: E0216 17:02:26.441403 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:30.44136501 +0000 UTC m=+29.591538130 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.460844 master-0 kubenswrapper[15493]: E0216 17:02:26.460767 15493 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:26.460844 master-0 kubenswrapper[15493]: E0216 17:02:26.460835 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.461166 master-0 kubenswrapper[15493]: E0216 17:02:26.460959 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access podName:5d39ed24-4301-4cea-8a42-a08f4ba8b479 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:30.460908167 +0000 UTC m=+29.611081267 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access") pod "installer-2-master-0" (UID: "5d39ed24-4301-4cea-8a42-a08f4ba8b479") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.461387 master-0 kubenswrapper[15493]: W0216 17:02:26.461316 15493 reflector.go:561] object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-dockercfg-mzz6s&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.461438 master-0 kubenswrapper[15493]: E0216 17:02:26.461395 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"cluster-baremetal-operator-dockercfg-mzz6s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcluster-baremetal-operator-dockercfg-mzz6s&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.481436 master-0 kubenswrapper[15493]: W0216 17:02:26.481281 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.481436 master-0 kubenswrapper[15493]: E0216 17:02:26.481402 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.482280 master-0 kubenswrapper[15493]: E0216 17:02:26.482227 15493 projected.go:288] Couldn't get configMap openshift-cluster-version/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:26.482280 master-0 kubenswrapper[15493]: E0216 17:02:26.482276 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.482465 master-0 kubenswrapper[15493]: E0216 17:02:26.482368 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:30.482339314 +0000 UTC m=+29.632512424 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.500884 master-0 kubenswrapper[15493]: W0216 17:02:26.500775 15493 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.500884 master-0 kubenswrapper[15493]: E0216 17:02:26.500877 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.521942 master-0 kubenswrapper[15493]: W0216 17:02:26.521815 15493 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.522176 master-0 kubenswrapper[15493]: E0216 17:02:26.521913 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.526484 master-0 kubenswrapper[15493]: I0216 17:02:26.526401 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:02:26.526600 master-0 kubenswrapper[15493]: I0216 17:02:26.526553 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:26.526847 master-0 kubenswrapper[15493]: I0216 17:02:26.526791 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: 
\"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:26.526904 master-0 kubenswrapper[15493]: I0216 17:02:26.526867 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:26.527212 master-0 kubenswrapper[15493]: I0216 17:02:26.527148 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:26.529965 master-0 kubenswrapper[15493]: I0216 17:02:26.529888 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:26.542162 master-0 kubenswrapper[15493]: W0216 17:02:26.542001 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-t46bw&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.542162 master-0 kubenswrapper[15493]: E0216 17:02:26.542143 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-t46bw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-t46bw&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.560964 master-0 kubenswrapper[15493]: W0216 17:02:26.560833 15493 reflector.go:561] object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Doperator-controller-trusted-ca-bundle&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.561210 master-0 kubenswrapper[15493]: E0216 17:02:26.560990 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-controller\"/\"operator-controller-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/configmaps?fieldSelector=metadata.name%3Doperator-controller-trusted-ca-bundle&limit=500&resourceVersion=0\": dial tcp 
192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.580544 master-0 kubenswrapper[15493]: I0216 17:02:26.580455 15493 request.go:700] Waited for 1.146386048s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-nslxl&limit=500&resourceVersion=0 Feb 16 17:02:26.581647 master-0 kubenswrapper[15493]: W0216 17:02:26.581519 15493 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-nslxl&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.581647 master-0 kubenswrapper[15493]: E0216 17:02:26.581624 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-nslxl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-nslxl&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.601595 master-0 kubenswrapper[15493]: I0216 17:02:26.601500 15493 status_manager.go:851] "Failed to get status for pod" podUID="4549ea98-7379-49e1-8452-5efb643137ca" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6fcf4c966-6bmf9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:26.621152 master-0 kubenswrapper[15493]: W0216 17:02:26.621018 15493 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.621152 master-0 kubenswrapper[15493]: E0216 17:02:26.621147 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.632051 master-0 kubenswrapper[15493]: I0216 17:02:26.631957 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:26.632345 master-0 kubenswrapper[15493]: I0216 17:02:26.632295 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: 
\"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:26.632719 master-0 kubenswrapper[15493]: I0216 17:02:26.632671 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:26.632887 master-0 kubenswrapper[15493]: I0216 17:02:26.632817 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:26.633192 master-0 kubenswrapper[15493]: I0216 17:02:26.633136 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:26.641386 master-0 kubenswrapper[15493]: W0216 17:02:26.641217 15493 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.641386 master-0 kubenswrapper[15493]: E0216 17:02:26.641383 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.642289 master-0 kubenswrapper[15493]: E0216 17:02:26.642234 15493 projected.go:288] Couldn't get configMap openshift-etcd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:26.642404 master-0 kubenswrapper[15493]: E0216 17:02:26.642290 15493 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-etcd/installer-2-master-0: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.642541 master-0 kubenswrapper[15493]: E0216 17:02:26.642406 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access podName:b1b4fccc-6bf6-47ac-8ae1-32cad23734da nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:30.642372419 +0000 UTC m=+29.792545549 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access") pod "installer-2-master-0" (UID: "b1b4fccc-6bf6-47ac-8ae1-32cad23734da") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:26.661356 master-0 kubenswrapper[15493]: W0216 17:02:26.661251 15493 reflector.go:561] object-"openshift-machine-api"/"baremetal-kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dbaremetal-kube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.661356 master-0 kubenswrapper[15493]: E0216 17:02:26.661326 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"baremetal-kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dbaremetal-kube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.681528 master-0 kubenswrapper[15493]: W0216 17:02:26.681395 15493 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.681528 master-0 kubenswrapper[15493]: E0216 17:02:26.681510 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.701476 master-0 kubenswrapper[15493]: W0216 17:02:26.701304 15493 reflector.go:561] object-"openshift-kube-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.701476 master-0 kubenswrapper[15493]: E0216 17:02:26.701385 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.721527 master-0 kubenswrapper[15493]: W0216 17:02:26.721444 15493 reflector.go:561] 
object-"openshift-network-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.722156 master-0 kubenswrapper[15493]: E0216 17:02:26.721523 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.737083 master-0 kubenswrapper[15493]: I0216 17:02:26.737000 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:26.737224 master-0 kubenswrapper[15493]: I0216 17:02:26.737127 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:26.737961 master-0 kubenswrapper[15493]: I0216 17:02:26.737882 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:26.738040 master-0 kubenswrapper[15493]: I0216 17:02:26.737984 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:26.740342 master-0 kubenswrapper[15493]: I0216 17:02:26.740290 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:26.741228 master-0 kubenswrapper[15493]: W0216 17:02:26.741134 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.741338 master-0 
kubenswrapper[15493]: E0216 17:02:26.741233 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.841148 master-0 kubenswrapper[15493]: W0216 17:02:26.841053 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.841148 master-0 kubenswrapper[15493]: E0216 17:02:26.841147 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.842402 master-0 kubenswrapper[15493]: I0216 17:02:26.842280 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:02:26.842631 master-0 kubenswrapper[15493]: I0216 17:02:26.842494 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:26.842631 master-0 kubenswrapper[15493]: I0216 17:02:26.842563 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:26.842878 master-0 kubenswrapper[15493]: I0216 17:02:26.842847 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:26.842953 master-0 kubenswrapper[15493]: I0216 17:02:26.842886 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: 
\"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:26.842996 master-0 kubenswrapper[15493]: I0216 17:02:26.842953 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:26.861277 master-0 kubenswrapper[15493]: W0216 17:02:26.861141 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.861531 master-0 kubenswrapper[15493]: E0216 17:02:26.861286 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:26.945715 master-0 kubenswrapper[15493]: I0216 17:02:26.945655 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:26.946005 master-0 kubenswrapper[15493]: I0216 17:02:26.945753 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:26.981767 master-0 kubenswrapper[15493]: W0216 17:02:26.981537 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:26.981767 master-0 kubenswrapper[15493]: E0216 17:02:26.981682 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.001897 master-0 
kubenswrapper[15493]: W0216 17:02:27.001753 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.001897 master-0 kubenswrapper[15493]: E0216 17:02:27.001872 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.020980 master-0 kubenswrapper[15493]: W0216 17:02:27.020891 15493 reflector.go:561] object-"openshift-catalogd"/"catalogserver-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/secrets?fieldSelector=metadata.name%3Dcatalogserver-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.020980 master-0 kubenswrapper[15493]: E0216 17:02:27.020972 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-catalogd\"/\"catalogserver-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/secrets?fieldSelector=metadata.name%3Dcatalogserver-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.041557 master-0 kubenswrapper[15493]: W0216 17:02:27.041432 15493 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.041714 master-0 kubenswrapper[15493]: E0216 17:02:27.041560 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.061566 master-0 kubenswrapper[15493]: W0216 17:02:27.061365 15493 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.061566 master-0 kubenswrapper[15493]: E0216 17:02:27.061524 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: 
connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.080886 master-0 kubenswrapper[15493]: W0216 17:02:27.080791 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-7mlbn&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.081069 master-0 kubenswrapper[15493]: E0216 17:02:27.080883 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-7mlbn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-7mlbn&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.101711 master-0 kubenswrapper[15493]: W0216 17:02:27.101608 15493 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.101711 master-0 kubenswrapper[15493]: E0216 17:02:27.101695 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.121529 master-0 kubenswrapper[15493]: W0216 17:02:27.121380 15493 reflector.go:561] object-"openshift-etcd"/"installer-sa-dockercfg-rxv66": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-rxv66&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.121529 master-0 kubenswrapper[15493]: E0216 17:02:27.121500 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd\"/\"installer-sa-dockercfg-rxv66\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-rxv66&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.141081 master-0 kubenswrapper[15493]: W0216 17:02:27.140887 15493 reflector.go:561] object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-qlqr4": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-qlqr4&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.141081 master-0 kubenswrapper[15493]: E0216 17:02:27.141068 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-kube-controller-manager\"/\"installer-sa-dockercfg-qlqr4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?fieldSelector=metadata.name%3Dinstaller-sa-dockercfg-qlqr4&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.241594 master-0 kubenswrapper[15493]: W0216 17:02:27.241425 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.241987 master-0 kubenswrapper[15493]: E0216 17:02:27.241885 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cloud-controller-manager-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.260991 master-0 kubenswrapper[15493]: W0216 17:02:27.260823 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.260991 master-0 kubenswrapper[15493]: E0216 17:02:27.260962 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.401752 master-0 kubenswrapper[15493]: W0216 17:02:27.401627 15493 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.401752 master-0 kubenswrapper[15493]: E0216 17:02:27.401732 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.421075 master-0 kubenswrapper[15493]: W0216 17:02:27.420889 15493 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator": failed to list *v1.ConfigMap: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy-cluster-autoscaler-operator&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.421075 master-0 kubenswrapper[15493]: E0216 17:02:27.421059 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy-cluster-autoscaler-operator\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy-cluster-autoscaler-operator&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.442750 master-0 kubenswrapper[15493]: W0216 17:02:27.442232 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-kh5s4&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.442750 master-0 kubenswrapper[15493]: E0216 17:02:27.442341 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-kh5s4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-kh5s4&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.467578 master-0 kubenswrapper[15493]: I0216 17:02:27.467500 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:27.561493 master-0 kubenswrapper[15493]: W0216 17:02:27.561405 15493 reflector.go:561] object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.561751 master-0 kubenswrapper[15493]: E0216 17:02:27.561506 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-credential-operator\"/\"cloud-credential-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/secrets?fieldSelector=metadata.name%3Dcloud-credential-operator-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.580525 master-0 kubenswrapper[15493]: I0216 17:02:27.580466 15493 request.go:700] Waited for 1.265267723s due to client-side throttling, not priority and fairness, request: 
POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token Feb 16 17:02:27.674350 master-0 kubenswrapper[15493]: I0216 17:02:27.674269 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:27.681402 master-0 kubenswrapper[15493]: W0216 17:02:27.681321 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.681529 master-0 kubenswrapper[15493]: E0216 17:02:27.681409 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.701009 master-0 kubenswrapper[15493]: W0216 17:02:27.700873 15493 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.701009 master-0 kubenswrapper[15493]: E0216 17:02:27.700992 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.721103 master-0 kubenswrapper[15493]: W0216 17:02:27.721002 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.721103 master-0 kubenswrapper[15493]: E0216 17:02:27.721101 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.762111 master-0 kubenswrapper[15493]: E0216 17:02:27.761996 15493 projected.go:288] Couldn't get configMap 
openshift-image-registry/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:27.782615 master-0 kubenswrapper[15493]: E0216 17:02:27.782518 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:27.801203 master-0 kubenswrapper[15493]: E0216 17:02:27.801133 15493 projected.go:288] Couldn't get configMap openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:27.822150 master-0 kubenswrapper[15493]: E0216 17:02:27.821974 15493 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:27.841048 master-0 kubenswrapper[15493]: W0216 17:02:27.840902 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.841152 master-0 kubenswrapper[15493]: E0216 17:02:27.841060 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.860945 master-0 kubenswrapper[15493]: W0216 17:02:27.860827 15493 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.861090 master-0 kubenswrapper[15493]: E0216 17:02:27.861028 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.881006 master-0 kubenswrapper[15493]: E0216 17:02:27.880913 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:27.881893 master-0 kubenswrapper[15493]: E0216 17:02:27.881826 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 17:02:27.882010 master-0 kubenswrapper[15493]: I0216 17:02:27.881887 15493 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:27.901387 master-0 kubenswrapper[15493]: W0216 17:02:27.901229 15493 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.901387 master-0 kubenswrapper[15493]: E0216 17:02:27.901369 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.902352 master-0 kubenswrapper[15493]: E0216 17:02:27.902304 15493 projected.go:288] Couldn't get configMap openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:27.921318 master-0 kubenswrapper[15493]: E0216 17:02:27.921252 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:27.921537 master-0 kubenswrapper[15493]: W0216 17:02:27.921407 15493 reflector.go:561] object-"openshift-insights"/"openshift-insights-serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.921624 master-0 kubenswrapper[15493]: E0216 17:02:27.921524 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-insights\"/\"openshift-insights-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/secrets?fieldSelector=metadata.name%3Dopenshift-insights-serving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.941261 master-0 kubenswrapper[15493]: E0216 17:02:27.941200 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:27.941514 master-0 kubenswrapper[15493]: W0216 17:02:27.941422 15493 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:27.941573 master-0 kubenswrapper[15493]: E0216 17:02:27.941534 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:27.962691 master-0 kubenswrapper[15493]: E0216 17:02:27.962618 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.080975 master-0 kubenswrapper[15493]: I0216 17:02:28.080745 15493 status_manager.go:851] "Failed to get status for pod" podUID="48801344-a48a-493e-aea4-19d998d0b708" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-676cd8b9b5-cp9rb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:28.161400 master-0 kubenswrapper[15493]: E0216 17:02:28.161318 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.180854 master-0 kubenswrapper[15493]: E0216 17:02:28.180793 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.191503 master-0 kubenswrapper[15493]: I0216 17:02:28.191432 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:28.191613 master-0 kubenswrapper[15493]: I0216 17:02:28.191553 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:28.191668 master-0 kubenswrapper[15493]: I0216 17:02:28.191641 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:28.191732 master-0 kubenswrapper[15493]: I0216 17:02:28.191697 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:28.191802 master-0 kubenswrapper[15493]: I0216 17:02:28.191771 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:28.191876 master-0 kubenswrapper[15493]: I0216 17:02:28.191843 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:28.191976 master-0 kubenswrapper[15493]: I0216 17:02:28.191914 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:28.192035 master-0 kubenswrapper[15493]: I0216 17:02:28.192012 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:28.192139 master-0 kubenswrapper[15493]: I0216 17:02:28.192099 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:28.192224 master-0 kubenswrapper[15493]: I0216 17:02:28.192198 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:28.192280 master-0 kubenswrapper[15493]: I0216 17:02:28.192259 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:28.192340 master-0 kubenswrapper[15493]: I0216 17:02:28.192314 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:28.192425 master-0 kubenswrapper[15493]: I0216 17:02:28.192397 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:28.192491 master-0 kubenswrapper[15493]: I0216 17:02:28.192464 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:28.192548 master-0 kubenswrapper[15493]: I0216 17:02:28.192523 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:28.192712 master-0 kubenswrapper[15493]: I0216 17:02:28.192673 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:28.192768 master-0 kubenswrapper[15493]: I0216 17:02:28.192744 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:28.192831 master-0 kubenswrapper[15493]: I0216 17:02:28.192803 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:28.192894 master-0 kubenswrapper[15493]: I0216 17:02:28.192869 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:28.193186 master-0 kubenswrapper[15493]: I0216 17:02:28.193146 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:28.193322 master-0 kubenswrapper[15493]: I0216 17:02:28.193283 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 
17:02:28.193501 master-0 kubenswrapper[15493]: I0216 17:02:28.193380 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:28.193501 master-0 kubenswrapper[15493]: I0216 17:02:28.193459 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:28.193569 master-0 kubenswrapper[15493]: I0216 17:02:28.193518 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:28.193606 master-0 kubenswrapper[15493]: I0216 17:02:28.193577 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:28.193750 master-0 kubenswrapper[15493]: I0216 17:02:28.193662 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:28.193818 master-0 kubenswrapper[15493]: I0216 17:02:28.193755 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:28.193859 master-0 kubenswrapper[15493]: I0216 17:02:28.193817 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:28.193899 master-0 kubenswrapper[15493]: I0216 17:02:28.193874 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:28.194010 master-0 kubenswrapper[15493]: I0216 17:02:28.193972 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:28.194700 master-0 kubenswrapper[15493]: I0216 17:02:28.194650 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:28.194776 master-0 kubenswrapper[15493]: I0216 17:02:28.194741 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:28.194831 master-0 kubenswrapper[15493]: I0216 17:02:28.194794 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:28.194866 master-0 kubenswrapper[15493]: I0216 17:02:28.194849 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:28.195110 master-0 kubenswrapper[15493]: I0216 17:02:28.195069 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:28.195199 master-0 kubenswrapper[15493]: I0216 17:02:28.195134 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:28.195232 master-0 kubenswrapper[15493]: I0216 17:02:28.195205 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:28.195260 master-0 kubenswrapper[15493]: I0216 17:02:28.195249 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:28.195372 master-0 kubenswrapper[15493]: I0216 17:02:28.195348 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:28.195588 master-0 kubenswrapper[15493]: I0216 17:02:28.195522 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:28.195685 master-0 kubenswrapper[15493]: I0216 17:02:28.195648 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:28.195725 master-0 kubenswrapper[15493]: I0216 17:02:28.195710 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:28.195808 master-0 kubenswrapper[15493]: I0216 17:02:28.195776 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:28.195871 master-0 kubenswrapper[15493]: I0216 17:02:28.195824 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:28.195904 master-0 kubenswrapper[15493]: I0216 17:02:28.195876 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:28.195970 master-0 kubenswrapper[15493]: I0216 17:02:28.195914 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:28.196106 master-0 kubenswrapper[15493]: I0216 17:02:28.196057 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:28.196219 master-0 kubenswrapper[15493]: I0216 17:02:28.196187 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:28.196306 master-0 kubenswrapper[15493]: I0216 17:02:28.196275 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:28.196377 master-0 kubenswrapper[15493]: I0216 17:02:28.196348 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.196413 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.196485 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.196544 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.196584 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.196634 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.196725 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.196789 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.196852 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.196890 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:28.197032 master-0 kubenswrapper[15493]: I0216 17:02:28.197024 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197097 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197153 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod 
\"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197230 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197330 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197443 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197592 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197653 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197707 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197761 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:28.197969 master-0 kubenswrapper[15493]: I0216 17:02:28.197816 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:28.197969 master-0 
kubenswrapper[15493]: I0216 17:02:28.197891 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198062 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198187 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198224 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198248 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198398 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198450 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198561 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" 
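
Note on the block above: the burst of reconciler_common.go:218 entries is the kubelet volume manager (operationExecutor) starting a MountVolume operation for every secret and configmap volume of every pod it is re-admitting after the kubelet restart; nothing here is an error yet. When triaging a dump like this it can help to tally mounts per pod. The following stdlib-only Go helper is a hypothetical sketch (parse_mounts.go is not part of any cluster tooling); it assumes each journal entry is a single line on stdin, as it is in the live journal, whereas the excerpt above is wrapped.

    // parse_mounts.go: a hypothetical triage helper, not cluster tooling.
    // Tallies "operationExecutor.MountVolume started" journal entries per pod.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches the pod="namespace/name" field that kubenswrapper appends
        // to each MountVolume entry shown above.
        re := regexp.MustCompile(`operationExecutor\.MountVolume started.*pod="([^"]+)"`)

        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := re.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++
            }
        }
        for pod, n := range counts {
            fmt.Printf("%4d  %s\n", n, pod)
        }
    }

Usage would be along the lines of: journalctl -u kubelet | go run parse_mounts.go
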
Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198673 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198744 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198826 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:28.198967 master-0 kubenswrapper[15493]: I0216 17:02:28.198885 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:28.200008 master-0 kubenswrapper[15493]: I0216 17:02:28.198993 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:28.200008 master-0 kubenswrapper[15493]: I0216 17:02:28.199328 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:28.200008 master-0 kubenswrapper[15493]: I0216 17:02:28.199438 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:28.200008 master-0 kubenswrapper[15493]: I0216 17:02:28.199501 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:28.200008 master-0 kubenswrapper[15493]: I0216 17:02:28.199581 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:28.200008 master-0 kubenswrapper[15493]: I0216 17:02:28.199646 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:28.200008 master-0 kubenswrapper[15493]: I0216 17:02:28.199704 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:28.200008 master-0 kubenswrapper[15493]: I0216 17:02:28.199901 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:28.200008 master-0 kubenswrapper[15493]: I0216 17:02:28.199956 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200118 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200163 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200190 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200290 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200416 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: W0216 17:02:28.200408 15493 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: E0216 17:02:28.200561 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200490 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200632 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200654 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200673 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200689 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200707 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:28.200711 master-0 kubenswrapper[15493]: I0216 17:02:28.200731 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.200821 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.200954 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201100 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201204 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201303 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201360 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201419 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201609 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201696 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: E0216 17:02:28.201721 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201757 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201812 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201871 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:28.202017 master-0 kubenswrapper[15493]: I0216 17:02:28.201989 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: 
\"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:28.203140 master-0 kubenswrapper[15493]: I0216 17:02:28.202104 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:28.203140 master-0 kubenswrapper[15493]: I0216 17:02:28.202161 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:28.203140 master-0 kubenswrapper[15493]: I0216 17:02:28.202214 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:28.203140 master-0 kubenswrapper[15493]: I0216 17:02:28.202270 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:28.203140 master-0 kubenswrapper[15493]: I0216 17:02:28.202325 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:28.222561 master-0 kubenswrapper[15493]: E0216 17:02:28.222501 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.281888 master-0 kubenswrapper[15493]: E0216 17:02:28.281847 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.302548 master-0 kubenswrapper[15493]: E0216 17:02:28.302476 15493 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.320898 master-0 kubenswrapper[15493]: E0216 17:02:28.320843 15493 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.321345 master-0 kubenswrapper[15493]: W0216 17:02:28.321286 15493 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.321408 master-0 kubenswrapper[15493]: E0216 17:02:28.321363 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.342479 master-0 kubenswrapper[15493]: E0216 17:02:28.342229 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.362421 master-0 kubenswrapper[15493]: E0216 17:02:28.362338 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.381869 master-0 kubenswrapper[15493]: E0216 17:02:28.381777 15493 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.461507 master-0 kubenswrapper[15493]: E0216 17:02:28.461431 15493 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.461803 master-0 kubenswrapper[15493]: W0216 17:02:28.461708 15493 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.461896 master-0 kubenswrapper[15493]: E0216 17:02:28.461825 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.481793 master-0 kubenswrapper[15493]: E0216 17:02:28.481698 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.482239 master-0 kubenswrapper[15493]: W0216 17:02:28.482121 15493 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-q2gzj&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.482345 master-0 kubenswrapper[15493]: E0216 17:02:28.482243 15493 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-q2gzj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-q2gzj&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.501368 master-0 kubenswrapper[15493]: E0216 17:02:28.501293 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.501749 master-0 kubenswrapper[15493]: W0216 17:02:28.501669 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-wnnb7&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.501844 master-0 kubenswrapper[15493]: E0216 17:02:28.501747 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wnnb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-wnnb7&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.522250 master-0 kubenswrapper[15493]: E0216 17:02:28.522173 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.540747 master-0 kubenswrapper[15493]: E0216 17:02:28.540681 15493 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.561386 master-0 kubenswrapper[15493]: W0216 17:02:28.561265 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcluster-cloud-controller-manager-dockercfg-lc8g2&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.561584 master-0 kubenswrapper[15493]: E0216 17:02:28.561381 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cluster-cloud-controller-manager-dockercfg-lc8g2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcluster-cloud-controller-manager-dockercfg-lc8g2&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.581255 master-0 kubenswrapper[15493]: W0216 17:02:28.581087 15493 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.581255 master-0 kubenswrapper[15493]: E0216 17:02:28.581177 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.581255 master-0 kubenswrapper[15493]: E0216 17:02:28.581266 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.600743 master-0 kubenswrapper[15493]: I0216 17:02:28.600619 15493 request.go:700] Waited for 1.421853917s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Feb 16 17:02:28.600743 master-0 kubenswrapper[15493]: E0216 17:02:28.600632 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.601455 master-0 kubenswrapper[15493]: W0216 17:02:28.601369 15493 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.601455 master-0 kubenswrapper[15493]: E0216 17:02:28.601437 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.621135 master-0 kubenswrapper[15493]: E0216 17:02:28.621084 15493 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.621602 master-0 kubenswrapper[15493]: W0216 17:02:28.621498 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.621682 master-0 kubenswrapper[15493]: E0216 17:02:28.621632 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.640524 master-0 kubenswrapper[15493]: E0216 17:02:28.640480 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.640667 master-0 kubenswrapper[15493]: W0216 17:02:28.640588 15493 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.640720 master-0 kubenswrapper[15493]: E0216 17:02:28.640681 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.660850 master-0 kubenswrapper[15493]: E0216 17:02:28.660769 15493 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.661556 master-0 kubenswrapper[15493]: W0216 17:02:28.661366 15493 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.661556 master-0 kubenswrapper[15493]: E0216 17:02:28.661435 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.681716 master-0 kubenswrapper[15493]: W0216 17:02:28.681534 15493 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-r5p9m&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.681716 master-0 kubenswrapper[15493]: E0216 17:02:28.681601 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-r5p9m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-r5p9m&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" logger="UnhandledError" Feb 16 17:02:28.701518 master-0 kubenswrapper[15493]: E0216 17:02:28.701439 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.701753 master-0 kubenswrapper[15493]: E0216 17:02:28.701563 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:36.701529612 +0000 UTC m=+35.851702682 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.720993 master-0 kubenswrapper[15493]: W0216 17:02:28.720876 15493 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.721219 master-0 kubenswrapper[15493]: E0216 17:02:28.721004 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.741865 master-0 kubenswrapper[15493]: W0216 17:02:28.741707 15493 reflector.go:561] object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcloud-controller-manager-operator-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:02:28.741865 master-0 kubenswrapper[15493]: E0216 17:02:28.741850 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cloud-controller-manager-operator\"/\"cloud-controller-manager-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dcloud-controller-manager-operator-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:02:28.742221 master-0 kubenswrapper[15493]: E0216 17:02:28.742048 15493 projected.go:288] Couldn't get configMap 
openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.760728 master-0 kubenswrapper[15493]: W0216 17:02:28.760629 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:28.760728 master-0 kubenswrapper[15493]: E0216 17:02:28.760714 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:28.761023 master-0 kubenswrapper[15493]: E0216 17:02:28.760835 15493 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.763112 master-0 kubenswrapper[15493]: E0216 17:02:28.763026 15493 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.763112 master-0 kubenswrapper[15493]: E0216 17:02:28.763091 15493 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.763848 master-0 kubenswrapper[15493]: E0216 17:02:28.763185 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:32.763158953 +0000 UTC m=+31.913332033 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.780891 master-0 kubenswrapper[15493]: E0216 17:02:28.780805 15493 projected.go:194] Error preparing data for projected volume bound-sa-token for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:28.781193 master-0 kubenswrapper[15493]: E0216 17:02:28.780964 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:36.780907662 +0000 UTC m=+35.931080772 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "bound-sa-token" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:28.782178 master-0 kubenswrapper[15493]: E0216 17:02:28.782138 15493 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.783386 master-0 kubenswrapper[15493]: E0216 17:02:28.783347 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.783455 master-0 kubenswrapper[15493]: E0216 17:02:28.783390 15493 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.783493 master-0 kubenswrapper[15493]: E0216 17:02:28.783466 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:32.78344553 +0000 UTC m=+31.933618640 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.800756 master-0 kubenswrapper[15493]: W0216 17:02:28.800666 15493 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:02:28.800852 master-0 kubenswrapper[15493]: E0216 17:02:28.800765 15493 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:02:28.801817 master-0 kubenswrapper[15493]: E0216 17:02:28.801778 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.801817 master-0 kubenswrapper[15493]: E0216 17:02:28.801808 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zt8mt for pod openshift-network-operator/network-operator-6fcf4c966-6bmf9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.801947 master-0 kubenswrapper[15493]: E0216 17:02:28.801862 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:32.801841576 +0000 UTC m=+31.952014666 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zt8mt" (UniqueName: "kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.801947 master-0 kubenswrapper[15493]: E0216 17:02:28.801882 15493 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.820878 master-0 kubenswrapper[15493]: E0216 17:02:28.820815 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.821794 master-0 kubenswrapper[15493]: E0216 17:02:28.821739 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/bootstrap-kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:02:28.821993 master-0 kubenswrapper[15493]: E0216 17:02:28.821972 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.401s"
Feb 16 17:02:28.822951 master-0 kubenswrapper[15493]: E0216 17:02:28.822844 15493 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.822951 master-0 kubenswrapper[15493]: E0216 17:02:28.822877 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.823143 master-0 kubenswrapper[15493]: E0216 17:02:28.823118 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:32.823090719 +0000 UTC m=+31.973263809 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.830206 master-0 kubenswrapper[15493]: I0216 17:02:28.830151 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 16 17:02:28.840609 master-0 kubenswrapper[15493]: I0216 17:02:28.840525 15493 status_manager.go:851] "Failed to get status for pod" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-d8bf84b88-m66tx\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:02:28.861433 master-0 kubenswrapper[15493]: E0216 17:02:28.861252 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/bootstrap-kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Feb 16 17:02:28.861433 master-0 kubenswrapper[15493]: I0216 17:02:28.861330 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Feb 16 17:02:28.881184 master-0 kubenswrapper[15493]: E0216 17:02:28.881125 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.881275 master-0 kubenswrapper[15493]: E0216 17:02:28.881199 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.881339 master-0 kubenswrapper[15493]: I0216 17:02:28.881127 15493 status_manager.go:851] "Failed to get status for pod" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-75b869db96-twmsp\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:02:28.881390 master-0 kubenswrapper[15493]: E0216 17:02:28.881338 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:32.881299299 +0000 UTC m=+32.031472399 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/serviceaccounts/kube-storage-version-migrator-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.900520 master-0 kubenswrapper[15493]: E0216 17:02:28.900459 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-scheduler-master-0"
Feb 16 17:02:28.900706 master-0 kubenswrapper[15493]: I0216 17:02:28.900677 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"837a858734b801f62c18bbc1ac1678d7076080812a795cc7c558fa08b748a43c"}
Feb 16 17:02:28.900760 master-0 kubenswrapper[15493]: I0216 17:02:28.900722 15493 status_manager.go:379] "Container startup changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26"
Feb 16 17:02:28.900760 master-0 kubenswrapper[15493]: I0216 17:02:28.900735 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:02:28.900847 master-0 kubenswrapper[15493]: I0216 17:02:28.900786 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"06940a658879a063b012c2bf76a3258fbdd61e5203f5587e2a2a955dfa358b02"}
Feb 16 17:02:28.900847 master-0 kubenswrapper[15493]: I0216 17:02:28.900804 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"fa96b9440dbcead07a8e8a2883de97575b011436686f4fab2170bdfcc0a3f79e"}
Feb 16 17:02:28.900847 master-0 kubenswrapper[15493]: I0216 17:02:28.900819 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"7bcf62830ed108bcbaff872a01506e1cfaba1ae290ee01528f3fca2ecf257682"}
Feb 16 17:02:28.900847 master-0 kubenswrapper[15493]: I0216 17:02:28.900831 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerStarted","Data":"81ddf55b61540f7b5e030d229eea51d26c8a5bda0650c33851cbe3fbbeefd261"}
Feb 16 17:02:28.900847 master-0 kubenswrapper[15493]: I0216 17:02:28.900842 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerStarted","Data":"add87c6ecc390c11dd4bb671cf6c85cf8d43a3b5be958bf533ae60889482daca"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.900854 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.900867 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.900879 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.900895 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"be29035bd3f07d8681e71946753c9f5c4233d203be4ff12561b76d96bc674177"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.900939 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"8b5f186fc636a0c8960f76cfef6732841109955fd2f4967d010972e20332e869"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.900953 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerDied","Data":"925f178f46a1d5c4c22dbeed05e4d6e9975a60d252305dcd17064d2bc8dfab6e"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.900965 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"84fbcf4f8c4afda2e79a3be2b73e332fc9f2d8ce27d17523a1712d76d3ce752e"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.900978 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"f3e3e8e94dc6c217da7c3312700e3c981cf01212e798fb2c9ea5fc2b31f6b8aa"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.900990 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"b2ea1bb15f78693382433f2c7f09878ee2e059e95bab8649c9ca7870ea580187"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.901018 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"8d9325183d87d503ed41689fd08cf0ecd5e5cd428a5bae6824cddf556b030e2a"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.901031 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerStarted","Data":"221dc64441e450195317a3ad8eacbbb293523d0726dbd96812217d44d6f1da31"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.901044 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerStarted","Data":"b077d967ff0915e46adebbfea57fba17bebbd700385551a20b3c9d4bda18abd6"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.901056 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"e843fe6093fddb4f0608997e3c887c540c5790a52102b7ba7e769e8ae9904f7d"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.901071 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"af4f06c8656e24dc76c11a21937d73b5e139ad31b06bedcdf3957bacba32069a"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.901082 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"c68b78ea048e3e05fd1fcd40eae1c2d97a33dc3cbf3cea258f66da49798e5912"}
Feb 16 17:02:28.901086 master-0 kubenswrapper[15493]: I0216 17:02:28.901094 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"71373993bd8fa85e34385967dc668cef9cf33a45809ff033e291394c3abdeb57"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901106 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"b5e6e0c200ef6468da128fab1a901d498e73068beb07a54310f215479193099d"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901111 15493 scope.go:117] "RemoveContainer" containerID="e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26"
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901120 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"96c8b16be41a61f78ae9a0d158764cfb3f1dc1be9541f6dde4356d45ed489d8c"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901270 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"07ee05b11ab243298aba0652acab149107fdee4d056b25a8d70e009ebf722842"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901284 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"9b508704ca913b3676949d448345a8f778d17c4d3d7c7156e1db34b5da7a8c96"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901295 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"6f850c8263f7a5fffe361664a6b474015b2a97155111509d5a8154875803d4f3"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901306 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"f47270eadf232a1b51b70eb1069033d1ee831e9e2a83cf22e20d3b2db1ceb184"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901322 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"d49374ae3fffc96a5ea6ebfe8e306371f24f9cbc5677024d9ced60c8e5b1a65e"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901332 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"5d39ed24-4301-4cea-8a42-a08f4ba8b479","Type":"ContainerStarted","Data":"2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901344 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"5d39ed24-4301-4cea-8a42-a08f4ba8b479","Type":"ContainerStarted","Data":"4b0c06fa22c4b9fdff535e3051201dc0bd36447aab43eba5f5549b527b9cff7e"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901358 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vfxj4" event={"ID":"a6fe41b0-1a42-4f07-8220-d9aaa50788ad","Type":"ContainerStarted","Data":"f10912c30fd4a11ea42c60b953841baf59f4219d858a735d1f2aa7871453e0dd"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901369 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vfxj4" event={"ID":"a6fe41b0-1a42-4f07-8220-d9aaa50788ad","Type":"ContainerStarted","Data":"631bd21307bb03e494699eafea36a1ef9835bef8edb1d35b0dd997809ea4fddf"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901380 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"d333f86f0a8ab06d569bfb3d4f4ee86bbc505f7ff52162d4fe6868c5e30caf74"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901391 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"df55732c40e933ea372f1214a91cde4306eb5555441b3143bbda5066dd5d87f2"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901402 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"b3cfd9fdcf09c3cddf191f185c0ac9e10d9889f1bf6c1c0a015d8feabfcf56b7"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901412 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"b0576ce377a5cee2ae182a3190bd7d01c4057d29cbcd5c8c32f7d95440a684f0"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901424 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"45b31aa01f6a85e0dcf85670319be85b9e6d0c112d9bd0004ef655a9654d75f6"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901477 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerDied","Data":"e310e36fd740b75515307293e697ecd768c9c8241ff939db071d778913f35a7a"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901494 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerStarted","Data":"0cff847538436e1b2bb3434e2e04b8332738e465e04639ea97b586f2461bb9fc"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901508 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"d85f4bae9120dd5571ac4aef5b4bc508cd0c2e61ac41e2e016d2fca33cf2c0df"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901521 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"6bb85739a7a836abfdb346023915e77abb0b10b023f88b2b7e7c9536a35657a8"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901564 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"aa5ecc6a98445fdbcf4dc0a764b1f3d8e109d603e5ddc36d010d08e31acfcc8f"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901579 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"a493ec972b676ff0c630095722c8d8d6f05ae211809b90b8791aa422b9dcb2fb"}
Feb 16 17:02:28.901718 master-0 kubenswrapper[15493]: I0216 17:02:28.901673 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"41145f961148dffbd55b7be77a9591605ef99767213da81b0ba442326c4b3012"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.901695 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"8621c35772b0fa0c74746882d26cde088c3ee0e7e232d2738c23c769fa66118c"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.901795 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"7c0822a4b748eb1f3f4a4167fcf68aef3951b37e78e3f357e137483a9da93da7"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.901454 15493 scope.go:117] "RemoveContainer" containerID="9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f"
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.901817 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"4680b8c2d5e31d1d35cc0e3e5320c2ad6ac1474aaaf6f05440e71e203962ad7d"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.901878 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"a4258db92ccb0e5bfb5051d02b4ac371ae71dd3a55d7950001a7b771cb5d1c29"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.901894 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"f669ceacf6e4215d33879fd75925e984def643e57187c462c685b966c75f2673"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.901908 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" event={"ID":"ab6e5720-2c30-4962-9c67-89f1607d137f","Type":"ContainerStarted","Data":"4a3f00327e72eb182ca9d24f6345e55e740d3bc96d139c82176b1ad867248cfd"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.901950 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" event={"ID":"ab6e5720-2c30-4962-9c67-89f1607d137f","Type":"ContainerStarted","Data":"8596ea544be0a448a19f843f8fb2963353f75aac2b39d7b1fc12540532ae6bdc"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.901987 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" event={"ID":"ab6e5720-2c30-4962-9c67-89f1607d137f","Type":"ContainerStarted","Data":"2ab21ee08c6858b29e0d5402811c6a5058510ebfc99fdee4dceca48abf0ebb37"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902002 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerStarted","Data":"22751fbbdf7aa3224dae4e546afa76f6812f3b8e22c34ed3ba395d1643038f1f"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902016 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerStarted","Data":"960647e5dd274ec370d3ea843747f832b88bbc5e8bbea57e384a265bf5609dcc"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902027 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerStarted","Data":"abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902039 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerStarted","Data":"95008d005493fc2ada0d9b7ff7c718284548b7f519269f9c8d8a7c1fae08fbf6"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902052 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"0003ee69c56b0c73d7d4526fa1f5d5fb937628023fcef99de3436e9f297fc1a8"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902066 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"e2e9f120a9e16219c47ddb40ab80ffcfe27430f9f99e0080976b18f917b8870a"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902080 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"399ed6a40d22a74f841b008ea9aabb72324ddc42397c926051d186c2a8be50e2"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902091 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerStarted","Data":"ade1880aca33a2a6fecd8de7a6fb9caa6cf30a4d0a9280f0ea929a2643dc290b"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902104 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerStarted","Data":"f4ce4120d8890f765a717b5f92a49bb939a1d012d4ecb18b255dc6309ea6d107"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902117 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"461a2f0f61f0fcc0eb519485188a2e4212d395f0c1a67321cce2d8f4b7ef3e1c"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902131 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"3af7f55a17ec60042c0482aa69809fbba4e6ed0269b1409544298283f99ef1ef"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902143 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerDied","Data":"0c316f0475ab0d19308e3571553a8196d11f7628c2f61de84b97dea8ed48cf58"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902155 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"34d279c74bd940d5ab6f0f7e4b7983d57ebc4d60ff3c8f38850791761b56d54b"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902166 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"0bee485c8968ce0e68dba41fcbcee4d323847661d4d7322172f3a42844676150"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902376 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"370077a3a36bce27e444d5d7ac12daf42269e596b3cbd5fa257c45ddbfe8edf1"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: E0216 17:02:28.902402 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902419 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"1d90441ff6782f784fce85c87a44597213ff8f98913ae13bfc6b95e97a8d2532"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902489 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" event={"ID":"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db","Type":"ContainerStarted","Data":"303f8e1c362195fd4193cfe61e0e57d53326ff15f9ff7312804a028571094c23"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: E0216 17:02:28.902515 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: E0216 17:02:28.902540 15493 projected.go:194] Error preparing data for projected volume kube-api-access-q46jg for pod openshift-network-operator/iptables-alerter-czzz2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902516 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" event={"ID":"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db","Type":"ContainerStarted","Data":"038865dec18a0d018db14cc0ef7eef2e93b57b0f6f010be3c036aa9f30e0bec0"}
Feb 16 17:02:28.902650 master-0 kubenswrapper[15493]: I0216 17:02:28.902642 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f","Type":"ContainerDied","Data":"90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902674 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f","Type":"ContainerStarted","Data":"dd3ceeb0da8db938eae2cfa500166d7af7a50e381f011dcd54ec971db54cfcba"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902706 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerStarted","Data":"fb06e1ce2942ea95f146315a11dd8bc05e374eacc49a86ee457b9eb98dde18f6"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902730 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerDied","Data":"97d671c2a336b225236f0499e973eab6ef7683203f7b46f7e3767de75b466dd3"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902752 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerStarted","Data":"6518a84f5d47511f5f25592fd8ed06e7ac0d8f38709f9e4fcd73acdf3eb6490c"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902775 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerStarted","Data":"12b0103601aa9d452d5c380b8174f625698cf75c8ec9ba10415964e9b65d2f4f"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902802 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerDied","Data":"8fa44b1ac9949e31fd12e8a885f114d1074a93f335ef9c428586ae9835e14643"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902828 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerStarted","Data":"c5fa73884bbf6d82e89a9b049cd7e08d54171e2ca181ad4d436172b3a8202990"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902846 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" event={"ID":"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd","Type":"ContainerStarted","Data":"a620812d00ec1d27fd80352b095f2c12e6234eb0d4bf84ea3c70b1cbd9af080b"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902868 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" event={"ID":"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd","Type":"ContainerStarted","Data":"b595f395aac0332f79c685e4f9b8d1184bc8d65ea7662129777c88a2f4b6d75c"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902894 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerStarted","Data":"7ac85a9051d1d62aecdb0aea9f364c312e41f09a8b1c3d2e9cdedd31994406f5"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902949 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerStarted","Data":"a8742926579beb3bc6f4cff1fa7c25f0bdd68039ed37e9a331f3e110c7838ff1"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902974 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" event={"ID":"970d4376-f299-412c-a8ee-90aa980c689e","Type":"ContainerStarted","Data":"f0cff197ff851b70b3d6e59d84a65158829bfc95e73a2af61cd24901eaa4cfe8"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.902996 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" event={"ID":"970d4376-f299-412c-a8ee-90aa980c689e","Type":"ContainerStarted","Data":"99e6140d34fdb87885ac6f6b6458e82728271da4c3991473cfe40419736e575d"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: E0216 17:02:28.903011 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:32.902586383 +0000 UTC m=+32.052759543 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-q46jg" (UniqueName: "kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/iptables-alerter/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903013 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerDied","Data":"8399cb1f8a954f603085247154bb48084a1a6283fe2b99aa8facab4cb78f381d"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903052 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerStarted","Data":"11ed7f8e3ea465f63c87bfe4f19d1af5fa7ffa1300819fc95ebd1dd0c7c845d0"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903069 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerStarted","Data":"8d2e502f120afcfdbf733271dac9a15c4729b61975022a5fc8946190d6a66af4"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903082 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerStarted","Data":"4c0337d0eb1672f3cea8f23ec0619b33320b69137b0d2e9bc66e94fe15bbe412"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903251 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" event={"ID":"48801344-a48a-493e-aea4-19d998d0b708","Type":"ContainerStarted","Data":"554bf64355b5b3eed04f706e68cd50dcf6f9b6576e2e066858b9fbe0374728cf"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903270 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" event={"ID":"48801344-a48a-493e-aea4-19d998d0b708","Type":"ContainerStarted","Data":"af61cb28f0ada5d7c4d2b6d4eb5d095894f589e163adb675a754c13d082c9ab9"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903284 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"f779b4dca39b490013fbc325e5d662f7110932224ce922a8167a1dd7f4ad51ac"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903298 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"ebcc47375c8090ea868a5deccf7dc1e91eebca2d21948753da2f002b09800231"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903312 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"d0a7109bce95d1a32301d6e84ffc12bd1d37b091b1ee1ee044686d1a38898e0f"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903327 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"4a3fbb1a388ca141e061ddd3f456a30e0ea19e4b3d5d971ef21b891853ddad88"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903340 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"510990a72db12a97eef2b9c9fbdaec55abf5d52c68ce419a7f5a87a3062f73f1"}
Feb 16 17:02:28.903973 master-0 kubenswrapper[15493]: I0216 17:02:28.903351 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"d377c24744b60cc35617b8e88be818c3a9283d990df16a27d5112c6aed9ce981"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904484 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerStarted","Data":"976b039c9b06af0f3723d83d4469ee022692218ee590a3983454ac89413005ba"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904508 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerStarted","Data":"f1e81c2caa02917ae2e1efaeab30f34c00bb80423dce6819a41e6640d4fdc6d5"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904519 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" event={"ID":"5192fa49-d81c-47ce-b2ab-f90996cc0bd5","Type":"ContainerStarted","Data":"a0b5d7ea986d410582d28daade693c0e0c2c5f11b8996357f83090e55f5232a7"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904531 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" event={"ID":"5192fa49-d81c-47ce-b2ab-f90996cc0bd5","Type":"ContainerStarted","Data":"48fe704f6b9f25810dcd5004b13a7c413fb8fc4a4e972dfe51f7142aa16f0fee"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904542 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerDied","Data":"58c88a445d8c10824c3855b7412ae17cbbff466b8394e38c4224ab694839c37d"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904557 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerStarted","Data":"1a6fc168713ed892fb86b4e303cafe982f512d1d221599bf5dd49b75c3751ce5"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904568 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"dae1a04576d4d712d2d5bb1de6d3e36f80a9ba9aa32a0acd1c2d40512ad5b174"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904578 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerDied","Data":"a650093628feaa4193c1b7c57ea685e55d5af706446f54a283f32836e6d703a9"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904590 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerDied","Data":"ff9b3b2992b50e55900986e351d7a1b84719ad88820b81ad374c423bd1f1a2a8"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904604 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"95380b516961f947b4de886138c9d7adc4beb7c7579d206d803e4d6c415fb290"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904615 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7kjr" event={"ID":"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa","Type":"ContainerDied","Data":"12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904628 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7kjr" event={"ID":"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa","Type":"ContainerStarted","Data":"bd5dcd2c4add7ffc4e409d02664e000a9abb556798c746bb479a7c76fa9d67b8"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904644 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"7c3069013a087b4b128510ad9f826bdcec64055b56ce1f6796106b46734c14be"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904655 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"6f66af8b0562664573bf8d9a4bb0da731f2d18edeb2c73c463d4bf0acaedcb60"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904666 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"d13fb6fe74528b3a775305b065aa4a4f2df12cd45d9b3cf6d3d699a5fdafc519"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904679 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"86c571b6-0f65-41f0-b1be-f63d7a974782","Type":"ContainerStarted","Data":"e607db32e1640f4a53c9cd19e2f52a26fa9cbdb5cdabb553570529d03baa71fa"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904690 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"86c571b6-0f65-41f0-b1be-f63d7a974782","Type":"ContainerStarted","Data":"cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904700 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" event={"ID":"5a275679-b7b6-4c28-b389-94cd2b014d6c","Type":"ContainerStarted","Data":"e6cda4ab3c9867af7a01fb5a090799b3598cbcc97267a527ff61d80c779d1d83"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904711 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" event={"ID":"5a275679-b7b6-4c28-b389-94cd2b014d6c","Type":"ContainerStarted","Data":"fbcae0a406fe6ca88ac22ca96551fc1de219ee3e9705034ce16efd7971fc9fed"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904723 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b1b4fccc-6bf6-47ac-8ae1-32cad23734da","Type":"ContainerStarted","Data":"a6f2cc640b5de57d7f65239e3dfae00a6c9cda6decad3cf4c15c3e87bd2e0a2d"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904735 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b1b4fccc-6bf6-47ac-8ae1-32cad23734da","Type":"ContainerStarted","Data":"aa37dd5bc712a6e66397e0efcad2c702b51d3841d761278b212389b13ad668e0"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904746 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"b62943328f5fac54686a2ebf612b57c71fd7fbf45329dc96f7bdc742f3287d41"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904758 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"6802981ef2e5cdad643b58e0253f48a1465df01861501821550ee2ca659e7e88"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904769 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"7f3624c603b0a3ab1d6d22b9ebbf3c00bc31ae7c696fca7464238b99ca1dc1bf"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904783 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904795 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904807 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"400a178a4d5e9a88ba5bbbd1da2ad15e","Type":"ContainerStarted","Data":"0a3ce6339796232d6462786af4891ac2f6ae4477b24c445386f55fd5ad1be497"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904818 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerDied","Data":"dc9e8dbf3a74fb329eb23f61fe7acc2cbbecad6e0ad9994f107aa3c7b0c60d14"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904831 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerStarted","Data":"dbac153ecd4a3f99cd0e69d237edceb4a48ab6c1d711adc4be460a302948a462"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904843 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerStarted","Data":"7573bf948e4a5ccb81f3214838cf4ecabd14ac4f2c4a11558ad134016b1c1851"}
Feb 16 17:02:28.905075 master-0 kubenswrapper[15493]: I0216 17:02:28.904856 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerStarted","Data":"905e4fdbfe2147706f13434bc2e5b9a3cbf884a588d29ecfca730d73382fb68f"}
Feb 16 17:02:28.908230 master-0 kubenswrapper[15493]: I0216 17:02:28.908183 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"e89782b445c861c527809a14cee2fd06738fb3945a0c6a3181ecbf78934d2bda"}
Feb 16 17:02:28.908230 master-0 kubenswrapper[15493]: I0216 17:02:28.908219 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"a60e6d4793a7edacd573013c98b5733c94b908b9bbe63d3fd698f772eed289c7"}
Feb 16 17:02:28.908345 master-0 kubenswrapper[15493]: I0216 17:02:28.908235 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"035c8af0-95f3-4ab6-939c-d7fa8bda40a3","Type":"ContainerDied","Data":"8d78fa623e175273ca9fb1b430de0aa7e6c7b81ae465f33ce572879406853709"}
Feb 16 17:02:28.908345 master-0 kubenswrapper[15493]: I0216 17:02:28.908253 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"035c8af0-95f3-4ab6-939c-d7fa8bda40a3","Type":"ContainerStarted","Data":"eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc"}
Feb 16 17:02:28.908345 master-0 kubenswrapper[15493]: I0216 17:02:28.908265 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerDied","Data":"923a71501b419dfeeea5a3bc9e6232ad282276a9f4cb4239a8c0e6dc182d5ef7"}
Feb 16 17:02:28.908345 master-0 kubenswrapper[15493]: I0216 17:02:28.908279 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerStarted","Data":"0c31bbb582da4a5c2f2c01e8ab5dbd9246ddce55c685733c6872e97a601d53de"}
Feb 16 17:02:28.908345 master-0 kubenswrapper[15493]: I0216 17:02:28.908292 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kkl7" event={"ID":"a6d86b04-1d3f-4f27-a262-b732c1295997","Type":"ContainerDied","Data":"b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1"}
Feb 16 17:02:28.908345 master-0 kubenswrapper[15493]: I0216 17:02:28.908305 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kkl7" event={"ID":"a6d86b04-1d3f-4f27-a262-b732c1295997","Type":"ContainerStarted","Data":"69df1f56628ba74129afddfead0ccc135e8c4f4c22ab04aa82de33d67e1e6121"}
Feb 16 17:02:28.908345 master-0 kubenswrapper[15493]: I0216 17:02:28.908317 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-thhq2" event={"ID":"f8589094-f18e-4070-a550-b2da6f8acfc0","Type":"ContainerDied","Data":"a029a9519b0af6df58434184bb4dd337dec578276ce41db33a7f4964a78b38d1"}
Feb 16 17:02:28.908345 master-0 kubenswrapper[15493]: I0216 17:02:28.908330 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-thhq2" event={"ID":"f8589094-f18e-4070-a550-b2da6f8acfc0","Type":"ContainerDied","Data":"032b64d679f57601688a8c909d1648c2b2ff07b1d0ed9eae1ac157ec69dbfe35"}
Feb 16 17:02:28.908345 master-0 kubenswrapper[15493]: I0216 17:02:28.908342 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="032b64d679f57601688a8c909d1648c2b2ff07b1d0ed9eae1ac157ec69dbfe35"
Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908355 15493 kubelet.go:2453] "SyncLoop (PLEG): event
for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerStarted","Data":"7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908367 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerStarted","Data":"e596a971faed7fb65d78d19abb83585c95e9a5de18c154df5de65c3d54692d18"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908379 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"079e840529eb6d74a125e4d8873e01bd5f48d0a6e891c798f77f912c0e2b6249"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908391 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"d0f7e8be40545fa33b748eaa6f879efc2d956e86b6534dcac117b6e66db8cbc2"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908403 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"a0dc239cad7cf5c0f46eaeb5867ad213f7711a1950bb1f960b003e867bacaff0"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908415 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"16a0cd95be2918fe98e0a8ede15fe5203c9e491ca6e96550b8c7ea95ff6081d2"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908426 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"27e2fd204d60ad6b8a779a11015379244968cbe2949b9df72430bd5ea3162c81"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908436 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-czzz2" event={"ID":"b3fa6ac1-781f-446c-b6b4-18bdb7723c23","Type":"ContainerStarted","Data":"1dd5d8988b37bdb2482d281fd59a39049b27b81843f30e0726690490865aefa6"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908447 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-czzz2" event={"ID":"b3fa6ac1-781f-446c-b6b4-18bdb7723c23","Type":"ContainerStarted","Data":"a51ce5dfcf0ae0215dfb9bb56d30c910b8c1e31cb77a303efaded16db5c0b84f"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908459 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" event={"ID":"b6ad958f-25e4-40cb-89ec-5da9cb6395c7","Type":"ContainerStarted","Data":"2dafd39a483160a1d39cbe9a3a9409c939da33f2a648ec553387255240b550e9"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908471 15493 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" event={"ID":"b6ad958f-25e4-40cb-89ec-5da9cb6395c7","Type":"ContainerStarted","Data":"82c53f2c6633be154a699c2073aab7e95ac6eb9355d0da7ab4ff59d5ab695ebf"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908486 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"179f3f8e9463125ded1c5a4f832192a17edba6e13a5506acf48e86abcd40cda7"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908499 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"be513303bf4ed878c6c5f6ef9c7437f58c4a298f57e9e8964fc17527ef538c38"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908511 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"f9d46acb28343da01106746c6081478cf22731025778bc59637f1078390a8865"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908523 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"435ef5863cc155441a05593945ab2775001a00c9f99d0e797a375813404c36ac"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908539 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"bacf9b29c15cf47bbdc9ebe2fcba6bca3cfdec92ee8864175a5412cbbe3c9659"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908550 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"11df536ab46de7aea5d67794ede57f343d242c66232b1933e38f8621505f15f7"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908562 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"8e6333d17c854be811265371ff3fa3a77118514f88a15fbd08c26eea148ad400"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908572 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"f6a17f679ed7a7fbe57a462f9ffd2577eef58e5ba226eff8515fa879120c4750"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908592 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"9f614b14cbff08be0e14be8cba5e89de122b81583a34321af46bbe62e5a802b3"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908603 15493 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"e9b033b48182246ed491c211e63d13c81386e7e5e19d72d1dd3822fc6dd2d4e4"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908614 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"6964245d58587f66b762b5ac2d9e1b1dc13364bf0c4f27c746f3f696d56a4d52"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908625 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"80a684ef556f1e87f3c9e02305940474602ddfe3de8f6beeb708e0f676fea206"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908636 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"75939ba6ce47e33cbb255166206afbbb5bb2eddc8618e626a18427d506fc7a2f"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908646 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"21bc05ac92fb28745962add720939690ac3c68281bef41a2c339dfc844b33eb9"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908660 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"9c906689512d1a1264797a823e480178e96aca8c88376bbe95cad584cee2c02c"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908671 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"bfff95a0d14f0841a22b2fd65881101b798827da455a93e9bb8b076c265fc42a"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908681 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"6a76b7400b08797d8e5d6ecf8b5e5677ebdccdcb8c93451e24cae607d87b5dde"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908695 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"dadda19ba6587a75b418addc51b36c2e0a7c53f63977a60f6f393649c7b6d587"} Feb 16 17:02:28.908691 master-0 kubenswrapper[15493]: I0216 17:02:28.908706 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r7wj" event={"ID":"43f65f23-4ddd-471a-9cb3-b0945382d83c","Type":"ContainerStarted","Data":"5d094f2876f5545b7c63fc8765883d9a87f0c59f12737ba412250f81627afa8d"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.908717 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r7wj" 
event={"ID":"43f65f23-4ddd-471a-9cb3-b0945382d83c","Type":"ContainerStarted","Data":"4b4cf6ce22ab8720cdceaa9299137fdb7eefaf7a73cc07e0b511a6eb79ff2810"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.908728 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerStarted","Data":"1a75bfcb3d6ee6e289b7323fbc3d24c63e7fcd67393fd211cfa30edcae278f7a"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.908741 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerStarted","Data":"fbbdab5ef2164d5b878fbbf6c9e025a67f22281db5fb649f6dbfc4b829160d91"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.908753 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerDied","Data":"f31ff62ede3b23583193a8479095d460885c4665f91f714a80c48601aa1a71ad"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.908766 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerStarted","Data":"74ced4b4e3fdce2aecbb38a4d03ec1a93853cd8aa3de1fd3350c1e935e0a300f"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.908985 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerDied","Data":"01bf42c6c3bf4f293fd2294a37aff703b4c469002ae6a87f7c50eefa7c6ae11b"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909000 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerStarted","Data":"75a3e91157092df61ab323caf67fd50fd02c9c52e83eb981207e27d0552a17af"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909021 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerDied","Data":"e8c4ffcf7c4ece8cb912757e2c966b100c9bb74e9a2ec208a540c26e8e9187ce"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909034 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerStarted","Data":"de6b829dcdd7c76ab814893ea3af1edbdeba5f9048feac58739f12fbe595c34c"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909046 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"86c571b6-0f65-41f0-b1be-f63d7a974782","Type":"ContainerDied","Data":"e607db32e1640f4a53c9cd19e2f52a26fa9cbdb5cdabb553570529d03baa71fa"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909060 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" 
event={"ID":"035c8af0-95f3-4ab6-939c-d7fa8bda40a3","Type":"ContainerDied","Data":"eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909071 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eced2e42d842617466080f00d7950ca196964eff7f84fad83ac2c918e5c89adc" Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909082 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"7fe1c16d-061a-4a57-aea4-cf1d4b24d02f","Type":"ContainerDied","Data":"dd3ceeb0da8db938eae2cfa500166d7af7a50e381f011dcd54ec971db54cfcba"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909094 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerDied","Data":"435ef5863cc155441a05593945ab2775001a00c9f99d0e797a375813404c36ac"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909110 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n7kjr" event={"ID":"1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa","Type":"ContainerDied","Data":"bd5dcd2c4add7ffc4e409d02664e000a9abb556798c746bb479a7c76fa9d67b8"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909122 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"d6a60e780e1ec46760e0dc56aa2e2446cbe16e5a5a365f586b3a5546ece26409"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909137 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"99b64ee5f7d6979576c32746b6a26000626ad37737e1451b1e8438444ca6ebc8"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909147 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5b26dae9694224e04f0cdc3841408c63","Type":"ContainerStarted","Data":"1083aa6beb90f48dd5db6f69c3ba490b4f6ca8d9fefde7aaf7754452f48f28b5"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909160 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"5b26dae9694224e04f0cdc3841408c63","Type":"ContainerStarted","Data":"6ed10c719626895a56c640d52897f9df71e58a092d735647e7a477dbbca9f847"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909171 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"86c571b6-0f65-41f0-b1be-f63d7a974782","Type":"ContainerDied","Data":"cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909183 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7158aca6c004ac8177200d17fa2e56721dfe46e78c27563fd124a05f790d1a" Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909193 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerDied","Data":"38ea24f9e1f52bb2d156d60e530f0e5be87fcb1b940f2e51d9ccf98bf655afc7"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909206 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"55448d8bea6b7d300f8becd37c0b5654a24938ecf842378babc2a1e0bcb81d5b"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909217 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kkl7" event={"ID":"a6d86b04-1d3f-4f27-a262-b732c1295997","Type":"ContainerDied","Data":"69df1f56628ba74129afddfead0ccc135e8c4f4c22ab04aa82de33d67e1e6121"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909231 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909242 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909254 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909267 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b1b4fccc-6bf6-47ac-8ae1-32cad23734da","Type":"ContainerDied","Data":"a6f2cc640b5de57d7f65239e3dfae00a6c9cda6decad3cf4c15c3e87bd2e0a2d"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909286 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909300 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f"} Feb 16 17:02:28.910129 master-0 kubenswrapper[15493]: I0216 17:02:28.909437 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:28.921424 master-0 kubenswrapper[15493]: E0216 17:02:28.921373 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.921424 master-0 kubenswrapper[15493]: E0216 17:02:28.921421 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8r28x for pod openshift-multus/multus-6r7wj: 
[failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:28.921572 master-0 kubenswrapper[15493]: I0216 17:02:28.921473 15493 status_manager.go:851] "Failed to get status for pod" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-5f5f84757d-ktmm9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:28.921620 master-0 kubenswrapper[15493]: E0216 17:02:28.921516 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:32.921485583 +0000 UTC m=+32.071658703 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8r28x" (UniqueName: "kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:28.940488 master-0 kubenswrapper[15493]: E0216 17:02:28.940423 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:28.941562 master-0 kubenswrapper[15493]: E0216 17:02:28.941461 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.941731 master-0 kubenswrapper[15493]: E0216 17:02:28.941611 15493 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:28.941829 master-0 kubenswrapper[15493]: E0216 17:02:28.941780 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:32.941716308 +0000 UTC m=+32.091889418 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/cluster-storage-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:28.961449 master-0 kubenswrapper[15493]: E0216 17:02:28.961364 15493 projected.go:288] Couldn't get configMap openshift-dns/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.961760 master-0 kubenswrapper[15493]: E0216 17:02:28.961679 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:02:28.962727 master-0 kubenswrapper[15493]: E0216 17:02:28.962676 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:28.962727 master-0 kubenswrapper[15493]: E0216 17:02:28.962705 15493 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:28.962908 master-0 kubenswrapper[15493]: E0216 17:02:28.962774 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:32.962752735 +0000 UTC m=+32.112925815 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/community-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:28.981679 master-0 kubenswrapper[15493]: I0216 17:02:28.981560 15493 status_manager.go:851] "Failed to get status for pod" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7c6bdb986f-v8dr8\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:28.983911 master-0 kubenswrapper[15493]: E0216 17:02:28.983820 15493 projected.go:288] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.001171 master-0 kubenswrapper[15493]: I0216 17:02:29.001028 15493 status_manager.go:851] "Failed to get status for pod" podUID="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" pod="openshift-dns/node-resolver-vfxj4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-vfxj4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.001383 master-0 kubenswrapper[15493]: E0216 17:02:29.001356 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.021283 master-0 kubenswrapper[15493]: E0216 17:02:29.021185 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.021513 master-0 kubenswrapper[15493]: I0216 17:02:29.021445 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84976bb859-rsnqc\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.040598 master-0 kubenswrapper[15493]: I0216 17:02:29.040529 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.041821 master-0 kubenswrapper[15493]: E0216 17:02:29.041760 15493 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.060942 master-0 kubenswrapper[15493]: I0216 17:02:29.060876 15493 status_manager.go:851] "Failed to get status for pod" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-c588d8cb4-wjr7d\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.061319 master-0 kubenswrapper[15493]: E0216 17:02:29.061296 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.063181 master-0 kubenswrapper[15493]: I0216 17:02:29.063154 15493 scope.go:117] "RemoveContainer" containerID="c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e" Feb 16 17:02:29.064168 master-0 kubenswrapper[15493]: I0216 17:02:29.064141 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:29.081851 master-0 kubenswrapper[15493]: I0216 17:02:29.081808 15493 status_manager.go:851] "Failed to get status for pod" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-755d954778-lf4cb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.087642 master-0 kubenswrapper[15493]: I0216 17:02:29.087586 15493 scope.go:117] "RemoveContainer" containerID="90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99" Feb 16 17:02:29.101406 master-0 kubenswrapper[15493]: E0216 17:02:29.101349 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.101586 master-0 kubenswrapper[15493]: E0216 17:02:29.101419 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/bootstrap-kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Feb 16 17:02:29.101586 master-0 kubenswrapper[15493]: I0216 17:02:29.101457 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:29.117650 master-0 kubenswrapper[15493]: I0216 17:02:29.117601 15493 scope.go:117] "RemoveContainer" containerID="12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b" Feb 16 17:02:29.120753 master-0 kubenswrapper[15493]: I0216 17:02:29.120701 15493 status_manager.go:851] "Failed to get status for pod" podUID="970d4376-f299-412c-a8ee-90aa980c689e" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-operator-7b87b97578-q55rf\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.121798 master-0 kubenswrapper[15493]: E0216 17:02:29.121771 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.132959 master-0 kubenswrapper[15493]: I0216 17:02:29.132899 15493 scope.go:117] "RemoveContainer" containerID="b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1" Feb 16 17:02:29.141473 master-0 kubenswrapper[15493]: E0216 17:02:29.141424 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-scheduler-master-0" Feb 16 17:02:29.141695 master-0 kubenswrapper[15493]: E0216 17:02:29.141672 15493 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.161249 master-0 kubenswrapper[15493]: I0216 17:02:29.161202 15493 status_manager.go:851] "Failed to get status for pod" podUID="5d39ed24-4301-4cea-8a42-a08f4ba8b479" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.161465 master-0 kubenswrapper[15493]: E0216 17:02:29.161423 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.161465 master-0 kubenswrapper[15493]: E0216 17:02:29.161459 15493 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.161847 master-0 kubenswrapper[15493]: E0216 17:02:29.161431 15493 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.161847 master-0 kubenswrapper[15493]: E0216 17:02:29.161516 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.161500134 +0000 UTC m=+32.311673204 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/serviceaccounts/kube-storage-version-migrator-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.169297 master-0 kubenswrapper[15493]: I0216 17:02:29.169258 15493 scope.go:117] "RemoveContainer" containerID="e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098" Feb 16 17:02:29.181008 master-0 kubenswrapper[15493]: E0216 17:02:29.180967 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.181008 master-0 kubenswrapper[15493]: E0216 17:02:29.180993 15493 projected.go:194] Error preparing data for projected volume kube-api-access-sx92x for pod openshift-machine-config-operator/machine-config-daemon-98q6v: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.181130 master-0 kubenswrapper[15493]: I0216 17:02:29.181004 15493 status_manager.go:851] "Failed to get status for pod" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" pod="openshift-marketplace/redhat-marketplace-4kd66" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4kd66\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.181130 master-0 kubenswrapper[15493]: E0216 17:02:29.181044 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.181028541 +0000 UTC m=+32.331201611 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sx92x" (UniqueName: "kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-daemon/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.181215 master-0 kubenswrapper[15493]: E0216 17:02:29.181169 15493 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.189439 master-0 kubenswrapper[15493]: I0216 17:02:29.189405 15493 scope.go:117] "RemoveContainer" containerID="78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31" Feb 16 17:02:29.191708 master-0 kubenswrapper[15493]: E0216 17:02:29.191671 15493 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.191812 master-0 kubenswrapper[15493]: E0216 17:02:29.191741 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.191812 master-0 kubenswrapper[15493]: E0216 17:02:29.191743 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.191727304 +0000 UTC m=+44.341900374 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.191902 master-0 kubenswrapper[15493]: E0216 17:02:29.191844 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.191826097 +0000 UTC m=+44.341999157 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.191902 master-0 kubenswrapper[15493]: E0216 17:02:29.191873 15493 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.191902 master-0 kubenswrapper[15493]: E0216 17:02:29.191885 15493 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.192147 master-0 kubenswrapper[15493]: E0216 17:02:29.191898 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.191891799 +0000 UTC m=+44.342064859 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.192147 master-0 kubenswrapper[15493]: E0216 17:02:29.191895 15493 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.192147 master-0 kubenswrapper[15493]: E0216 17:02:29.191981 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.191963581 +0000 UTC m=+44.342136651 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.192147 master-0 kubenswrapper[15493]: E0216 17:02:29.192017 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.192147 master-0 kubenswrapper[15493]: E0216 17:02:29.192019 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.192000272 +0000 UTC m=+44.342173412 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.192147 master-0 kubenswrapper[15493]: E0216 17:02:29.192044 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.192038113 +0000 UTC m=+44.342211183 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.192813 master-0 kubenswrapper[15493]: E0216 17:02:29.192783 15493 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.192813 master-0 kubenswrapper[15493]: E0216 17:02:29.192808 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.192901 master-0 kubenswrapper[15493]: E0216 17:02:29.192827 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.192901 master-0 kubenswrapper[15493]: E0216 17:02:29.192843 15493 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.192901 master-0 kubenswrapper[15493]: E0216 17:02:29.192830 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.192818653 +0000 UTC m=+44.342991773 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.192901 master-0 kubenswrapper[15493]: E0216 17:02:29.192867 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.192859664 +0000 UTC m=+44.343032724 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.192901 master-0 kubenswrapper[15493]: E0216 17:02:29.192872 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.192901 master-0 kubenswrapper[15493]: E0216 17:02:29.192878 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.192901 master-0 kubenswrapper[15493]: E0216 17:02:29.192880 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/env-overrides: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.192901 master-0 kubenswrapper[15493]: E0216 17:02:29.192881 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.192876395 +0000 UTC m=+44.343049465 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.192901 master-0 kubenswrapper[15493]: E0216 17:02:29.192897 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.192911 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.192906415 +0000 UTC m=+44.343079485 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.192963 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.192974 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.192967527 +0000 UTC m=+44.343140597 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.192988 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.192982928 +0000 UTC m=+44.343155998 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.192898 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.193002 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.192996398 +0000 UTC m=+44.343169468 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.192901 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.193042 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.193015 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.193009498 +0000 UTC m=+44.343182558 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.193085 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.19306169 +0000 UTC m=+44.343234800 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.193092 15493 configmap.go:193] Couldn't get configMap openshift-network-node-identity/ovnkube-identity-cm: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.193111 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.193104641 +0000 UTC m=+44.343277711 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.193133 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.193119961 +0000 UTC m=+44.343293031 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ovnkube-identity-cm" (UniqueName: "kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.193147 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.193142282 +0000 UTC m=+44.343315342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.193289 master-0 kubenswrapper[15493]: E0216 17:02:29.193169 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.193161482 +0000 UTC m=+44.343334542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194335 master-0 kubenswrapper[15493]: E0216 17:02:29.194306 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194335 master-0 kubenswrapper[15493]: E0216 17:02:29.194330 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194344 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194336253 +0000 UTC m=+44.344509323 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194359 15493 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194368 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194359324 +0000 UTC m=+44.344532394 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194366 15493 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194382 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194376355 +0000 UTC m=+44.344549425 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194384 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194397 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194392035 +0000 UTC m=+44.344565105 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194407 15493 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194419 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194430 15493 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194449 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.194446 master-0 kubenswrapper[15493]: E0216 17:02:29.194431 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194419616 +0000 UTC m=+44.344592686 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194462 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194468 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194462747 +0000 UTC m=+44.344635817 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194484 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194478407 +0000 UTC m=+44.344651477 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194503 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194493338 +0000 UTC m=+44.344666528 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194524 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194516578 +0000 UTC m=+44.344689748 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194528 15493 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194547 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194539129 +0000 UTC m=+44.344712309 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194580 15493 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194656 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194645032 +0000 UTC m=+44.344818102 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195187 master-0 kubenswrapper[15493]: E0216 17:02:29.194675 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.194669082 +0000 UTC m=+44.344842152 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.195653 master-0 kubenswrapper[15493]: E0216 17:02:29.195589 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.195653 master-0 kubenswrapper[15493]: E0216 17:02:29.195623 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195653 master-0 kubenswrapper[15493]: E0216 17:02:29.195642 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.195629468 +0000 UTC m=+44.345802638 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195663 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.195653758 +0000 UTC m=+44.345826928 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195665 15493 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195684 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195690 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195703 15493 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195710 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.19569953 +0000 UTC m=+44.345872700 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195690 15493 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195791 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.195778572 +0000 UTC m=+44.345951642 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195794 15493 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195814 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.195805082 +0000 UTC m=+44.345978282 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195820 15493 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.195822 master-0 kubenswrapper[15493]: E0216 17:02:29.195747 15493 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.195836 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.195824383 +0000 UTC m=+44.345997453 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.195883 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.195871264 +0000 UTC m=+44.346044424 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.195901 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.195890905 +0000 UTC m=+44.346064095 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.195933 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.195909465 +0000 UTC m=+44.346082655 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.195956 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca podName:b6ad958f-25e4-40cb-89ec-5da9cb6395c7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.195949896 +0000 UTC m=+44.346123126 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca") pod "cluster-version-operator-649c4f5445-vt6wb" (UID: "b6ad958f-25e4-40cb-89ec-5da9cb6395c7") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.195955 15493 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.195990 15493 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.196011 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.196002528 +0000 UTC m=+44.346175598 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.196030 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.196023548 +0000 UTC m=+44.346196618 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.196031 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.196059 15493 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.196071 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.196062749 +0000 UTC m=+44.346235809 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.196091 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.19608419 +0000 UTC m=+44.346257250 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.196101 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.196401 master-0 kubenswrapper[15493]: E0216 17:02:29.196139 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.196130731 +0000 UTC m=+44.346303911 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.196890 15493 secret.go:189] Couldn't get secret openshift-network-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.196909 15493 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.196951 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.196940622 +0000 UTC m=+44.347113772 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.196969 15493 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.196976 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.196987 15493 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.197001 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.196990564 +0000 UTC m=+44.347163694 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.196955 15493 configmap.go:193] Couldn't get configMap openshift-multus/whereabouts-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.197016 15493 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.197027 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197017284 +0000 UTC m=+44.347190464 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.197032 15493 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.197043 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197035615 +0000 UTC m=+44.347208805 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.197022 15493 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.197061 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197053035 +0000 UTC m=+44.347226225 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.197071 15493 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.196989 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197065 master-0 kubenswrapper[15493]: E0216 17:02:29.197081 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197070256 +0000 UTC m=+44.347243446 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "whereabouts-configmap" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197102 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197091616 +0000 UTC m=+44.347264686 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197107 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197111 15493 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197118 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197111407 +0000 UTC m=+44.347284477 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197137 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197129737 +0000 UTC m=+44.347302807 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197150 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197144718 +0000 UTC m=+44.347317788 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197162 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197157128 +0000 UTC m=+44.347330198 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197174 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197168448 +0000 UTC m=+44.347341518 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197187 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197179829 +0000 UTC m=+44.347352899 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197215 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197235 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197238 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.19723227 +0000 UTC m=+44.347405340 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197287 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197273871 +0000 UTC m=+44.347447041 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197517 15493 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197518 15493 secret.go:189] Couldn't get secret openshift-ovn-kubernetes/ovn-control-plane-metrics-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197531 15493 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197543 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197536258 +0000 UTC m=+44.347709328 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197562 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197552049 +0000 UTC m=+44.347725199 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ovn-control-plane-metrics-cert" (UniqueName: "kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197576 15493 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197581 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.197573149 +0000 UTC m=+44.347746339 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.197717 master-0 kubenswrapper[15493]: E0216 17:02:29.197612 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.19760241 +0000 UTC m=+44.347775580 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198226 15493 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198257 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.198248317 +0000 UTC m=+44.348421387 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198267 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198288 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198301 15493 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198274 15493 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198324 15493 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198315 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198346 15493 configmap.go:193] Couldn't get configMap openshift-network-operator/iptables-alerter-script: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198308 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.198300278 +0000 UTC m=+44.348473348 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198367 15493 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198373 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.19836538 +0000 UTC m=+44.348538440 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "iptables-alerter-script" (UniqueName: "kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198389 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.198382171 +0000 UTC m=+44.348555361 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198391 15493 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198407 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.198399031 +0000 UTC m=+44.348572221 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198424 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.198416161 +0000 UTC m=+44.348589351 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198440 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.198432542 +0000 UTC m=+44.348605722 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198457 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.198452812 +0000 UTC m=+44.348625882 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198473 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.198468353 +0000 UTC m=+44.348641413 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.198610 master-0 kubenswrapper[15493]: E0216 17:02:29.198490 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.198484263 +0000 UTC m=+44.348657333 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199536 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199565 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199557192 +0000 UTC m=+44.349730262 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199576 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199589 15493 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199594 15493 configmap.go:193] Couldn't get configMap openshift-multus/default-cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199611 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199601813 +0000 UTC m=+44.349774873 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199620 15493 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199626 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199619813 +0000 UTC m=+44.349792883 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199641 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199635004 +0000 UTC m=+44.349808074 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199649 15493 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.199646 master-0 kubenswrapper[15493]: E0216 17:02:29.199654 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199648684 +0000 UTC m=+44.349821754 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199667 15493 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199672 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199665514 +0000 UTC m=+44.349838584 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199685 15493 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199693 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199685045 +0000 UTC m=+44.349858115 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199707 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199700635 +0000 UTC m=+44.349873705 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199715 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199728 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199737 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199732036 +0000 UTC m=+44.349905106 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199748 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199743187 +0000 UTC m=+44.349916257 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199788 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199795 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199810 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199804108 +0000 UTC m=+44.349977178 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199823 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199815688 +0000 UTC m=+44.349988758 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199936 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199966 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.199959022 +0000 UTC m=+44.350132092 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.199990 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200017 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.200010134 +0000 UTC m=+44.350183194 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200074 15493 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200111 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.200102976 +0000 UTC m=+44.350276046 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200126 15493 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200153 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.200146397 +0000 UTC m=+44.350319467 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200258 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200284 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200293 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.200284171 +0000 UTC m=+44.350457241 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200304 15493 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200317 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.200308762 +0000 UTC m=+44.350481832 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: E0216 17:02:29.200336 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.200327472 +0000 UTC m=+44.350500632 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.200375 master-0 kubenswrapper[15493]: I0216 17:02:29.200331 15493 status_manager.go:851] "Failed to get status for pod" podUID="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/pods/tuned-l5kbz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.200421 15493 configmap.go:193] Couldn't get configMap openshift-multus/cni-copy-resources: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.200485 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.200468196 +0000 UTC m=+44.350641316 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cni-binary-copy" (UniqueName: "kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.200573 15493 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.200610 15493 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.200613 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.200602539 +0000 UTC m=+44.350775679 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.200694 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.200674971 +0000 UTC m=+44.350848141 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201421 15493 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201448 15493 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201475 15493 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201497 15493 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201502 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201480693 +0000 UTC m=+44.351653803 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201536 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201525964 +0000 UTC m=+44.351699114 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201540 15493 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201502 15493 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201569 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201544294 +0000 UTC m=+44.351717364 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201567 15493 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201577 15493 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201478 15493 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201608 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201642 15493 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201589 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201586 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201580415 +0000 UTC m=+44.351753475 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: I0216 17:02:29.201665 15493 scope.go:117] "RemoveContainer" containerID="a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b" Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201681 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201670608 +0000 UTC m=+44.351843678 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201698 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201689668 +0000 UTC m=+44.351862728 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201698 15493 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201711 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201704928 +0000 UTC m=+44.351877998 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201726 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201719929 +0000 UTC m=+44.351892999 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201768 15493 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201778 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.20177134 +0000 UTC m=+44.351944410 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201770 15493 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201813 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201829 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xmk2b for pod openshift-multus/multus-admission-controller-7c64d55f8-4jz2t: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201792 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201787481 +0000 UTC m=+44.351960551 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201901 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201874123 +0000 UTC m=+44.352047233 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201967 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201946625 +0000 UTC m=+44.352119745 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202001 15493 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202004 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.201990256 +0000 UTC m=+44.352163436 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202029 15493 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202046 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.202036607 +0000 UTC m=+44.352209677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202063 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b podName:ab6e5720-2c30-4962-9c67-89f1607d137f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.202052978 +0000 UTC m=+32.352226048 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xmk2b" (UniqueName: "kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b") pod "multus-admission-controller-7c64d55f8-4jz2t" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ac/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.201972 15493 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202076 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.202070418 +0000 UTC m=+44.352243488 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202090 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.202084709 +0000 UTC m=+44.352257779 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202111 15493 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202116 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.202109709 +0000 UTC m=+44.352282779 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.202119 master-0 kubenswrapper[15493]: E0216 17:02:29.202156 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.2021406 +0000 UTC m=+44.352313800 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202247 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.202236103 +0000 UTC m=+44.352409173 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202305 15493 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202322 15493 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202369 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.202350846 +0000 UTC m=+44.352523996 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202405 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.202389297 +0000 UTC m=+44.352562427 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202414 15493 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202439 15493 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202449 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.202440108 +0000 UTC m=+44.352613288 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202522 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.20250558 +0000 UTC m=+44.352678760 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202544 15493 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.204496 master-0 kubenswrapper[15493]: E0216 17:02:29.202576 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:45.202568421 +0000 UTC m=+44.352741491 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : failed to sync secret cache: timed out waiting for the condition Feb 16 17:02:29.218670 master-0 kubenswrapper[15493]: I0216 17:02:29.218575 15493 scope.go:117] "RemoveContainer" containerID="ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa" Feb 16 17:02:29.221421 master-0 kubenswrapper[15493]: I0216 17:02:29.221266 15493 status_manager.go:851] "Failed to get status for pod" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" pod="openshift-dns/dns-default-qcgxx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/dns-default-qcgxx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.221642 master-0 kubenswrapper[15493]: E0216 17:02:29.221592 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.223704 master-0 kubenswrapper[15493]: E0216 17:02:29.223645 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.223843 master-0 kubenswrapper[15493]: E0216 17:02:29.223708 15493 projected.go:194] Error preparing data for projected volume kube-api-access-fkwxl for pod openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-control-plane/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.223843 master-0 kubenswrapper[15493]: E0216 17:02:29.223834 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.223805573 +0000 UTC m=+32.373978683 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fkwxl" (UniqueName: "kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-control-plane/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.240896 master-0 kubenswrapper[15493]: E0216 17:02:29.240795 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.241312 master-0 kubenswrapper[15493]: I0216 17:02:29.241264 15493 status_manager.go:851] "Failed to get status for pod" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b56bd877c-p7k2k\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.244914 master-0 kubenswrapper[15493]: I0216 17:02:29.244886 15493 scope.go:117] "RemoveContainer" containerID="78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31" Feb 16 17:02:29.245458 master-0 kubenswrapper[15493]: E0216 17:02:29.245406 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31\": container with ID starting with 78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31 not found: ID does not exist" containerID="78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31" Feb 16 17:02:29.245530 master-0 kubenswrapper[15493]: I0216 17:02:29.245467 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31"} err="failed to get container status \"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31\": rpc error: code = NotFound desc = could not find container \"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31\": container with ID starting with 78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31 not found: ID does not exist" Feb 16 17:02:29.245570 master-0 kubenswrapper[15493]: I0216 17:02:29.245530 15493 scope.go:117] "RemoveContainer" containerID="a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b" Feb 16 17:02:29.245851 master-0 kubenswrapper[15493]: E0216 17:02:29.245826 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b\": container with ID starting with a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b not found: ID does not exist" containerID="a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b" Feb 16 17:02:29.245890 master-0 kubenswrapper[15493]: I0216 17:02:29.245852 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b"} err="failed to get container status \"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b\": rpc error: code 
= NotFound desc = could not find container \"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b\": container with ID starting with a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b not found: ID does not exist" Feb 16 17:02:29.245890 master-0 kubenswrapper[15493]: I0216 17:02:29.245873 15493 scope.go:117] "RemoveContainer" containerID="ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa" Feb 16 17:02:29.248058 master-0 kubenswrapper[15493]: E0216 17:02:29.248032 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa\": container with ID starting with ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa not found: ID does not exist" containerID="ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa" Feb 16 17:02:29.248109 master-0 kubenswrapper[15493]: I0216 17:02:29.248055 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa"} err="failed to get container status \"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa\": rpc error: code = NotFound desc = could not find container \"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa\": container with ID starting with ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa not found: ID does not exist" Feb 16 17:02:29.248109 master-0 kubenswrapper[15493]: I0216 17:02:29.248068 15493 scope.go:117] "RemoveContainer" containerID="3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169" Feb 16 17:02:29.261138 master-0 kubenswrapper[15493]: I0216 17:02:29.260134 15493 scope.go:117] "RemoveContainer" containerID="5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2" Feb 16 17:02:29.261287 master-0 kubenswrapper[15493]: E0216 17:02:29.261257 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.262361 master-0 kubenswrapper[15493]: I0216 17:02:29.262311 15493 status_manager.go:851] "Failed to get status for pod" podUID="a6d86b04-1d3f-4f27-a262-b732c1295997" pod="openshift-marketplace/certified-operators-8kkl7" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8kkl7\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:02:29.280077 master-0 kubenswrapper[15493]: I0216 17:02:29.280029 15493 scope.go:117] "RemoveContainer" containerID="3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169" Feb 16 17:02:29.281203 master-0 kubenswrapper[15493]: E0216 17:02:29.280961 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.281203 master-0 kubenswrapper[15493]: E0216 17:02:29.281012 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169\": container with ID starting with 3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169 not found: ID does not exist" containerID="3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169" Feb 16 17:02:29.281203 master-0 kubenswrapper[15493]: I0216 17:02:29.281064 
15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169"} err="failed to get container status \"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169\": rpc error: code = NotFound desc = could not find container \"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169\": container with ID starting with 3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169 not found: ID does not exist" Feb 16 17:02:29.281203 master-0 kubenswrapper[15493]: I0216 17:02:29.281093 15493 scope.go:117] "RemoveContainer" containerID="5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2" Feb 16 17:02:29.281581 master-0 kubenswrapper[15493]: E0216 17:02:29.281544 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2\": container with ID starting with 5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2 not found: ID does not exist" containerID="5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2" Feb 16 17:02:29.281652 master-0 kubenswrapper[15493]: I0216 17:02:29.281591 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2"} err="failed to get container status \"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2\": rpc error: code = NotFound desc = could not find container \"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2\": container with ID starting with 5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2 not found: ID does not exist" Feb 16 17:02:29.281652 master-0 kubenswrapper[15493]: I0216 17:02:29.281626 15493 scope.go:117] "RemoveContainer" containerID="3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169" Feb 16 17:02:29.282241 master-0 kubenswrapper[15493]: I0216 17:02:29.282076 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169"} err="failed to get container status \"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169\": rpc error: code = NotFound desc = could not find container \"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169\": container with ID starting with 3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169 not found: ID does not exist" Feb 16 17:02:29.282241 master-0 kubenswrapper[15493]: I0216 17:02:29.282109 15493 scope.go:117] "RemoveContainer" containerID="5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2" Feb 16 17:02:29.282241 master-0 kubenswrapper[15493]: E0216 17:02:29.282118 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.282241 master-0 kubenswrapper[15493]: E0216 17:02:29.282141 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, 
failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.282241 master-0 kubenswrapper[15493]: E0216 17:02:29.282209 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.282188328 +0000 UTC m=+32.432361408 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.282498 master-0 kubenswrapper[15493]: I0216 17:02:29.282348 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2"} err="failed to get container status \"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2\": rpc error: code = NotFound desc = could not find container \"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2\": container with ID starting with 5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2 not found: ID does not exist" Feb 16 17:02:29.282498 master-0 kubenswrapper[15493]: I0216 17:02:29.282373 15493 scope.go:117] "RemoveContainer" containerID="3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169" Feb 16 17:02:29.282769 master-0 kubenswrapper[15493]: I0216 17:02:29.282730 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169"} err="failed to get container status \"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169\": rpc error: code = NotFound desc = could not find container \"3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169\": container with ID starting with 3eb568c1132222d63a23bd5ba13fe759fb68da9e68c33113020857811761f169 not found: ID does not exist" Feb 16 17:02:29.282769 master-0 kubenswrapper[15493]: I0216 17:02:29.282759 15493 scope.go:117] "RemoveContainer" containerID="5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2" Feb 16 17:02:29.283111 master-0 kubenswrapper[15493]: I0216 17:02:29.283075 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2"} err="failed to get container status \"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2\": rpc error: code = NotFound desc = could not find container \"5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2\": container with ID starting with 5def91a5adb9f3e0f8fa26dad926a58a2e15101df1665ea9ac530e4644de23f2 not found: ID does not exist" Feb 16 17:02:29.283111 master-0 kubenswrapper[15493]: I0216 17:02:29.283101 15493 scope.go:117] "RemoveContainer" containerID="90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99" Feb 16 17:02:29.283451 master-0 kubenswrapper[15493]: E0216 17:02:29.283416 15493 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99\": container with ID starting with 90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99 not found: ID does not exist" containerID="90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99" Feb 16 17:02:29.283516 master-0 kubenswrapper[15493]: I0216 17:02:29.283444 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99"} err="failed to get container status \"90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99\": rpc error: code = NotFound desc = could not find container \"90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99\": container with ID starting with 90c1485b9eff2f47c59634f1f1a1e69f89fd42d52ba7489a5ed23317030b7e99 not found: ID does not exist" Feb 16 17:02:29.283516 master-0 kubenswrapper[15493]: I0216 17:02:29.283464 15493 scope.go:117] "RemoveContainer" containerID="78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31" Feb 16 17:02:29.283740 master-0 kubenswrapper[15493]: I0216 17:02:29.283710 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31"} err="failed to get container status \"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31\": rpc error: code = NotFound desc = could not find container \"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31\": container with ID starting with 78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31 not found: ID does not exist" Feb 16 17:02:29.283740 master-0 kubenswrapper[15493]: I0216 17:02:29.283728 15493 scope.go:117] "RemoveContainer" containerID="a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b" Feb 16 17:02:29.284009 master-0 kubenswrapper[15493]: I0216 17:02:29.283981 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b"} err="failed to get container status \"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b\": rpc error: code = NotFound desc = could not find container \"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b\": container with ID starting with a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b not found: ID does not exist" Feb 16 17:02:29.284114 master-0 kubenswrapper[15493]: I0216 17:02:29.284008 15493 scope.go:117] "RemoveContainer" containerID="ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa" Feb 16 17:02:29.284266 master-0 kubenswrapper[15493]: I0216 17:02:29.284242 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa"} err="failed to get container status \"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa\": rpc error: code = NotFound desc = could not find container \"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa\": container with ID starting with ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa not found: ID does not exist" Feb 16 17:02:29.284266 master-0 kubenswrapper[15493]: I0216 17:02:29.284265 15493 scope.go:117] "RemoveContainer" 
containerID="12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b" Feb 16 17:02:29.284512 master-0 kubenswrapper[15493]: E0216 17:02:29.284474 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b\": container with ID starting with 12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b not found: ID does not exist" containerID="12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b" Feb 16 17:02:29.284571 master-0 kubenswrapper[15493]: I0216 17:02:29.284517 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b"} err="failed to get container status \"12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b\": rpc error: code = NotFound desc = could not find container \"12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b\": container with ID starting with 12b334066ee229a3063ed554a4dd75cf5da1c898112391b182e46bd9935b002b not found: ID does not exist" Feb 16 17:02:29.284571 master-0 kubenswrapper[15493]: I0216 17:02:29.284547 15493 scope.go:117] "RemoveContainer" containerID="b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1" Feb 16 17:02:29.284788 master-0 kubenswrapper[15493]: E0216 17:02:29.284765 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1\": container with ID starting with b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1 not found: ID does not exist" containerID="b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1" Feb 16 17:02:29.284913 master-0 kubenswrapper[15493]: I0216 17:02:29.284790 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1"} err="failed to get container status \"b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1\": rpc error: code = NotFound desc = could not find container \"b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1\": container with ID starting with b61f3a0c3ac4f93f0d72928ae09f6e157b6ae98210058a751bcc300beda92cf1 not found: ID does not exist" Feb 16 17:02:29.284913 master-0 kubenswrapper[15493]: I0216 17:02:29.284808 15493 scope.go:117] "RemoveContainer" containerID="e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098" Feb 16 17:02:29.285210 master-0 kubenswrapper[15493]: E0216 17:02:29.285097 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098\": container with ID starting with e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098 not found: ID does not exist" containerID="e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098" Feb 16 17:02:29.285210 master-0 kubenswrapper[15493]: I0216 17:02:29.285121 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098"} err="failed to get container status \"e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098\": rpc error: code = NotFound desc = could not find container 
\"e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098\": container with ID starting with e2c414ddf96fb8c5bea54b1a6b99603cc192042c090f6ca58416c99228252098 not found: ID does not exist" Feb 16 17:02:29.285210 master-0 kubenswrapper[15493]: I0216 17:02:29.285138 15493 scope.go:117] "RemoveContainer" containerID="78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31" Feb 16 17:02:29.285509 master-0 kubenswrapper[15493]: I0216 17:02:29.285366 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31"} err="failed to get container status \"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31\": rpc error: code = NotFound desc = could not find container \"78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31\": container with ID starting with 78be6b61182dfe6eb73eb4b2ec9dfffc8495250ac5ff6b9c1fb17d64d5e91a31 not found: ID does not exist" Feb 16 17:02:29.285509 master-0 kubenswrapper[15493]: I0216 17:02:29.285387 15493 scope.go:117] "RemoveContainer" containerID="a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b" Feb 16 17:02:29.285854 master-0 kubenswrapper[15493]: I0216 17:02:29.285767 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b"} err="failed to get container status \"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b\": rpc error: code = NotFound desc = could not find container \"a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b\": container with ID starting with a0c11510a2d04ca22d6c9d335f9769b33bba56be9183947d32a5b006aea2071b not found: ID does not exist" Feb 16 17:02:29.285854 master-0 kubenswrapper[15493]: I0216 17:02:29.285788 15493 scope.go:117] "RemoveContainer" containerID="ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa" Feb 16 17:02:29.286036 master-0 kubenswrapper[15493]: I0216 17:02:29.285994 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa"} err="failed to get container status \"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa\": rpc error: code = NotFound desc = could not find container \"ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa\": container with ID starting with ba7ab8529b3edd730f3a69bb53f4a8a1259551559054330e6f700d68cfb8d8fa not found: ID does not exist" Feb 16 17:02:29.286036 master-0 kubenswrapper[15493]: I0216 17:02:29.286025 15493 scope.go:117] "RemoveContainer" containerID="e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26" Feb 16 17:02:29.286320 master-0 kubenswrapper[15493]: E0216 17:02:29.286298 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26\": container with ID starting with e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26 not found: ID does not exist" containerID="e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26" Feb 16 17:02:29.286386 master-0 kubenswrapper[15493]: I0216 17:02:29.286322 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26"} err="failed to get container status 
\"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26\": rpc error: code = NotFound desc = could not find container \"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26\": container with ID starting with e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26 not found: ID does not exist" Feb 16 17:02:29.286386 master-0 kubenswrapper[15493]: I0216 17:02:29.286338 15493 scope.go:117] "RemoveContainer" containerID="c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e" Feb 16 17:02:29.286642 master-0 kubenswrapper[15493]: E0216 17:02:29.286552 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e\": container with ID starting with c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e not found: ID does not exist" containerID="c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e" Feb 16 17:02:29.286642 master-0 kubenswrapper[15493]: I0216 17:02:29.286575 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e"} err="failed to get container status \"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e\": rpc error: code = NotFound desc = could not find container \"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e\": container with ID starting with c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e not found: ID does not exist" Feb 16 17:02:29.286642 master-0 kubenswrapper[15493]: I0216 17:02:29.286590 15493 scope.go:117] "RemoveContainer" containerID="e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26" Feb 16 17:02:29.286827 master-0 kubenswrapper[15493]: I0216 17:02:29.286802 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26"} err="failed to get container status \"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26\": rpc error: code = NotFound desc = could not find container \"e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26\": container with ID starting with e1a618950e46fb3782e67acccb119c30b5d641f8a3d68294b423081f9a319a26 not found: ID does not exist" Feb 16 17:02:29.286887 master-0 kubenswrapper[15493]: I0216 17:02:29.286826 15493 scope.go:117] "RemoveContainer" containerID="c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e" Feb 16 17:02:29.287071 master-0 kubenswrapper[15493]: I0216 17:02:29.287046 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e"} err="failed to get container status \"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e\": rpc error: code = NotFound desc = could not find container \"c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e\": container with ID starting with c82a315f2fc5cfd41f3cf5d051afec5fbbaf8f73471c3cb29769f12a3c1a9e5e not found: ID does not exist" Feb 16 17:02:29.301622 master-0 kubenswrapper[15493]: E0216 17:02:29.301405 15493 projected.go:288] Couldn't get configMap openshift-monitoring/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.302749 master-0 kubenswrapper[15493]: E0216 17:02:29.302656 15493 projected.go:288] Couldn't get 
configMap openshift-dns/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.302749 master-0 kubenswrapper[15493]: E0216 17:02:29.302675 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zl5w2 for pod openshift-dns/dns-default-qcgxx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.302749 master-0 kubenswrapper[15493]: E0216 17:02:29.302730 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2 podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.302716532 +0000 UTC m=+32.452889602 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zl5w2" (UniqueName: "kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/dns/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.321140 master-0 kubenswrapper[15493]: E0216 17:02:29.321091 15493 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.321140 master-0 kubenswrapper[15493]: E0216 17:02:29.321127 15493 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.321315 master-0 kubenswrapper[15493]: E0216 17:02:29.321199 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.32117592 +0000 UTC m=+32.471349000 (durationBeforeRetry 4s). 
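
The repeated RemoveContainer / "DeleteContainer returned error ... NotFound" pairs above are the kubelet's post-restart cleanup probing CRI-O for containers that were already removed; a NotFound status from the runtime is benign here, and cleanup proceeds as if the delete had succeeded. A minimal Go sketch of that idempotent-delete pattern (the remove callback is a hypothetical stand-in, not the kubelet's real CRI client):

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteIgnoringNotFound treats a gRPC NotFound from the runtime as
// "already deleted", which is why the log entries above are Info-level
// and the sync loop keeps going.
func deleteIgnoringNotFound(remove func(id string) error, id string) error {
	err := remove(id)
	if err != nil && status.Code(err) == codes.NotFound {
		// No record of the container (e.g. it was removed before the
		// kubelet restarted): deletion is already done.
		fmt.Printf("container %q already gone: %v\n", id, err)
		return nil
	}
	return err
}

func main() {
	notFound := status.Error(codes.NotFound, "could not find container")
	// Mirrors the entries above: NotFound is swallowed, anything else surfaces.
	fmt.Println(deleteIgnoringNotFound(func(string) error { return notFound }, "3eb568c11322"))
	fmt.Println(deleteIgnoringNotFound(func(string) error { return status.Error(codes.Unavailable, "runtime down") }, "5def91a5adb9"))
}
```
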
Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.341521 master-0 kubenswrapper[15493]: E0216 17:02:29.341435 15493 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.342759 master-0 kubenswrapper[15493]: E0216 17:02:29.342693 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.342759 master-0 kubenswrapper[15493]: E0216 17:02:29.342722 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.342906 master-0 kubenswrapper[15493]: E0216 17:02:29.342788 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.342767902 +0000 UTC m=+32.492940972 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.361848 master-0 kubenswrapper[15493]: E0216 17:02:29.361783 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.363140 master-0 kubenswrapper[15493]: E0216 17:02:29.362951 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.363140 master-0 kubenswrapper[15493]: E0216 17:02:29.363002 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.363140 master-0 kubenswrapper[15493]: E0216 17:02:29.363106 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.363081379 +0000 UTC m=+32.513254519 (durationBeforeRetry 4s). 
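
Each failing kube-api-access-* volume above is a projected volume that bundles a ServiceAccount token together with the kube-root-ca.crt and (on OpenShift) openshift-service-ca.crt configmaps, which is why a single MountVolume.SetUp reports both a token fetch error and a configmap cache timeout: every projected source must resolve before the mount succeeds. A sketch of that volume shape using client-go types, with names copied from the log; the exact Items, paths, and expiration are assumptions for illustration:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiration := int64(3607) // typical kube-api-access token lifetime; an assumption here
	vol := corev1.Volume{
		Name: "kube-api-access-rxbdv",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// The source behind "failed to fetch token: Post .../token".
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiration,
						Path:              "token",
					}},
					// The sources behind "failed to sync configmap cache".
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
```
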
Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/machine-api-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.382144 master-0 kubenswrapper[15493]: E0216 17:02:29.382055 15493 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.382144 master-0 kubenswrapper[15493]: E0216 17:02:29.382115 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.382262 master-0 kubenswrapper[15493]: E0216 17:02:29.382060 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.382262 master-0 kubenswrapper[15493]: E0216 17:02:29.382205 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.382179624 +0000 UTC m=+32.532352744 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/serviceaccounts/dns-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.406344 master-0 kubenswrapper[15493]: E0216 17:02:29.403053 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.422433 master-0 kubenswrapper[15493]: E0216 17:02:29.422377 15493 projected.go:288] Couldn't get configMap openshift-network-node-identity/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.441889 master-0 kubenswrapper[15493]: E0216 17:02:29.441794 15493 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.461966 master-0 kubenswrapper[15493]: E0216 17:02:29.461580 15493 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.461966 master-0 kubenswrapper[15493]: E0216 17:02:29.461818 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.461966 master-0 kubenswrapper[15493]: E0216 17:02:29.461899 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.461880844 +0000 UTC m=+32.612053914 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-oauth-apiserver/serviceaccounts/oauth-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.482270 master-0 kubenswrapper[15493]: E0216 17:02:29.482193 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.482270 master-0 kubenswrapper[15493]: E0216 17:02:29.482242 15493 projected.go:194] Error preparing data for projected volume kube-api-access-bnnc5 for pod openshift-multus/network-metrics-daemon-279g6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.482582 master-0 kubenswrapper[15493]: E0216 17:02:29.482317 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5 podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.482300234 +0000 UTC m=+32.632473304 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bnnc5" (UniqueName: "kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/metrics-daemon-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.502038 master-0 kubenswrapper[15493]: E0216 17:02:29.501989 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.502038 master-0 kubenswrapper[15493]: E0216 17:02:29.502034 15493 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.502270 master-0 kubenswrapper[15493]: E0216 17:02:29.502100 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.502082468 +0000 UTC m=+32.652255538 (durationBeforeRetry 4s). 
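
The "failed to fetch token: Post .../serviceaccounts/.../token" errors are the kubelet's TokenRequest calls being refused while the apiserver is unreachable. The same call can be reproduced from client-go once the apiserver is back up; the kubeconfig path and audience below are placeholder assumptions, while the namespace, ServiceAccount, and endpoint match the openshift-dns entries above:

```go
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	tr := &authenticationv1.TokenRequest{
		Spec: authenticationv1.TokenRequestSpec{
			Audiences: []string{"https://kubernetes.default.svc"},
		},
	}
	// Issues POST .../namespaces/openshift-dns/serviceaccounts/dns/token,
	// the same endpoint seen failing with "connection refused" above.
	tok, err := cs.CoreV1().ServiceAccounts("openshift-dns").
		CreateToken(context.TODO(), "dns", tr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("token expires:", tok.Status.ExpirationTimestamp)
}
```
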
Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/serviceaccounts/operator-controller-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.521043 master-0 kubenswrapper[15493]: E0216 17:02:29.520981 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.523263 master-0 kubenswrapper[15493]: E0216 17:02:29.523222 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.523263 master-0 kubenswrapper[15493]: E0216 17:02:29.523253 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.523400 master-0 kubenswrapper[15493]: E0216 17:02:29.523322 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.523306029 +0000 UTC m=+32.673479099 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.541594 master-0 kubenswrapper[15493]: E0216 17:02:29.541452 15493 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.541594 master-0 kubenswrapper[15493]: E0216 17:02:29.541502 15493 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.541832 master-0 kubenswrapper[15493]: E0216 17:02:29.541616 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:02:33.541595593 +0000 UTC m=+32.691768673 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.541832 master-0 kubenswrapper[15493]: E0216 17:02:29.541674 15493 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.557636 master-0 kubenswrapper[15493]: I0216 17:02:29.557579 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"ab4cf257e6e0f29ed254052561254039fa1b8a8f9b4ce54fa741917d9a4c1648"} Feb 16 17:02:29.557636 master-0 kubenswrapper[15493]: I0216 17:02:29.557623 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"01f4f89971ebb359e0eeec52882d07d21354b8e08fc6c1173fde440ef3e5d38b"} Feb 16 17:02:29.561464 master-0 kubenswrapper[15493]: I0216 17:02:29.561432 15493 scope.go:117] "RemoveContainer" containerID="9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f" Feb 16 17:02:29.561898 master-0 kubenswrapper[15493]: E0216 17:02:29.561866 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:02:29.582739 master-0 kubenswrapper[15493]: E0216 17:02:29.582606 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.582739 master-0 kubenswrapper[15493]: E0216 17:02:29.582647 15493 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.582739 master-0 kubenswrapper[15493]: E0216 17:02:29.582706 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.582688821 +0000 UTC m=+32.732861891 (durationBeforeRetry 4s). 
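
The CrashLoopBackOff entry for bootstrap-kube-controller-manager-master-0 ("back-off 10s restarting failed container") reflects the kubelet's per-container restart backoff, which doubles on each crash up to a cap. A sketch of that schedule, treating the 10s initial value from the log and the upstream 5m cap as working assumptions:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling restart backoff: 10s, 20s, 40s, ... capped at 5m.
	backoff, maxBackoff := 10*time.Second, 5*time.Minute
	for i := 1; i <= 7; i++ {
		fmt.Printf("crash %d: next restart attempt in %s\n", i, backoff)
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
```
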
Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/control-plane-machine-set-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.601021 master-0 kubenswrapper[15493]: E0216 17:02:29.600960 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.601021 master-0 kubenswrapper[15493]: E0216 17:02:29.601022 15493 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.601260 master-0 kubenswrapper[15493]: E0216 17:02:29.601118 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.601090378 +0000 UTC m=+32.751263458 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.621650 master-0 kubenswrapper[15493]: E0216 17:02:29.621594 15493 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.621650 master-0 kubenswrapper[15493]: E0216 17:02:29.621646 15493 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.621887 master-0 kubenswrapper[15493]: E0216 17:02:29.621733 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.621711224 +0000 UTC m=+32.771884304 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.645909 master-0 kubenswrapper[15493]: E0216 17:02:29.645769 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.645909 master-0 kubenswrapper[15493]: E0216 17:02:29.645821 15493 projected.go:194] Error preparing data for projected volume kube-api-access-6ftld for pod openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.645909 master-0 kubenswrapper[15493]: E0216 17:02:29.645894 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.645875583 +0000 UTC m=+32.796048653 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6ftld" (UniqueName: "kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/serviceaccounts/machine-approver-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.662168 master-0 kubenswrapper[15493]: E0216 17:02:29.662087 15493 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.662168 master-0 kubenswrapper[15493]: E0216 17:02:29.662159 15493 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.662379 master-0 kubenswrapper[15493]: E0216 17:02:29.662255 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.662232586 +0000 UTC m=+32.812405676 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/serviceaccounts/catalogd-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.742750 master-0 kubenswrapper[15493]: E0216 17:02:29.742707 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.742878 master-0 kubenswrapper[15493]: E0216 17:02:29.742757 15493 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.742878 master-0 kubenswrapper[15493]: E0216 17:02:29.742826 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.742807358 +0000 UTC m=+32.892980418 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.764299 master-0 kubenswrapper[15493]: E0216 17:02:29.764104 15493 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.764299 master-0 kubenswrapper[15493]: E0216 17:02:29.764159 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.764299 master-0 kubenswrapper[15493]: E0216 17:02:29.764256 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.764226045 +0000 UTC m=+32.914399155 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.782787 master-0 kubenswrapper[15493]: E0216 17:02:29.782632 15493 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.782787 master-0 kubenswrapper[15493]: E0216 17:02:29.782673 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.782787 master-0 kubenswrapper[15493]: E0216 17:02:29.782746 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.782724325 +0000 UTC m=+32.932897395 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.802538 master-0 kubenswrapper[15493]: E0216 17:02:29.802341 15493 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.802538 master-0 kubenswrapper[15493]: E0216 17:02:29.802396 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/serviceaccounts/cluster-olm-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.802538 master-0 kubenswrapper[15493]: E0216 17:02:29.802509 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.802487448 +0000 UTC m=+32.952660518 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/serviceaccounts/cluster-olm-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.821287 master-0 kubenswrapper[15493]: E0216 17:02:29.821219 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.821287 master-0 kubenswrapper[15493]: E0216 17:02:29.821285 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2gq8x for pod openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.821429 master-0 kubenswrapper[15493]: E0216 17:02:29.821405 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.821368367 +0000 UTC m=+32.971541437 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2gq8x" (UniqueName: "kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/cluster-node-tuning-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.962499 master-0 kubenswrapper[15493]: E0216 17:02:29.962412 15493 projected.go:288] Couldn't get configMap openshift-dns/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:29.962499 master-0 kubenswrapper[15493]: E0216 17:02:29.962466 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8m29g for pod openshift-dns/node-resolver-vfxj4: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/node-resolver/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Feb 16 17:02:29.962794 master-0 kubenswrapper[15493]: E0216 17:02:29.962538 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g podName:a6fe41b0-1a42-4f07-8220-d9aaa50788ad nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.962519773 +0000 UTC m=+33.112692843 (durationBeforeRetry 4s). 
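
The "No retries permitted until ... (durationBeforeRetry 4s)" suffix on each mount failure is a separate, per-operation exponential backoff in the volume manager; the 4s window seen throughout is consistent with a doubling delay that began at 500ms, i.e. this is roughly the fourth consecutive failure per volume. A sketch of that progression, where the 500ms initial value and 2m2s cap are assumptions taken from upstream kubelet defaults:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Doubling retry window behind "durationBeforeRetry": 500ms, 1s, 2s, 4s, ...
	d, maxDelay := 500*time.Millisecond, 2*time.Minute+2*time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("failure %d: no retries for %s\n", attempt, d)
		if d *= 2; d > maxDelay {
			d = maxDelay
		}
	}
	// failure 4 prints 4s, matching the MountVolume.SetUp entries above.
}
```
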
Error: MountVolume.SetUp failed for volume "kube-api-access-8m29g" (UniqueName: "kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g") pod "node-resolver-vfxj4" (UID: "a6fe41b0-1a42-4f07-8220-d9aaa50788ad") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/serviceaccounts/node-resolver/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:29.985834 master-0 kubenswrapper[15493]: E0216 17:02:29.984128 15493 projected.go:288] Couldn't get configMap openshift-cloud-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:29.985834 master-0 kubenswrapper[15493]: E0216 17:02:29.984178 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r87zw for pod openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/serviceaccounts/cluster-cloud-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:29.985834 master-0 kubenswrapper[15493]: E0216 17:02:29.984275 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:33.984256968 +0000 UTC m=+33.134430028 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r87zw" (UniqueName: "kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/serviceaccounts/cluster-cloud-controller-manager/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.001511 master-0 kubenswrapper[15493]: E0216 17:02:30.001473 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.001511 master-0 kubenswrapper[15493]: E0216 17:02:30.001510 15493 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.001778 master-0 kubenswrapper[15493]: E0216 17:02:30.001583 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.001564846 +0000 UTC m=+33.151737916 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.022429 master-0 kubenswrapper[15493]: E0216 17:02:30.022394 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.022429 master-0 kubenswrapper[15493]: E0216 17:02:30.022427 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.022548 master-0 kubenswrapper[15493]: E0216 17:02:30.022487 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.022473589 +0000 UTC m=+33.172646659 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/serviceaccounts/service-ca-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.042077 master-0 kubenswrapper[15493]: E0216 17:02:30.042023 15493 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.042077 master-0 kubenswrapper[15493]: E0216 17:02:30.042070 15493 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.042264 master-0 kubenswrapper[15493]: E0216 17:02:30.042143 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.042124989 +0000 UTC m=+33.192298059 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.062226 master-0 kubenswrapper[15493]: E0216 17:02:30.062181 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.062226 master-0 kubenswrapper[15493]: E0216 17:02:30.062222 15493 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.062419 master-0 kubenswrapper[15493]: E0216 17:02:30.062292 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.062274723 +0000 UTC m=+33.212447793 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/redhat-marketplace/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.101967 master-0 kubenswrapper[15493]: E0216 17:02:30.101890 15493 projected.go:288] Couldn't get configMap openshift-multus/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.102157 master-0 kubenswrapper[15493]: E0216 17:02:30.101983 15493 projected.go:194] Error preparing data for projected volume kube-api-access-j5qxm for pod openshift-multus/multus-additional-cni-plugins-rjdlk: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.102157 master-0 kubenswrapper[15493]: E0216 17:02:30.102064 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm podName:ab5760f1-b2e0-4138-9383-e4827154ac50 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.102042295 +0000 UTC m=+33.252215365 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j5qxm" (UniqueName: "kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm") pod "multus-additional-cni-plugins-rjdlk" (UID: "ab5760f1-b2e0-4138-9383-e4827154ac50") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/multus-ancillary-tools/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.122746 master-0 kubenswrapper[15493]: E0216 17:02:30.122688 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.122746 master-0 kubenswrapper[15493]: E0216 17:02:30.122748 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.122990 master-0 kubenswrapper[15493]: E0216 17:02:30.122857 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.122830795 +0000 UTC m=+33.273003885 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-baremetal-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.142759 master-0 kubenswrapper[15493]: E0216 17:02:30.142695 15493 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.142759 master-0 kubenswrapper[15493]: E0216 17:02:30.142751 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/serviceaccounts/operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.143067 master-0 kubenswrapper[15493]: E0216 17:02:30.142853 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.142826224 +0000 UTC m=+33.292999324 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/serviceaccounts/operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.161735 master-0 kubenswrapper[15493]: E0216 17:02:30.161692 15493 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.161735 master-0 kubenswrapper[15493]: E0216 17:02:30.161731 15493 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.162037 master-0 kubenswrapper[15493]: E0216 17:02:30.161798 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.161779826 +0000 UTC m=+33.311952926 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/serviceaccounts/service-ca/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.181614 master-0 kubenswrapper[15493]: E0216 17:02:30.181510 15493 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.181614 master-0 kubenswrapper[15493]: E0216 17:02:30.181544 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/serviceaccounts/cloud-credential-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.181614 master-0 kubenswrapper[15493]: E0216 17:02:30.181613 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.18159354 +0000 UTC m=+33.331766610 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/serviceaccounts/cloud-credential-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.221812 master-0 kubenswrapper[15493]: E0216 17:02:30.221733 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.221812 master-0 kubenswrapper[15493]: E0216 17:02:30.221797 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.222079 master-0 kubenswrapper[15493]: E0216 17:02:30.221889 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.221866566 +0000 UTC m=+33.372039636 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.241031 master-0 kubenswrapper[15493]: E0216 17:02:30.240959 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.241031 master-0 kubenswrapper[15493]: E0216 17:02:30.241017 15493 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.241341 master-0 kubenswrapper[15493]: E0216 17:02:30.241101 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.241080264 +0000 UTC m=+33.391253334 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/marketplace-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.262567 master-0 kubenswrapper[15493]: E0216 17:02:30.262486 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.262567 master-0 kubenswrapper[15493]: E0216 17:02:30.262551 15493 projected.go:194] Error preparing data for projected volume kube-api-access-9xrw2 for pod openshift-ovn-kubernetes/ovnkube-node-flr86: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.262947 master-0 kubenswrapper[15493]: E0216 17:02:30.262657 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2 podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.262627045 +0000 UTC m=+33.412800125 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-9xrw2" (UniqueName: "kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/serviceaccounts/ovn-kubernetes-node/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.281455 master-0 kubenswrapper[15493]: E0216 17:02:30.281340 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.281455 master-0 kubenswrapper[15493]: E0216 17:02:30.281403 15493 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.281754 master-0 kubenswrapper[15493]: E0216 17:02:30.281520 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.281492934 +0000 UTC m=+33.431666014 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.293715 master-0 kubenswrapper[15493]: I0216 17:02:30.293599 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:02:30.301966 master-0 kubenswrapper[15493]: E0216 17:02:30.301876 15493 projected.go:288] Couldn't get configMap openshift-monitoring/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.301966 master-0 kubenswrapper[15493]: E0216 17:02:30.301953 15493 projected.go:194] Error preparing data for projected volume kube-api-access-j7w67 for pod openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.302203 master-0 kubenswrapper[15493]: E0216 17:02:30.302049 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67 podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.302025717 +0000 UTC m=+33.452198787 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7w67" (UniqueName: "kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/cluster-monitoring-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.342296 master-0 kubenswrapper[15493]: E0216 17:02:30.342234 15493 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.342296 master-0 kubenswrapper[15493]: E0216 17:02:30.342279 15493 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.342616 master-0 kubenswrapper[15493]: E0216 17:02:30.342353 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.342334204 +0000 UTC m=+33.492507274 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/serviceaccounts/default/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.362339 master-0 kubenswrapper[15493]: E0216 17:02:30.362263 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.362489 master-0 kubenswrapper[15493]: E0216 17:02:30.362343 15493 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.362489 master-0 kubenswrapper[15493]: E0216 17:02:30.362480 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.362446336 +0000 UTC m=+33.512619456 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/serviceaccounts/cluster-autoscaler-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.375339 master-0 kubenswrapper[15493]: I0216 17:02:30.375217 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:02:30.383079 master-0 kubenswrapper[15493]: E0216 17:02:30.383004 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.383079 master-0 kubenswrapper[15493]: E0216 17:02:30.383065 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hmj52 for pod openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.383274 master-0 kubenswrapper[15493]: E0216 17:02:30.383164 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52 podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.383139044 +0000 UTC m=+33.533312154 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hmj52" (UniqueName: "kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-controller/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.403287 master-0 kubenswrapper[15493]: E0216 17:02:30.403181 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.403287 master-0 kubenswrapper[15493]: E0216 17:02:30.403249 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8p2jz for pod openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.403287 master-0 kubenswrapper[15493]: E0216 17:02:30.403328 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.403307478 +0000 UTC m=+33.553480628 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-8p2jz" (UniqueName: "kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/serviceaccounts/machine-config-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.423694 master-0 kubenswrapper[15493]: E0216 17:02:30.423621 15493 projected.go:288] Couldn't get configMap openshift-network-node-identity/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.423694 master-0 kubenswrapper[15493]: E0216 17:02:30.423691 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vk7xl for pod openshift-network-node-identity/network-node-identity-hhcpr: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.423991 master-0 kubenswrapper[15493]: E0216 17:02:30.423790 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.423763579 +0000 UTC m=+33.573936679 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vk7xl" (UniqueName: "kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.442573 master-0 kubenswrapper[15493]: E0216 17:02:30.442425 15493 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.442573 master-0 kubenswrapper[15493]: E0216 17:02:30.442503 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.442780 master-0 kubenswrapper[15493]: E0216 17:02:30.442618 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.442584997 +0000 UTC m=+33.592758107 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver-operator/serviceaccounts/openshift-apiserver-operator/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.478384 master-0 kubenswrapper[15493]: I0216 17:02:30.478289 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:02:30.478685 master-0 kubenswrapper[15493]: I0216 17:02:30.478632 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:02:30.478904 master-0 kubenswrapper[15493]: I0216 17:02:30.478867 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 17:02:30.521235 master-0 kubenswrapper[15493]: E0216 17:02:30.521173 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.521235 master-0 kubenswrapper[15493]: E0216 17:02:30.521231 15493 projected.go:194] Error preparing data for projected volume kube-api-access-wn82n for pod openshift-cluster-node-tuning-operator/tuned-l5kbz: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/tuned/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.521503 master-0 kubenswrapper[15493]: E0216 17:02:30.521332 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n podName:c45ce0e5-c50b-4210-b7bb-82db2b2bc1db nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.52130647 +0000 UTC m=+33.671479560 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wn82n" (UniqueName: "kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n") pod "tuned-l5kbz" (UID: "c45ce0e5-c50b-4210-b7bb-82db2b2bc1db") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-node-tuning-operator/serviceaccounts/tuned/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.542181 master-0 kubenswrapper[15493]: E0216 17:02:30.542078 15493 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:30.542181 master-0 kubenswrapper[15493]: E0216 17:02:30.542156 15493 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.542470 master-0 kubenswrapper[15493]: E0216 17:02:30.542296 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:34.542254625 +0000 UTC m=+33.692427695 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/serviceaccounts/certified-operators/token": dial tcp 192.168.32.10:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition]
Feb 16 17:02:30.577200 master-0 kubenswrapper[15493]: I0216 17:02:30.577119 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"85003325a476039d5eb44432fdcf7d2532212a32580f381790355fd21b4f3730"}
Feb 16 17:02:30.577200 master-0 kubenswrapper[15493]: I0216 17:02:30.577188 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"fe91129f000124aca4ffcb46fef9c002b10b433ceec06a4c01ffe2fc33ca2b2a"}
Feb 16 17:02:30.577200 master-0 kubenswrapper[15493]: I0216 17:02:30.577202 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"e300ec3a145c1339a627607b3c84b99d","Type":"ContainerStarted","Data":"cd7ca8585e9770f668282d25d327e2a91ec2db08fd3ae45538225dc8a5ab9091"}
Feb 16 17:02:30.577657 master-0 kubenswrapper[15493]: I0216 17:02:30.577614 15493 scope.go:117] "RemoveContainer" containerID="9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f"
Feb 16 17:02:30.577855 master-0 kubenswrapper[15493]: E0216 17:02:30.577823 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 17:02:30.582308 master-0 kubenswrapper[15493]: I0216 17:02:30.582252 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:02:30.685613 master-0 kubenswrapper[15493]: I0216 17:02:30.685557 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0"
Feb 16 17:02:30.953692 master-0 kubenswrapper[15493]: I0216 17:02:30.953613 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:02:31.068734 master-0 kubenswrapper[15493]: I0216 17:02:31.068640 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Feb 16 17:02:31.582869 master-0 kubenswrapper[15493]: I0216 17:02:31.582826 15493 scope.go:117] "RemoveContainer" containerID="9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f"
Feb 16 17:02:31.583426 master-0 kubenswrapper[15493]: E0216 17:02:31.583400 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 17:02:31.919993 master-0 kubenswrapper[15493]: I0216 17:02:31.919768 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 17:02:31.947465 master-0 kubenswrapper[15493]: I0216 17:02:31.947417 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 17:02:31.959466 master-0 kubenswrapper[15493]: I0216 17:02:31.959445 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 17:02:31.981456 master-0 kubenswrapper[15493]: I0216 17:02:31.981392 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 17:02:32.023855 master-0 kubenswrapper[15493]: I0216 17:02:32.023826 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 17:02:32.053130 master-0 kubenswrapper[15493]: I0216 17:02:32.053061 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Feb 16 17:02:32.071273 master-0 kubenswrapper[15493]: I0216 17:02:32.071232 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 16 17:02:32.079745 master-0 kubenswrapper[15493]: I0216 17:02:32.079698 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 17:02:32.081278 master-0 kubenswrapper[15493]: I0216 17:02:32.081221 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 17:02:32.120402 master-0 kubenswrapper[15493]: I0216 17:02:32.120359 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 17:02:32.285826 master-0 kubenswrapper[15493]: I0216 17:02:32.285758 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 16 17:02:32.334777 master-0 kubenswrapper[15493]: I0216 17:02:32.334683 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 17:02:32.373045 master-0 kubenswrapper[15493]: I0216 17:02:32.372967 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 16 17:02:32.396850 master-0 kubenswrapper[15493]: I0216 17:02:32.396790 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 17:02:32.421772 master-0 kubenswrapper[15493]: I0216 17:02:32.421710 15493 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:02:32.424780 master-0 kubenswrapper[15493]: I0216 17:02:32.424736 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:02:32.424839 master-0 kubenswrapper[15493]: I0216 17:02:32.424789 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:02:32.424839 master-0 kubenswrapper[15493]: I0216 17:02:32.424801 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:02:32.425093 master-0 kubenswrapper[15493]: I0216 17:02:32.425068 15493 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:02:32.431040 master-0 kubenswrapper[15493]: E0216 17:02:32.430976 15493 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"master-0\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="master-0"
Feb 16 17:02:32.555834 master-0 kubenswrapper[15493]: I0216 17:02:32.555726 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 17:02:32.588101 master-0 kubenswrapper[15493]: I0216 17:02:32.588037 15493 scope.go:117] "RemoveContainer" containerID="9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f"
Feb 16 17:02:32.590894 master-0 kubenswrapper[15493]: I0216 17:02:32.590862 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 17:02:32.658046 master-0 kubenswrapper[15493]: I0216 17:02:32.657964 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 16 17:02:32.693838 master-0 kubenswrapper[15493]: I0216 17:02:32.693786 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt"
Feb 16 17:02:32.706530 master-0 kubenswrapper[15493]: I0216 17:02:32.706374 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 16 17:02:32.770114 master-0 kubenswrapper[15493]: I0216 17:02:32.770037 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 17:02:32.795677 master-0 kubenswrapper[15493]: I0216 17:02:32.795617 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 17:02:32.827363 master-0 kubenswrapper[15493]: I0216 17:02:32.827226 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 17:02:32.865551 master-0 kubenswrapper[15493]: I0216 17:02:32.865428 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:02:32.866053 master-0 kubenswrapper[15493]: I0216 17:02:32.866012 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:02:32.866352 master-0 kubenswrapper[15493]: I0216 17:02:32.866324 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:02:32.866554 master-0 kubenswrapper[15493]: I0216 17:02:32.866532 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:02:32.905717 master-0 kubenswrapper[15493]: I0216 17:02:32.905652 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 17:02:32.970944 master-0 kubenswrapper[15493]: I0216 17:02:32.970842 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:02:32.971459 master-0 kubenswrapper[15493]: I0216 17:02:32.970961 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:02:32.971459 master-0 kubenswrapper[15493]: I0216 17:02:32.971006 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:02:32.971459 master-0 kubenswrapper[15493]: I0216 17:02:32.971032 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:02:32.971459 master-0 kubenswrapper[15493]: I0216 17:02:32.971344 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:02:33.002463 master-0 kubenswrapper[15493]: I0216 17:02:33.002404 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 17:02:33.092995 master-0 kubenswrapper[15493]: I0216 17:02:33.092839 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 17:02:33.103072 master-0 kubenswrapper[15493]: I0216 17:02:33.103017 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 17:02:33.106706 master-0 kubenswrapper[15493]: I0216 17:02:33.106653 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 17:02:33.121500 master-0 kubenswrapper[15493]: I0216 17:02:33.121471 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 17:02:33.178703 master-0 kubenswrapper[15493]: I0216 17:02:33.178602 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:02:33.226955 master-0 kubenswrapper[15493]: I0216 17:02:33.226853 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 17:02:33.272598 master-0 kubenswrapper[15493]: I0216 17:02:33.272537 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 17:02:33.280644 master-0 kubenswrapper[15493]: I0216 17:02:33.280588 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:02:33.280755 master-0 kubenswrapper[15493]: I0216 17:02:33.280675 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"
Feb 16 17:02:33.280966 master-0 kubenswrapper[15493]: I0216 17:02:33.280886 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:02:33.281377 master-0 kubenswrapper[15493]: I0216 17:02:33.281337 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:02:33.282318 master-0 kubenswrapper[15493]: I0216 17:02:33.282280 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:02:33.309248 master-0 kubenswrapper[15493]: I0216 17:02:33.308762 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 17:02:33.320326 master-0 kubenswrapper[15493]: I0216 17:02:33.320289 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 16 17:02:33.400881 master-0 kubenswrapper[15493]: I0216 17:02:33.400738 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:02:33.401117 master-0 kubenswrapper[15493]: I0216 17:02:33.401079 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:02:33.401198 master-0 kubenswrapper[15493]: I0216 17:02:33.401163 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:02:33.401273 master-0 kubenswrapper[15493]: I0216 17:02:33.401220 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"
Feb 16 17:02:33.401342 master-0 kubenswrapper[15493]: I0216 17:02:33.401307 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:02:33.401572 master-0 kubenswrapper[15493]: I0216 17:02:33.401517 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:02:33.418235 master-0 kubenswrapper[15493]: I0216 17:02:33.418170 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 17:02:33.429506 master-0 kubenswrapper[15493]: I0216 17:02:33.429441 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:02:33.470829 master-0 kubenswrapper[15493]: I0216 17:02:33.470755 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 17:02:33.506591 master-0 kubenswrapper[15493]: I0216 17:02:33.506493 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:33.506913 master-0 kubenswrapper[15493]: I0216 17:02:33.506850 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:02:33.507155 master-0 kubenswrapper[15493]: I0216 17:02:33.507102 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:02:33.547785 master-0 kubenswrapper[15493]: I0216 17:02:33.547667 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 17:02:33.549384 master-0 kubenswrapper[15493]: I0216 17:02:33.549299 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:02:33.553650 master-0 kubenswrapper[15493]: I0216 17:02:33.553489 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 17:02:33.566525 master-0 kubenswrapper[15493]: I0216 17:02:33.565330 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk"
Feb 16 17:02:33.569810 master-0 kubenswrapper[15493]: I0216 17:02:33.569763 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 17:02:33.605332 master-0 kubenswrapper[15493]: I0216 17:02:33.603154 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 17:02:33.605332 master-0 kubenswrapper[15493]: I0216 17:02:33.604578 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 17:02:33.606130 master-0 kubenswrapper[15493]: I0216 17:02:33.605938 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:02:33.619977 master-0 kubenswrapper[15493]: I0216 17:02:33.619905 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:02:33.619977 master-0 kubenswrapper[15493]: I0216 17:02:33.619982 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:02:33.620274 master-0 kubenswrapper[15493]: I0216 17:02:33.620030 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:02:33.620853 master-0 kubenswrapper[15493]: I0216 17:02:33.620817 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:02:33.621087 master-0 kubenswrapper[15493]: I0216 17:02:33.621061 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 16 17:02:33.680913 master-0 kubenswrapper[15493]: I0216 17:02:33.680763 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 17:02:33.688662 master-0 kubenswrapper[15493]: I0216 17:02:33.688597 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 17:02:33.706942 master-0 kubenswrapper[15493]: I0216 17:02:33.706869 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 16 17:02:33.721074 master-0 kubenswrapper[15493]: I0216 17:02:33.721011 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 17:02:33.723002 master-0 kubenswrapper[15493]: I0216 17:02:33.722954 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:02:33.723083 master-0 kubenswrapper[15493]: I0216 17:02:33.723021 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:02:33.723732 master-0 kubenswrapper[15493]: I0216 17:02:33.723675 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:02:33.763691 master-0 kubenswrapper[15493]: I0216 17:02:33.763644 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 17:02:33.780361 master-0 kubenswrapper[15493]: I0216 17:02:33.780330 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 16 17:02:33.807452 master-0 kubenswrapper[15493]: I0216 17:02:33.807402 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 17:02:33.808720 master-0 kubenswrapper[15493]: I0216 17:02:33.808586 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 17:02:33.831549 master-0 kubenswrapper[15493]: I0216 17:02:33.831478 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:02:33.831836 master-0 kubenswrapper[15493]: I0216 17:02:33.831556 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:02:33.831836 master-0 kubenswrapper[15493]: I0216 17:02:33.831598 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName:
\"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:33.831836 master-0 kubenswrapper[15493]: I0216 17:02:33.831636 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:33.832211 master-0 kubenswrapper[15493]: I0216 17:02:33.832123 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:33.883563 master-0 kubenswrapper[15493]: I0216 17:02:33.883498 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 17:02:33.904031 master-0 kubenswrapper[15493]: I0216 17:02:33.903851 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 17:02:33.936379 master-0 kubenswrapper[15493]: I0216 17:02:33.936271 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:33.937096 master-0 kubenswrapper[15493]: I0216 17:02:33.937046 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 16 17:02:33.945079 master-0 kubenswrapper[15493]: I0216 17:02:33.945043 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 17:02:33.957852 master-0 kubenswrapper[15493]: I0216 17:02:33.957814 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 17:02:33.974671 master-0 kubenswrapper[15493]: I0216 17:02:33.974599 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 16 17:02:33.984221 master-0 kubenswrapper[15493]: I0216 17:02:33.984167 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:34.002240 master-0 kubenswrapper[15493]: I0216 17:02:34.002165 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 17:02:34.024055 master-0 kubenswrapper[15493]: E0216 17:02:34.023520 15493 projected.go:288] Couldn't get configMap 
openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.031888 master-0 kubenswrapper[15493]: E0216 17:02:34.031797 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.031888 master-0 kubenswrapper[15493]: E0216 17:02:34.031865 15493 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.032222 master-0 kubenswrapper[15493]: E0216 17:02:34.031989 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.031962126 +0000 UTC m=+41.182135226 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.034231 master-0 kubenswrapper[15493]: E0216 17:02:34.034184 15493 projected.go:288] Couldn't get configMap openshift-network-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.036472 master-0 kubenswrapper[15493]: E0216 17:02:34.036413 15493 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.037694 master-0 kubenswrapper[15493]: E0216 17:02:34.037624 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.037694 master-0 kubenswrapper[15493]: E0216 17:02:34.037674 15493 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.037918 master-0 kubenswrapper[15493]: E0216 17:02:34.037751 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.037725958 +0000 UTC m=+41.187899068 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.038525 master-0 kubenswrapper[15493]: I0216 17:02:34.038451 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:02:34.038753 master-0 kubenswrapper[15493]: I0216 17:02:34.038681 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:34.038915 master-0 kubenswrapper[15493]: I0216 17:02:34.038847 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:34.038915 master-0 kubenswrapper[15493]: I0216 17:02:34.038885 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:34.040016 master-0 kubenswrapper[15493]: I0216 17:02:34.039900 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:02:34.043581 master-0 kubenswrapper[15493]: I0216 17:02:34.043509 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:02:34.044729 master-0 kubenswrapper[15493]: I0216 17:02:34.044673 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 17:02:34.048389 master-0 kubenswrapper[15493]: I0216 17:02:34.048334 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 17:02:34.048389 master-0 kubenswrapper[15493]: I0216 17:02:34.048379 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 17:02:34.065127 master-0 kubenswrapper[15493]: I0216 17:02:34.065056 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:02:34.065614 master-0 kubenswrapper[15493]: I0216 17:02:34.065574 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-qlqr4" Feb 16 17:02:34.104878 master-0 
kubenswrapper[15493]: I0216 17:02:34.104788 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:34.107887 master-0 kubenswrapper[15493]: E0216 17:02:34.107846 15493 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.107887 master-0 kubenswrapper[15493]: E0216 17:02:34.107879 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.108072 master-0 kubenswrapper[15493]: E0216 17:02:34.107960 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.107918586 +0000 UTC m=+41.258091666 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.108956 master-0 kubenswrapper[15493]: E0216 17:02:34.108877 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.108956 master-0 kubenswrapper[15493]: E0216 17:02:34.108954 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.109308 master-0 kubenswrapper[15493]: E0216 17:02:34.109257 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.10922694 +0000 UTC m=+41.259400040 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.109960 master-0 kubenswrapper[15493]: I0216 17:02:34.109911 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84" Feb 16 17:02:34.115821 master-0 kubenswrapper[15493]: I0216 17:02:34.115774 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 17:02:34.144362 master-0 kubenswrapper[15493]: I0216 17:02:34.144276 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:34.144691 master-0 kubenswrapper[15493]: I0216 17:02:34.144634 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:34.145086 master-0 kubenswrapper[15493]: I0216 17:02:34.145007 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:34.145086 master-0 kubenswrapper[15493]: I0216 17:02:34.145061 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:34.145319 master-0 kubenswrapper[15493]: I0216 17:02:34.145198 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:34.149326 master-0 kubenswrapper[15493]: I0216 17:02:34.149283 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 16 17:02:34.152725 master-0 kubenswrapper[15493]: I0216 17:02:34.152679 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:34.163087 master-0 kubenswrapper[15493]: I0216 17:02:34.163039 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 17:02:34.174280 master-0 kubenswrapper[15493]: I0216 17:02:34.174238 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:02:34.180518 master-0 kubenswrapper[15493]: I0216 17:02:34.180468 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:34.192555 master-0 kubenswrapper[15493]: I0216 17:02:34.192468 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:02:34.195209 master-0 kubenswrapper[15493]: I0216 17:02:34.195176 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:02:34.198686 master-0 kubenswrapper[15493]: E0216 17:02:34.198646 15493 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.232365 master-0 kubenswrapper[15493]: I0216 17:02:34.232291 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 17:02:34.249209 master-0 kubenswrapper[15493]: I0216 17:02:34.249150 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:34.249400 master-0 kubenswrapper[15493]: I0216 17:02:34.249290 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:34.249400 master-0 kubenswrapper[15493]: I0216 17:02:34.249334 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:34.249400 master-0 kubenswrapper[15493]: I0216 17:02:34.249363 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 
16 17:02:34.273312 master-0 kubenswrapper[15493]: I0216 17:02:34.273253 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 17:02:34.281590 master-0 kubenswrapper[15493]: E0216 17:02:34.281563 15493 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.281590 master-0 kubenswrapper[15493]: E0216 17:02:34.281593 15493 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.281722 master-0 kubenswrapper[15493]: E0216 17:02:34.281647 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:02:50.281631893 +0000 UTC m=+49.431804963 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.319792 master-0 kubenswrapper[15493]: E0216 17:02:34.319728 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.321059 master-0 kubenswrapper[15493]: E0216 17:02:34.321019 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.351991 master-0 kubenswrapper[15493]: I0216 17:02:34.351946 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:02:34.353358 master-0 kubenswrapper[15493]: I0216 17:02:34.353310 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:34.353425 master-0 kubenswrapper[15493]: I0216 17:02:34.353392 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 17:02:34.354170 master-0 kubenswrapper[15493]: I0216 17:02:34.354111 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 17:02:34.354355 master-0 kubenswrapper[15493]: I0216 17:02:34.354290 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:02:34.354690 master-0 kubenswrapper[15493]: I0216 17:02:34.354648 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:34.354846 master-0 kubenswrapper[15493]: I0216 17:02:34.354706 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:34.357318 master-0 kubenswrapper[15493]: I0216 17:02:34.357266 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:34.359047 master-0 kubenswrapper[15493]: I0216 17:02:34.359016 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:02:34.380032 master-0 kubenswrapper[15493]: I0216 17:02:34.378696 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 17:02:34.394846 master-0 kubenswrapper[15493]: I0216 17:02:34.394543 15493 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:02:34.397124 master-0 kubenswrapper[15493]: I0216 17:02:34.397088 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 17:02:34.397880 master-0 kubenswrapper[15493]: I0216 17:02:34.397542 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:34.403563 master-0 kubenswrapper[15493]: I0216 17:02:34.403518 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:34.410794 master-0 kubenswrapper[15493]: I0216 17:02:34.410746 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 17:02:34.414635 master-0 kubenswrapper[15493]: E0216 17:02:34.414592 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.414635 master-0 kubenswrapper[15493]: E0216 17:02:34.414633 15493 projected.go:194] Error preparing data 
for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.414862 master-0 kubenswrapper[15493]: E0216 17:02:34.414706 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.414682954 +0000 UTC m=+41.564856044 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.422811 master-0 kubenswrapper[15493]: E0216 17:02:34.422772 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.426530 master-0 kubenswrapper[15493]: E0216 17:02:34.426479 15493 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.426615 master-0 kubenswrapper[15493]: E0216 17:02:34.426544 15493 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.426690 master-0 kubenswrapper[15493]: E0216 17:02:34.426633 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.42660827 +0000 UTC m=+41.576781370 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.428969 master-0 kubenswrapper[15493]: E0216 17:02:34.428905 15493 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.429071 master-0 kubenswrapper[15493]: E0216 17:02:34.428980 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.429071 master-0 kubenswrapper[15493]: E0216 17:02:34.429051 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.429030994 +0000 UTC m=+41.579204144 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.447853 master-0 kubenswrapper[15493]: I0216 17:02:34.447696 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:02:34.457851 master-0 kubenswrapper[15493]: I0216 17:02:34.457754 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:34.457978 master-0 kubenswrapper[15493]: I0216 17:02:34.457885 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:34.458260 master-0 kubenswrapper[15493]: I0216 17:02:34.458195 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:34.458323 master-0 kubenswrapper[15493]: I0216 17:02:34.458277 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:34.458552 master-0 kubenswrapper[15493]: I0216 17:02:34.458511 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:34.482617 master-0 kubenswrapper[15493]: I0216 17:02:34.482573 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 17:02:34.510382 master-0 kubenswrapper[15493]: I0216 17:02:34.510300 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:02:34.539235 master-0 kubenswrapper[15493]: I0216 17:02:34.539161 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:02:34.544980 master-0 kubenswrapper[15493]: I0216 17:02:34.544886 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:34.545149 master-0 kubenswrapper[15493]: I0216 17:02:34.544885 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:34.546600 master-0 kubenswrapper[15493]: E0216 17:02:34.546538 15493 projected.go:194] Error preparing data for projected volume kube-api-access-8r28x for pod openshift-multus/multus-6r7wj: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.546698 master-0 kubenswrapper[15493]: E0216 17:02:34.546643 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x podName:43f65f23-4ddd-471a-9cb3-b0945382d83c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.546621085 +0000 UTC m=+41.696794175 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-8r28x" (UniqueName: "kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x") pod "multus-6r7wj" (UID: "43f65f23-4ddd-471a-9cb3-b0945382d83c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.548530 master-0 kubenswrapper[15493]: I0216 17:02:34.548483 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:34.553837 master-0 kubenswrapper[15493]: E0216 17:02:34.553784 15493 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.553837 master-0 kubenswrapper[15493]: E0216 17:02:34.553837 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.554072 master-0 kubenswrapper[15493]: E0216 17:02:34.553910 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.553889837 +0000 UTC m=+41.704062937 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.562184 master-0 kubenswrapper[15493]: I0216 17:02:34.562128 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:34.562300 master-0 kubenswrapper[15493]: I0216 17:02:34.562231 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:34.699396 master-0 kubenswrapper[15493]: I0216 17:02:34.699277 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 16 17:02:34.714648 master-0 kubenswrapper[15493]: I0216 17:02:34.714578 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 16 17:02:34.729400 master-0 kubenswrapper[15493]: I0216 17:02:34.729334 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:34.737274 master-0 kubenswrapper[15493]: I0216 17:02:34.737247 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 17:02:34.746403 master-0 kubenswrapper[15493]: I0216 17:02:34.746361 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl" Feb 16 17:02:34.754054 master-0 kubenswrapper[15493]: I0216 17:02:34.749501 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 17:02:34.773781 master-0 kubenswrapper[15493]: I0216 17:02:34.773724 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 17:02:34.794230 master-0 kubenswrapper[15493]: I0216 17:02:34.794142 15493 kubelet_pods.go:1000] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/community-operators-7w4km" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 16 17:02:34.794374 master-0 kubenswrapper[15493]: I0216 17:02:34.794283 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:34.807753 master-0 kubenswrapper[15493]: I0216 17:02:34.807494 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l" Feb 16 17:02:34.938793 master-0 kubenswrapper[15493]: E0216 17:02:34.891254 15493 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.938793 master-0 kubenswrapper[15493]: E0216 17:02:34.891286 15493 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.938793 master-0 kubenswrapper[15493]: E0216 17:02:34.891372 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.891347057 +0000 UTC m=+42.041520137 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.939245 master-0 kubenswrapper[15493]: E0216 17:02:34.939135 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.939245 master-0 kubenswrapper[15493]: E0216 17:02:34.939177 15493 projected.go:288] Couldn't get configMap openshift-cluster-node-tuning-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.939245 master-0 kubenswrapper[15493]: E0216 17:02:34.939216 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2gq8x for pod openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.939398 master-0 kubenswrapper[15493]: E0216 17:02:34.939296 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.939274346 +0000 UTC m=+42.089447416 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2gq8x" (UniqueName: "kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.939643 master-0 kubenswrapper[15493]: E0216 17:02:34.939607 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.940551 master-0 kubenswrapper[15493]: I0216 17:02:34.940502 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8" Feb 16 17:02:34.940653 master-0 kubenswrapper[15493]: E0216 17:02:34.940623 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.940653 master-0 kubenswrapper[15493]: E0216 17:02:34.940649 15493 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.940770 master-0 kubenswrapper[15493]: E0216 17:02:34.940652 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.940770 master-0 kubenswrapper[15493]: E0216 17:02:34.940694 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.940770 master-0 kubenswrapper[15493]: E0216 17:02:34.940697 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.940682243 +0000 UTC m=+42.090855313 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.940894 master-0 kubenswrapper[15493]: E0216 17:02:34.940827 15493 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.940894 master-0 kubenswrapper[15493]: E0216 17:02:34.940835 15493 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.940894 master-0 kubenswrapper[15493]: E0216 17:02:34.940861 15493 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.941046 master-0 kubenswrapper[15493]: E0216 17:02:34.940902 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.940890268 +0000 UTC m=+42.091063338 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.941046 master-0 kubenswrapper[15493]: I0216 17:02:34.940964 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 17:02:34.941134 master-0 kubenswrapper[15493]: I0216 17:02:34.941070 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:02:34.964330 master-0 kubenswrapper[15493]: E0216 17:02:34.963999 15493 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.964330 master-0 kubenswrapper[15493]: E0216 17:02:34.964265 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.964541 master-0 kubenswrapper[15493]: E0216 17:02:34.964363 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.964336639 +0000 UTC m=+42.114509729 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.975368 master-0 kubenswrapper[15493]: I0216 17:02:34.975310 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:02:34.982242 master-0 kubenswrapper[15493]: I0216 17:02:34.982202 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 17:02:34.990585 master-0 kubenswrapper[15493]: I0216 17:02:34.990547 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 17:02:34.991486 master-0 kubenswrapper[15493]: I0216 17:02:34.991445 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:02:34.998615 master-0 kubenswrapper[15493]: I0216 17:02:34.998537 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:34.998787 master-0 kubenswrapper[15493]: E0216 17:02:34.998743 15493 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:34.998854 master-0 kubenswrapper[15493]: E0216 17:02:34.998811 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:42.998788151 +0000 UTC m=+42.148961221 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.021574 master-0 kubenswrapper[15493]: I0216 17:02:35.021523 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 17:02:35.025698 master-0 kubenswrapper[15493]: E0216 17:02:35.025656 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.025899 master-0 kubenswrapper[15493]: E0216 17:02:35.025702 15493 projected.go:194] Error preparing data for projected volume kube-api-access-q46jg for pod openshift-network-operator/iptables-alerter-czzz2: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.025899 master-0 kubenswrapper[15493]: E0216 17:02:35.025843 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg podName:b3fa6ac1-781f-446c-b6b4-18bdb7723c23 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.025789725 +0000 UTC m=+42.175962795 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-q46jg" (UniqueName: "kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg") pod "iptables-alerter-czzz2" (UID: "b3fa6ac1-781f-446c-b6b4-18bdb7723c23") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.034332 master-0 kubenswrapper[15493]: E0216 17:02:35.034273 15493 projected.go:288] Couldn't get configMap openshift-network-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.034332 master-0 kubenswrapper[15493]: E0216 17:02:35.034322 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zt8mt for pod openshift-network-operator/network-operator-6fcf4c966-6bmf9: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.034549 master-0 kubenswrapper[15493]: E0216 17:02:35.034393 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt podName:4549ea98-7379-49e1-8452-5efb643137ca nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.034372012 +0000 UTC m=+42.184545092 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zt8mt" (UniqueName: "kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt") pod "network-operator-6fcf4c966-6bmf9" (UID: "4549ea98-7379-49e1-8452-5efb643137ca") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.048890 master-0 kubenswrapper[15493]: I0216 17:02:35.048820 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 16 17:02:35.097221 master-0 kubenswrapper[15493]: I0216 17:02:35.097142 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 17:02:35.107950 master-0 kubenswrapper[15493]: E0216 17:02:35.107900 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.110001 master-0 kubenswrapper[15493]: I0216 17:02:35.109962 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:02:35.114194 master-0 kubenswrapper[15493]: E0216 17:02:35.114146 15493 projected.go:288] Couldn't get configMap openshift-cloud-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.114194 master-0 kubenswrapper[15493]: E0216 17:02:35.114173 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r87zw for pod openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.114326 master-0 kubenswrapper[15493]: E0216 17:02:35.114227 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw podName:5a939dd0-fc27-4d47-b81b-96e13e4bbca9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.114209305 +0000 UTC m=+42.264382375 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r87zw" (UniqueName: "kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw") pod "cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" (UID: "5a939dd0-fc27-4d47-b81b-96e13e4bbca9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.115713 master-0 kubenswrapper[15493]: I0216 17:02:35.115681 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 16 17:02:35.243033 master-0 kubenswrapper[15493]: I0216 17:02:35.242563 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 17:02:35.246142 master-0 kubenswrapper[15493]: I0216 17:02:35.246108 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rzjlw" Feb 16 17:02:35.246334 master-0 kubenswrapper[15493]: I0216 17:02:35.246286 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 17:02:35.266907 master-0 kubenswrapper[15493]: I0216 17:02:35.266821 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 17:02:35.269215 master-0 kubenswrapper[15493]: I0216 17:02:35.269173 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:02:35.306796 master-0 kubenswrapper[15493]: I0216 17:02:35.306718 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 17:02:35.320273 master-0 kubenswrapper[15493]: E0216 17:02:35.320215 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.320273 master-0 kubenswrapper[15493]: E0216 17:02:35.320247 15493 projected.go:194] Error preparing data for projected volume kube-api-access-fkwxl for pod openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.320495 master-0 kubenswrapper[15493]: E0216 17:02:35.320313 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl podName:ab80e0fb-09dd-4c93-b235-1487024105d2 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.320296139 +0000 UTC m=+42.470469209 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fkwxl" (UniqueName: "kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl") pod "ovnkube-control-plane-bb7ffbb8d-lzgs9" (UID: "ab80e0fb-09dd-4c93-b235-1487024105d2") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.321341 master-0 kubenswrapper[15493]: E0216 17:02:35.321309 15493 projected.go:288] Couldn't get configMap openshift-machine-config-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.321341 master-0 kubenswrapper[15493]: E0216 17:02:35.321326 15493 projected.go:194] Error preparing data for projected volume kube-api-access-sx92x for pod openshift-machine-config-operator/machine-config-daemon-98q6v: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.321535 master-0 kubenswrapper[15493]: E0216 17:02:35.321352 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x podName:648abb6c-9c81-4e5c-b5f1-3b7eb254f743 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.321344687 +0000 UTC m=+42.471517757 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-sx92x" (UniqueName: "kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x") pod "machine-config-daemon-98q6v" (UID: "648abb6c-9c81-4e5c-b5f1-3b7eb254f743") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.344834 master-0 kubenswrapper[15493]: I0216 17:02:35.344744 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:02:35.363652 master-0 kubenswrapper[15493]: I0216 17:02:35.363593 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 17:02:35.390014 master-0 kubenswrapper[15493]: E0216 17:02:35.389906 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.390014 master-0 kubenswrapper[15493]: E0216 17:02:35.389906 15493 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.390014 master-0 kubenswrapper[15493]: E0216 17:02:35.390016 15493 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.390420 master-0 kubenswrapper[15493]: E0216 17:02:35.390067 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.390052615 +0000 UTC m=+42.540225675 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.391121 master-0 kubenswrapper[15493]: E0216 17:02:35.391077 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.393449 master-0 kubenswrapper[15493]: E0216 17:02:35.393426 15493 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.393449 master-0 kubenswrapper[15493]: E0216 17:02:35.393454 15493 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.393449 master-0 kubenswrapper[15493]: E0216 17:02:35.393458 15493 projected.go:288] Couldn't get configMap openshift-monitoring/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.393691 master-0 kubenswrapper[15493]: E0216 17:02:35.393512 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.393496286 +0000 UTC m=+42.543669356 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.397774 master-0 kubenswrapper[15493]: E0216 17:02:35.397716 15493 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.397774 master-0 kubenswrapper[15493]: E0216 17:02:35.397784 15493 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.398037 master-0 kubenswrapper[15493]: E0216 17:02:35.397858 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.397823531 +0000 UTC m=+42.547996651 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.400147 master-0 kubenswrapper[15493]: E0216 17:02:35.400097 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.400147 master-0 kubenswrapper[15493]: E0216 17:02:35.400138 15493 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.400304 master-0 kubenswrapper[15493]: E0216 17:02:35.400216 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.400195424 +0000 UTC m=+42.550368524 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.401445 master-0 kubenswrapper[15493]: E0216 17:02:35.401401 15493 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.402280 master-0 kubenswrapper[15493]: E0216 17:02:35.402247 15493 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.402369 master-0 kubenswrapper[15493]: E0216 17:02:35.402312 15493 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.402369 master-0 kubenswrapper[15493]: E0216 17:02:35.402325 15493 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.402496 master-0 kubenswrapper[15493]: E0216 17:02:35.402365 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.402353761 +0000 UTC m=+42.552526831 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.402751 master-0 kubenswrapper[15493]: I0216 17:02:35.402717 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 17:02:35.405657 master-0 kubenswrapper[15493]: I0216 17:02:35.405619 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:02:35.423253 master-0 kubenswrapper[15493]: E0216 17:02:35.423200 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.423253 master-0 kubenswrapper[15493]: E0216 17:02:35.423238 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.423525 master-0 kubenswrapper[15493]: E0216 17:02:35.423306 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.423289905 +0000 UTC m=+42.573462965 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.493829 master-0 kubenswrapper[15493]: I0216 17:02:35.493702 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb" Feb 16 17:02:35.520302 master-0 kubenswrapper[15493]: I0216 17:02:35.520102 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:02:35.522539 master-0 kubenswrapper[15493]: I0216 17:02:35.522066 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 17:02:35.525887 master-0 kubenswrapper[15493]: I0216 17:02:35.525846 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 17:02:35.526839 master-0 kubenswrapper[15493]: I0216 17:02:35.526264 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 17:02:35.526839 master-0 kubenswrapper[15493]: I0216 17:02:35.526293 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 17:02:35.530968 master-0 kubenswrapper[15493]: I0216 17:02:35.527080 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 17:02:35.530968 master-0 kubenswrapper[15493]: I0216 17:02:35.527312 15493 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 16 17:02:35.608613 master-0 kubenswrapper[15493]: I0216 17:02:35.608502 15493 generic.go:334] "Generic (PLEG): container finished" podID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" containerID="1a75bfcb3d6ee6e289b7323fbc3d24c63e7fcd67393fd211cfa30edcae278f7a" exitCode=0 Feb 16 17:02:35.630947 master-0 kubenswrapper[15493]: I0216 17:02:35.630869 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:02:35.630947 master-0 kubenswrapper[15493]: E0216 17:02:35.630898 15493 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.631207 master-0 kubenswrapper[15493]: E0216 17:02:35.631008 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.630987991 +0000 UTC m=+42.781161061 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.643676 master-0 kubenswrapper[15493]: I0216 17:02:35.643629 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 17:02:35.649734 master-0 kubenswrapper[15493]: I0216 17:02:35.649693 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:02:35.688786 master-0 kubenswrapper[15493]: I0216 17:02:35.688643 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982" Feb 16 17:02:35.693541 master-0 kubenswrapper[15493]: I0216 17:02:35.693501 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:02:35.693727 master-0 kubenswrapper[15493]: I0216 17:02:35.693492 15493 kubelet_pods.go:1000] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openshift-marketplace/redhat-marketplace-4kd66" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 16 17:02:35.721516 master-0 kubenswrapper[15493]: E0216 17:02:35.721456 15493 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.721719 master-0 kubenswrapper[15493]: E0216 17:02:35.721465 15493 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.721719 master-0 kubenswrapper[15493]: E0216 17:02:35.721573 15493 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.721719 master-0 kubenswrapper[15493]: E0216 17:02:35.721698 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.721659311 +0000 UTC m=+42.871832421 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.725018 master-0 kubenswrapper[15493]: E0216 17:02:35.724977 15493 projected.go:288] Couldn't get configMap openshift-network-node-identity/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.725111 master-0 kubenswrapper[15493]: E0216 17:02:35.725026 15493 projected.go:194] Error preparing data for projected volume kube-api-access-vk7xl for pod openshift-network-node-identity/network-node-identity-hhcpr: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.725166 master-0 kubenswrapper[15493]: E0216 17:02:35.725146 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl podName:39387549-c636-4bd4-b463-f6a93810f277 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.725117242 +0000 UTC m=+42.875290322 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vk7xl" (UniqueName: "kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl") pod "network-node-identity-hhcpr" (UID: "39387549-c636-4bd4-b463-f6a93810f277") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.767238 master-0 kubenswrapper[15493]: I0216 17:02:35.764985 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s" Feb 16 17:02:35.797984 master-0 kubenswrapper[15493]: I0216 17:02:35.797900 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 17:02:35.815013 master-0 kubenswrapper[15493]: I0216 17:02:35.814960 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 17:02:35.861704 master-0 kubenswrapper[15493]: I0216 17:02:35.861644 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 17:02:35.874698 master-0 kubenswrapper[15493]: I0216 17:02:35.874597 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 17:02:35.883646 master-0 kubenswrapper[15493]: I0216 17:02:35.883582 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:02:35.937822 master-0 kubenswrapper[15493]: I0216 17:02:35.937754 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 17:02:35.940150 master-0 kubenswrapper[15493]: E0216 17:02:35.940108 15493 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.940150 master-0 kubenswrapper[15493]: E0216 17:02:35.940138 15493 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.940304 master-0 kubenswrapper[15493]: E0216 17:02:35.940201 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.940182084 +0000 UTC m=+43.090355164 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.940425 master-0 kubenswrapper[15493]: E0216 17:02:35.940310 15493 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.940425 master-0 kubenswrapper[15493]: E0216 17:02:35.940341 15493 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.940425 master-0 kubenswrapper[15493]: E0216 17:02:35.940413 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.940390789 +0000 UTC m=+43.090563859 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.943599 master-0 kubenswrapper[15493]: E0216 17:02:35.943490 15493 projected.go:288] Couldn't get configMap openshift-cluster-machine-approver/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.943599 master-0 kubenswrapper[15493]: E0216 17:02:35.943530 15493 projected.go:194] Error preparing data for projected volume kube-api-access-6ftld for pod openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.943599 master-0 kubenswrapper[15493]: E0216 17:02:35.943600 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld podName:702322ac-7610-4568-9a68-b6acbd1f0c12 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.943577104 +0000 UTC m=+43.093750234 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6ftld" (UniqueName: "kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld") pod "machine-approver-8569dd85ff-4vxmz" (UID: "702322ac-7610-4568-9a68-b6acbd1f0c12") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.943846 master-0 kubenswrapper[15493]: E0216 17:02:35.943652 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.943846 master-0 kubenswrapper[15493]: E0216 17:02:35.943666 15493 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.943846 master-0 kubenswrapper[15493]: E0216 17:02:35.943697 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:43.943688277 +0000 UTC m=+43.093861347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:35.989192 master-0 kubenswrapper[15493]: I0216 17:02:35.989139 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 17:02:36.009353 master-0 kubenswrapper[15493]: I0216 17:02:36.009282 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 17:02:36.044718 master-0 kubenswrapper[15493]: I0216 17:02:36.044621 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 17:02:36.047461 master-0 kubenswrapper[15493]: I0216 17:02:36.047417 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 16 17:02:36.061336 master-0 kubenswrapper[15493]: I0216 17:02:36.061269 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:02:36.096951 master-0 kubenswrapper[15493]: I0216 17:02:36.096785 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw" Feb 16 17:02:36.109006 master-0 kubenswrapper[15493]: E0216 17:02:36.108954 15493 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:36.109006 master-0 kubenswrapper[15493]: E0216 17:02:36.108996 15493 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:36.109263 master-0 kubenswrapper[15493]: E0216 17:02:36.109080 15493 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:44.109059883 +0000 UTC m=+43.259232943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:36.111974 master-0 kubenswrapper[15493]: I0216 17:02:36.111763 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 17:02:36.120636 master-0 kubenswrapper[15493]: I0216 17:02:36.120609 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:02:36.121641 master-0 kubenswrapper[15493]: E0216 17:02:36.121605 15493 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:36.121805 master-0 kubenswrapper[15493]: E0216 17:02:36.121776 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:02:44.121707228 +0000 UTC m=+43.271880298 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:02:36.143811 master-0 kubenswrapper[15493]: I0216 17:02:36.143766 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 16 17:02:36.156350 master-0 kubenswrapper[15493]: I0216 17:02:36.156313 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 17:02:36.205890 master-0 kubenswrapper[15493]: I0216 17:02:36.205837 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:02:36.208210 master-0 kubenswrapper[15493]: I0216 17:02:36.208176 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:02:36.209812 master-0 kubenswrapper[15493]: I0216 17:02:36.209777 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 17:02:36.217557 master-0 kubenswrapper[15493]: I0216 17:02:36.217520 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 17:02:36.223805 master-0 kubenswrapper[15493]: I0216 17:02:36.223769 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 17:02:36.238690 master-0 kubenswrapper[15493]: I0216 
Feb 16 17:02:36.243706 master-0 kubenswrapper[15493]: I0216 17:02:36.243662 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:02:36.244308 master-0 kubenswrapper[15493]: I0216 17:02:36.244272 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:02:36.257421 master-0 kubenswrapper[15493]: I0216 17:02:36.257384 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 16 17:02:36.264976 master-0 kubenswrapper[15493]: I0216 17:02:36.264898 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 16 17:02:36.315888 master-0 kubenswrapper[15493]: I0216 17:02:36.315775 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 17:02:36.343815 master-0 kubenswrapper[15493]: I0216 17:02:36.343767 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 17:02:36.347364 master-0 kubenswrapper[15493]: I0216 17:02:36.347320 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 17:02:36.369947 master-0 kubenswrapper[15493]: I0216 17:02:36.369891 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 17:02:36.390747 master-0 kubenswrapper[15493]: E0216 17:02:36.390679 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.390747 master-0 kubenswrapper[15493]: E0216 17:02:36.390730 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.390967 master-0 kubenswrapper[15493]: E0216 17:02:36.390818 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:44.390797949 +0000 UTC m=+43.540971019 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.391851 master-0 kubenswrapper[15493]: E0216 17:02:36.391812 15493 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.391947 master-0 kubenswrapper[15493]: E0216 17:02:36.391857 15493 projected.go:194] Error preparing data for projected volume kube-api-access-9xrw2 for pod openshift-ovn-kubernetes/ovnkube-node-flr86: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.392001 master-0 kubenswrapper[15493]: E0216 17:02:36.391985 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2 podName:9f9bf4ab-5415-4616-aa36-ea387c699ea9 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:44.39195534 +0000 UTC m=+43.542128450 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-9xrw2" (UniqueName: "kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2") pod "ovnkube-node-flr86" (UID: "9f9bf4ab-5415-4616-aa36-ea387c699ea9") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.394215 master-0 kubenswrapper[15493]: E0216 17:02:36.394143 15493 projected.go:288] Couldn't get configMap openshift-monitoring/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.394215 master-0 kubenswrapper[15493]: E0216 17:02:36.394186 15493 projected.go:194] Error preparing data for projected volume kube-api-access-j7w67 for pod openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.395071 master-0 kubenswrapper[15493]: E0216 17:02:36.394264 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67 podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:02:44.39424427 +0000 UTC m=+43.544417380 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-j7w67" (UniqueName: "kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.403144 master-0 kubenswrapper[15493]: E0216 17:02:36.403087 15493 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.403144 master-0 kubenswrapper[15493]: E0216 17:02:36.403137 15493 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.403416 master-0 kubenswrapper[15493]: E0216 17:02:36.403246 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:44.403225768 +0000 UTC m=+43.553398838 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.424811 master-0 kubenswrapper[15493]: I0216 17:02:36.424752 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 16 17:02:36.429628 master-0 kubenswrapper[15493]: I0216 17:02:36.429574 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 17:02:36.469211 master-0 kubenswrapper[15493]: I0216 17:02:36.469138 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 17:02:36.516067 master-0 kubenswrapper[15493]: I0216 17:02:36.516007 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 16 17:02:36.547724 master-0 kubenswrapper[15493]: I0216 17:02:36.547670 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t"
Feb 16 17:02:36.549866 master-0 kubenswrapper[15493]: I0216 17:02:36.549801 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 16 17:02:36.556761 master-0 kubenswrapper[15493]: I0216 17:02:36.556719 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 17:02:36.612994 master-0 kubenswrapper[15493]: I0216 17:02:36.612860 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 17:02:36.615412 master-0 kubenswrapper[15493]: I0216 17:02:36.615363 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_5d39ed24-4301-4cea-8a42-a08f4ba8b479/installer/0.log"
Feb 16 17:02:36.615512 master-0 kubenswrapper[15493]: I0216 17:02:36.615435 15493 generic.go:334] "Generic (PLEG): container finished" podID="5d39ed24-4301-4cea-8a42-a08f4ba8b479" containerID="2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf" exitCode=1
Feb 16 17:02:36.640637 master-0 kubenswrapper[15493]: I0216 17:02:36.640596 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 17:02:36.653907 master-0 kubenswrapper[15493]: I0216 17:02:36.653822 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 17:02:36.654248 master-0 kubenswrapper[15493]: I0216 17:02:36.654208 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 17:02:36.693015 master-0 kubenswrapper[15493]: I0216 17:02:36.692916 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 16 17:02:36.707425 master-0 kubenswrapper[15493]: I0216 17:02:36.707364 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 17:02:36.715700 master-0 kubenswrapper[15493]: I0216 17:02:36.715654 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 17:02:36.722235 master-0 kubenswrapper[15493]: E0216 17:02:36.722195 15493 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.722330 master-0 kubenswrapper[15493]: E0216 17:02:36.722237 15493 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.722380 master-0 kubenswrapper[15493]: E0216 17:02:36.722336 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:02:44.722309762 +0000 UTC m=+43.872482872 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:02:36.726365 master-0 kubenswrapper[15493]: I0216 17:02:36.726316 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 17:02:36.734119 master-0 kubenswrapper[15493]: I0216 17:02:36.733833 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:02:36.750015 master-0 kubenswrapper[15493]: I0216 17:02:36.739833 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 16 17:02:36.784652 master-0 kubenswrapper[15493]: I0216 17:02:36.784534 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 17:02:36.817590 master-0 kubenswrapper[15493]: W0216 17:02:36.817507 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc9a20f4_255a_4312_8f43_174a28c06340.slice/crio-48cc0c3f310c4bb32e4e4290e0aed8e7c169b09f9f6116fdf08ecaaa9cda88a8 WatchSource:0}: Error finding container 48cc0c3f310c4bb32e4e4290e0aed8e7c169b09f9f6116fdf08ecaaa9cda88a8: Status 404 returned error can't find the container with id 48cc0c3f310c4bb32e4e4290e0aed8e7c169b09f9f6116fdf08ecaaa9cda88a8
Feb 16 17:02:36.820305 master-0 kubenswrapper[15493]: W0216 17:02:36.820253 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3beb7bf_922f_425d_8a19_fd407a7153a8.slice/crio-0b18b6fb4326fd4ddb1b7af91ab1343a14b58ba3e93ee612bff1534e666e659b WatchSource:0}: Error finding container 0b18b6fb4326fd4ddb1b7af91ab1343a14b58ba3e93ee612bff1534e666e659b: Status 404 returned error can't find the container with id 0b18b6fb4326fd4ddb1b7af91ab1343a14b58ba3e93ee612bff1534e666e659b
Feb 16 17:02:36.833140 master-0 kubenswrapper[15493]: I0216 17:02:36.833106 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 16 17:02:36.840880 master-0 kubenswrapper[15493]: I0216 17:02:36.839472 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:02:36.891341 master-0 kubenswrapper[15493]: I0216 17:02:36.891294 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 17:02:36.942960 master-0 kubenswrapper[15493]: I0216 17:02:36.942915 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 17:02:36.994764 master-0 kubenswrapper[15493]: I0216 17:02:36.994690 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 17:02:36.996464 master-0 kubenswrapper[15493]: I0216 17:02:36.995372 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 16 17:02:36.996464 master-0 kubenswrapper[15493]: I0216 17:02:36.996176 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 17:02:37.039850 master-0 kubenswrapper[15493]: I0216 17:02:37.039786 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-rxv66"
Feb 16 17:02:37.075015 master-0 kubenswrapper[15493]: I0216 17:02:37.074952 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 16 17:02:37.144203 master-0 kubenswrapper[15493]: I0216 17:02:37.144164 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 17:02:37.219549 master-0 kubenswrapper[15493]: I0216 17:02:37.219490 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:02:37.220558 master-0 kubenswrapper[15493]: I0216 17:02:37.220523 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:02:37.224514 master-0 kubenswrapper[15493]: I0216 17:02:37.224480 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Feb 16 17:02:37.231669 master-0 kubenswrapper[15493]: I0216 17:02:37.231601 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:02:37.238239 master-0 kubenswrapper[15493]: I0216 17:02:37.238187 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:02:37.239008 master-0 kubenswrapper[15493]: I0216 17:02:37.238957 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:02:37.240862 master-0 kubenswrapper[15493]: I0216 17:02:37.240789 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"installer-2-master-0\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " pod="openshift-etcd/installer-2-master-0"
Feb 16 17:02:37.314339 master-0 kubenswrapper[15493]: I0216 17:02:37.314276 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 17:02:37.325744 master-0 kubenswrapper[15493]: I0216 17:02:37.325691 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 16 17:02:37.352836 master-0 kubenswrapper[15493]: I0216 17:02:37.352775 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 16 17:02:37.364948 master-0 kubenswrapper[15493]: I0216 17:02:37.364876 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:02:37.447001 master-0 kubenswrapper[15493]: I0216 17:02:37.442110 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 16 17:02:37.447001 master-0 kubenswrapper[15493]: E0216 17:02:37.445516 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.391s"
Feb 16 17:02:37.447001 master-0 kubenswrapper[15493]: I0216 17:02:37.445561 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:02:37.447001 master-0 kubenswrapper[15493]: I0216 17:02:37.445580 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:02:37.447001 master-0 kubenswrapper[15493]: I0216 17:02:37.445591 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:02:37.447001 master-0 kubenswrapper[15493]: I0216 17:02:37.445601 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a"}
Feb 16 17:02:37.447001 master-0 kubenswrapper[15493]: I0216 17:02:37.445700 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:02:37.454421 master-0 kubenswrapper[15493]: I0216 17:02:37.453949 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Feb 16 17:02:37.461939 master-0 kubenswrapper[15493]: I0216 17:02:37.459132 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 16 17:02:37.461939 master-0 kubenswrapper[15493]: I0216 17:02:37.461465 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerDied","Data":"1a75bfcb3d6ee6e289b7323fbc3d24c63e7fcd67393fd211cfa30edcae278f7a"}
Feb 16 17:02:37.461939 master-0 kubenswrapper[15493]: I0216 17:02:37.461510 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"5d39ed24-4301-4cea-8a42-a08f4ba8b479","Type":"ContainerDied","Data":"2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf"}
Feb 16 17:02:37.461939 master-0 kubenswrapper[15493]: I0216 17:02:37.461533 15493 status_manager.go:379] "Container startup changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f"
Feb 16 17:02:37.461939 master-0 kubenswrapper[15493]: I0216 17:02:37.461575 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:02:37.461939 master-0 kubenswrapper[15493]: I0216 17:02:37.461592 15493 status_manager.go:379] "Container startup changed for unknown container" pod="kube-system/bootstrap-kube-controller-manager-master-0" containerID="cri-o://9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f"
Feb 16 17:02:37.461939 master-0 kubenswrapper[15493]: I0216 17:02:37.461603 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:02:37.463625 master-0 kubenswrapper[15493]: I0216 17:02:37.463270 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 17:02:37.472667 master-0 kubenswrapper[15493]: I0216 17:02:37.472610 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 16 17:02:37.479299 master-0 kubenswrapper[15493]: I0216 17:02:37.479265 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 16 17:02:37.546638 master-0 kubenswrapper[15493]: I0216 17:02:37.546553 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 17:02:37.597385 master-0 kubenswrapper[15493]: I0216 17:02:37.597355 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 16 17:02:37.621768 master-0 kubenswrapper[15493]: I0216 17:02:37.621716 15493 generic.go:334] "Generic (PLEG): container finished" podID="cc9a20f4-255a-4312-8f43-174a28c06340" containerID="82a27ee2d1353ba35ac8109559b6007bd24248ad0c6bd7df30d7343f2b1b206f" exitCode=0
Feb 16 17:02:37.622019 master-0 kubenswrapper[15493]: I0216 17:02:37.621807 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerDied","Data":"82a27ee2d1353ba35ac8109559b6007bd24248ad0c6bd7df30d7343f2b1b206f"}
Feb 16 17:02:37.622019 master-0 kubenswrapper[15493]: I0216 17:02:37.621835 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerStarted","Data":"48cc0c3f310c4bb32e4e4290e0aed8e7c169b09f9f6116fdf08ecaaa9cda88a8"}
Feb 16 17:02:37.624284 master-0 kubenswrapper[15493]: I0216 17:02:37.624196 15493 generic.go:334] "Generic (PLEG): container finished" podID="f3beb7bf-922f-425d-8a19-fd407a7153a8" containerID="3175e1b324e7bf1ff66717724dd9a556a90d98f089042583a839c43cb93e82d8" exitCode=0
Feb 16 17:02:37.624284 master-0 kubenswrapper[15493]: I0216 17:02:37.624276 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerDied","Data":"3175e1b324e7bf1ff66717724dd9a556a90d98f089042583a839c43cb93e82d8"}
Feb 16 17:02:37.624370 master-0 kubenswrapper[15493]: I0216 17:02:37.624301 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"0b18b6fb4326fd4ddb1b7af91ab1343a14b58ba3e93ee612bff1534e666e659b"}
Feb 16 17:02:37.625847 master-0 kubenswrapper[15493]: I0216 17:02:37.625816 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hhcpr_39387549-c636-4bd4-b463-f6a93810f277/approver/0.log"
Feb 16 17:02:37.626552 master-0 kubenswrapper[15493]: I0216 17:02:37.626522 15493 generic.go:334] "Generic (PLEG): container finished" podID="39387549-c636-4bd4-b463-f6a93810f277" containerID="0003ee69c56b0c73d7d4526fa1f5d5fb937628023fcef99de3436e9f297fc1a8" exitCode=1
Feb 16 17:02:37.626621 master-0 kubenswrapper[15493]: I0216 17:02:37.626547 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerDied","Data":"0003ee69c56b0c73d7d4526fa1f5d5fb937628023fcef99de3436e9f297fc1a8"}
Feb 16 17:02:37.630656 master-0 kubenswrapper[15493]: I0216 17:02:37.630609 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:02:37.637579 master-0 kubenswrapper[15493]: E0216 17:02:37.637546 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:02:37.641692 master-0 kubenswrapper[15493]: I0216 17:02:37.641517 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 17:02:37.723430 master-0 kubenswrapper[15493]: I0216 17:02:37.723312 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 17:02:37.743529 master-0 kubenswrapper[15493]: I0216 17:02:37.743483 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj"
Feb 16 17:02:37.767631 master-0 kubenswrapper[15493]: I0216 17:02:37.767564 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 17:02:37.772421 master-0 kubenswrapper[15493]: I0216 17:02:37.772359 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 17:02:37.787620 master-0 kubenswrapper[15493]: I0216
17:02:37.787565 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 16 17:02:37.857019 master-0 kubenswrapper[15493]: I0216 17:02:37.856969 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 17:02:37.903910 master-0 kubenswrapper[15493]: I0216 17:02:37.903852 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 17:02:37.973599 master-0 kubenswrapper[15493]: I0216 17:02:37.973459 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 17:02:38.051837 master-0 kubenswrapper[15493]: I0216 17:02:38.051777 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 17:02:38.164035 master-0 kubenswrapper[15493]: I0216 17:02:38.163976 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 17:02:38.175840 master-0 kubenswrapper[15493]: I0216 17:02:38.175786 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw" Feb 16 17:02:38.192467 master-0 kubenswrapper[15493]: I0216 17:02:38.192371 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2" Feb 16 17:02:38.262659 master-0 kubenswrapper[15493]: I0216 17:02:38.262492 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4" Feb 16 17:02:38.382884 master-0 kubenswrapper[15493]: I0216 17:02:38.382822 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:02:38.623470 master-0 kubenswrapper[15493]: I0216 17:02:38.623317 15493 kubelet_pods.go:1000] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openshift-marketplace/community-operators-7w4km" secret="" err="failed to sync secret cache: timed out waiting for the condition" Feb 16 17:02:38.643718 master-0 kubenswrapper[15493]: I0216 17:02:38.643649 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"e1f2a481a6b378453431197a9573c0d770327c066c372ed58b165d78ec1276e9"} Feb 16 17:02:38.647150 master-0 kubenswrapper[15493]: I0216 17:02:38.647103 15493 generic.go:334] "Generic (PLEG): container finished" podID="0393fe12-2533-4c9c-a8e4-a58003c88f36" containerID="8033baffcd25996abcf0d91243da61c239ce82e7b4d4649ed5e72b54031d1787" exitCode=0 Feb 16 17:02:38.647228 master-0 kubenswrapper[15493]: I0216 17:02:38.647176 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerDied","Data":"8033baffcd25996abcf0d91243da61c239ce82e7b4d4649ed5e72b54031d1787"} Feb 16 17:02:38.649384 master-0 kubenswrapper[15493]: I0216 17:02:38.649315 15493 generic.go:334] "Generic (PLEG): container finished" podID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerID="502718b039f767a383fb50034240c25f816d37b03988ef5ce645debcc52fb39d" exitCode=0 Feb 16 17:02:38.649477 master-0 kubenswrapper[15493]: I0216 17:02:38.649426 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerDied","Data":"502718b039f767a383fb50034240c25f816d37b03988ef5ce645debcc52fb39d"} Feb 16 17:02:38.666349 master-0 kubenswrapper[15493]: E0216 17:02:38.666285 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:38.679055 master-0 kubenswrapper[15493]: I0216 17:02:38.678982 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:02:38.723433 master-0 kubenswrapper[15493]: I0216 17:02:38.722757 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6858s" Feb 16 17:02:38.764879 master-0 kubenswrapper[15493]: I0216 17:02:38.764808 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 16 17:02:38.834033 master-0 kubenswrapper[15493]: I0216 17:02:38.833967 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 17:02:38.904668 master-0 kubenswrapper[15493]: I0216 17:02:38.904561 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn" Feb 16 17:02:38.934164 master-0 kubenswrapper[15493]: I0216 17:02:38.934113 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 17:02:39.005035 master-0 kubenswrapper[15493]: I0216 17:02:39.004992 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_b1b4fccc-6bf6-47ac-8ae1-32cad23734da/installer/0.log" Feb 16 17:02:39.005192 master-0 kubenswrapper[15493]: I0216 17:02:39.005053 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:39.061739 master-0 kubenswrapper[15493]: I0216 17:02:39.061685 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:39.063687 master-0 kubenswrapper[15493]: I0216 17:02:39.063650 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:39.065372 master-0 kubenswrapper[15493]: I0216 17:02:39.065008 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_5d39ed24-4301-4cea-8a42-a08f4ba8b479/installer/0.log" Feb 16 17:02:39.065372 master-0 kubenswrapper[15493]: I0216 17:02:39.065072 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:02:39.071790 master-0 kubenswrapper[15493]: I0216 17:02:39.071603 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 16 17:02:39.104564 master-0 kubenswrapper[15493]: I0216 17:02:39.104512 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-var-lock\") pod \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " Feb 16 17:02:39.104765 master-0 kubenswrapper[15493]: I0216 17:02:39.104596 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-var-lock" (OuterVolumeSpecName: "var-lock") pod "b1b4fccc-6bf6-47ac-8ae1-32cad23734da" (UID: "b1b4fccc-6bf6-47ac-8ae1-32cad23734da"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:02:39.104765 master-0 kubenswrapper[15493]: I0216 17:02:39.104752 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") pod \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " Feb 16 17:02:39.104873 master-0 kubenswrapper[15493]: I0216 17:02:39.104801 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kubelet-dir\") pod \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\" (UID: \"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\") " Feb 16 17:02:39.105035 master-0 kubenswrapper[15493]: I0216 17:02:39.104995 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1b4fccc-6bf6-47ac-8ae1-32cad23734da" (UID: "b1b4fccc-6bf6-47ac-8ae1-32cad23734da"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:02:39.107042 master-0 kubenswrapper[15493]: I0216 17:02:39.107007 15493 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:39.107042 master-0 kubenswrapper[15493]: I0216 17:02:39.107038 15493 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:39.107145 master-0 kubenswrapper[15493]: I0216 17:02:39.107110 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1b4fccc-6bf6-47ac-8ae1-32cad23734da" (UID: "b1b4fccc-6bf6-47ac-8ae1-32cad23734da"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:39.208196 master-0 kubenswrapper[15493]: I0216 17:02:39.208070 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") pod \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " Feb 16 17:02:39.208196 master-0 kubenswrapper[15493]: I0216 17:02:39.208126 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-var-lock\") pod \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " Feb 16 17:02:39.208196 master-0 kubenswrapper[15493]: I0216 17:02:39.208159 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kubelet-dir\") pod \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\" (UID: \"5d39ed24-4301-4cea-8a42-a08f4ba8b479\") " Feb 16 17:02:39.209023 master-0 kubenswrapper[15493]: I0216 17:02:39.208611 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5d39ed24-4301-4cea-8a42-a08f4ba8b479" (UID: "5d39ed24-4301-4cea-8a42-a08f4ba8b479"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:02:39.209023 master-0 kubenswrapper[15493]: I0216 17:02:39.208612 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-var-lock" (OuterVolumeSpecName: "var-lock") pod "5d39ed24-4301-4cea-8a42-a08f4ba8b479" (UID: "5d39ed24-4301-4cea-8a42-a08f4ba8b479"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:02:39.209999 master-0 kubenswrapper[15493]: I0216 17:02:39.209907 15493 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:39.209999 master-0 kubenswrapper[15493]: I0216 17:02:39.209953 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1b4fccc-6bf6-47ac-8ae1-32cad23734da-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:39.209999 master-0 kubenswrapper[15493]: I0216 17:02:39.209967 15493 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5d39ed24-4301-4cea-8a42-a08f4ba8b479-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:39.211143 master-0 kubenswrapper[15493]: I0216 17:02:39.211094 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5d39ed24-4301-4cea-8a42-a08f4ba8b479" (UID: "5d39ed24-4301-4cea-8a42-a08f4ba8b479"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:02:39.220844 master-0 kubenswrapper[15493]: I0216 17:02:39.220797 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:02:39.236798 master-0 kubenswrapper[15493]: I0216 17:02:39.236652 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 17:02:39.268084 master-0 kubenswrapper[15493]: I0216 17:02:39.268003 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:02:39.313445 master-0 kubenswrapper[15493]: I0216 17:02:39.313070 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d39ed24-4301-4cea-8a42-a08f4ba8b479-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:02:39.438845 master-0 kubenswrapper[15493]: I0216 17:02:39.438048 15493 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:02:39.450302 master-0 kubenswrapper[15493]: I0216 17:02:39.449608 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:02:39.450302 master-0 kubenswrapper[15493]: I0216 17:02:39.449651 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:02:39.450302 master-0 kubenswrapper[15493]: I0216 17:02:39.449661 15493 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:02:39.450302 master-0 kubenswrapper[15493]: I0216 17:02:39.449946 15493 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:02:39.466902 master-0 kubenswrapper[15493]: I0216 17:02:39.465272 15493 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 16 17:02:39.466902 master-0 kubenswrapper[15493]: I0216 17:02:39.465375 15493 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 17:02:39.585414 master-0 kubenswrapper[15493]: I0216 17:02:39.585362 15493 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m" Feb 16 17:02:39.656177 master-0 kubenswrapper[15493]: I0216 17:02:39.656133 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerStarted","Data":"c831785a2bf908d1b34bf073428566bcb11bb2eb5f427f3a32e57d474a29ad1b"} Feb 16 17:02:39.658412 master-0 kubenswrapper[15493]: I0216 17:02:39.658387 15493 generic.go:334] "Generic (PLEG): container finished" podID="f3beb7bf-922f-425d-8a19-fd407a7153a8" containerID="e1f2a481a6b378453431197a9573c0d770327c066c372ed58b165d78ec1276e9" exitCode=0 Feb 16 17:02:39.658531 master-0 kubenswrapper[15493]: I0216 17:02:39.658462 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerDied","Data":"e1f2a481a6b378453431197a9573c0d770327c066c372ed58b165d78ec1276e9"} Feb 16 17:02:39.660599 master-0 kubenswrapper[15493]: I0216 17:02:39.660571 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_5d39ed24-4301-4cea-8a42-a08f4ba8b479/installer/0.log" Feb 16 17:02:39.660704 master-0 kubenswrapper[15493]: I0216 17:02:39.660683 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:02:39.660784 master-0 kubenswrapper[15493]: I0216 17:02:39.660754 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"5d39ed24-4301-4cea-8a42-a08f4ba8b479","Type":"ContainerDied","Data":"4b0c06fa22c4b9fdff535e3051201dc0bd36447aab43eba5f5549b527b9cff7e"} Feb 16 17:02:39.660834 master-0 kubenswrapper[15493]: I0216 17:02:39.660785 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b0c06fa22c4b9fdff535e3051201dc0bd36447aab43eba5f5549b527b9cff7e" Feb 16 17:02:39.662360 master-0 kubenswrapper[15493]: I0216 17:02:39.662139 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_b1b4fccc-6bf6-47ac-8ae1-32cad23734da/installer/0.log" Feb 16 17:02:39.662466 master-0 kubenswrapper[15493]: I0216 17:02:39.662386 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"b1b4fccc-6bf6-47ac-8ae1-32cad23734da","Type":"ContainerDied","Data":"aa37dd5bc712a6e66397e0efcad2c702b51d3841d761278b212389b13ad668e0"} Feb 16 17:02:39.662466 master-0 kubenswrapper[15493]: I0216 17:02:39.662412 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa37dd5bc712a6e66397e0efcad2c702b51d3841d761278b212389b13ad668e0" Feb 16 17:02:39.662553 master-0 kubenswrapper[15493]: I0216 17:02:39.662490 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 17:02:40.296448 master-0 kubenswrapper[15493]: I0216 17:02:40.296278 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:02:40.335302 master-0 kubenswrapper[15493]: I0216 17:02:40.335247 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 17:02:40.436253 master-0 kubenswrapper[15493]: I0216 17:02:40.436193 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7" Feb 16 17:02:40.625091 master-0 kubenswrapper[15493]: I0216 17:02:40.624932 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:02:40.653677 master-0 kubenswrapper[15493]: I0216 17:02:40.653603 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:02:40.672255 master-0 kubenswrapper[15493]: I0216 17:02:40.672204 15493 generic.go:334] "Generic (PLEG): container finished" podID="cc9a20f4-255a-4312-8f43-174a28c06340" containerID="c831785a2bf908d1b34bf073428566bcb11bb2eb5f427f3a32e57d474a29ad1b" exitCode=0 Feb 16 17:02:40.672457 master-0 kubenswrapper[15493]: I0216 17:02:40.672245 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerDied","Data":"c831785a2bf908d1b34bf073428566bcb11bb2eb5f427f3a32e57d474a29ad1b"} Feb 16 17:02:40.953555 master-0 kubenswrapper[15493]: I0216 17:02:40.953438 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:41.076625 master-0 kubenswrapper[15493]: I0216 17:02:41.076575 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:41.079891 master-0 kubenswrapper[15493]: I0216 17:02:41.079849 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:41.206469 master-0 kubenswrapper[15493]: I0216 17:02:41.206370 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:41.221523 master-0 kubenswrapper[15493]: E0216 17:02:41.221456 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:02:42.099409 master-0 kubenswrapper[15493]: I0216 17:02:42.099306 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:42.100212 master-0 kubenswrapper[15493]: I0216 17:02:42.100017 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: 
\"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:42.100891 master-0 kubenswrapper[15493]: I0216 17:02:42.100850 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:42.101018 master-0 kubenswrapper[15493]: I0216 17:02:42.100959 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:42.202860 master-0 kubenswrapper[15493]: I0216 17:02:42.201913 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:42.202860 master-0 kubenswrapper[15493]: I0216 17:02:42.202022 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:42.202860 master-0 kubenswrapper[15493]: I0216 17:02:42.202785 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:42.203201 master-0 kubenswrapper[15493]: I0216 17:02:42.203138 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:42.511817 master-0 kubenswrapper[15493]: I0216 17:02:42.511743 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:02:42.512030 master-0 kubenswrapper[15493]: I0216 17:02:42.511867 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:02:42.512030 master-0 kubenswrapper[15493]: I0216 17:02:42.511987 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:42.513324 master-0 kubenswrapper[15493]: I0216 17:02:42.513046 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:02:42.513324 master-0 kubenswrapper[15493]: I0216 17:02:42.513155 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:42.513417 master-0 kubenswrapper[15493]: I0216 17:02:42.513336 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:02:42.614475 master-0 kubenswrapper[15493]: I0216 17:02:42.614408 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:42.614966 master-0 kubenswrapper[15493]: I0216 17:02:42.614933 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:42.615724 master-0 kubenswrapper[15493]: I0216 17:02:42.615692 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:42.615793 master-0 kubenswrapper[15493]: I0216 17:02:42.615717 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r28x\" (UniqueName: 
\"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:42.924704 master-0 kubenswrapper[15493]: I0216 17:02:42.924263 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:42.926194 master-0 kubenswrapper[15493]: I0216 17:02:42.926145 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:43.025964 master-0 kubenswrapper[15493]: I0216 17:02:43.025877 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:43.027512 master-0 kubenswrapper[15493]: I0216 17:02:43.026000 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:43.027512 master-0 kubenswrapper[15493]: I0216 17:02:43.026224 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:43.027615 master-0 kubenswrapper[15493]: I0216 17:02:43.027585 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:43.028575 master-0 kubenswrapper[15493]: I0216 17:02:43.027696 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:43.028575 master-0 kubenswrapper[15493]: I0216 17:02:43.027875 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:43.028575 master-0 kubenswrapper[15493]: I0216 17:02:43.028007 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:02:43.028575 master-0 kubenswrapper[15493]: I0216 17:02:43.028421 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:43.028724 master-0 kubenswrapper[15493]: I0216 17:02:43.028684 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:43.028991 master-0 kubenswrapper[15493]: I0216 17:02:43.028839 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:43.029559 master-0 kubenswrapper[15493]: I0216 17:02:43.029514 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:02:43.029730 master-0 kubenswrapper[15493]: I0216 17:02:43.029685 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:43.063216 master-0 kubenswrapper[15493]: I0216 17:02:43.063142 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:43.132332 master-0 kubenswrapper[15493]: I0216 17:02:43.130744 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: 
\"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:43.132332 master-0 kubenswrapper[15493]: I0216 17:02:43.131701 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:43.132332 master-0 kubenswrapper[15493]: I0216 17:02:43.132204 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:43.133109 master-0 kubenswrapper[15493]: I0216 17:02:43.132995 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:43.328743 master-0 kubenswrapper[15493]: I0216 17:02:43.328462 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:43.337266 master-0 kubenswrapper[15493]: I0216 17:02:43.337215 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:43.338970 master-0 kubenswrapper[15493]: I0216 17:02:43.337645 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:43.338970 master-0 kubenswrapper[15493]: I0216 17:02:43.338011 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:43.338970 master-0 kubenswrapper[15493]: I0216 17:02:43.338305 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:43.440678 master-0 
kubenswrapper[15493]: I0216 17:02:43.440596 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:43.440678 master-0 kubenswrapper[15493]: I0216 17:02:43.440665 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:43.440973 master-0 kubenswrapper[15493]: I0216 17:02:43.440720 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:43.440973 master-0 kubenswrapper[15493]: I0216 17:02:43.440780 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:02:43.441073 master-0 kubenswrapper[15493]: I0216 17:02:43.440991 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:43.442517 master-0 kubenswrapper[15493]: I0216 17:02:43.442421 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:43.442608 master-0 kubenswrapper[15493]: I0216 17:02:43.442581 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:43.442659 master-0 kubenswrapper[15493]: I0216 17:02:43.442612 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:43.442706 
master-0 kubenswrapper[15493]: I0216 17:02:43.442650 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:43.442779 master-0 kubenswrapper[15493]: I0216 17:02:43.442738 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:02:43.442843 master-0 kubenswrapper[15493]: I0216 17:02:43.442784 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:43.443576 master-0 kubenswrapper[15493]: I0216 17:02:43.443541 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:43.648569 master-0 kubenswrapper[15493]: I0216 17:02:43.648447 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:43.650009 master-0 kubenswrapper[15493]: I0216 17:02:43.649972 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:43.671191 master-0 kubenswrapper[15493]: I0216 17:02:43.671149 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:02:43.673936 master-0 kubenswrapper[15493]: I0216 17:02:43.673839 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:02:43.723773 master-0 kubenswrapper[15493]: I0216 17:02:43.723721 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:43.727215 master-0 kubenswrapper[15493]: I0216 17:02:43.727179 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:43.752734 master-0 kubenswrapper[15493]: I0216 17:02:43.752676 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:43.752939 master-0 kubenswrapper[15493]: I0216 17:02:43.752745 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:43.753542 master-0 kubenswrapper[15493]: I0216 17:02:43.753509 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:43.753992 master-0 kubenswrapper[15493]: I0216 17:02:43.753957 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:43.959449 master-0 kubenswrapper[15493]: I0216 17:02:43.959345 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:43.959449 master-0 kubenswrapper[15493]: I0216 17:02:43.959394 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:43.959725 master-0 kubenswrapper[15493]: I0216 17:02:43.959687 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:43.959787 master-0 kubenswrapper[15493]: I0216 17:02:43.959765 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:43.960348 master-0 
kubenswrapper[15493]: I0216 17:02:43.960315 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:43.960532 master-0 kubenswrapper[15493]: I0216 17:02:43.960506 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:43.960593 master-0 kubenswrapper[15493]: I0216 17:02:43.960507 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:43.960705 master-0 kubenswrapper[15493]: I0216 17:02:43.960676 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:44.165059 master-0 kubenswrapper[15493]: I0216 17:02:44.164992 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:44.166332 master-0 kubenswrapper[15493]: I0216 17:02:44.166302 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:44.166403 master-0 kubenswrapper[15493]: I0216 17:02:44.166364 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:44.166986 master-0 kubenswrapper[15493]: I0216 17:02:44.166961 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " 
pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:44.472137 master-0 kubenswrapper[15493]: I0216 17:02:44.472084 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:44.472908 master-0 kubenswrapper[15493]: I0216 17:02:44.472873 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:44.472988 master-0 kubenswrapper[15493]: I0216 17:02:44.472915 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:44.473041 master-0 kubenswrapper[15493]: I0216 17:02:44.473031 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:44.473420 master-0 kubenswrapper[15493]: I0216 17:02:44.473329 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:44.473647 master-0 kubenswrapper[15493]: I0216 17:02:44.473618 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:44.473719 master-0 kubenswrapper[15493]: I0216 17:02:44.473670 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:44.474317 master-0 kubenswrapper[15493]: I0216 17:02:44.474295 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " 
pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:44.703864 master-0 kubenswrapper[15493]: I0216 17:02:44.703820 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:02:44.780370 master-0 kubenswrapper[15493]: I0216 17:02:44.780321 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:44.781812 master-0 kubenswrapper[15493]: I0216 17:02:44.781780 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:45.065365 master-0 kubenswrapper[15493]: I0216 17:02:45.065151 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:45.080861 master-0 kubenswrapper[15493]: I0216 17:02:45.080735 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:45.294228 master-0 kubenswrapper[15493]: I0216 17:02:45.294145 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:45.295454 master-0 kubenswrapper[15493]: I0216 17:02:45.294269 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:45.295454 master-0 kubenswrapper[15493]: I0216 17:02:45.295263 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:45.295454 master-0 kubenswrapper[15493]: I0216 17:02:45.295291 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:45.295658 master-0 kubenswrapper[15493]: I0216 17:02:45.295535 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:45.295951 master-0 kubenswrapper[15493]: I0216 17:02:45.295837 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:45.296042 master-0 kubenswrapper[15493]: I0216 17:02:45.295971 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:45.296042 master-0 kubenswrapper[15493]: I0216 17:02:45.296006 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.296133 master-0 kubenswrapper[15493]: I0216 17:02:45.296053 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.296284 master-0 kubenswrapper[15493]: I0216 17:02:45.296260 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.296344 master-0 kubenswrapper[15493]: I0216 17:02:45.296309 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:45.296344 master-0 kubenswrapper[15493]: I0216 17:02:45.296336 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:45.296425 master-0 kubenswrapper[15493]: I0216 17:02:45.296358 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:45.296425 master-0 kubenswrapper[15493]: I0216 17:02:45.296381 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:45.296425 master-0 kubenswrapper[15493]: I0216 17:02:45.296404 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:45.296425 master-0 kubenswrapper[15493]: I0216 17:02:45.296407 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:02:45.296425 master-0 kubenswrapper[15493]: I0216 17:02:45.296426 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.296626 master-0 kubenswrapper[15493]: I0216 17:02:45.296456 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.296626 master-0 kubenswrapper[15493]: I0216 17:02:45.296496 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:45.296626 master-0 kubenswrapper[15493]: I0216 17:02:45.296536 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:45.296626 master-0 kubenswrapper[15493]: I0216 17:02:45.296562 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod 
\"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.296626 master-0 kubenswrapper[15493]: I0216 17:02:45.296572 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:45.296626 master-0 kubenswrapper[15493]: I0216 17:02:45.296607 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:45.297127 master-0 kubenswrapper[15493]: I0216 17:02:45.296640 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:45.297127 master-0 kubenswrapper[15493]: I0216 17:02:45.296685 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:45.297127 master-0 kubenswrapper[15493]: I0216 17:02:45.296863 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:45.297127 master-0 kubenswrapper[15493]: I0216 17:02:45.296946 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:45.297127 master-0 kubenswrapper[15493]: I0216 17:02:45.296986 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.297127 master-0 kubenswrapper[15493]: I0216 17:02:45.296969 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 
17:02:45.297127 master-0 kubenswrapper[15493]: I0216 17:02:45.296999 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:45.297127 master-0 kubenswrapper[15493]: I0216 17:02:45.297025 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:45.297531 master-0 kubenswrapper[15493]: I0216 17:02:45.297262 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:02:45.297531 master-0 kubenswrapper[15493]: I0216 17:02:45.297326 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:45.297531 master-0 kubenswrapper[15493]: I0216 17:02:45.297334 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:45.297531 master-0 kubenswrapper[15493]: I0216 17:02:45.297100 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:45.297531 master-0 kubenswrapper[15493]: I0216 17:02:45.297384 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:45.297531 master-0 kubenswrapper[15493]: I0216 17:02:45.297410 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:45.297531 master-0 kubenswrapper[15493]: I0216 17:02:45.297430 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:45.297531 master-0 kubenswrapper[15493]: I0216 17:02:45.297478 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:45.297531 master-0 kubenswrapper[15493]: I0216 17:02:45.297511 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297528 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297615 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297621 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297640 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297665 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297674 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297683 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297698 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297850 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:45.297908 master-0 kubenswrapper[15493]: I0216 17:02:45.297821 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.297946 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.297961 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.297971 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.297997 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: 
\"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298000 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298037 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298068 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298108 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298140 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298144 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298190 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298212 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " 
pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298244 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298282 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:45.298310 master-0 kubenswrapper[15493]: I0216 17:02:45.298315 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:45.298865 master-0 kubenswrapper[15493]: I0216 17:02:45.298353 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:45.298865 master-0 kubenswrapper[15493]: I0216 17:02:45.298407 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:02:45.298865 master-0 kubenswrapper[15493]: I0216 17:02:45.298415 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:45.298865 master-0 kubenswrapper[15493]: I0216 17:02:45.298465 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:45.298865 master-0 kubenswrapper[15493]: I0216 17:02:45.298635 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: 
\"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:45.298865 master-0 kubenswrapper[15493]: I0216 17:02:45.298866 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:02:45.299109 master-0 kubenswrapper[15493]: I0216 17:02:45.298870 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:45.299109 master-0 kubenswrapper[15493]: I0216 17:02:45.298861 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:45.299109 master-0 kubenswrapper[15493]: I0216 17:02:45.298913 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:45.299109 master-0 kubenswrapper[15493]: I0216 17:02:45.298964 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:45.299109 master-0 kubenswrapper[15493]: I0216 17:02:45.298993 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:45.299109 master-0 kubenswrapper[15493]: I0216 17:02:45.299018 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:45.299109 master-0 kubenswrapper[15493]: I0216 17:02:45.299053 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 
17:02:45.299109 master-0 kubenswrapper[15493]: I0216 17:02:45.299067 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:45.299750 master-0 kubenswrapper[15493]: I0216 17:02:45.299709 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:45.299835 master-0 kubenswrapper[15493]: I0216 17:02:45.299768 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:45.299956 master-0 kubenswrapper[15493]: I0216 17:02:45.299914 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:45.300173 master-0 kubenswrapper[15493]: I0216 17:02:45.299972 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:45.300173 master-0 kubenswrapper[15493]: I0216 17:02:45.299992 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:45.300173 master-0 kubenswrapper[15493]: I0216 17:02:45.299996 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:02:45.300173 master-0 kubenswrapper[15493]: I0216 17:02:45.299994 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:02:45.300173 master-0 kubenswrapper[15493]: I0216 17:02:45.300000 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:45.300776 master-0 kubenswrapper[15493]: I0216 17:02:45.300754 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:02:45.300862 master-0 kubenswrapper[15493]: I0216 17:02:45.300792 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:45.300862 master-0 kubenswrapper[15493]: I0216 17:02:45.300847 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:02:45.300975 master-0 kubenswrapper[15493]: I0216 17:02:45.300943 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:45.301022 master-0 kubenswrapper[15493]: I0216 17:02:45.300989 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:45.301074 master-0 kubenswrapper[15493]: I0216 17:02:45.301034 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:45.301074 master-0 kubenswrapper[15493]: I0216 17:02:45.301063 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:45.301149 master-0 kubenswrapper[15493]: I0216 17:02:45.301091 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:45.301149 master-0 kubenswrapper[15493]: I0216 17:02:45.301116 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:45.301149 master-0 kubenswrapper[15493]: I0216 17:02:45.301132 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:02:45.301149 master-0 kubenswrapper[15493]: I0216 17:02:45.301140 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:45.301355 master-0 kubenswrapper[15493]: I0216 17:02:45.301178 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:45.301355 master-0 kubenswrapper[15493]: I0216 17:02:45.301339 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:02:45.301452 master-0 kubenswrapper[15493]: I0216 17:02:45.301437 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:45.301803 master-0 kubenswrapper[15493]: I0216 17:02:45.301467 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:02:45.301803 master-0 kubenswrapper[15493]: I0216 17:02:45.301509 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:45.301803 master-0 kubenswrapper[15493]: I0216 17:02:45.301619 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:45.301803 master-0 kubenswrapper[15493]: I0216 17:02:45.301672 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:45.301803 master-0 kubenswrapper[15493]: I0216 17:02:45.301707 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:45.302025 master-0 kubenswrapper[15493]: I0216 17:02:45.301870 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:02:45.302025 master-0 kubenswrapper[15493]: I0216 17:02:45.301932 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:45.302025 master-0 kubenswrapper[15493]: I0216 17:02:45.301972 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:45.302025 master-0 kubenswrapper[15493]: I0216 17:02:45.302016 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:45.302181 master-0 kubenswrapper[15493]: I0216 17:02:45.302012 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:02:45.302181 master-0 kubenswrapper[15493]: I0216 17:02:45.302050 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.302181 master-0 kubenswrapper[15493]: I0216 17:02:45.302094 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:45.302181 master-0 kubenswrapper[15493]: I0216 17:02:45.302129 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:45.302181 master-0 kubenswrapper[15493]: I0216 17:02:45.302164 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:45.302181 master-0 kubenswrapper[15493]: I0216 17:02:45.302173 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.302437 master-0 kubenswrapper[15493]: I0216 17:02:45.302199 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.302437 master-0 kubenswrapper[15493]: I0216 17:02:45.302206 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:45.302437 master-0 kubenswrapper[15493]: I0216 17:02:45.302234 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod 
\"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:45.302437 master-0 kubenswrapper[15493]: I0216 17:02:45.302340 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:45.302437 master-0 kubenswrapper[15493]: I0216 17:02:45.302365 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:45.302437 master-0 kubenswrapper[15493]: I0216 17:02:45.302372 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:45.302437 master-0 kubenswrapper[15493]: I0216 17:02:45.302258 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:45.302437 master-0 kubenswrapper[15493]: I0216 17:02:45.302212 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:02:45.302437 master-0 kubenswrapper[15493]: I0216 17:02:45.302438 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302450 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302516 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302545 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302565 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302586 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302592 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302609 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302632 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302638 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302677 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302660 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302764 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:45.302783 master-0 kubenswrapper[15493]: I0216 17:02:45.302799 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.302828 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.302839 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.302864 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.302903 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.302941 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.302950 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.303039 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.303072 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.303059 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.303103 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.303209 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:45.303354 master-0 kubenswrapper[15493]: I0216 17:02:45.303231 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303615 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303622 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: 
\"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303642 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303708 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303759 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303789 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303819 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303854 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303851 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303043 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:02:45.304029 
master-0 kubenswrapper[15493]: I0216 17:02:45.303891 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303912 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303952 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303972 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.303995 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.304013 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:45.304029 master-0 kubenswrapper[15493]: I0216 17:02:45.304022 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304115 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: 
I0216 17:02:45.304122 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304207 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304311 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304335 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304397 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"multus-admission-controller-7c64d55f8-4jz2t\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304421 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304564 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304571 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304030 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304618 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304645 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304664 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304681 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304701 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304715 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304719 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304799 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304813 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:02:45.304809 master-0 kubenswrapper[15493]: I0216 17:02:45.304827 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.304889 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.304968 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305033 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305064 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305070 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305093 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod 
\"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305132 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305168 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305221 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305248 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305274 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305303 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305322 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305352 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305404 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305432 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305457 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305487 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305517 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305521 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305543 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305582 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305336 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305617 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305666 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305695 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:45.305700 master-0 kubenswrapper[15493]: I0216 17:02:45.305728 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:45.306595 master-0 kubenswrapper[15493]: I0216 17:02:45.305750 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:02:45.306595 master-0 kubenswrapper[15493]: I0216 17:02:45.305760 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.306595 master-0 kubenswrapper[15493]: I0216 17:02:45.305789 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " 
pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:02:45.306595 master-0 kubenswrapper[15493]: I0216 17:02:45.306035 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:45.306595 master-0 kubenswrapper[15493]: I0216 17:02:45.306078 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.306595 master-0 kubenswrapper[15493]: I0216 17:02:45.306192 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:02:45.306595 master-0 kubenswrapper[15493]: I0216 17:02:45.306225 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:45.306595 master-0 kubenswrapper[15493]: I0216 17:02:45.306356 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:02:45.306595 master-0 kubenswrapper[15493]: I0216 17:02:45.306587 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:45.306899 master-0 kubenswrapper[15493]: I0216 17:02:45.306741 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:02:45.306899 master-0 kubenswrapper[15493]: I0216 17:02:45.306752 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:02:45.307035 master-0 kubenswrapper[15493]: I0216 17:02:45.307010 15493 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:02:45.307193 master-0 kubenswrapper[15493]: I0216 17:02:45.307159 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:02:45.307236 master-0 kubenswrapper[15493]: I0216 17:02:45.307210 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:02:45.307236 master-0 kubenswrapper[15493]: I0216 17:02:45.307207 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:02:45.307499 master-0 kubenswrapper[15493]: I0216 17:02:45.307416 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:02:45.307499 master-0 kubenswrapper[15493]: I0216 17:02:45.307440 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:02:45.307499 master-0 kubenswrapper[15493]: I0216 17:02:45.307495 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:02:45.307998 master-0 kubenswrapper[15493]: I0216 17:02:45.307561 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" 
Feb 16 17:02:45.307998 master-0 kubenswrapper[15493]: I0216 17:02:45.307737 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:45.307998 master-0 kubenswrapper[15493]: I0216 17:02:45.307787 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:02:45.307998 master-0 kubenswrapper[15493]: I0216 17:02:45.307819 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:45.308174 master-0 kubenswrapper[15493]: I0216 17:02:45.308059 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:02:45.308174 master-0 kubenswrapper[15493]: I0216 17:02:45.308117 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:02:45.308236 master-0 kubenswrapper[15493]: I0216 17:02:45.308200 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:02:45.308267 master-0 kubenswrapper[15493]: I0216 17:02:45.308207 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:02:45.308956 master-0 kubenswrapper[15493]: I0216 17:02:45.308871 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:02:45.309015 master-0 kubenswrapper[15493]: I0216 17:02:45.309002 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:02:45.462996 master-0 kubenswrapper[15493]: I0216 17:02:45.462879 15493 scope.go:117] "RemoveContainer" containerID="e310e36fd740b75515307293e697ecd768c9c8241ff939db071d778913f35a7a"
Feb 16 17:02:45.463153 master-0 kubenswrapper[15493]: I0216 17:02:45.462999 15493 scope.go:117] "RemoveContainer" containerID="01bf42c6c3bf4f293fd2294a37aff703b4c469002ae6a87f7c50eefa7c6ae11b"
Feb 16 17:02:45.465759 master-0 kubenswrapper[15493]: I0216 17:02:45.465726 15493 scope.go:117] "RemoveContainer" containerID="f31ff62ede3b23583193a8479095d460885c4665f91f714a80c48601aa1a71ad"
Feb 16 17:02:45.466197 master-0 kubenswrapper[15493]: I0216 17:02:45.466145 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:02:45.467035 master-0 kubenswrapper[15493]: I0216 17:02:45.466875 15493 scope.go:117] "RemoveContainer" containerID="435ef5863cc155441a05593945ab2775001a00c9f99d0e797a375813404c36ac"
Feb 16 17:02:45.467707 master-0 kubenswrapper[15493]: I0216 17:02:45.467667 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:02:45.468376 master-0 kubenswrapper[15493]: I0216 17:02:45.468320 15493 scope.go:117] "RemoveContainer" containerID="8399cb1f8a954f603085247154bb48084a1a6283fe2b99aa8facab4cb78f381d"
Feb 16 17:02:45.469374 master-0 kubenswrapper[15493]: I0216 17:02:45.469346 15493 scope.go:117] "RemoveContainer" containerID="58c88a445d8c10824c3855b7412ae17cbbff466b8394e38c4224ab694839c37d"
Feb 16 17:02:45.469771 master-0 kubenswrapper[15493]: I0216 17:02:45.469749 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:02:45.471155 master-0 kubenswrapper[15493]: I0216 17:02:45.471118 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:02:45.471511 master-0 kubenswrapper[15493]: I0216 17:02:45.471481 15493 scope.go:117] "RemoveContainer" containerID="e8c4ffcf7c4ece8cb912757e2c966b100c9bb74e9a2ec208a540c26e8e9187ce"
Feb 16 17:02:45.472271 master-0 kubenswrapper[15493]: I0216 17:02:45.472239 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:45.472319 master-0 kubenswrapper[15493]: I0216 17:02:45.472295 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:45.472352 master-0 kubenswrapper[15493]: I0216 17:02:45.472338 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:45.472443 master-0 kubenswrapper[15493]: I0216 17:02:45.472418 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:02:45.478935 master-0 kubenswrapper[15493]: I0216 17:02:45.473723 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:02:45.478935 master-0 kubenswrapper[15493]: I0216 17:02:45.473768 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:02:45.478935 master-0 kubenswrapper[15493]: I0216 17:02:45.473795 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:02:45.478935 master-0 kubenswrapper[15493]: I0216 17:02:45.474081 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:02:45.478935 master-0 kubenswrapper[15493]: I0216 17:02:45.474221 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:45.478935 master-0 kubenswrapper[15493]: I0216 17:02:45.475066 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:02:45.478935 master-0 kubenswrapper[15493]: I0216 17:02:45.477663 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:02:45.478935 master-0 kubenswrapper[15493]: I0216 17:02:45.477773 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:02:45.478935 master-0 kubenswrapper[15493]: I0216 17:02:45.478449 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:02:45.479323 master-0 kubenswrapper[15493]: I0216 17:02:45.478961 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:02:45.483754 master-0 kubenswrapper[15493]: I0216 17:02:45.483682 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:02:45.486809 master-0 kubenswrapper[15493]: I0216 17:02:45.483772 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:02:45.486809 master-0 kubenswrapper[15493]: I0216 17:02:45.485507 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:02:45.487662 master-0 kubenswrapper[15493]: I0216 17:02:45.487616 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:02:45.487813 master-0 kubenswrapper[15493]: I0216 17:02:45.487780 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:02:45.488100 master-0 kubenswrapper[15493]: I0216 17:02:45.488078 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:02:45.495803 master-0 kubenswrapper[15493]: I0216 17:02:45.495754 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:02:45.499090 master-0 kubenswrapper[15493]: I0216 17:02:45.498590 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:02:45.499090 master-0 kubenswrapper[15493]: I0216 17:02:45.498939 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:02:45.502374 master-0 kubenswrapper[15493]: I0216 17:02:45.500201 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:02:45.502374 master-0 kubenswrapper[15493]: I0216 17:02:45.502260 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:02:45.503800 master-0 kubenswrapper[15493]: I0216 17:02:45.503750 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:02:45.508979 master-0 kubenswrapper[15493]: I0216 17:02:45.508254 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:02:45.513232 master-0 kubenswrapper[15493]: I0216 17:02:45.513194 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:45.515605 master-0 kubenswrapper[15493]: I0216 17:02:45.515468 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:45.545062 master-0 kubenswrapper[15493]: I0216 17:02:45.523037 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:02:45.709513 master-0 kubenswrapper[15493]: I0216 17:02:45.709376 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"dc3b4571309a88f03db49c8f3410740df7ca0758d3a470ee04a34d6d5a032bdd"}
Feb 16 17:02:45.711975 master-0 kubenswrapper[15493]: I0216 17:02:45.711906 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"a9210cafcee697a056420bdead07853310517c4cc797e6c77df93d1b6e4de7de"}
Feb 16 17:02:45.714578 master-0 kubenswrapper[15493]: I0216 17:02:45.714478 15493 scope.go:117] "RemoveContainer" containerID="0003ee69c56b0c73d7d4526fa1f5d5fb937628023fcef99de3436e9f297fc1a8"
Feb 16 17:02:45.714883 master-0 kubenswrapper[15493]: I0216 17:02:45.714845 15493 scope.go:117] "RemoveContainer" containerID="1a75bfcb3d6ee6e289b7323fbc3d24c63e7fcd67393fd211cfa30edcae278f7a"
Feb 16 17:02:46.719281 master-0 kubenswrapper[15493]: I0216 17:02:46.719243 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerStarted","Data":"db8e6dde9089415ec50ea395cc6048bd2122d36d369cf40adfb691513d4759ff"}
Feb 16 17:02:46.721656 master-0 kubenswrapper[15493]: I0216 17:02:46.721602 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerStarted","Data":"d371c36e93606a6be62a05a6e38d4e0131418dc0eaea65b286323f5ff81944ef"}
Feb 16 17:02:46.723282 master-0 kubenswrapper[15493]: I0216 17:02:46.723237 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerStarted","Data":"5d742ee8f3ff4d437ae51d12ae2509ff6a091914c30d3aa55203939de62735fd"}
Feb 16 17:02:47.073455 master-0 kubenswrapper[15493]: I0216 17:02:47.073410 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Feb 16 17:02:47.103350 master-0 kubenswrapper[15493]: I0216 17:02:47.103268 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Feb 16 17:02:47.133748 master-0 kubenswrapper[15493]: W0216 17:02:47.133689 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8729b1a_e365_4cf7_8a05_91a9987dabe9.slice/crio-aa7c6db0456e2be8e4e0281f3da976389985f784c7e13d09ec84f3667105b7cf WatchSource:0}: Error finding container aa7c6db0456e2be8e4e0281f3da976389985f784c7e13d09ec84f3667105b7cf: Status 404 returned error can't find the container with id aa7c6db0456e2be8e4e0281f3da976389985f784c7e13d09ec84f3667105b7cf
Feb 16 17:02:47.733449 master-0 kubenswrapper[15493]: I0216 17:02:47.733385 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerStarted","Data":"915f8db950ac7ab932dfa55756083249cd00e3b20e2ab5de6ceb63fdfe934d23"}
Feb 16 17:02:47.738068 master-0 kubenswrapper[15493]: I0216 17:02:47.735636 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerStarted","Data":"8928e3bf46f9c2e9543fd483f5a6715160d68a0e0514884803acf476ebf5679a"}
Feb 16 17:02:47.740878 master-0 kubenswrapper[15493]: I0216 17:02:47.740553 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"8fca295bb1baf8b775d772272c7b49fe8ab92fdfd4a954cb26df77e5bc91d265"}
Feb 16 17:02:47.742357 master-0 kubenswrapper[15493]: I0216 17:02:47.742325 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"6f91303a217830df2deeb75c5b85b160be832bd4796840ca479eb3bf0757299c"}
Feb 16 17:02:47.742467 master-0 kubenswrapper[15493]: I0216 17:02:47.742359 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"49c7bd744b82a93bdf93e0a45df427c7a02a45864796806bad6477a11f0df882"}
Feb 16 17:02:47.745370 master-0 kubenswrapper[15493]: I0216 17:02:47.745342 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"9713e0568adf454e7586d1d021067b3f58ea3654d5eca48f5359291f1475c373"} Feb 16 17:02:47.745370 master-0 kubenswrapper[15493]: I0216 17:02:47.745368 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"f0fc172e061d9b845719aaf0e0bb5928b1dd8b2b359ec58034b976a3ab24fcfb"} Feb 16 17:02:47.747467 master-0 kubenswrapper[15493]: I0216 17:02:47.747414 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hhcpr_39387549-c636-4bd4-b463-f6a93810f277/approver/0.log" Feb 16 17:02:47.747937 master-0 kubenswrapper[15493]: I0216 17:02:47.747879 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"cf00a7735d0ab343338acb080927ee517385e8abb1b426c1e996a640ce7fcbfa"} Feb 16 17:02:47.751134 master-0 kubenswrapper[15493]: I0216 17:02:47.751086 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"1110ab99d776a3d68ff736e046cffc3c590f742752e3080d6ce45308e9fb665f"} Feb 16 17:02:47.753267 master-0 kubenswrapper[15493]: I0216 17:02:47.753242 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"321cf557aeb107d8d573f4ad125d9c41970fc9988ae80bf9900e02e207922125"} Feb 16 17:02:47.758252 master-0 kubenswrapper[15493]: I0216 17:02:47.758199 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerStarted","Data":"80e725c54b230c9d93fa31b6c0bcfa809e24c7926e13502059b3983fcd1b3d79"} Feb 16 17:02:47.760243 master-0 kubenswrapper[15493]: I0216 17:02:47.760201 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerStarted","Data":"9b7fb081aed84bf24e9c173f0f69fce2b7aba0738037ca960752a4a6a87b8388"} Feb 16 17:02:47.762270 master-0 kubenswrapper[15493]: I0216 17:02:47.762195 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerStarted","Data":"385e9821f6c9a23f9fd968b241bfd034b46ca6792159d22bc7ee49611730173e"} Feb 16 17:02:47.765044 master-0 kubenswrapper[15493]: I0216 17:02:47.764999 15493 generic.go:334] "Generic (PLEG): container finished" podID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" containerID="8d35c6a6b35c47a5c10c534dadefc639a3089e7d5b20515fda1b5ea33afe7af6" exitCode=0 Feb 16 17:02:47.765144 master-0 kubenswrapper[15493]: I0216 17:02:47.765076 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerDied","Data":"8d35c6a6b35c47a5c10c534dadefc639a3089e7d5b20515fda1b5ea33afe7af6"} Feb 
16 17:02:47.765144 master-0 kubenswrapper[15493]: I0216 17:02:47.765113 15493 scope.go:117] "RemoveContainer" containerID="1a75bfcb3d6ee6e289b7323fbc3d24c63e7fcd67393fd211cfa30edcae278f7a" Feb 16 17:02:47.766454 master-0 kubenswrapper[15493]: I0216 17:02:47.765685 15493 scope.go:117] "RemoveContainer" containerID="8d35c6a6b35c47a5c10c534dadefc639a3089e7d5b20515fda1b5ea33afe7af6" Feb 16 17:02:47.766454 master-0 kubenswrapper[15493]: E0216 17:02:47.766022 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=insights-operator pod=insights-operator-cb4f7b4cf-6qrw5_openshift-insights(c2511146-1d04-4ecd-a28e-79662ef7b9d3)\"" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:02:47.778225 master-0 kubenswrapper[15493]: I0216 17:02:47.776719 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9_edbaac23-11f0-4bc7-a7ce-b593c774c0fa/openshift-controller-manager-operator/0.log" Feb 16 17:02:47.778225 master-0 kubenswrapper[15493]: I0216 17:02:47.776966 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerStarted","Data":"3c107344ed7506b61b6ef1a5ca57eedb7069a294a5c75b6d6c41f82bdc6b94c0"} Feb 16 17:02:47.785158 master-0 kubenswrapper[15493]: I0216 17:02:47.785111 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"86f6bc50c80ff4e338be6c35167f66696d920488cfb3363a969927c949d6b7a8"} Feb 16 17:02:47.785311 master-0 kubenswrapper[15493]: I0216 17:02:47.785164 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"e71887dbe9906e96a642b9c5769b2a44ce06586a2f093a64ec9c69238b229e86"} Feb 16 17:02:47.785311 master-0 kubenswrapper[15493]: I0216 17:02:47.785184 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"aa7c6db0456e2be8e4e0281f3da976389985f784c7e13d09ec84f3667105b7cf"} Feb 16 17:02:48.798408 master-0 kubenswrapper[15493]: I0216 17:02:48.798354 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"b5d57d5fbcb5111bf8621480b2ca2d7036238ef5e1ed6356c78b025bd5430216"} Feb 16 17:02:49.065637 master-0 kubenswrapper[15493]: I0216 17:02:49.065523 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:49.293379 master-0 kubenswrapper[15493]: I0216 17:02:49.293304 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:50.307482 master-0 kubenswrapper[15493]: I0216 17:02:50.307370 15493 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:50.308708 master-0 kubenswrapper[15493]: I0216 17:02:50.308262 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:50.482894 master-0 kubenswrapper[15493]: I0216 17:02:50.482852 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:50.482894 master-0 kubenswrapper[15493]: I0216 17:02:50.482952 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:02:50.487577 master-0 kubenswrapper[15493]: I0216 17:02:50.487521 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:02:50.574739 master-0 kubenswrapper[15493]: I0216 17:02:50.574566 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:50.576647 master-0 kubenswrapper[15493]: I0216 17:02:50.576592 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:02:50.814647 master-0 kubenswrapper[15493]: I0216 17:02:50.814592 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-wjr7d_9609a4f3-b947-47af-a685-baae26c50fa3/ingress-operator/0.log" Feb 16 17:02:50.814647 master-0 kubenswrapper[15493]: I0216 17:02:50.814644 15493 generic.go:334] "Generic (PLEG): container finished" podID="9609a4f3-b947-47af-a685-baae26c50fa3" containerID="b2ea1bb15f78693382433f2c7f09878ee2e059e95bab8649c9ca7870ea580187" exitCode=1 Feb 16 17:02:50.814896 master-0 kubenswrapper[15493]: I0216 17:02:50.814694 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerDied","Data":"b2ea1bb15f78693382433f2c7f09878ee2e059e95bab8649c9ca7870ea580187"} Feb 16 17:02:50.815282 master-0 kubenswrapper[15493]: I0216 17:02:50.815242 15493 scope.go:117] "RemoveContainer" containerID="b2ea1bb15f78693382433f2c7f09878ee2e059e95bab8649c9ca7870ea580187" Feb 16 17:02:50.816511 master-0 kubenswrapper[15493]: I0216 17:02:50.816479 15493 generic.go:334] "Generic (PLEG): container finished" podID="442600dc-09b2-4fee-9f89-777296b2ee40" containerID="8d2e502f120afcfdbf733271dac9a15c4729b61975022a5fc8946190d6a66af4" exitCode=0 Feb 16 17:02:50.816679 master-0 kubenswrapper[15493]: I0216 17:02:50.816615 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" 
event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerDied","Data":"8d2e502f120afcfdbf733271dac9a15c4729b61975022a5fc8946190d6a66af4"} Feb 16 17:02:50.817255 master-0 kubenswrapper[15493]: I0216 17:02:50.817229 15493 scope.go:117] "RemoveContainer" containerID="8d2e502f120afcfdbf733271dac9a15c4729b61975022a5fc8946190d6a66af4" Feb 16 17:02:51.066478 master-0 kubenswrapper[15493]: I0216 17:02:51.065156 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:51.121335 master-0 kubenswrapper[15493]: I0216 17:02:51.120661 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:51.824669 master-0 kubenswrapper[15493]: I0216 17:02:51.824611 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-wjr7d_9609a4f3-b947-47af-a685-baae26c50fa3/ingress-operator/0.log" Feb 16 17:02:51.825222 master-0 kubenswrapper[15493]: I0216 17:02:51.824717 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"d3da8ba42d4b0c66c19212e6b0d32e25aca9d72e06c94a833859dc0c4a30c389"} Feb 16 17:02:51.828397 master-0 kubenswrapper[15493]: I0216 17:02:51.828346 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerStarted","Data":"eecec016977d0933f995cec094efa7991dea3fd076989159458e21d05f18d3bb"} Feb 16 17:02:53.064786 master-0 kubenswrapper[15493]: I0216 17:02:53.064721 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:53.068610 master-0 kubenswrapper[15493]: I0216 17:02:53.068553 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:54.394555 master-0 kubenswrapper[15493]: I0216 17:02:54.394506 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:54.395402 master-0 kubenswrapper[15493]: I0216 17:02:54.395322 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:54.451120 master-0 kubenswrapper[15493]: I0216 17:02:54.451046 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:02:54.795027 master-0 kubenswrapper[15493]: I0216 17:02:54.794982 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:54.795298 master-0 kubenswrapper[15493]: I0216 17:02:54.795252 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:54.866888 master-0 kubenswrapper[15493]: I0216 17:02:54.866839 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:54.893533 master-0 kubenswrapper[15493]: I0216 17:02:54.893480 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 
17:02:54.998828 master-0 kubenswrapper[15493]: I0216 17:02:54.998756 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:54.998828 master-0 kubenswrapper[15493]: I0216 17:02:54.998812 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:55.064348 master-0 kubenswrapper[15493]: I0216 17:02:55.064232 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:55.066820 master-0 kubenswrapper[15493]: I0216 17:02:55.066777 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:55.069003 master-0 kubenswrapper[15493]: I0216 17:02:55.068963 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:55.431822 master-0 kubenswrapper[15493]: I0216 17:02:55.431706 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:02:55.694145 master-0 kubenswrapper[15493]: I0216 17:02:55.693988 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:55.694145 master-0 kubenswrapper[15493]: I0216 17:02:55.694034 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:55.752988 master-0 kubenswrapper[15493]: I0216 17:02:55.752708 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:55.899660 master-0 kubenswrapper[15493]: I0216 17:02:55.899603 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:02:55.910538 master-0 kubenswrapper[15493]: I0216 17:02:55.910489 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:02:55.912997 master-0 kubenswrapper[15493]: I0216 17:02:55.912954 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:02:57.067001 master-0 kubenswrapper[15493]: I0216 17:02:57.066910 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:57.069351 master-0 kubenswrapper[15493]: I0216 17:02:57.069325 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:02:59.064668 master-0 kubenswrapper[15493]: I0216 17:02:59.064550 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:02:59.168097 master-0 kubenswrapper[15493]: I0216 17:02:59.167628 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:03:00.055450 master-0 kubenswrapper[15493]: I0216 17:03:00.055369 15493 scope.go:117] "RemoveContainer" containerID="8d35c6a6b35c47a5c10c534dadefc639a3089e7d5b20515fda1b5ea33afe7af6" Feb 16 17:03:00.889137 master-0 kubenswrapper[15493]: I0216 17:03:00.888902 15493 generic.go:334] "Generic (PLEG): container finished" 
podID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" containerID="357614bbfff97cddae555d53246293360c153a7a8cf94c75b2ec64088b6da4e3" exitCode=0 Feb 16 17:03:00.889137 master-0 kubenswrapper[15493]: I0216 17:03:00.888996 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerDied","Data":"357614bbfff97cddae555d53246293360c153a7a8cf94c75b2ec64088b6da4e3"} Feb 16 17:03:00.889137 master-0 kubenswrapper[15493]: I0216 17:03:00.889082 15493 scope.go:117] "RemoveContainer" containerID="8d35c6a6b35c47a5c10c534dadefc639a3089e7d5b20515fda1b5ea33afe7af6" Feb 16 17:03:00.890273 master-0 kubenswrapper[15493]: I0216 17:03:00.890152 15493 scope.go:117] "RemoveContainer" containerID="357614bbfff97cddae555d53246293360c153a7a8cf94c75b2ec64088b6da4e3" Feb 16 17:03:00.890722 master-0 kubenswrapper[15493]: E0216 17:03:00.890613 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=insights-operator pod=insights-operator-cb4f7b4cf-6qrw5_openshift-insights(c2511146-1d04-4ecd-a28e-79662ef7b9d3)\"" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:03:01.099333 master-0 kubenswrapper[15493]: I0216 17:03:01.099190 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:03:01.106584 master-0 kubenswrapper[15493]: I0216 17:03:01.106002 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:03:03.062762 master-0 kubenswrapper[15493]: I0216 17:03:03.062677 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:03:03.066191 master-0 kubenswrapper[15493]: I0216 17:03:03.066142 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:03:05.060864 master-0 kubenswrapper[15493]: I0216 17:03:05.060776 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:03:05.063779 master-0 kubenswrapper[15493]: I0216 17:03:05.063683 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:03:07.063801 master-0 kubenswrapper[15493]: I0216 17:03:07.063724 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:03:07.068017 master-0 kubenswrapper[15493]: I0216 17:03:07.067986 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:03:09.063014 master-0 kubenswrapper[15493]: I0216 17:03:09.062966 15493 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Feb 16 17:03:09.066504 master-0 kubenswrapper[15493]: I0216 17:03:09.066452 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Feb 16 17:03:09.088009 master-0 kubenswrapper[15493]: I0216 17:03:09.087964 15493 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 17:03:09.094801 master-0 kubenswrapper[15493]: I0216 17:03:09.094746 
15493 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n7kjr","kube-system/bootstrap-kube-scheduler-master-0","openshift-marketplace/certified-operators-8kkl7","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-controller-manager/installer-1-master-0"] Feb 16 17:03:09.094912 master-0 kubenswrapper[15493]: I0216 17:03:09.094811 15493 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="31b75d0b-8694-4e17-995b-76e2288745c2" Feb 16 17:03:09.094912 master-0 kubenswrapper[15493]: I0216 17:03:09.094843 15493 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="919d325a-e3bb-4db5-8ebc-382d41928e44" Feb 16 17:03:09.094912 master-0 kubenswrapper[15493]: I0216 17:03:09.094879 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-5n9cl","openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:03:09.095203 master-0 kubenswrapper[15493]: E0216 17:03:09.095174 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" Feb 16 17:03:09.095245 master-0 kubenswrapper[15493]: I0216 17:03:09.095201 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver-insecure-readyz" Feb 16 17:03:09.095245 master-0 kubenswrapper[15493]: E0216 17:03:09.095221 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d39ed24-4301-4cea-8a42-a08f4ba8b479" containerName="installer" Feb 16 17:03:09.095245 master-0 kubenswrapper[15493]: I0216 17:03:09.095234 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d39ed24-4301-4cea-8a42-a08f4ba8b479" containerName="installer" Feb 16 17:03:09.095341 master-0 kubenswrapper[15493]: E0216 17:03:09.095252 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="config-sync-controllers" Feb 16 17:03:09.095341 master-0 kubenswrapper[15493]: I0216 17:03:09.095293 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="config-sync-controllers" Feb 16 17:03:09.095341 master-0 kubenswrapper[15493]: E0216 17:03:09.095310 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fe1c16d-061a-4a57-aea4-cf1d4b24d02f" containerName="installer" Feb 16 17:03:09.095341 master-0 kubenswrapper[15493]: I0216 17:03:09.095323 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fe1c16d-061a-4a57-aea4-cf1d4b24d02f" containerName="installer" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: E0216 17:03:09.095339 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: I0216 17:03:09.095351 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: E0216 17:03:09.095372 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d86b04-1d3f-4f27-a262-b732c1295997" containerName="extract-utilities" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: I0216 17:03:09.095385 15493 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a6d86b04-1d3f-4f27-a262-b732c1295997" containerName="extract-utilities" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: E0216 17:03:09.095400 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: I0216 17:03:09.095412 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: E0216 17:03:09.095428 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="cluster-cloud-controller-manager" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: I0216 17:03:09.095441 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="cluster-cloud-controller-manager" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: E0216 17:03:09.095461 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b4fccc-6bf6-47ac-8ae1-32cad23734da" containerName="installer" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: I0216 17:03:09.095473 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b4fccc-6bf6-47ac-8ae1-32cad23734da" containerName="installer" Feb 16 17:03:09.095497 master-0 kubenswrapper[15493]: E0216 17:03:09.095491 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerName="assisted-installer-controller" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: I0216 17:03:09.095504 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerName="assisted-installer-controller" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: E0216 17:03:09.095525 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerName="prober" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: I0216 17:03:09.095540 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerName="prober" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: E0216 17:03:09.095557 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="035c8af0-95f3-4ab6-939c-d7fa8bda40a3" containerName="installer" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: I0216 17:03:09.095569 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="035c8af0-95f3-4ab6-939c-d7fa8bda40a3" containerName="installer" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: E0216 17:03:09.095587 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" containerName="extract-utilities" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: I0216 17:03:09.095600 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" containerName="extract-utilities" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: E0216 17:03:09.095619 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86c571b6-0f65-41f0-b1be-f63d7a974782" containerName="installer" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: I0216 17:03:09.095631 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="86c571b6-0f65-41f0-b1be-f63d7a974782" containerName="installer" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: E0216 
17:03:09.095645 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 17:03:09.095808 master-0 kubenswrapper[15493]: I0216 17:03:09.095657 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.095827 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="86c571b6-0f65-41f0-b1be-f63d7a974782" containerName="installer" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.095848 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fe1c16d-061a-4a57-aea4-cf1d4b24d02f" containerName="installer" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.095873 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="config-sync-controllers" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.095900 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="setup" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.095956 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" containerName="extract-utilities" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.095974 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" containerName="kube-apiserver" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.095987 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d86b04-1d3f-4f27-a262-b732c1295997" containerName="extract-utilities" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.096006 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8589094-f18e-4070-a550-b2da6f8acfc0" containerName="assisted-installer-controller" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.096023 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="9460ca0802075a8a6a10d7b3e6052c4d" containerName="kube-scheduler" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.096041 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="035c8af0-95f3-4ab6-939c-d7fa8bda40a3" containerName="installer" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.096062 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b5c9593-e93c-40f4-966d-8fb2a4edd5b7" containerName="cluster-cloud-controller-manager" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.096085 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="10280d4e-9a32-4fea-aea0-211e7c9f0502" containerName="prober" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.096100 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d39ed24-4301-4cea-8a42-a08f4ba8b479" containerName="installer" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.096116 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b4fccc-6bf6-47ac-8ae1-32cad23734da" containerName="installer" Feb 16 17:03:09.096310 master-0 kubenswrapper[15493]: I0216 17:03:09.096138 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1e91e5a1fed5cf7076a92d2830d36f" 
containerName="kube-apiserver-insecure-readyz" Feb 16 17:03:09.096910 master-0 kubenswrapper[15493]: I0216 17:03:09.096880 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z69zq","openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48","openshift-marketplace/community-operators-7w4km"] Feb 16 17:03:09.097394 master-0 kubenswrapper[15493]: I0216 17:03:09.097346 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:09.099855 master-0 kubenswrapper[15493]: I0216 17:03:09.099767 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-94r9k" Feb 16 17:03:09.129292 master-0 kubenswrapper[15493]: I0216 17:03:09.129178 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=34.129155939 podStartE2EDuration="34.129155939s" podCreationTimestamp="2026-02-16 17:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:09.126778 +0000 UTC m=+68.276951140" watchObservedRunningTime="2026-02-16 17:03:09.129155939 +0000 UTC m=+68.279329019" Feb 16 17:03:09.267756 master-0 kubenswrapper[15493]: I0216 17:03:09.267661 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:09.268091 master-0 kubenswrapper[15493]: I0216 17:03:09.268019 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:09.371066 master-0 kubenswrapper[15493]: I0216 17:03:09.370813 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:09.371349 master-0 kubenswrapper[15493]: I0216 17:03:09.371294 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:09.372236 master-0 kubenswrapper[15493]: I0216 17:03:09.372181 15493 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 17:03:09.378202 master-0 kubenswrapper[15493]: I0216 17:03:09.378134 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:09.395287 master-0 kubenswrapper[15493]: I0216 17:03:09.395202 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:09.417140 master-0 kubenswrapper[15493]: I0216 17:03:09.417063 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:09.561599 master-0 kubenswrapper[15493]: I0216 17:03:09.561559 15493 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 17:03:09.562029 master-0 kubenswrapper[15493]: I0216 17:03:09.562001 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="5b26dae9694224e04f0cdc3841408c63" containerName="startup-monitor" containerID="cri-o://1083aa6beb90f48dd5db6f69c3ba490b4f6ca8d9fefde7aaf7754452f48f28b5" gracePeriod=5 Feb 16 17:03:11.071314 master-0 kubenswrapper[15493]: I0216 17:03:11.071240 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa" path="/var/lib/kubelet/pods/1e51a0d9-d1bd-4b32-9196-5f756b1fa8aa/volumes" Feb 16 17:03:11.072171 master-0 kubenswrapper[15493]: I0216 17:03:11.072127 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fe1c16d-061a-4a57-aea4-cf1d4b24d02f" path="/var/lib/kubelet/pods/7fe1c16d-061a-4a57-aea4-cf1d4b24d02f/volumes" Feb 16 17:03:11.073122 master-0 kubenswrapper[15493]: I0216 17:03:11.073084 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d86b04-1d3f-4f27-a262-b732c1295997" path="/var/lib/kubelet/pods/a6d86b04-1d3f-4f27-a262-b732c1295997/volumes" Feb 16 17:03:11.806160 master-0 kubenswrapper[15493]: I0216 17:03:11.806114 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"] Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: E0216 17:03:12.600215 15493 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0" Netns:"/var/run/netns/b60abb88-63a7-4c8b-800f-7611e9dc1da2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod "multus-admission-controller-6d678b8d67-5n9cl" not found Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: > Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: E0216 17:03:12.600314 15493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0" Netns:"/var/run/netns/b60abb88-63a7-4c8b-800f-7611e9dc1da2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod "multus-admission-controller-6d678b8d67-5n9cl" not found Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: > pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: E0216 
17:03:12.600335 15493 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0" Netns:"/var/run/netns/b60abb88-63a7-4c8b-800f-7611e9dc1da2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod "multus-admission-controller-6d678b8d67-5n9cl" not found Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:03:12.600447 master-0 kubenswrapper[15493]: > pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:03:12.601698 master-0 kubenswrapper[15493]: E0216 17:03:12.600414 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"multus-admission-controller-6d678b8d67-5n9cl_openshift-multus(0d980a9a-2574-41b9-b970-0718cd97c8cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"multus-admission-controller-6d678b8d67-5n9cl_openshift-multus(0d980a9a-2574-41b9-b970-0718cd97c8cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0\\\" Netns:\\\"/var/run/netns/b60abb88-63a7-4c8b-800f-7611e9dc1da2\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=abe0fb1472d69671527a7d41acbf25ca0fc55ca6f6a1c5a2d0bee720316f0ff0;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] 
networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod \\\"multus-admission-controller-6d678b8d67-5n9cl\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd"
Feb 16 17:03:12.968871 master-0 kubenswrapper[15493]: I0216 17:03:12.968663 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:03:12.969504 master-0 kubenswrapper[15493]: I0216 17:03:12.969441 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:03:13.387717 master-0 kubenswrapper[15493]: I0216 17:03:13.387678 15493 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 17:03:14.988743 master-0 kubenswrapper[15493]: I0216 17:03:14.988619 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5b26dae9694224e04f0cdc3841408c63/startup-monitor/0.log"
Feb 16 17:03:14.988743 master-0 kubenswrapper[15493]: I0216 17:03:14.988689 15493 generic.go:334] "Generic (PLEG): container finished" podID="5b26dae9694224e04f0cdc3841408c63" containerID="1083aa6beb90f48dd5db6f69c3ba490b4f6ca8d9fefde7aaf7754452f48f28b5" exitCode=137
Feb 16 17:03:15.055805 master-0 kubenswrapper[15493]: I0216 17:03:15.055750 15493 scope.go:117] "RemoveContainer" containerID="357614bbfff97cddae555d53246293360c153a7a8cf94c75b2ec64088b6da4e3"
Feb 16 17:03:15.056171 master-0 kubenswrapper[15493]: E0216 17:03:15.056112 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=insights-operator pod=insights-operator-cb4f7b4cf-6qrw5_openshift-insights(c2511146-1d04-4ecd-a28e-79662ef7b9d3)\"" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3"
Feb 16 17:03:15.134570 master-0 kubenswrapper[15493]: I0216 17:03:15.134522 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5b26dae9694224e04f0cdc3841408c63/startup-monitor/0.log"
Feb 16 17:03:15.134734 master-0 kubenswrapper[15493]: I0216 17:03:15.134597 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:03:15.267216 master-0 kubenswrapper[15493]: I0216 17:03:15.267124 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") "
Feb 16 17:03:15.267216 master-0 kubenswrapper[15493]: I0216 17:03:15.267209 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") "
Feb 16 17:03:15.267216 master-0 kubenswrapper[15493]: I0216 17:03:15.267228 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") "
Feb 16 17:03:15.267216 master-0 kubenswrapper[15493]: I0216 17:03:15.267243 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") "
Feb 16 17:03:15.267759 master-0 kubenswrapper[15493]: I0216 17:03:15.267257 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") pod \"5b26dae9694224e04f0cdc3841408c63\" (UID: \"5b26dae9694224e04f0cdc3841408c63\") "
Feb 16 17:03:15.267759 master-0 kubenswrapper[15493]: I0216 17:03:15.267296 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock" (OuterVolumeSpecName: "var-lock") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:03:15.267759 master-0 kubenswrapper[15493]: I0216 17:03:15.267337 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests" (OuterVolumeSpecName: "manifests") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:03:15.267759 master-0 kubenswrapper[15493]: I0216 17:03:15.267350 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log" (OuterVolumeSpecName: "var-log") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:03:15.267759 master-0 kubenswrapper[15493]: I0216 17:03:15.267435 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:03:15.267759 master-0 kubenswrapper[15493]: I0216 17:03:15.267693 15493 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 17:03:15.267759 master-0 kubenswrapper[15493]: I0216 17:03:15.267714 15493 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:03:15.267759 master-0 kubenswrapper[15493]: I0216 17:03:15.267729 15493 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-var-log\") on node \"master-0\" DevicePath \"\""
Feb 16 17:03:15.267759 master-0 kubenswrapper[15493]: I0216 17:03:15.267740 15493 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-manifests\") on node \"master-0\" DevicePath \"\""
Feb 16 17:03:15.272178 master-0 kubenswrapper[15493]: I0216 17:03:15.272084 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "5b26dae9694224e04f0cdc3841408c63" (UID: "5b26dae9694224e04f0cdc3841408c63"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:03:15.368797 master-0 kubenswrapper[15493]: I0216 17:03:15.368709 15493 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/5b26dae9694224e04f0cdc3841408c63-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:03:15.995145 master-0 kubenswrapper[15493]: I0216 17:03:15.995115 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_5b26dae9694224e04f0cdc3841408c63/startup-monitor/0.log"
Feb 16 17:03:15.995624 master-0 kubenswrapper[15493]: I0216 17:03:15.995610 15493 scope.go:117] "RemoveContainer" containerID="1083aa6beb90f48dd5db6f69c3ba490b4f6ca8d9fefde7aaf7754452f48f28b5"
Feb 16 17:03:15.995794 master-0 kubenswrapper[15493]: I0216 17:03:15.995751 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:03:16.092154 master-0 kubenswrapper[15493]: E0216 17:03:16.092090 15493 log.go:32] "RunPodSandbox from runtime service failed" err=<
Feb 16 17:03:16.092154 master-0 kubenswrapper[15493]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70" Netns:"/var/run/netns/580e20b7-7195-45e5-bf3f-bf8ded31aeef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod "multus-admission-controller-6d678b8d67-5n9cl" not found
Feb 16 17:03:16.092154 master-0 kubenswrapper[15493]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:03:16.092154 master-0 kubenswrapper[15493]: >
Feb 16 17:03:16.092384 master-0 kubenswrapper[15493]: E0216 17:03:16.092177 15493 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Feb 16 17:03:16.092384 master-0 kubenswrapper[15493]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70" Netns:"/var/run/netns/580e20b7-7195-45e5-bf3f-bf8ded31aeef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod "multus-admission-controller-6d678b8d67-5n9cl" not found
Feb 16 17:03:16.092384 master-0 kubenswrapper[15493]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:03:16.092384 master-0 kubenswrapper[15493]: > pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:03:16.092384 master-0 kubenswrapper[15493]: E0216 17:03:16.092199 15493 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Feb 16 17:03:16.092384 master-0 kubenswrapper[15493]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70" Netns:"/var/run/netns/580e20b7-7195-45e5-bf3f-bf8ded31aeef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd" Path:"" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod "multus-admission-controller-6d678b8d67-5n9cl" not found
Feb 16 17:03:16.092384 master-0 kubenswrapper[15493]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Feb 16 17:03:16.092384 master-0 kubenswrapper[15493]: > pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:03:16.092384 master-0 kubenswrapper[15493]: E0216 17:03:16.092271 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"multus-admission-controller-6d678b8d67-5n9cl_openshift-multus(0d980a9a-2574-41b9-b970-0718cd97c8cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"multus-admission-controller-6d678b8d67-5n9cl_openshift-multus(0d980a9a-2574-41b9-b970-0718cd97c8cd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_multus-admission-controller-6d678b8d67-5n9cl_openshift-multus_0d980a9a-2574-41b9-b970-0718cd97c8cd_0(868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70): error adding pod openshift-multus_multus-admission-controller-6d678b8d67-5n9cl to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70\\\" Netns:\\\"/var/run/netns/580e20b7-7195-45e5-bf3f-bf8ded31aeef\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-multus;K8S_POD_NAME=multus-admission-controller-6d678b8d67-5n9cl;K8S_POD_INFRA_CONTAINER_ID=868c359f8417e256e70ce312071baeebd1ac1ebcd26d0563d55502a71158ae70;K8S_POD_UID=0d980a9a-2574-41b9-b970-0718cd97c8cd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl] networking: Multus: [openshift-multus/multus-admission-controller-6d678b8d67-5n9cl/0d980a9a-2574-41b9-b970-0718cd97c8cd]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod multus-admission-controller-6d678b8d67-5n9cl in out of cluster comm: pod \\\"multus-admission-controller-6d678b8d67-5n9cl\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd"
Feb 16 17:03:16.579195 master-0 kubenswrapper[15493]: I0216 17:03:16.579040 15493 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 17:03:17.003202 master-0 kubenswrapper[15493]: I0216 17:03:17.003135 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv_d020c902-2adb-4919-8dd9-0c2109830580/kube-apiserver-operator/1.log"
Feb 16 17:03:17.004122 master-0 kubenswrapper[15493]: I0216 17:03:17.003685 15493 generic.go:334] "Generic (PLEG): container finished" podID="d020c902-2adb-4919-8dd9-0c2109830580" containerID="d371c36e93606a6be62a05a6e38d4e0131418dc0eaea65b286323f5ff81944ef" exitCode=255
Feb 16 17:03:17.004122 master-0 kubenswrapper[15493]: I0216 17:03:17.003775 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerDied","Data":"d371c36e93606a6be62a05a6e38d4e0131418dc0eaea65b286323f5ff81944ef"}
Feb 16 17:03:17.004122 master-0 kubenswrapper[15493]: I0216 17:03:17.003822 15493 scope.go:117] "RemoveContainer" containerID="e310e36fd740b75515307293e697ecd768c9c8241ff939db071d778913f35a7a"
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: I0216 17:03:17.005896 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6_8e623376-9e14-4341-9dcf-7a7c218b6f9f/kube-storage-version-migrator-operator/1.log"
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: I0216 17:03:17.005912 15493 scope.go:117] "RemoveContainer" containerID="d371c36e93606a6be62a05a6e38d4e0131418dc0eaea65b286323f5ff81944ef"
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: I0216 17:03:17.006557 15493 generic.go:334] "Generic (PLEG): container finished" podID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" containerID="5d742ee8f3ff4d437ae51d12ae2509ff6a091914c30d3aa55203939de62735fd" exitCode=255
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: I0216 17:03:17.006626 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerDied","Data":"5d742ee8f3ff4d437ae51d12ae2509ff6a091914c30d3aa55203939de62735fd"}
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: I0216 17:03:17.007194 15493 scope.go:117] "RemoveContainer" containerID="5d742ee8f3ff4d437ae51d12ae2509ff6a091914c30d3aa55203939de62735fd"
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: E0216 17:03:17.007257 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator(d020c902-2adb-4919-8dd9-0c2109830580)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580"
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: E0216 17:03:17.007486 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-storage-version-migrator-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-storage-version-migrator-operator pod=kube-storage-version-migrator-operator-cd5474998-829l6_openshift-kube-storage-version-migrator-operator(8e623376-9e14-4341-9dcf-7a7c218b6f9f)\"" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f"
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: I0216 17:03:17.010406 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5_29402454-a920-471e-895e-764235d16eb4/service-ca-operator/1.log"
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: I0216 17:03:17.010833 15493 generic.go:334] "Generic (PLEG): container finished" podID="29402454-a920-471e-895e-764235d16eb4" containerID="db8e6dde9089415ec50ea395cc6048bd2122d36d369cf40adfb691513d4759ff" exitCode=255
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: I0216 17:03:17.010862 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerDied","Data":"db8e6dde9089415ec50ea395cc6048bd2122d36d369cf40adfb691513d4759ff"}
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: I0216 17:03:17.011182 15493 scope.go:117] "RemoveContainer" containerID="db8e6dde9089415ec50ea395cc6048bd2122d36d369cf40adfb691513d4759ff"
Feb 16 17:03:17.011401 master-0 kubenswrapper[15493]: E0216 17:03:17.011352 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=service-ca-operator pod=service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator(29402454-a920-471e-895e-764235d16eb4)\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4"
Feb 16 17:03:17.077975 master-0 kubenswrapper[15493]: I0216 17:03:17.077895 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b26dae9694224e04f0cdc3841408c63" path="/var/lib/kubelet/pods/5b26dae9694224e04f0cdc3841408c63/volumes"
Feb 16 17:03:17.142206 master-0 kubenswrapper[15493]: I0216 17:03:17.142173 15493 scope.go:117] "RemoveContainer" containerID="8399cb1f8a954f603085247154bb48084a1a6283fe2b99aa8facab4cb78f381d"
Feb 16 17:03:17.166758 master-0 kubenswrapper[15493]: I0216 17:03:17.166723 15493 scope.go:117] "RemoveContainer" containerID="f31ff62ede3b23583193a8479095d460885c4665f91f714a80c48601aa1a71ad"
Feb 16 17:03:18.025269 master-0 kubenswrapper[15493]: I0216 17:03:18.025208 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v_737fcc7d-d850-4352-9f17-383c85d5bc28/openshift-apiserver-operator/1.log"
Feb 16 17:03:18.026034 master-0 kubenswrapper[15493]: I0216 17:03:18.025972 15493 generic.go:334] "Generic (PLEG): container finished" podID="737fcc7d-d850-4352-9f17-383c85d5bc28" containerID="1110ab99d776a3d68ff736e046cffc3c590f742752e3080d6ce45308e9fb665f" exitCode=255
Feb 16 17:03:18.026318 master-0 kubenswrapper[15493]: I0216 17:03:18.026088 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerDied","Data":"1110ab99d776a3d68ff736e046cffc3c590f742752e3080d6ce45308e9fb665f"}
Feb 16 17:03:18.026318 master-0 kubenswrapper[15493]: I0216 17:03:18.026145 15493 scope.go:117] "RemoveContainer" containerID="435ef5863cc155441a05593945ab2775001a00c9f99d0e797a375813404c36ac"
Feb 16 17:03:18.026657 master-0 kubenswrapper[15493]: I0216 17:03:18.026619 15493 scope.go:117] "RemoveContainer" containerID="1110ab99d776a3d68ff736e046cffc3c590f742752e3080d6ce45308e9fb665f"
Feb 16 17:03:18.026876 master-0 kubenswrapper[15493]: E0216 17:03:18.026837 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-6d4655d9cf-qhn9v_openshift-apiserver-operator(737fcc7d-d850-4352-9f17-383c85d5bc28)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28"
Feb 16 17:03:18.032347 master-0 kubenswrapper[15493]: I0216 17:03:18.031096 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv_d020c902-2adb-4919-8dd9-0c2109830580/kube-apiserver-operator/1.log"
Feb 16 17:03:18.033550 master-0 kubenswrapper[15493]: I0216 17:03:18.033466 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6_8e623376-9e14-4341-9dcf-7a7c218b6f9f/kube-storage-version-migrator-operator/1.log"
Feb 16 17:03:18.036329 master-0 kubenswrapper[15493]: I0216 17:03:18.036271 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/0.log"
Feb 16 17:03:18.036867 master-0 kubenswrapper[15493]: I0216 17:03:18.036826 15493 generic.go:334] "Generic (PLEG): container finished" podID="702322ac-7610-4568-9a68-b6acbd1f0c12" containerID="9713e0568adf454e7586d1d021067b3f58ea3654d5eca48f5359291f1475c373" exitCode=255
Feb 16 17:03:18.037031 master-0 kubenswrapper[15493]: I0216 17:03:18.036880 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerDied","Data":"9713e0568adf454e7586d1d021067b3f58ea3654d5eca48f5359291f1475c373"}
Feb 16 17:03:18.037698 master-0 kubenswrapper[15493]: I0216 17:03:18.037658 15493 scope.go:117] "RemoveContainer" containerID="9713e0568adf454e7586d1d021067b3f58ea3654d5eca48f5359291f1475c373"
Feb 16 17:03:18.040081 master-0 kubenswrapper[15493]: I0216 17:03:18.039423 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf_eaf7edff-0a89-4ac0-b9dd-511e098b5434/kube-scheduler-operator-container/1.log"
Feb 16 17:03:18.040081 master-0 kubenswrapper[15493]: I0216 17:03:18.039992 15493 generic.go:334] "Generic (PLEG): container finished" podID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" containerID="385e9821f6c9a23f9fd968b241bfd034b46ca6792159d22bc7ee49611730173e" exitCode=255
Feb 16 17:03:18.040231 master-0 kubenswrapper[15493]: I0216 17:03:18.040080 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerDied","Data":"385e9821f6c9a23f9fd968b241bfd034b46ca6792159d22bc7ee49611730173e"}
Feb 16 17:03:18.040643 master-0 kubenswrapper[15493]: I0216 17:03:18.040620 15493 scope.go:117] "RemoveContainer" containerID="385e9821f6c9a23f9fd968b241bfd034b46ca6792159d22bc7ee49611730173e"
Feb 16 17:03:18.040835 master-0 kubenswrapper[15493]: E0216 17:03:18.040794 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator(eaf7edff-0a89-4ac0-b9dd-511e098b5434)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434"
Feb 16 17:03:18.041721 master-0 kubenswrapper[15493]: I0216 17:03:18.041683 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9_edbaac23-11f0-4bc7-a7ce-b593c774c0fa/openshift-controller-manager-operator/1.log"
Feb 16 17:03:18.042807 master-0 kubenswrapper[15493]: I0216 17:03:18.042748 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9_edbaac23-11f0-4bc7-a7ce-b593c774c0fa/openshift-controller-manager-operator/0.log"
Feb 16 17:03:18.042888 master-0 kubenswrapper[15493]: I0216 17:03:18.042838 15493 generic.go:334] "Generic (PLEG): container finished" podID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" containerID="3c107344ed7506b61b6ef1a5ca57eedb7069a294a5c75b6d6c41f82bdc6b94c0" exitCode=255
Feb 16 17:03:18.042976 master-0 kubenswrapper[15493]: I0216 17:03:18.042910 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerDied","Data":"3c107344ed7506b61b6ef1a5ca57eedb7069a294a5c75b6d6c41f82bdc6b94c0"}
Feb 16 17:03:18.043212 master-0 kubenswrapper[15493]: I0216 17:03:18.043177 15493 scope.go:117] "RemoveContainer" containerID="3c107344ed7506b61b6ef1a5ca57eedb7069a294a5c75b6d6c41f82bdc6b94c0"
Feb 16 17:03:18.043357 master-0 kubenswrapper[15493]: E0216 17:03:18.043322 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator(edbaac23-11f0-4bc7-a7ce-b593c774c0fa)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa"
Feb 16 17:03:18.045460 master-0 kubenswrapper[15493]: I0216 17:03:18.044525 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5_29402454-a920-471e-895e-764235d16eb4/service-ca-operator/1.log"
Feb 16 17:03:18.074663 master-0 kubenswrapper[15493]: I0216 17:03:18.074615 15493 scope.go:117] "RemoveContainer" containerID="e8c4ffcf7c4ece8cb912757e2c966b100c9bb74e9a2ec208a540c26e8e9187ce"
Feb 16 17:03:18.142836 master-0 kubenswrapper[15493]: I0216 17:03:18.142782 15493 scope.go:117] "RemoveContainer" containerID="58c88a445d8c10824c3855b7412ae17cbbff466b8394e38c4224ab694839c37d"
Feb 16 17:03:19.051743 master-0 kubenswrapper[15493]: I0216 17:03:19.051677 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf_eaf7edff-0a89-4ac0-b9dd-511e098b5434/kube-scheduler-operator-container/1.log"
Feb 16 17:03:19.053349 master-0 kubenswrapper[15493]: I0216 17:03:19.053309 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9_edbaac23-11f0-4bc7-a7ce-b593c774c0fa/openshift-controller-manager-operator/1.log"
Feb 16 17:03:19.054867 master-0 kubenswrapper[15493]: I0216 17:03:19.054835 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v_737fcc7d-d850-4352-9f17-383c85d5bc28/openshift-apiserver-operator/1.log"
Feb 16 17:03:19.056875 master-0 kubenswrapper[15493]: I0216 17:03:19.056842 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/0.log"
Feb 16 17:03:19.062857 master-0 kubenswrapper[15493]: I0216 17:03:19.062767 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"30bf73d84862c88d4c4114e9d6fc64cf4ffbe405a1bbd1d4d5e42a328739ac61"}
Feb 16 17:03:22.077641 master-0 kubenswrapper[15493]: I0216 17:03:22.077549 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k_442600dc-09b2-4fee-9f89-777296b2ee40/kube-controller-manager-operator/1.log"
Feb 16 17:03:22.078634 master-0 kubenswrapper[15493]: I0216 17:03:22.078230 15493 generic.go:334] "Generic (PLEG): container finished" podID="442600dc-09b2-4fee-9f89-777296b2ee40" containerID="eecec016977d0933f995cec094efa7991dea3fd076989159458e21d05f18d3bb" exitCode=255
Feb 16 17:03:22.078634 master-0 kubenswrapper[15493]: I0216 17:03:22.078277 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerDied","Data":"eecec016977d0933f995cec094efa7991dea3fd076989159458e21d05f18d3bb"}
Feb 16 17:03:22.078634 master-0 kubenswrapper[15493]: I0216 17:03:22.078309 15493 scope.go:117] "RemoveContainer" containerID="8d2e502f120afcfdbf733271dac9a15c4729b61975022a5fc8946190d6a66af4"
Feb 16 17:03:22.079197 master-0 kubenswrapper[15493]: I0216 17:03:22.079129 15493 scope.go:117] "RemoveContainer" containerID="eecec016977d0933f995cec094efa7991dea3fd076989159458e21d05f18d3bb"
Feb 16 17:03:22.079689 master-0 kubenswrapper[15493]: E0216 17:03:22.079552 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager-operator pod=kube-controller-manager-operator-78ff47c7c5-txr5k_openshift-kube-controller-manager-operator(442600dc-09b2-4fee-9f89-777296b2ee40)\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40"
Feb 16 17:03:23.086636 master-0 kubenswrapper[15493]: I0216 17:03:23.086578 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k_442600dc-09b2-4fee-9f89-777296b2ee40/kube-controller-manager-operator/1.log"
Feb 16 17:03:27.055428 master-0 kubenswrapper[15493]: I0216 17:03:27.055318 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:03:27.056351 master-0 kubenswrapper[15493]: I0216 17:03:27.056192 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:03:27.117769 master-0 kubenswrapper[15493]: I0216 17:03:27.117642 15493 generic.go:334] "Generic (PLEG): container finished" podID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerID="e89782b445c861c527809a14cee2fd06738fb3945a0c6a3181ecbf78934d2bda" exitCode=0
Feb 16 17:03:27.117769 master-0 kubenswrapper[15493]: I0216 17:03:27.117695 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerDied","Data":"e89782b445c861c527809a14cee2fd06738fb3945a0c6a3181ecbf78934d2bda"}
Feb 16 17:03:27.118171 master-0 kubenswrapper[15493]: I0216 17:03:27.118136 15493 scope.go:117] "RemoveContainer" containerID="e89782b445c861c527809a14cee2fd06738fb3945a0c6a3181ecbf78934d2bda"
Feb 16 17:03:27.477095 master-0 kubenswrapper[15493]: I0216 17:03:27.476658 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"]
Feb 16 17:03:27.480867 master-0 kubenswrapper[15493]: W0216 17:03:27.480802 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d980a9a_2574_41b9_b970_0718cd97c8cd.slice/crio-2bd16d7ef772d17030b52b2a2cc1cb446e8e36ba6b438dcb45083c77ca8f58c3 WatchSource:0}: Error finding container 2bd16d7ef772d17030b52b2a2cc1cb446e8e36ba6b438dcb45083c77ca8f58c3: Status 404 returned error can't find the container with id 2bd16d7ef772d17030b52b2a2cc1cb446e8e36ba6b438dcb45083c77ca8f58c3
Feb 16 17:03:28.131781 master-0 kubenswrapper[15493]: I0216 17:03:28.131604 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerStarted","Data":"6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484"}
Feb 16 17:03:28.131781 master-0 kubenswrapper[15493]: I0216 17:03:28.131671 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerStarted","Data":"a34d2995b35b8dc31426e61695ea1611b88d90123e5a6ad5b62a2e08e2e26a32"}
Feb 16 17:03:28.131781 master-0 kubenswrapper[15493]: I0216 17:03:28.131691 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerStarted","Data":"2bd16d7ef772d17030b52b2a2cc1cb446e8e36ba6b438dcb45083c77ca8f58c3"}
Feb 16 17:03:28.134467 master-0 kubenswrapper[15493]: I0216 17:03:28.134393 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"05aafc42942edec512935971aa649b31b51a37bb17ad3e45d4a47e5503a28fae"}
Feb 16 17:03:28.135296 master-0 kubenswrapper[15493]: I0216 17:03:28.134825 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:03:28.137235 master-0 kubenswrapper[15493]: I0216 17:03:28.136663 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:03:28.238521 master-0 kubenswrapper[15493]: I0216 17:03:28.238370 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podStartSLOduration=29.238346138 podStartE2EDuration="29.238346138s" podCreationTimestamp="2026-02-16 17:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:28.16311324 +0000 UTC m=+87.313286390" watchObservedRunningTime="2026-02-16 17:03:28.238346138 +0000 UTC m=+87.388519208"
Feb 16 17:03:28.248834 master-0 kubenswrapper[15493]: I0216 17:03:28.248734 15493 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"]
Feb 16 17:03:28.249077 master-0 kubenswrapper[15493]: I0216 17:03:28.249047 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" podUID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerName="multus-admission-controller" containerID="cri-o://8596ea544be0a448a19f843f8fb2963353f75aac2b39d7b1fc12540532ae6bdc" gracePeriod=30
Feb 16 17:03:28.249270 master-0 kubenswrapper[15493]: I0216 17:03:28.249205 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" podUID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerName="kube-rbac-proxy" containerID="cri-o://4a3f00327e72eb182ca9d24f6345e55e740d3bc96d139c82176b1ad867248cfd" gracePeriod=30
Feb 16 17:03:28.564079 master-0 kubenswrapper[15493]: I0216 17:03:28.564013 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"]
Feb 16 17:03:28.564411 master-0 kubenswrapper[15493]: E0216 17:03:28.564379 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b26dae9694224e04f0cdc3841408c63" containerName="startup-monitor"
Feb 16 17:03:28.564506 master-0 kubenswrapper[15493]: I0216 17:03:28.564411 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b26dae9694224e04f0cdc3841408c63" containerName="startup-monitor"
Feb 16 17:03:28.564676 master-0 kubenswrapper[15493]: I0216 17:03:28.564635 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b26dae9694224e04f0cdc3841408c63" containerName="startup-monitor"
Feb 16 17:03:28.565331 master-0 kubenswrapper[15493]: I0216 17:03:28.565298 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:03:28.567720 master-0 kubenswrapper[15493]: I0216 17:03:28.567657 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-864ddd5f56-pm4rt"]
Feb 16 17:03:28.568835 master-0 kubenswrapper[15493]: I0216 17:03:28.568793 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.570827 master-0 kubenswrapper[15493]: I0216 17:03:28.570778 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 16 17:03:28.571127 master-0 kubenswrapper[15493]: I0216 17:03:28.571080 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 17:03:28.571948 master-0 kubenswrapper[15493]: I0216 17:03:28.571887 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 16 17:03:28.572049 master-0 kubenswrapper[15493]: I0216 17:03:28.571957 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 16 17:03:28.572049 master-0 kubenswrapper[15493]: I0216 17:03:28.572000 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 17:03:28.572222 master-0 kubenswrapper[15493]: I0216 17:03:28.572191 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 17:03:28.572222 master-0 kubenswrapper[15493]: I0216 17:03:28.572211 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 17:03:28.573521 master-0 kubenswrapper[15493]: I0216 17:03:28.573480 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"]
Feb 16 17:03:28.574440 master-0 kubenswrapper[15493]: I0216 17:03:28.574391 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:03:28.577053 master-0 kubenswrapper[15493]: I0216 17:03:28.576556 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"]
Feb 16 17:03:28.577605 master-0 kubenswrapper[15493]: I0216 17:03:28.577346 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.581348 master-0 kubenswrapper[15493]: I0216 17:03:28.579677 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 17:03:28.584305 master-0 kubenswrapper[15493]: I0216 17:03:28.584016 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-56w7x"]
Feb 16 17:03:28.584430 master-0 kubenswrapper[15493]: I0216 17:03:28.584348 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.584430 master-0 kubenswrapper[15493]: I0216 17:03:28.584393 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869cd4c8-bf00-427c-84f0-5c39517f2d27-config-volume\") pod \"collect-profiles-29521020-mtpvf\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.584430 master-0 kubenswrapper[15493]: I0216 17:03:28.584418 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:03:28.584609 master-0 kubenswrapper[15493]: I0216 17:03:28.584489 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:03:28.584609 master-0 kubenswrapper[15493]: I0216 17:03:28.584515 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.584609 master-0 kubenswrapper[15493]: I0216 17:03:28.584542 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69n2d\" (UniqueName: \"kubernetes.io/projected/869cd4c8-bf00-427c-84f0-5c39517f2d27-kube-api-access-69n2d\") pod \"collect-profiles-29521020-mtpvf\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.584609 master-0 kubenswrapper[15493]: I0216 17:03:28.584569 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869cd4c8-bf00-427c-84f0-5c39517f2d27-secret-volume\") pod \"collect-profiles-29521020-mtpvf\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.584609 master-0 kubenswrapper[15493]: I0216 17:03:28.584593 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.584609 master-0 kubenswrapper[15493]: I0216 17:03:28.584614 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.585050 master-0 kubenswrapper[15493]: I0216 17:03:28.584638 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.585521 master-0 kubenswrapper[15493]: I0216 17:03:28.585249 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.590714 master-0 kubenswrapper[15493]: I0216 17:03:28.588897 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"]
Feb 16 17:03:28.600105 master-0 kubenswrapper[15493]: I0216 17:03:28.594581 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-whlrc"
Feb 16 17:03:28.600105 master-0 kubenswrapper[15493]: I0216 17:03:28.594767 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Feb 16 17:03:28.602961 master-0 kubenswrapper[15493]: I0216 17:03:28.602678 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"]
Feb 16 17:03:28.609659 master-0 kubenswrapper[15493]: I0216 17:03:28.609614 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"]
Feb 16 17:03:28.686340 master-0 kubenswrapper[15493]: I0216 17:03:28.686305 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/805822e0-7af3-4f6f-9411-6256367d1fe1-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.686605 master-0 kubenswrapper[15493]: I0216 17:03:28.686590 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:03:28.686854 master-0 kubenswrapper[15493]: I0216 17:03:28.686736 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/805822e0-7af3-4f6f-9411-6256367d1fe1-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.686908 master-0 kubenswrapper[15493]: I0216 17:03:28.686881 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h8cb\" (UniqueName: \"kubernetes.io/projected/805822e0-7af3-4f6f-9411-6256367d1fe1-kube-api-access-2h8cb\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.686958 master-0 kubenswrapper[15493]: I0216 17:03:28.686915 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.686991 master-0 kubenswrapper[15493]: I0216 17:03:28.686966 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69n2d\" (UniqueName: \"kubernetes.io/projected/869cd4c8-bf00-427c-84f0-5c39517f2d27-kube-api-access-69n2d\") pod \"collect-profiles-29521020-mtpvf\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.687131 master-0 kubenswrapper[15493]: I0216 17:03:28.687089 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869cd4c8-bf00-427c-84f0-5c39517f2d27-secret-volume\") pod \"collect-profiles-29521020-mtpvf\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.687666 master-0 kubenswrapper[15493]: I0216 17:03:28.687647 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.687722 master-0 kubenswrapper[15493]: I0216 17:03:28.687706 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.687757 master-0 kubenswrapper[15493]: I0216 17:03:28.687734 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.687827 master-0 kubenswrapper[15493]: I0216 17:03:28.687811 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.687894 master-0 kubenswrapper[15493]: I0216 17:03:28.687871 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869cd4c8-bf00-427c-84f0-5c39517f2d27-config-volume\") pod \"collect-profiles-29521020-mtpvf\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.688127 master-0 kubenswrapper[15493]: I0216 17:03:28.688093 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.691315 master-0 kubenswrapper[15493]: I0216 17:03:28.690526 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869cd4c8-bf00-427c-84f0-5c39517f2d27-secret-volume\") pod \"collect-profiles-29521020-mtpvf\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.691315 master-0 kubenswrapper[15493]: I0216 17:03:28.690651 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:03:28.692349 master-0 kubenswrapper[15493]: I0216 17:03:28.692275 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:03:28.692464 master-0 kubenswrapper[15493]: I0216 17:03:28.692437 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/805822e0-7af3-4f6f-9411-6256367d1fe1-ready\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.692556 master-0 kubenswrapper[15493]: I0216 17:03:28.692316 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.693410 master-0 kubenswrapper[15493]: I0216 17:03:28.693370 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869cd4c8-bf00-427c-84f0-5c39517f2d27-config-volume\") pod \"collect-profiles-29521020-mtpvf\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.694907 master-0 kubenswrapper[15493]: I0216 17:03:28.694873 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.695646 master-0 kubenswrapper[15493]: I0216 17:03:28.695614 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.708050 master-0 kubenswrapper[15493]: I0216 17:03:28.707990 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69n2d\" (UniqueName: \"kubernetes.io/projected/869cd4c8-bf00-427c-84f0-5c39517f2d27-kube-api-access-69n2d\") pod \"collect-profiles-29521020-mtpvf\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.709785 master-0 kubenswrapper[15493]: I0216 17:03:28.709740 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.713590 master-0 kubenswrapper[15493]: I0216 17:03:28.713550 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:03:28.794288 master-0 kubenswrapper[15493]: I0216 17:03:28.794224 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/805822e0-7af3-4f6f-9411-6256367d1fe1-ready\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.794532 master-0 kubenswrapper[15493]: I0216 17:03:28.794321 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/805822e0-7af3-4f6f-9411-6256367d1fe1-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.794589 master-0 kubenswrapper[15493]: I0216 17:03:28.794502 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/805822e0-7af3-4f6f-9411-6256367d1fe1-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.794589 master-0 kubenswrapper[15493]: I0216 17:03:28.794534 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/805822e0-7af3-4f6f-9411-6256367d1fe1-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.794689 master-0 kubenswrapper[15493]: I0216 17:03:28.794599 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h8cb\" (UniqueName: \"kubernetes.io/projected/805822e0-7af3-4f6f-9411-6256367d1fe1-kube-api-access-2h8cb\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.795085 master-0 kubenswrapper[15493]: I0216 17:03:28.795014 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/805822e0-7af3-4f6f-9411-6256367d1fe1-ready\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.795628 master-0 kubenswrapper[15493]: I0216 17:03:28.795605 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/805822e0-7af3-4f6f-9411-6256367d1fe1-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.822982 master-0 kubenswrapper[15493]: I0216 17:03:28.822841 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h8cb\" (UniqueName: \"kubernetes.io/projected/805822e0-7af3-4f6f-9411-6256367d1fe1-kube-api-access-2h8cb\") pod \"cni-sysctl-allowlist-ds-56w7x\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:28.905710 master-0 kubenswrapper[15493]: I0216 17:03:28.905604 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:03:28.924454 master-0 kubenswrapper[15493]: I0216 17:03:28.924363 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:03:28.946397 master-0 kubenswrapper[15493]: I0216 17:03:28.946341 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:03:28.946970 master-0 kubenswrapper[15493]: W0216 17:03:28.946888 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0b1ebd3_1068_4624_9b6d_3e9f45ded76a.slice/crio-e08b701fed0575b5cfce5e17183b651b4abeb7caf807109b42ab6751b5fb7c83 WatchSource:0}: Error finding container e08b701fed0575b5cfce5e17183b651b4abeb7caf807109b42ab6751b5fb7c83: Status 404 returned error can't find the container with id e08b701fed0575b5cfce5e17183b651b4abeb7caf807109b42ab6751b5fb7c83
Feb 16 17:03:28.966435 master-0 kubenswrapper[15493]: I0216 17:03:28.966391 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"
Feb 16 17:03:28.988603 master-0 kubenswrapper[15493]: I0216 17:03:28.988527 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:03:29.055612 master-0 kubenswrapper[15493]: I0216 17:03:29.055571 15493 scope.go:117] "RemoveContainer" containerID="d371c36e93606a6be62a05a6e38d4e0131418dc0eaea65b286323f5ff81944ef"
Feb 16 17:03:29.056134 master-0 kubenswrapper[15493]: I0216 17:03:29.056087 15493 scope.go:117] "RemoveContainer" containerID="357614bbfff97cddae555d53246293360c153a7a8cf94c75b2ec64088b6da4e3"
Feb 16 17:03:29.056201 master-0 kubenswrapper[15493]: I0216 17:03:29.056161 15493 scope.go:117] "RemoveContainer" containerID="3c107344ed7506b61b6ef1a5ca57eedb7069a294a5c75b6d6c41f82bdc6b94c0"
Feb 16 17:03:29.152405 master-0 kubenswrapper[15493]: I0216 17:03:29.152343 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" event={"ID":"805822e0-7af3-4f6f-9411-6256367d1fe1","Type":"ContainerStarted","Data":"3b923bceeabb7fd6fb00136efa5784eaa214ef15dbf2ad14f2588c616f89178e"}
Feb 16 17:03:29.154912 master-0 kubenswrapper[15493]: I0216 17:03:29.154865 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerID="4a3f00327e72eb182ca9d24f6345e55e740d3bc96d139c82176b1ad867248cfd" exitCode=0
Feb 16 17:03:29.155043 master-0 kubenswrapper[15493]: I0216 17:03:29.154956 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" event={"ID":"ab6e5720-2c30-4962-9c67-89f1607d137f","Type":"ContainerDied","Data":"4a3f00327e72eb182ca9d24f6345e55e740d3bc96d139c82176b1ad867248cfd"}
Feb 16 17:03:29.158240 master-0 kubenswrapper[15493]: I0216 17:03:29.158184 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" event={"ID":"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a","Type":"ContainerStarted","Data":"e08b701fed0575b5cfce5e17183b651b4abeb7caf807109b42ab6751b5fb7c83"}
Feb 16 17:03:29.375324 master-0 kubenswrapper[15493]: I0216 17:03:29.375278 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"]
Feb 16 17:03:29.381706 master-0 kubenswrapper[15493]: W0216 17:03:29.381623 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod544c6815_81d7_422a_9e4a_5fcbfabe8da8.slice/crio-eba68c9bf88d743d6ece6b208ccd3b9a0e79a6a0196ae5625b7efa1402a6a6be WatchSource:0}: Error finding container eba68c9bf88d743d6ece6b208ccd3b9a0e79a6a0196ae5625b7efa1402a6a6be: Status 404 returned error can't find the container with id eba68c9bf88d743d6ece6b208ccd3b9a0e79a6a0196ae5625b7efa1402a6a6be
Feb 16 17:03:29.458577 master-0 kubenswrapper[15493]: I0216 17:03:29.457744 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"]
Feb 16 17:03:29.461971 master-0 kubenswrapper[15493]: I0216 17:03:29.460833 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"]
Feb 16 17:03:29.461971 master-0 kubenswrapper[15493]: W0216 17:03:29.461684 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ff68421_1741_41c1_93d5_5c722dfd295e.slice/crio-698c7fdef468285c41eae0e5106cddd22418fabdcf6e6399b61b0f90a693d483 WatchSource:0}: Error finding container 698c7fdef468285c41eae0e5106cddd22418fabdcf6e6399b61b0f90a693d483: Status 404 returned error can't find the container with id 698c7fdef468285c41eae0e5106cddd22418fabdcf6e6399b61b0f90a693d483
Feb 16 17:03:29.476868 master-0 kubenswrapper[15493]: W0216 17:03:29.476831 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod869cd4c8_bf00_427c_84f0_5c39517f2d27.slice/crio-35e21db89571d4e27728fda75d12eb7742df73eff57b954273665ce938da0f7e WatchSource:0}: Error finding container 35e21db89571d4e27728fda75d12eb7742df73eff57b954273665ce938da0f7e: Status 404 returned error can't find the container with id 35e21db89571d4e27728fda75d12eb7742df73eff57b954273665ce938da0f7e
Feb 16 17:03:30.055375 master-0 kubenswrapper[15493]: I0216 17:03:30.055301 15493 scope.go:117] "RemoveContainer" containerID="385e9821f6c9a23f9fd968b241bfd034b46ca6792159d22bc7ee49611730173e"
Feb 16 17:03:30.055590 master-0 kubenswrapper[15493]: I0216 17:03:30.055381 15493 scope.go:117] "RemoveContainer" containerID="1110ab99d776a3d68ff736e046cffc3c590f742752e3080d6ce45308e9fb665f"
Feb 16 17:03:30.169000 master-0 kubenswrapper[15493]: I0216 17:03:30.167330 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" event={"ID":"544c6815-81d7-422a-9e4a-5fcbfabe8da8","Type":"ContainerStarted","Data":"eba68c9bf88d743d6ece6b208ccd3b9a0e79a6a0196ae5625b7efa1402a6a6be"}
Feb 16 17:03:30.171128 master-0 kubenswrapper[15493]: I0216 17:03:30.169892 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" event={"ID":"869cd4c8-bf00-427c-84f0-5c39517f2d27","Type":"ContainerStarted","Data":"3e83cc2b9d59101af52883eb85a000c7d49ff8d2c7af0b49ff2cc02ae1e2e3af"}
Feb 16 17:03:30.171128 master-0 kubenswrapper[15493]: I0216 17:03:30.169966 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" event={"ID":"869cd4c8-bf00-427c-84f0-5c39517f2d27","Type":"ContainerStarted","Data":"35e21db89571d4e27728fda75d12eb7742df73eff57b954273665ce938da0f7e"}
Feb 16 17:03:30.175011 master-0 kubenswrapper[15493]: I0216 17:03:30.174837 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv_d020c902-2adb-4919-8dd9-0c2109830580/kube-apiserver-operator/1.log"
Feb 16 17:03:30.175011 master-0 kubenswrapper[15493]: I0216 17:03:30.174930 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerStarted","Data":"58ea7e7597fd84e0ed74580e261b589711b1b87586741887fc2593fecc63262c"}
Feb 16 17:03:30.177561 master-0 kubenswrapper[15493]: I0216 17:03:30.177530 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerStarted","Data":"5d5a5bf8842891b7c39944888923705610466aef86b87837b0015a5336b1bc63"}
Feb 16 17:03:30.178943 master-0 kubenswrapper[15493]: I0216 17:03:30.178905 15493 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9_edbaac23-11f0-4bc7-a7ce-b593c774c0fa/openshift-controller-manager-operator/1.log" Feb 16 17:03:30.179008 master-0 kubenswrapper[15493]: I0216 17:03:30.178982 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerStarted","Data":"8683030b69ae10922c055c637d624b11c398b29a58d7bc6013013bff4035d97c"} Feb 16 17:03:30.180053 master-0 kubenswrapper[15493]: I0216 17:03:30.180017 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" event={"ID":"805822e0-7af3-4f6f-9411-6256367d1fe1","Type":"ContainerStarted","Data":"064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c"} Feb 16 17:03:30.180171 master-0 kubenswrapper[15493]: I0216 17:03:30.180143 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" Feb 16 17:03:30.184064 master-0 kubenswrapper[15493]: I0216 17:03:30.184029 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" event={"ID":"0ff68421-1741-41c1-93d5-5c722dfd295e","Type":"ContainerStarted","Data":"a3effd6b237c4893b5a519d4b8fb5bea7b5a96d22cc8bc7d99b660e1adef87fb"} Feb 16 17:03:30.184140 master-0 kubenswrapper[15493]: I0216 17:03:30.184064 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" event={"ID":"0ff68421-1741-41c1-93d5-5c722dfd295e","Type":"ContainerStarted","Data":"698c7fdef468285c41eae0e5106cddd22418fabdcf6e6399b61b0f90a693d483"} Feb 16 17:03:30.198616 master-0 kubenswrapper[15493]: I0216 17:03:30.198542 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" podStartSLOduration=210.198497018 podStartE2EDuration="3m30.198497018s" podCreationTimestamp="2026-02-16 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:30.18695176 +0000 UTC m=+89.337124850" watchObservedRunningTime="2026-02-16 17:03:30.198497018 +0000 UTC m=+89.348670098" Feb 16 17:03:30.237197 master-0 kubenswrapper[15493]: I0216 17:03:30.226496 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" Feb 16 17:03:30.248748 master-0 kubenswrapper[15493]: I0216 17:03:30.247349 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podStartSLOduration=244.247329768 podStartE2EDuration="4m4.247329768s" podCreationTimestamp="2026-02-16 16:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:30.243428471 +0000 UTC m=+89.393601551" watchObservedRunningTime="2026-02-16 17:03:30.247329768 +0000 UTC m=+89.397502838" Feb 16 17:03:30.278748 master-0 kubenswrapper[15493]: I0216 17:03:30.278694 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" podStartSLOduration=2.278679521 podStartE2EDuration="2.278679521s" 
podCreationTimestamp="2026-02-16 17:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:30.275673186 +0000 UTC m=+89.425846266" watchObservedRunningTime="2026-02-16 17:03:30.278679521 +0000 UTC m=+89.428852601" Feb 16 17:03:30.401875 master-0 kubenswrapper[15493]: I0216 17:03:30.401810 15493 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-56w7x"] Feb 16 17:03:30.849023 master-0 kubenswrapper[15493]: I0216 17:03:30.848972 15493 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:03:31.060652 master-0 kubenswrapper[15493]: I0216 17:03:31.060612 15493 scope.go:117] "RemoveContainer" containerID="5d742ee8f3ff4d437ae51d12ae2509ff6a091914c30d3aa55203939de62735fd" Feb 16 17:03:31.060652 master-0 kubenswrapper[15493]: I0216 17:03:31.060656 15493 scope.go:117] "RemoveContainer" containerID="db8e6dde9089415ec50ea395cc6048bd2122d36d369cf40adfb691513d4759ff" Feb 16 17:03:31.066130 master-0 kubenswrapper[15493]: I0216 17:03:31.066087 15493 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:03:31.191155 master-0 kubenswrapper[15493]: I0216 17:03:31.191013 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-6d4655d9cf-qhn9v_737fcc7d-d850-4352-9f17-383c85d5bc28/openshift-apiserver-operator/1.log" Feb 16 17:03:31.191155 master-0 kubenswrapper[15493]: I0216 17:03:31.191097 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"8193ed81fd324895280bca9de8fc440bcb1f14bfadc76963961c38a2ed7e361f"} Feb 16 17:03:31.192448 master-0 kubenswrapper[15493]: I0216 17:03:31.192410 15493 generic.go:334] "Generic (PLEG): container finished" podID="869cd4c8-bf00-427c-84f0-5c39517f2d27" containerID="3e83cc2b9d59101af52883eb85a000c7d49ff8d2c7af0b49ff2cc02ae1e2e3af" exitCode=0 Feb 16 17:03:31.192524 master-0 kubenswrapper[15493]: I0216 17:03:31.192454 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" event={"ID":"869cd4c8-bf00-427c-84f0-5c39517f2d27","Type":"ContainerDied","Data":"3e83cc2b9d59101af52883eb85a000c7d49ff8d2c7af0b49ff2cc02ae1e2e3af"} Feb 16 17:03:31.194722 master-0 kubenswrapper[15493]: I0216 17:03:31.194686 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf_eaf7edff-0a89-4ac0-b9dd-511e098b5434/kube-scheduler-operator-container/1.log" Feb 16 17:03:31.194812 master-0 kubenswrapper[15493]: I0216 17:03:31.194786 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerStarted","Data":"6417c6ab09f776c6cd5527ce7ed0c693dfb9915491fd7480b2522f60cdf3a710"} Feb 16 17:03:31.237892 master-0 kubenswrapper[15493]: I0216 17:03:31.237827 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qqvg4"] Feb 16 17:03:31.239284 master-0 kubenswrapper[15493]: I0216 17:03:31.239247 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:31.258833 master-0 kubenswrapper[15493]: I0216 17:03:31.258604 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 17:03:31.258833 master-0 kubenswrapper[15493]: I0216 17:03:31.258699 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 17:03:31.259364 master-0 kubenswrapper[15493]: I0216 17:03:31.259251 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 17:03:31.266821 master-0 kubenswrapper[15493]: I0216 17:03:31.266781 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qqvg4"] Feb 16 17:03:31.348400 master-0 kubenswrapper[15493]: I0216 17:03:31.348051 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:31.348812 master-0 kubenswrapper[15493]: I0216 17:03:31.348639 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:31.450124 master-0 kubenswrapper[15493]: I0216 17:03:31.449946 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:31.450124 master-0 kubenswrapper[15493]: I0216 17:03:31.450063 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:31.450333 master-0 kubenswrapper[15493]: E0216 17:03:31.450192 15493 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 17:03:31.450333 master-0 kubenswrapper[15493]: E0216 17:03:31.450250 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:31.950227654 +0000 UTC m=+91.100400724 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : secret "canary-serving-cert" not found Feb 16 17:03:31.469541 master-0 kubenswrapper[15493]: I0216 17:03:31.468692 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:31.956427 master-0 kubenswrapper[15493]: I0216 17:03:31.956311 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:31.956614 master-0 kubenswrapper[15493]: E0216 17:03:31.956581 15493 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 17:03:31.956700 master-0 kubenswrapper[15493]: E0216 17:03:31.956673 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:32.956646902 +0000 UTC m=+92.106820022 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : secret "canary-serving-cert" not found Feb 16 17:03:32.202264 master-0 kubenswrapper[15493]: I0216 17:03:32.202215 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator-operator_kube-storage-version-migrator-operator-cd5474998-829l6_8e623376-9e14-4341-9dcf-7a7c218b6f9f/kube-storage-version-migrator-operator/1.log" Feb 16 17:03:32.202780 master-0 kubenswrapper[15493]: I0216 17:03:32.202299 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerStarted","Data":"aacb043d9bcb661a21903cf65162c48b6cdd1e9e3c2a3bfe75bff6657fb6b31a"} Feb 16 17:03:32.204725 master-0 kubenswrapper[15493]: I0216 17:03:32.204666 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" event={"ID":"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a","Type":"ContainerStarted","Data":"14a331c8b27c1595f8ba85399e67145e282c1cfaee241158ac6fd3dfd5ac6cfd"} Feb 16 17:03:32.209810 master-0 kubenswrapper[15493]: I0216 17:03:32.209725 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5_29402454-a920-471e-895e-764235d16eb4/service-ca-operator/1.log" Feb 16 17:03:32.209988 master-0 kubenswrapper[15493]: I0216 17:03:32.209953 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" podUID="805822e0-7af3-4f6f-9411-6256367d1fe1" containerName="kube-multus-additional-cni-plugins" 
containerID="cri-o://064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" gracePeriod=30 Feb 16 17:03:32.210242 master-0 kubenswrapper[15493]: I0216 17:03:32.210212 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerStarted","Data":"3a45efa110b434ba3eaa66dc54c0ad512d611956acbf896c50fe7ddda2a43beb"} Feb 16 17:03:32.282267 master-0 kubenswrapper[15493]: I0216 17:03:32.282193 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podStartSLOduration=158.70293995 podStartE2EDuration="2m41.282170893s" podCreationTimestamp="2026-02-16 17:00:51 +0000 UTC" firstStartedPulling="2026-02-16 17:03:28.949403329 +0000 UTC m=+88.099576439" lastFinishedPulling="2026-02-16 17:03:31.528634302 +0000 UTC m=+90.678807382" observedRunningTime="2026-02-16 17:03:32.28123768 +0000 UTC m=+91.431410760" watchObservedRunningTime="2026-02-16 17:03:32.282170893 +0000 UTC m=+91.432343973" Feb 16 17:03:32.694265 master-0 kubenswrapper[15493]: I0216 17:03:32.694235 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" Feb 16 17:03:32.783945 master-0 kubenswrapper[15493]: I0216 17:03:32.776107 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69n2d\" (UniqueName: \"kubernetes.io/projected/869cd4c8-bf00-427c-84f0-5c39517f2d27-kube-api-access-69n2d\") pod \"869cd4c8-bf00-427c-84f0-5c39517f2d27\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " Feb 16 17:03:32.783945 master-0 kubenswrapper[15493]: I0216 17:03:32.776210 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869cd4c8-bf00-427c-84f0-5c39517f2d27-secret-volume\") pod \"869cd4c8-bf00-427c-84f0-5c39517f2d27\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " Feb 16 17:03:32.783945 master-0 kubenswrapper[15493]: I0216 17:03:32.776278 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869cd4c8-bf00-427c-84f0-5c39517f2d27-config-volume\") pod \"869cd4c8-bf00-427c-84f0-5c39517f2d27\" (UID: \"869cd4c8-bf00-427c-84f0-5c39517f2d27\") " Feb 16 17:03:32.783945 master-0 kubenswrapper[15493]: I0216 17:03:32.776951 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869cd4c8-bf00-427c-84f0-5c39517f2d27-config-volume" (OuterVolumeSpecName: "config-volume") pod "869cd4c8-bf00-427c-84f0-5c39517f2d27" (UID: "869cd4c8-bf00-427c-84f0-5c39517f2d27"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:03:32.783945 master-0 kubenswrapper[15493]: I0216 17:03:32.780529 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/869cd4c8-bf00-427c-84f0-5c39517f2d27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "869cd4c8-bf00-427c-84f0-5c39517f2d27" (UID: "869cd4c8-bf00-427c-84f0-5c39517f2d27"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:03:32.800947 master-0 kubenswrapper[15493]: I0216 17:03:32.793092 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869cd4c8-bf00-427c-84f0-5c39517f2d27-kube-api-access-69n2d" (OuterVolumeSpecName: "kube-api-access-69n2d") pod "869cd4c8-bf00-427c-84f0-5c39517f2d27" (UID: "869cd4c8-bf00-427c-84f0-5c39517f2d27"). InnerVolumeSpecName "kube-api-access-69n2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:03:32.877739 master-0 kubenswrapper[15493]: I0216 17:03:32.877683 15493 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/869cd4c8-bf00-427c-84f0-5c39517f2d27-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 17:03:32.877739 master-0 kubenswrapper[15493]: I0216 17:03:32.877729 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69n2d\" (UniqueName: \"kubernetes.io/projected/869cd4c8-bf00-427c-84f0-5c39517f2d27-kube-api-access-69n2d\") on node \"master-0\" DevicePath \"\"" Feb 16 17:03:32.877739 master-0 kubenswrapper[15493]: I0216 17:03:32.877744 15493 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/869cd4c8-bf00-427c-84f0-5c39517f2d27-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 17:03:32.925388 master-0 kubenswrapper[15493]: I0216 17:03:32.925266 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:03:32.928114 master-0 kubenswrapper[15493]: I0216 17:03:32.928069 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:03:32.978599 master-0 kubenswrapper[15493]: I0216 17:03:32.978540 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:32.978841 master-0 kubenswrapper[15493]: E0216 17:03:32.978708 15493 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 17:03:32.978841 master-0 kubenswrapper[15493]: E0216 17:03:32.978803 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:34.978780753 +0000 UTC m=+94.128953823 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : secret "canary-serving-cert" not found Feb 16 17:03:33.217374 master-0 kubenswrapper[15493]: I0216 17:03:33.217244 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" event={"ID":"544c6815-81d7-422a-9e4a-5fcbfabe8da8","Type":"ContainerStarted","Data":"4b036f06ab4b931412c4f49164cb23548d94c16700a2c873ec169ead4f77b13a"} Feb 16 17:03:33.217895 master-0 kubenswrapper[15493]: I0216 17:03:33.217653 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:03:33.219907 master-0 kubenswrapper[15493]: I0216 17:03:33.219881 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" Feb 16 17:03:33.220286 master-0 kubenswrapper[15493]: I0216 17:03:33.220253 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" event={"ID":"869cd4c8-bf00-427c-84f0-5c39517f2d27","Type":"ContainerDied","Data":"35e21db89571d4e27728fda75d12eb7742df73eff57b954273665ce938da0f7e"} Feb 16 17:03:33.220286 master-0 kubenswrapper[15493]: I0216 17:03:33.220278 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35e21db89571d4e27728fda75d12eb7742df73eff57b954273665ce938da0f7e" Feb 16 17:03:33.220385 master-0 kubenswrapper[15493]: I0216 17:03:33.220292 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:03:33.223581 master-0 kubenswrapper[15493]: I0216 17:03:33.223550 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:03:33.229206 master-0 kubenswrapper[15493]: I0216 17:03:33.229170 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:03:33.264891 master-0 kubenswrapper[15493]: I0216 17:03:33.264808 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podStartSLOduration=155.961995252 podStartE2EDuration="2m39.264783877s" podCreationTimestamp="2026-02-16 17:00:54 +0000 UTC" firstStartedPulling="2026-02-16 17:03:29.383589014 +0000 UTC m=+88.533762084" lastFinishedPulling="2026-02-16 17:03:32.686377619 +0000 UTC m=+91.836550709" observedRunningTime="2026-02-16 17:03:33.246304665 +0000 UTC m=+92.396477755" watchObservedRunningTime="2026-02-16 17:03:33.264783877 +0000 UTC m=+92.414956957" Feb 16 17:03:33.290190 master-0 kubenswrapper[15493]: I0216 17:03:33.289617 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-989b889c9-l264c"] Feb 16 17:03:33.290190 master-0 kubenswrapper[15493]: E0216 17:03:33.289860 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869cd4c8-bf00-427c-84f0-5c39517f2d27" containerName="collect-profiles" Feb 16 17:03:33.290190 master-0 kubenswrapper[15493]: I0216 17:03:33.289873 15493 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="869cd4c8-bf00-427c-84f0-5c39517f2d27" containerName="collect-profiles" Feb 16 17:03:33.290190 master-0 kubenswrapper[15493]: I0216 17:03:33.290006 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="869cd4c8-bf00-427c-84f0-5c39517f2d27" containerName="collect-profiles" Feb 16 17:03:33.290461 master-0 kubenswrapper[15493]: I0216 17:03:33.290400 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.298028 master-0 kubenswrapper[15493]: I0216 17:03:33.296661 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 17:03:33.298028 master-0 kubenswrapper[15493]: I0216 17:03:33.296871 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 17:03:33.298028 master-0 kubenswrapper[15493]: I0216 17:03:33.297034 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 17:03:33.298028 master-0 kubenswrapper[15493]: I0216 17:03:33.297054 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 17:03:33.298028 master-0 kubenswrapper[15493]: I0216 17:03:33.297166 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 17:03:33.298028 master-0 kubenswrapper[15493]: I0216 17:03:33.297282 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 17:03:33.298028 master-0 kubenswrapper[15493]: I0216 17:03:33.297432 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nkhdh" Feb 16 17:03:33.298497 master-0 kubenswrapper[15493]: I0216 17:03:33.298060 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 17:03:33.298497 master-0 kubenswrapper[15493]: I0216 17:03:33.298180 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 17:03:33.298497 master-0 kubenswrapper[15493]: I0216 17:03:33.298254 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 17:03:33.298497 master-0 kubenswrapper[15493]: I0216 17:03:33.298334 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 17:03:33.307081 master-0 kubenswrapper[15493]: I0216 17:03:33.305347 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 17:03:33.310691 master-0 kubenswrapper[15493]: I0216 17:03:33.310465 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 17:03:33.311450 master-0 kubenswrapper[15493]: I0216 17:03:33.311417 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-989b889c9-l264c"] Feb 16 17:03:33.311753 master-0 kubenswrapper[15493]: I0216 17:03:33.311737 15493 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 17:03:33.383282 master-0 kubenswrapper[15493]: I0216 17:03:33.383219 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-router-certs\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383282 master-0 kubenswrapper[15493]: I0216 17:03:33.383276 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-dir\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383531 master-0 kubenswrapper[15493]: I0216 17:03:33.383339 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-session\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383531 master-0 kubenswrapper[15493]: I0216 17:03:33.383381 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383531 master-0 kubenswrapper[15493]: I0216 17:03:33.383423 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383531 master-0 kubenswrapper[15493]: I0216 17:03:33.383441 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk5ht\" (UniqueName: \"kubernetes.io/projected/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-kube-api-access-qk5ht\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383531 master-0 kubenswrapper[15493]: I0216 17:03:33.383463 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-serving-cert\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383531 master-0 kubenswrapper[15493]: I0216 17:03:33.383490 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-service-ca\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383531 master-0 kubenswrapper[15493]: I0216 17:03:33.383524 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383725 master-0 kubenswrapper[15493]: I0216 17:03:33.383551 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-error\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383725 master-0 kubenswrapper[15493]: I0216 17:03:33.383574 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-login\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383725 master-0 kubenswrapper[15493]: I0216 17:03:33.383604 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-policies\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.383725 master-0 kubenswrapper[15493]: I0216 17:03:33.383649 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.485366 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.485469 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-error\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " 
pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.485502 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-login\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486103 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-policies\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486152 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486178 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-router-certs\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486206 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-dir\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486229 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486268 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-session\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486302 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-989b889c9-l264c\" (UID: 
\"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486331 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk5ht\" (UniqueName: \"kubernetes.io/projected/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-kube-api-access-qk5ht\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486357 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-serving-cert\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.486706 master-0 kubenswrapper[15493]: I0216 17:03:33.486388 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-service-ca\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.487658 master-0 kubenswrapper[15493]: I0216 17:03:33.487628 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-policies\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.487730 master-0 kubenswrapper[15493]: I0216 17:03:33.487638 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-dir\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.487730 master-0 kubenswrapper[15493]: I0216 17:03:33.487692 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-service-ca\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.487840 master-0 kubenswrapper[15493]: E0216 17:03:33.487765 15493 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Feb 16 17:03:33.487889 master-0 kubenswrapper[15493]: E0216 17:03:33.487854 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-cliconfig podName:5985bd5d-ee56-4995-a4d3-cb4fda84ef31 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:33.987829798 +0000 UTC m=+93.138002938 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-cliconfig") pod "oauth-openshift-989b889c9-l264c" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31") : configmap "v4-0-config-system-cliconfig" not found Feb 16 17:03:33.489974 master-0 kubenswrapper[15493]: I0216 17:03:33.488417 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-error\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.489974 master-0 kubenswrapper[15493]: I0216 17:03:33.488592 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.489974 master-0 kubenswrapper[15493]: I0216 17:03:33.489586 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-login\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.489974 master-0 kubenswrapper[15493]: I0216 17:03:33.489778 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-session\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.490376 master-0 kubenswrapper[15493]: I0216 17:03:33.490337 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.491711 master-0 kubenswrapper[15493]: I0216 17:03:33.490658 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-serving-cert\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.491711 master-0 kubenswrapper[15493]: I0216 17:03:33.490867 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.492281 master-0 kubenswrapper[15493]: I0216 
17:03:33.492141 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-router-certs\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.506661 master-0 kubenswrapper[15493]: I0216 17:03:33.506606 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk5ht\" (UniqueName: \"kubernetes.io/projected/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-kube-api-access-qk5ht\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.812292 master-0 kubenswrapper[15493]: I0216 17:03:33.812229 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"] Feb 16 17:03:33.813152 master-0 kubenswrapper[15493]: I0216 17:03:33.813130 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:33.814626 master-0 kubenswrapper[15493]: I0216 17:03:33.814571 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmzhq" Feb 16 17:03:33.815403 master-0 kubenswrapper[15493]: I0216 17:03:33.815382 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 17:03:33.815532 master-0 kubenswrapper[15493]: I0216 17:03:33.815500 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 17:03:33.816827 master-0 kubenswrapper[15493]: I0216 17:03:33.816762 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 17:03:33.829806 master-0 kubenswrapper[15493]: I0216 17:03:33.829078 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"] Feb 16 17:03:33.892893 master-0 kubenswrapper[15493]: I0216 17:03:33.892674 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:33.892893 master-0 kubenswrapper[15493]: I0216 17:03:33.892826 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:33.892893 master-0 kubenswrapper[15493]: I0216 17:03:33.892897 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: 
\"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:33.893200 master-0 kubenswrapper[15493]: I0216 17:03:33.892993 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:33.994554 master-0 kubenswrapper[15493]: I0216 17:03:33.994489 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:33.994554 master-0 kubenswrapper[15493]: I0216 17:03:33.994554 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:33.994827 master-0 kubenswrapper[15493]: I0216 17:03:33.994585 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:33.994827 master-0 kubenswrapper[15493]: I0216 17:03:33.994638 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:03:33.995251 master-0 kubenswrapper[15493]: I0216 17:03:33.995215 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:33.995319 master-0 kubenswrapper[15493]: E0216 17:03:33.995271 15493 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 17:03:33.995432 master-0 kubenswrapper[15493]: E0216 17:03:33.995404 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:34.495375735 +0000 UTC m=+93.645548835 (durationBeforeRetry 500ms). 
Feb 16 17:03:33.995492 master-0 kubenswrapper[15493]: I0216 17:03:33.995444 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-cliconfig\") pod \"oauth-openshift-989b889c9-l264c\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " pod="openshift-authentication/oauth-openshift-989b889c9-l264c"
Feb 16 17:03:33.995982 master-0 kubenswrapper[15493]: I0216 17:03:33.995895 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:03:33.999345 master-0 kubenswrapper[15493]: I0216 17:03:33.999295 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:03:34.028689 master-0 kubenswrapper[15493]: I0216 17:03:34.028630 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:03:34.221547 master-0 kubenswrapper[15493]: I0216 17:03:34.221379 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-989b889c9-l264c"
Feb 16 17:03:34.504061 master-0 kubenswrapper[15493]: I0216 17:03:34.504001 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:03:34.504263 master-0 kubenswrapper[15493]: E0216 17:03:34.504166 15493 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Feb 16 17:03:34.504263 master-0 kubenswrapper[15493]: E0216 17:03:34.504223 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:35.504205755 +0000 UTC m=+94.654378815 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : secret "prometheus-operator-tls" not found
Feb 16 17:03:34.698088 master-0 kubenswrapper[15493]: I0216 17:03:34.698026 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-989b889c9-l264c"]
Feb 16 17:03:35.010933 master-0 kubenswrapper[15493]: I0216 17:03:35.010831 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:03:35.011162 master-0 kubenswrapper[15493]: E0216 17:03:35.011052 15493 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Feb 16 17:03:35.011162 master-0 kubenswrapper[15493]: E0216 17:03:35.011138 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:39.011121856 +0000 UTC m=+98.161294926 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : secret "canary-serving-cert" not found
Feb 16 17:03:35.232684 master-0 kubenswrapper[15493]: I0216 17:03:35.232594 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-989b889c9-l264c" event={"ID":"5985bd5d-ee56-4995-a4d3-cb4fda84ef31","Type":"ContainerStarted","Data":"811ecd50606dddc2e6a8c6214542c8af48017f90ad09bb05c9bf8405f0e0473a"}
Feb 16 17:03:35.517404 master-0 kubenswrapper[15493]: I0216 17:03:35.517343 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:03:35.517633 master-0 kubenswrapper[15493]: E0216 17:03:35.517495 15493 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Feb 16 17:03:35.517633 master-0 kubenswrapper[15493]: E0216 17:03:35.517584 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:37.517564355 +0000 UTC m=+96.667737435 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : secret "prometheus-operator-tls" not found
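
Note the durationBeforeRetry values for the prometheus-operator-tls mount so far: 500ms at 17:03:33, 1s at 17:03:34, 2s at 17:03:35; the entries that follow continue the sequence with 4s, 8s, 16s, and finally 32s. The kubelet's pending-operations table doubles the wait after every failed attempt of the same operation, which is why the log gets quieter over time even though nothing has been fixed yet. A short sketch that reproduces the schedule read off this log; the 500ms start and the doubling are observed here, while the ceiling is an assumption for the sketch (this log never gets past 32s):

    // Reproduces the nestedpendingoperations retry schedule visible above:
    // 500ms, 1s, 2s, 4s, 8s, 16s, 32s.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // first durationBeforeRetry in the log
        maxDelay := 2 * time.Minute     // assumed ceiling, not observed in this log
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d failed: durationBeforeRetry %v\n", attempt, delay)
            delay *= 2 // doubling matches the observed sequence
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
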
Feb 16 17:03:37.055445 master-0 kubenswrapper[15493]: I0216 17:03:37.055385 15493 scope.go:117] "RemoveContainer" containerID="eecec016977d0933f995cec094efa7991dea3fd076989159458e21d05f18d3bb"
Feb 16 17:03:37.246896 master-0 kubenswrapper[15493]: I0216 17:03:37.246827 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-989b889c9-l264c" event={"ID":"5985bd5d-ee56-4995-a4d3-cb4fda84ef31","Type":"ContainerStarted","Data":"fe5deb4be7c9585b3362450f2ce8ffdcd9334e9025f031fcee47ce8437c2a1fb"}
Feb 16 17:03:37.247476 master-0 kubenswrapper[15493]: I0216 17:03:37.247416 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-989b889c9-l264c"
Feb 16 17:03:37.251435 master-0 kubenswrapper[15493]: I0216 17:03:37.251406 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-989b889c9-l264c"
Feb 16 17:03:37.273028 master-0 kubenswrapper[15493]: I0216 17:03:37.271188 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-989b889c9-l264c" podStartSLOduration=2.434570234 podStartE2EDuration="4.271172317s" podCreationTimestamp="2026-02-16 17:03:33 +0000 UTC" firstStartedPulling="2026-02-16 17:03:34.714755904 +0000 UTC m=+93.864928984" lastFinishedPulling="2026-02-16 17:03:36.551357997 +0000 UTC m=+95.701531067" observedRunningTime="2026-02-16 17:03:37.270965542 +0000 UTC m=+96.421138632" watchObservedRunningTime="2026-02-16 17:03:37.271172317 +0000 UTC m=+96.421345387"
Feb 16 17:03:37.547703 master-0 kubenswrapper[15493]: I0216 17:03:37.547636 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:03:37.547942 master-0 kubenswrapper[15493]: E0216 17:03:37.547797 15493 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Feb 16 17:03:37.547942 master-0 kubenswrapper[15493]: E0216 17:03:37.547894 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:41.547870708 +0000 UTC m=+100.698043788 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : secret "prometheus-operator-tls" not found Feb 16 17:03:38.259357 master-0 kubenswrapper[15493]: I0216 17:03:38.259273 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k_442600dc-09b2-4fee-9f89-777296b2ee40/kube-controller-manager-operator/1.log" Feb 16 17:03:38.260395 master-0 kubenswrapper[15493]: I0216 17:03:38.259487 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerStarted","Data":"a48464245a4d736ce11fabe0fbd5a7d553c8cfc8ff89e653f1d99b8b3cc32538"} Feb 16 17:03:38.992873 master-0 kubenswrapper[15493]: E0216 17:03:38.992776 15493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:03:38.995081 master-0 kubenswrapper[15493]: E0216 17:03:38.995013 15493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:03:38.997479 master-0 kubenswrapper[15493]: E0216 17:03:38.997395 15493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:03:38.997590 master-0 kubenswrapper[15493]: E0216 17:03:38.997502 15493 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" podUID="805822e0-7af3-4f6f-9411-6256367d1fe1" containerName="kube-multus-additional-cni-plugins" Feb 16 17:03:39.076809 master-0 kubenswrapper[15493]: I0216 17:03:39.076736 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:39.077133 master-0 kubenswrapper[15493]: E0216 17:03:39.077064 15493 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 17:03:39.077214 master-0 kubenswrapper[15493]: E0216 17:03:39.077183 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:03:47.077155404 +0000 UTC m=+106.227328514 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : secret "canary-serving-cert" not found Feb 16 17:03:41.613285 master-0 kubenswrapper[15493]: I0216 17:03:41.613172 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:41.613765 master-0 kubenswrapper[15493]: E0216 17:03:41.613437 15493 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 17:03:41.613765 master-0 kubenswrapper[15493]: E0216 17:03:41.613549 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:03:49.613528757 +0000 UTC m=+108.763701827 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : secret "prometheus-operator-tls" not found Feb 16 17:03:45.502684 master-0 kubenswrapper[15493]: I0216 17:03:45.502594 15493 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:03:45.503707 master-0 kubenswrapper[15493]: I0216 17:03:45.502673 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:03:47.102122 master-0 kubenswrapper[15493]: I0216 17:03:47.101938 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:03:47.102835 master-0 kubenswrapper[15493]: E0216 17:03:47.102109 15493 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 17:03:47.102835 master-0 kubenswrapper[15493]: E0216 17:03:47.102218 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:04:03.10216744 +0000 UTC m=+122.252340510 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : secret "canary-serving-cert" not found Feb 16 17:03:48.992212 master-0 kubenswrapper[15493]: E0216 17:03:48.992090 15493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:03:48.993616 master-0 kubenswrapper[15493]: E0216 17:03:48.993535 15493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:03:48.995802 master-0 kubenswrapper[15493]: E0216 17:03:48.995726 15493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:03:48.995895 master-0 kubenswrapper[15493]: E0216 17:03:48.995812 15493 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" podUID="805822e0-7af3-4f6f-9411-6256367d1fe1" containerName="kube-multus-additional-cni-plugins" Feb 16 17:03:49.640652 master-0 kubenswrapper[15493]: I0216 17:03:49.640572 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:03:49.640971 master-0 kubenswrapper[15493]: E0216 17:03:49.640789 15493 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 17:03:49.640971 master-0 kubenswrapper[15493]: E0216 17:03:49.640911 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:04:05.640879571 +0000 UTC m=+124.791052681 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : secret "prometheus-operator-tls" not found Feb 16 17:03:51.922620 master-0 kubenswrapper[15493]: I0216 17:03:51.922524 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 17:03:51.923952 master-0 kubenswrapper[15493]: I0216 17:03:51.923887 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:51.926088 master-0 kubenswrapper[15493]: I0216 17:03:51.926053 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 17:03:51.927024 master-0 kubenswrapper[15493]: I0216 17:03:51.926973 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-z9qtm" Feb 16 17:03:51.938451 master-0 kubenswrapper[15493]: I0216 17:03:51.938383 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 17:03:51.973609 master-0 kubenswrapper[15493]: I0216 17:03:51.973552 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-var-lock\") pod \"installer-2-master-0\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:51.973609 master-0 kubenswrapper[15493]: I0216 17:03:51.973609 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:51.973832 master-0 kubenswrapper[15493]: I0216 17:03:51.973647 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0555924f-d581-40d7-9a45-aee3270a1383-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:52.074667 master-0 kubenswrapper[15493]: I0216 17:03:52.074588 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:52.074667 master-0 kubenswrapper[15493]: I0216 17:03:52.074658 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0555924f-d581-40d7-9a45-aee3270a1383-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:52.074955 master-0 kubenswrapper[15493]: I0216 17:03:52.074765 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:52.074955 master-0 kubenswrapper[15493]: I0216 17:03:52.074792 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-var-lock\") pod \"installer-2-master-0\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:52.075157 master-0 kubenswrapper[15493]: I0216 
17:03:52.075074 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-var-lock\") pod \"installer-2-master-0\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:52.090987 master-0 kubenswrapper[15493]: I0216 17:03:52.090934 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0555924f-d581-40d7-9a45-aee3270a1383-kube-api-access\") pod \"installer-2-master-0\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:52.244419 master-0 kubenswrapper[15493]: I0216 17:03:52.244338 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:03:52.622648 master-0 kubenswrapper[15493]: I0216 17:03:52.622595 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 17:03:52.624063 master-0 kubenswrapper[15493]: W0216 17:03:52.624021 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0555924f_d581_40d7_9a45_aee3270a1383.slice/crio-b0ff5ffb566d083d38aec96238565927a8dce1d1cae54b78321629759a4a8fb4 WatchSource:0}: Error finding container b0ff5ffb566d083d38aec96238565927a8dce1d1cae54b78321629759a4a8fb4: Status 404 returned error can't find the container with id b0ff5ffb566d083d38aec96238565927a8dce1d1cae54b78321629759a4a8fb4 Feb 16 17:03:53.364271 master-0 kubenswrapper[15493]: I0216 17:03:53.364206 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"0555924f-d581-40d7-9a45-aee3270a1383","Type":"ContainerStarted","Data":"6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb"} Feb 16 17:03:53.364271 master-0 kubenswrapper[15493]: I0216 17:03:53.364271 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"0555924f-d581-40d7-9a45-aee3270a1383","Type":"ContainerStarted","Data":"b0ff5ffb566d083d38aec96238565927a8dce1d1cae54b78321629759a4a8fb4"} Feb 16 17:03:53.385612 master-0 kubenswrapper[15493]: I0216 17:03:53.385533 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.385514902 podStartE2EDuration="2.385514902s" podCreationTimestamp="2026-02-16 17:03:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:03:53.382870926 +0000 UTC m=+112.533044006" watchObservedRunningTime="2026-02-16 17:03:53.385514902 +0000 UTC m=+112.535687982" Feb 16 17:03:56.215151 master-0 kubenswrapper[15493]: I0216 17:03:56.215090 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-64vhv"] Feb 16 17:03:56.217281 master-0 kubenswrapper[15493]: I0216 17:03:56.217221 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.219670 master-0 kubenswrapper[15493]: I0216 17:03:56.219623 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 17:03:56.219787 master-0 kubenswrapper[15493]: I0216 17:03:56.219691 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 17:03:56.223356 master-0 kubenswrapper[15493]: I0216 17:03:56.223310 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 17:03:56.224600 master-0 kubenswrapper[15493]: I0216 17:03:56.224331 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 17:03:56.234447 master-0 kubenswrapper[15493]: I0216 17:03:56.231793 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 17:03:56.238992 master-0 kubenswrapper[15493]: I0216 17:03:56.238904 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-64vhv"] Feb 16 17:03:56.339821 master-0 kubenswrapper[15493]: I0216 17:03:56.339755 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.340052 master-0 kubenswrapper[15493]: I0216 17:03:56.339847 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.340052 master-0 kubenswrapper[15493]: I0216 17:03:56.339878 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.340052 master-0 kubenswrapper[15493]: I0216 17:03:56.339904 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.441836 master-0 kubenswrapper[15493]: I0216 17:03:56.441729 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.441836 master-0 kubenswrapper[15493]: I0216 
17:03:56.441842 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.442330 master-0 kubenswrapper[15493]: I0216 17:03:56.441945 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.443196 master-0 kubenswrapper[15493]: I0216 17:03:56.442364 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.443462 master-0 kubenswrapper[15493]: I0216 17:03:56.443411 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.444049 master-0 kubenswrapper[15493]: I0216 17:03:56.443992 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.446961 master-0 kubenswrapper[15493]: I0216 17:03:56.446850 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.462134 master-0 kubenswrapper[15493]: I0216 17:03:56.462066 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.536286 master-0 kubenswrapper[15493]: I0216 17:03:56.536211 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:03:56.942247 master-0 kubenswrapper[15493]: I0216 17:03:56.942184 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-7777d5cc66-64vhv"] Feb 16 17:03:57.397501 master-0 kubenswrapper[15493]: I0216 17:03:57.397436 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerStarted","Data":"c1b991a154ee88061ab8f48fa2b955ab50378aa767da0efb2312b73d366997a0"} Feb 16 17:03:58.404746 master-0 kubenswrapper[15493]: I0216 17:03:58.404674 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-4jz2t_ab6e5720-2c30-4962-9c67-89f1607d137f/multus-admission-controller/0.log" Feb 16 17:03:58.404746 master-0 kubenswrapper[15493]: I0216 17:03:58.404732 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerID="8596ea544be0a448a19f843f8fb2963353f75aac2b39d7b1fc12540532ae6bdc" exitCode=137 Feb 16 17:03:58.405312 master-0 kubenswrapper[15493]: I0216 17:03:58.404762 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" event={"ID":"ab6e5720-2c30-4962-9c67-89f1607d137f","Type":"ContainerDied","Data":"8596ea544be0a448a19f843f8fb2963353f75aac2b39d7b1fc12540532ae6bdc"} Feb 16 17:03:58.589917 master-0 kubenswrapper[15493]: I0216 17:03:58.589462 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-4jz2t_ab6e5720-2c30-4962-9c67-89f1607d137f/multus-admission-controller/0.log" Feb 16 17:03:58.589917 master-0 kubenswrapper[15493]: I0216 17:03:58.589533 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" Feb 16 17:03:58.673399 master-0 kubenswrapper[15493]: I0216 17:03:58.673328 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") pod \"ab6e5720-2c30-4962-9c67-89f1607d137f\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " Feb 16 17:03:58.673627 master-0 kubenswrapper[15493]: I0216 17:03:58.673430 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") pod \"ab6e5720-2c30-4962-9c67-89f1607d137f\" (UID: \"ab6e5720-2c30-4962-9c67-89f1607d137f\") " Feb 16 17:03:58.676700 master-0 kubenswrapper[15493]: I0216 17:03:58.676645 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b" (OuterVolumeSpecName: "kube-api-access-xmk2b") pod "ab6e5720-2c30-4962-9c67-89f1607d137f" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f"). InnerVolumeSpecName "kube-api-access-xmk2b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:03:58.677944 master-0 kubenswrapper[15493]: I0216 17:03:58.677877 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "ab6e5720-2c30-4962-9c67-89f1607d137f" (UID: "ab6e5720-2c30-4962-9c67-89f1607d137f"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:03:58.775022 master-0 kubenswrapper[15493]: I0216 17:03:58.774953 15493 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ab6e5720-2c30-4962-9c67-89f1607d137f-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:03:58.775022 master-0 kubenswrapper[15493]: I0216 17:03:58.774995 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmk2b\" (UniqueName: \"kubernetes.io/projected/ab6e5720-2c30-4962-9c67-89f1607d137f-kube-api-access-xmk2b\") on node \"master-0\" DevicePath \"\"" Feb 16 17:03:58.991232 master-0 kubenswrapper[15493]: E0216 17:03:58.991077 15493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:03:58.992943 master-0 kubenswrapper[15493]: E0216 17:03:58.992850 15493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:03:58.994176 master-0 kubenswrapper[15493]: E0216 17:03:58.994093 15493 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:03:58.994277 master-0 kubenswrapper[15493]: E0216 17:03:58.994175 15493 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" podUID="805822e0-7af3-4f6f-9411-6256367d1fe1" containerName="kube-multus-additional-cni-plugins" Feb 16 17:03:59.413620 master-0 kubenswrapper[15493]: I0216 17:03:59.413555 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7c64d55f8-4jz2t_ab6e5720-2c30-4962-9c67-89f1607d137f/multus-admission-controller/0.log" Feb 16 17:03:59.413620 master-0 kubenswrapper[15493]: I0216 17:03:59.413621 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7c64d55f8-4jz2t" event={"ID":"ab6e5720-2c30-4962-9c67-89f1607d137f","Type":"ContainerDied","Data":"2ab21ee08c6858b29e0d5402811c6a5058510ebfc99fdee4dceca48abf0ebb37"} Feb 16 17:03:59.418318 master-0 kubenswrapper[15493]: I0216 17:03:59.413658 15493 scope.go:117] "RemoveContainer" containerID="4a3f00327e72eb182ca9d24f6345e55e740d3bc96d139c82176b1ad867248cfd" Feb 16 17:03:59.418318 
Feb 16 17:03:59.447463 master-0 kubenswrapper[15493]: I0216 17:03:59.447398 15493 scope.go:117] "RemoveContainer" containerID="8596ea544be0a448a19f843f8fb2963353f75aac2b39d7b1fc12540532ae6bdc"
Feb 16 17:03:59.453533 master-0 kubenswrapper[15493]: I0216 17:03:59.453473 15493 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"]
Feb 16 17:03:59.459995 master-0 kubenswrapper[15493]: I0216 17:03:59.459289 15493 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-7c64d55f8-4jz2t"]
Feb 16 17:04:00.423368 master-0 kubenswrapper[15493]: I0216 17:04:00.423335 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-7777d5cc66-64vhv_0517b180-00ee-47fe-a8e7-36a3931b7e72/console-operator/0.log"
Feb 16 17:04:00.423894 master-0 kubenswrapper[15493]: I0216 17:04:00.423871 15493 generic.go:334] "Generic (PLEG): container finished" podID="0517b180-00ee-47fe-a8e7-36a3931b7e72" containerID="3b23c492a9d79f60e024c95a7b20104a673b11686c6238f7821d38b576015cd8" exitCode=255
Feb 16 17:04:00.424266 master-0 kubenswrapper[15493]: I0216 17:04:00.424005 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerDied","Data":"3b23c492a9d79f60e024c95a7b20104a673b11686c6238f7821d38b576015cd8"}
Feb 16 17:04:00.426041 master-0 kubenswrapper[15493]: I0216 17:04:00.424649 15493 scope.go:117] "RemoveContainer" containerID="3b23c492a9d79f60e024c95a7b20104a673b11686c6238f7821d38b576015cd8"
Feb 16 17:04:01.070035 master-0 kubenswrapper[15493]: I0216 17:04:01.069879 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab6e5720-2c30-4962-9c67-89f1607d137f" path="/var/lib/kubelet/pods/ab6e5720-2c30-4962-9c67-89f1607d137f/volumes"
Feb 16 17:04:01.432865 master-0 kubenswrapper[15493]: I0216 17:04:01.432783 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-7777d5cc66-64vhv_0517b180-00ee-47fe-a8e7-36a3931b7e72/console-operator/1.log"
Feb 16 17:04:01.433695 master-0 kubenswrapper[15493]: I0216 17:04:01.433681 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-7777d5cc66-64vhv_0517b180-00ee-47fe-a8e7-36a3931b7e72/console-operator/0.log"
Feb 16 17:04:01.433799 master-0 kubenswrapper[15493]: I0216 17:04:01.433781 15493 generic.go:334] "Generic (PLEG): container finished" podID="0517b180-00ee-47fe-a8e7-36a3931b7e72" containerID="91710f3ffed9e691771d9c2df1c7410a137cbac4dc029e63df70fc8620c4721e" exitCode=255
Feb 16 17:04:01.433877 master-0 kubenswrapper[15493]: I0216 17:04:01.433862 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerDied","Data":"91710f3ffed9e691771d9c2df1c7410a137cbac4dc029e63df70fc8620c4721e"}
Feb 16 17:04:01.433980 master-0 kubenswrapper[15493]: I0216 17:04:01.433964 15493 scope.go:117] "RemoveContainer" containerID="3b23c492a9d79f60e024c95a7b20104a673b11686c6238f7821d38b576015cd8"
Feb 16 17:04:01.434410 master-0 kubenswrapper[15493]: I0216 17:04:01.434379
15493 scope.go:117] "RemoveContainer" containerID="91710f3ffed9e691771d9c2df1c7410a137cbac4dc029e63df70fc8620c4721e" Feb 16 17:04:01.434703 master-0 kubenswrapper[15493]: E0216 17:04:01.434666 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-7777d5cc66-64vhv_openshift-console-operator(0517b180-00ee-47fe-a8e7-36a3931b7e72)\"" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:04:02.308709 master-0 kubenswrapper[15493]: I0216 17:04:02.308609 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-56w7x_805822e0-7af3-4f6f-9411-6256367d1fe1/kube-multus-additional-cni-plugins/0.log" Feb 16 17:04:02.308709 master-0 kubenswrapper[15493]: I0216 17:04:02.308706 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" Feb 16 17:04:02.428890 master-0 kubenswrapper[15493]: I0216 17:04:02.428801 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/805822e0-7af3-4f6f-9411-6256367d1fe1-cni-sysctl-allowlist\") pod \"805822e0-7af3-4f6f-9411-6256367d1fe1\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " Feb 16 17:04:02.429173 master-0 kubenswrapper[15493]: I0216 17:04:02.428978 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/805822e0-7af3-4f6f-9411-6256367d1fe1-tuning-conf-dir\") pod \"805822e0-7af3-4f6f-9411-6256367d1fe1\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " Feb 16 17:04:02.429173 master-0 kubenswrapper[15493]: I0216 17:04:02.429027 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h8cb\" (UniqueName: \"kubernetes.io/projected/805822e0-7af3-4f6f-9411-6256367d1fe1-kube-api-access-2h8cb\") pod \"805822e0-7af3-4f6f-9411-6256367d1fe1\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " Feb 16 17:04:02.429173 master-0 kubenswrapper[15493]: I0216 17:04:02.429137 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/805822e0-7af3-4f6f-9411-6256367d1fe1-ready\") pod \"805822e0-7af3-4f6f-9411-6256367d1fe1\" (UID: \"805822e0-7af3-4f6f-9411-6256367d1fe1\") " Feb 16 17:04:02.429275 master-0 kubenswrapper[15493]: I0216 17:04:02.429185 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/805822e0-7af3-4f6f-9411-6256367d1fe1-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "805822e0-7af3-4f6f-9411-6256367d1fe1" (UID: "805822e0-7af3-4f6f-9411-6256367d1fe1"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:04:02.429386 master-0 kubenswrapper[15493]: I0216 17:04:02.429340 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805822e0-7af3-4f6f-9411-6256367d1fe1-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "805822e0-7af3-4f6f-9411-6256367d1fe1" (UID: "805822e0-7af3-4f6f-9411-6256367d1fe1"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:04:02.429470 master-0 kubenswrapper[15493]: I0216 17:04:02.429436 15493 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/805822e0-7af3-4f6f-9411-6256367d1fe1-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 16 17:04:02.429470 master-0 kubenswrapper[15493]: I0216 17:04:02.429463 15493 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/805822e0-7af3-4f6f-9411-6256367d1fe1-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:04:02.429910 master-0 kubenswrapper[15493]: I0216 17:04:02.429845 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/805822e0-7af3-4f6f-9411-6256367d1fe1-ready" (OuterVolumeSpecName: "ready") pod "805822e0-7af3-4f6f-9411-6256367d1fe1" (UID: "805822e0-7af3-4f6f-9411-6256367d1fe1"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:04:02.432003 master-0 kubenswrapper[15493]: I0216 17:04:02.431952 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805822e0-7af3-4f6f-9411-6256367d1fe1-kube-api-access-2h8cb" (OuterVolumeSpecName: "kube-api-access-2h8cb") pod "805822e0-7af3-4f6f-9411-6256367d1fe1" (UID: "805822e0-7af3-4f6f-9411-6256367d1fe1"). InnerVolumeSpecName "kube-api-access-2h8cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:04:02.440793 master-0 kubenswrapper[15493]: I0216 17:04:02.440766 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-56w7x_805822e0-7af3-4f6f-9411-6256367d1fe1/kube-multus-additional-cni-plugins/0.log" Feb 16 17:04:02.441202 master-0 kubenswrapper[15493]: I0216 17:04:02.440804 15493 generic.go:334] "Generic (PLEG): container finished" podID="805822e0-7af3-4f6f-9411-6256367d1fe1" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c" exitCode=137 Feb 16 17:04:02.441202 master-0 kubenswrapper[15493]: I0216 17:04:02.440863 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" event={"ID":"805822e0-7af3-4f6f-9411-6256367d1fe1","Type":"ContainerDied","Data":"064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c"} Feb 16 17:04:02.441202 master-0 kubenswrapper[15493]: I0216 17:04:02.440874 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x"
Feb 16 17:04:02.441202 master-0 kubenswrapper[15493]: I0216 17:04:02.440892 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-56w7x" event={"ID":"805822e0-7af3-4f6f-9411-6256367d1fe1","Type":"ContainerDied","Data":"3b923bceeabb7fd6fb00136efa5784eaa214ef15dbf2ad14f2588c616f89178e"}
Feb 16 17:04:02.441202 master-0 kubenswrapper[15493]: I0216 17:04:02.440931 15493 scope.go:117] "RemoveContainer" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c"
Feb 16 17:04:02.443020 master-0 kubenswrapper[15493]: I0216 17:04:02.442999 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-7777d5cc66-64vhv_0517b180-00ee-47fe-a8e7-36a3931b7e72/console-operator/1.log"
Feb 16 17:04:02.443266 master-0 kubenswrapper[15493]: I0216 17:04:02.443241 15493 scope.go:117] "RemoveContainer" containerID="91710f3ffed9e691771d9c2df1c7410a137cbac4dc029e63df70fc8620c4721e"
Feb 16 17:04:02.443422 master-0 kubenswrapper[15493]: E0216 17:04:02.443402 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-7777d5cc66-64vhv_openshift-console-operator(0517b180-00ee-47fe-a8e7-36a3931b7e72)\"" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72"
Feb 16 17:04:02.465647 master-0 kubenswrapper[15493]: I0216 17:04:02.465595 15493 scope.go:117] "RemoveContainer" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c"
Feb 16 17:04:02.467599 master-0 kubenswrapper[15493]: E0216 17:04:02.467545 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c\": container with ID starting with 064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c not found: ID does not exist" containerID="064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c"
Feb 16 17:04:02.467672 master-0 kubenswrapper[15493]: I0216 17:04:02.467594 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c"} err="failed to get container status \"064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c\": rpc error: code = NotFound desc = could not find container \"064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c\": container with ID starting with 064f75d782b38e4273c0a34dc592630dbb3e05b3ca6a7375e49fdd5d6b2afc5c not found: ID does not exist"
Feb 16 17:04:02.530640 master-0 kubenswrapper[15493]: I0216 17:04:02.530564 15493 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/805822e0-7af3-4f6f-9411-6256367d1fe1-ready\") on node \"master-0\" DevicePath \"\""
Feb 16 17:04:02.530640 master-0 kubenswrapper[15493]: I0216 17:04:02.530616 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h8cb\" (UniqueName: \"kubernetes.io/projected/805822e0-7af3-4f6f-9411-6256367d1fe1-kube-api-access-2h8cb\") on node \"master-0\" DevicePath \"\""
Feb 16 17:04:02.787837 master-0 kubenswrapper[15493]: I0216 17:04:02.787717 15493 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-56w7x"]
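
console-operator has now exited twice with code 255 (17:04:00 and 17:04:01), so the pod workers stop restarting it on every sync and apply CrashLoopBackOff, visible above as "back-off 10s restarting failed container". The back-off grows on each further crash; the 10s figure comes from this log, while the doubling factor and the 5m ceiling used below are the commonly documented kubelet defaults, assumed here rather than observed:

    // CrashLoopBackOff schedule sketch: 10s, 20s, 40s, ... up to a cap.
    // Only the initial 10s appears in this log; factor and cap are assumptions.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        backoff := 10 * time.Second
        const maxBackoff = 5 * time.Minute
        for crash := 1; crash <= 7; crash++ {
            fmt.Printf("crash %d: back-off %v before next restart\n", crash, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }
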
pods=["openshift-multus/cni-sysctl-allowlist-ds-56w7x"] Feb 16 17:04:02.791804 master-0 kubenswrapper[15493]: I0216 17:04:02.791748 15493 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-56w7x"] Feb 16 17:04:03.067134 master-0 kubenswrapper[15493]: I0216 17:04:03.066802 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805822e0-7af3-4f6f-9411-6256367d1fe1" path="/var/lib/kubelet/pods/805822e0-7af3-4f6f-9411-6256367d1fe1/volumes" Feb 16 17:04:03.139197 master-0 kubenswrapper[15493]: I0216 17:04:03.139127 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:04:03.139430 master-0 kubenswrapper[15493]: E0216 17:04:03.139327 15493 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Feb 16 17:04:03.139503 master-0 kubenswrapper[15493]: E0216 17:04:03.139439 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:04:35.13941445 +0000 UTC m=+154.289587570 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : secret "canary-serving-cert" not found Feb 16 17:04:05.680840 master-0 kubenswrapper[15493]: I0216 17:04:05.680777 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:04:05.681360 master-0 kubenswrapper[15493]: E0216 17:04:05.680909 15493 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Feb 16 17:04:05.681360 master-0 kubenswrapper[15493]: E0216 17:04:05.680996 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:04:37.680978902 +0000 UTC m=+156.831151982 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : secret "prometheus-operator-tls" not found Feb 16 17:04:06.536796 master-0 kubenswrapper[15493]: I0216 17:04:06.536666 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:04:06.537174 master-0 kubenswrapper[15493]: I0216 17:04:06.536820 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:04:06.537890 master-0 kubenswrapper[15493]: I0216 17:04:06.537839 15493 scope.go:117] "RemoveContainer" containerID="91710f3ffed9e691771d9c2df1c7410a137cbac4dc029e63df70fc8620c4721e" Feb 16 17:04:06.538262 master-0 kubenswrapper[15493]: E0216 17:04:06.538218 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-7777d5cc66-64vhv_openshift-console-operator(0517b180-00ee-47fe-a8e7-36a3931b7e72)\"" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:04:09.915418 master-0 kubenswrapper[15493]: I0216 17:04:09.915354 15493 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 17:04:09.916256 master-0 kubenswrapper[15493]: I0216 17:04:09.915628 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="0555924f-d581-40d7-9a45-aee3270a1383" containerName="installer" containerID="cri-o://6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb" gracePeriod=30 Feb 16 17:04:13.327856 master-0 kubenswrapper[15493]: I0216 17:04:13.326892 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 16 17:04:13.328973 master-0 kubenswrapper[15493]: E0216 17:04:13.328210 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805822e0-7af3-4f6f-9411-6256367d1fe1" containerName="kube-multus-additional-cni-plugins" Feb 16 17:04:13.328973 master-0 kubenswrapper[15493]: I0216 17:04:13.328241 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="805822e0-7af3-4f6f-9411-6256367d1fe1" containerName="kube-multus-additional-cni-plugins" Feb 16 17:04:13.328973 master-0 kubenswrapper[15493]: E0216 17:04:13.328268 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerName="kube-rbac-proxy" Feb 16 17:04:13.328973 master-0 kubenswrapper[15493]: I0216 17:04:13.328281 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerName="kube-rbac-proxy" Feb 16 17:04:13.328973 master-0 kubenswrapper[15493]: E0216 17:04:13.328311 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerName="multus-admission-controller" Feb 16 17:04:13.328973 master-0 kubenswrapper[15493]: I0216 17:04:13.328323 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerName="multus-admission-controller" Feb 16 17:04:13.328973 master-0 
kubenswrapper[15493]: I0216 17:04:13.328508 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerName="multus-admission-controller" Feb 16 17:04:13.328973 master-0 kubenswrapper[15493]: I0216 17:04:13.328553 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab6e5720-2c30-4962-9c67-89f1607d137f" containerName="kube-rbac-proxy" Feb 16 17:04:13.328973 master-0 kubenswrapper[15493]: I0216 17:04:13.328601 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="805822e0-7af3-4f6f-9411-6256367d1fe1" containerName="kube-multus-additional-cni-plugins" Feb 16 17:04:13.329794 master-0 kubenswrapper[15493]: I0216 17:04:13.329292 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.334980 master-0 kubenswrapper[15493]: I0216 17:04:13.334873 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 16 17:04:13.398760 master-0 kubenswrapper[15493]: I0216 17:04:13.398654 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.399007 master-0 kubenswrapper[15493]: I0216 17:04:13.398814 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e206017-9a4e-4db1-9f43-60db756a022d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.399007 master-0 kubenswrapper[15493]: I0216 17:04:13.398909 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-var-lock\") pod \"installer-3-master-0\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.500297 master-0 kubenswrapper[15493]: I0216 17:04:13.500231 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.500297 master-0 kubenswrapper[15493]: I0216 17:04:13.500290 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e206017-9a4e-4db1-9f43-60db756a022d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.500669 master-0 kubenswrapper[15493]: I0216 17:04:13.500334 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-var-lock\") pod \"installer-3-master-0\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.500669 master-0 kubenswrapper[15493]: I0216 17:04:13.500432 
15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-var-lock\") pod \"installer-3-master-0\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.501126 master-0 kubenswrapper[15493]: I0216 17:04:13.501064 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.515818 master-0 kubenswrapper[15493]: I0216 17:04:13.515770 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e206017-9a4e-4db1-9f43-60db756a022d-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") " pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:13.657002 master-0 kubenswrapper[15493]: I0216 17:04:13.656809 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:04:14.113280 master-0 kubenswrapper[15493]: I0216 17:04:14.113211 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Feb 16 17:04:14.119170 master-0 kubenswrapper[15493]: W0216 17:04:14.119135 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4e206017_9a4e_4db1_9f43_60db756a022d.slice/crio-9b4561b52e2ec188d328992c37415bb43461b89a3939aa9d311729544dd52a0c WatchSource:0}: Error finding container 9b4561b52e2ec188d328992c37415bb43461b89a3939aa9d311729544dd52a0c: Status 404 returned error can't find the container with id 9b4561b52e2ec188d328992c37415bb43461b89a3939aa9d311729544dd52a0c Feb 16 17:04:14.526153 master-0 kubenswrapper[15493]: I0216 17:04:14.526091 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4e206017-9a4e-4db1-9f43-60db756a022d","Type":"ContainerStarted","Data":"37e6e23249cca416f9a227102f928e3e16fa858be68bdd8d60d856c492484f5f"} Feb 16 17:04:14.526153 master-0 kubenswrapper[15493]: I0216 17:04:14.526145 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4e206017-9a4e-4db1-9f43-60db756a022d","Type":"ContainerStarted","Data":"9b4561b52e2ec188d328992c37415bb43461b89a3939aa9d311729544dd52a0c"} Feb 16 17:04:14.557860 master-0 kubenswrapper[15493]: I0216 17:04:14.557769 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=1.557751103 podStartE2EDuration="1.557751103s" podCreationTimestamp="2026-02-16 17:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:04:14.545049955 +0000 UTC m=+133.695223025" watchObservedRunningTime="2026-02-16 17:04:14.557751103 +0000 UTC m=+133.707924173" Feb 16 17:04:15.502349 master-0 kubenswrapper[15493]: I0216 17:04:15.502285 15493 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:04:15.502349 master-0 kubenswrapper[15493]: I0216 17:04:15.502335 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:04:18.055428 master-0 kubenswrapper[15493]: I0216 17:04:18.055360 15493 scope.go:117] "RemoveContainer" containerID="91710f3ffed9e691771d9c2df1c7410a137cbac4dc029e63df70fc8620c4721e" Feb 16 17:04:18.554195 master-0 kubenswrapper[15493]: I0216 17:04:18.554130 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-7777d5cc66-64vhv_0517b180-00ee-47fe-a8e7-36a3931b7e72/console-operator/1.log" Feb 16 17:04:18.554418 master-0 kubenswrapper[15493]: I0216 17:04:18.554198 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerStarted","Data":"5280a9e5fa47ae992e070a527ddf28952c9ffc9ee73154415f61b91183dd9a89"} Feb 16 17:04:18.554715 master-0 kubenswrapper[15493]: I0216 17:04:18.554659 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:04:18.574550 master-0 kubenswrapper[15493]: I0216 17:04:18.574474 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podStartSLOduration=20.179880569 podStartE2EDuration="22.57445551s" podCreationTimestamp="2026-02-16 17:03:56 +0000 UTC" firstStartedPulling="2026-02-16 17:03:56.945723907 +0000 UTC m=+116.095896977" lastFinishedPulling="2026-02-16 17:03:59.340298848 +0000 UTC m=+118.490471918" observedRunningTime="2026-02-16 17:04:18.570340617 +0000 UTC m=+137.720513717" watchObservedRunningTime="2026-02-16 17:04:18.57445551 +0000 UTC m=+137.724628580" Feb 16 17:04:18.766555 master-0 kubenswrapper[15493]: I0216 17:04:18.766454 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:04:18.960465 master-0 kubenswrapper[15493]: I0216 17:04:18.960350 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-dcd7b7d95-dhhfh"] Feb 16 17:04:18.961312 master-0 kubenswrapper[15493]: I0216 17:04:18.961276 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:04:18.963407 master-0 kubenswrapper[15493]: I0216 17:04:18.963380 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 17:04:18.965713 master-0 kubenswrapper[15493]: I0216 17:04:18.965650 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 17:04:18.970709 master-0 kubenswrapper[15493]: I0216 17:04:18.970657 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-dcd7b7d95-dhhfh"] Feb 16 17:04:19.077805 master-0 kubenswrapper[15493]: I0216 17:04:19.077734 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:04:19.179257 master-0 kubenswrapper[15493]: I0216 17:04:19.179198 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:04:19.196717 master-0 kubenswrapper[15493]: I0216 17:04:19.196667 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:04:19.301552 master-0 kubenswrapper[15493]: I0216 17:04:19.301473 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:04:19.718253 master-0 kubenswrapper[15493]: I0216 17:04:19.718171 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-dcd7b7d95-dhhfh"] Feb 16 17:04:19.724998 master-0 kubenswrapper[15493]: W0216 17:04:19.724905 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08a90dc5_b0d8_4aad_a002_736492b6c1a9.slice/crio-364452d959dddcc724ef913b661912f6596fd419075309c870bdb2698dcecc99 WatchSource:0}: Error finding container 364452d959dddcc724ef913b661912f6596fd419075309c870bdb2698dcecc99: Status 404 returned error can't find the container with id 364452d959dddcc724ef913b661912f6596fd419075309c870bdb2698dcecc99 Feb 16 17:04:20.569887 master-0 kubenswrapper[15493]: I0216 17:04:20.569819 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-dhhfh" event={"ID":"08a90dc5-b0d8-4aad-a002-736492b6c1a9","Type":"ContainerStarted","Data":"364452d959dddcc724ef913b661912f6596fd419075309c870bdb2698dcecc99"} Feb 16 17:04:24.112109 master-0 kubenswrapper[15493]: I0216 17:04:24.112053 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_0555924f-d581-40d7-9a45-aee3270a1383/installer/0.log" Feb 16 17:04:24.112854 master-0 kubenswrapper[15493]: I0216 17:04:24.112155 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:04:24.256852 master-0 kubenswrapper[15493]: I0216 17:04:24.256598 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-kubelet-dir\") pod \"0555924f-d581-40d7-9a45-aee3270a1383\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " Feb 16 17:04:24.256852 master-0 kubenswrapper[15493]: I0216 17:04:24.256660 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-var-lock\") pod \"0555924f-d581-40d7-9a45-aee3270a1383\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " Feb 16 17:04:24.256852 master-0 kubenswrapper[15493]: I0216 17:04:24.256694 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0555924f-d581-40d7-9a45-aee3270a1383-kube-api-access\") pod \"0555924f-d581-40d7-9a45-aee3270a1383\" (UID: \"0555924f-d581-40d7-9a45-aee3270a1383\") " Feb 16 17:04:24.256852 master-0 kubenswrapper[15493]: I0216 17:04:24.256795 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-var-lock" (OuterVolumeSpecName: "var-lock") pod "0555924f-d581-40d7-9a45-aee3270a1383" (UID: "0555924f-d581-40d7-9a45-aee3270a1383"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:04:24.257372 master-0 kubenswrapper[15493]: I0216 17:04:24.256819 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0555924f-d581-40d7-9a45-aee3270a1383" (UID: "0555924f-d581-40d7-9a45-aee3270a1383"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:04:24.257430 master-0 kubenswrapper[15493]: I0216 17:04:24.257378 15493 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:04:24.257430 master-0 kubenswrapper[15493]: I0216 17:04:24.257399 15493 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0555924f-d581-40d7-9a45-aee3270a1383-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:04:24.259684 master-0 kubenswrapper[15493]: I0216 17:04:24.259641 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0555924f-d581-40d7-9a45-aee3270a1383-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0555924f-d581-40d7-9a45-aee3270a1383" (UID: "0555924f-d581-40d7-9a45-aee3270a1383"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:04:24.358534 master-0 kubenswrapper[15493]: I0216 17:04:24.358468 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0555924f-d581-40d7-9a45-aee3270a1383-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:04:24.611745 master-0 kubenswrapper[15493]: I0216 17:04:24.611599 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_0555924f-d581-40d7-9a45-aee3270a1383/installer/0.log" Feb 16 17:04:24.611745 master-0 kubenswrapper[15493]: I0216 17:04:24.611721 15493 generic.go:334] "Generic (PLEG): container finished" podID="0555924f-d581-40d7-9a45-aee3270a1383" containerID="6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb" exitCode=1 Feb 16 17:04:24.612182 master-0 kubenswrapper[15493]: I0216 17:04:24.611764 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"0555924f-d581-40d7-9a45-aee3270a1383","Type":"ContainerDied","Data":"6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb"} Feb 16 17:04:24.612182 master-0 kubenswrapper[15493]: I0216 17:04:24.611810 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"0555924f-d581-40d7-9a45-aee3270a1383","Type":"ContainerDied","Data":"b0ff5ffb566d083d38aec96238565927a8dce1d1cae54b78321629759a4a8fb4"} Feb 16 17:04:24.612182 master-0 kubenswrapper[15493]: I0216 17:04:24.611813 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Feb 16 17:04:24.612182 master-0 kubenswrapper[15493]: I0216 17:04:24.611829 15493 scope.go:117] "RemoveContainer" containerID="6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb" Feb 16 17:04:24.634783 master-0 kubenswrapper[15493]: I0216 17:04:24.634734 15493 scope.go:117] "RemoveContainer" containerID="6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb" Feb 16 17:04:24.635201 master-0 kubenswrapper[15493]: E0216 17:04:24.635158 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb\": container with ID starting with 6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb not found: ID does not exist" containerID="6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb" Feb 16 17:04:24.635343 master-0 kubenswrapper[15493]: I0216 17:04:24.635204 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb"} err="failed to get container status \"6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb\": rpc error: code = NotFound desc = could not find container \"6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb\": container with ID starting with 6051be8d6cd1c9e4e2f487baa8487a2bd2840efe75d738f7f14481aba9ae9beb not found: ID does not exist" Feb 16 17:04:24.655100 master-0 kubenswrapper[15493]: I0216 17:04:24.655041 15493 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 17:04:24.669045 master-0 kubenswrapper[15493]: I0216 17:04:24.667385 15493 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/installer-2-master-0"] Feb 16 17:04:25.065732 master-0 kubenswrapper[15493]: I0216 17:04:25.064396 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0555924f-d581-40d7-9a45-aee3270a1383" path="/var/lib/kubelet/pods/0555924f-d581-40d7-9a45-aee3270a1383/volumes" Feb 16 17:04:27.023900 master-0 kubenswrapper[15493]: I0216 17:04:27.023823 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-retry-1-master-0"] Feb 16 17:04:27.024445 master-0 kubenswrapper[15493]: E0216 17:04:27.024142 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0555924f-d581-40d7-9a45-aee3270a1383" containerName="installer" Feb 16 17:04:27.024445 master-0 kubenswrapper[15493]: I0216 17:04:27.024159 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="0555924f-d581-40d7-9a45-aee3270a1383" containerName="installer" Feb 16 17:04:27.024445 master-0 kubenswrapper[15493]: I0216 17:04:27.024307 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="0555924f-d581-40d7-9a45-aee3270a1383" containerName="installer" Feb 16 17:04:27.024788 master-0 kubenswrapper[15493]: I0216 17:04:27.024760 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.026531 master-0 kubenswrapper[15493]: I0216 17:04:27.026300 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-rxv66" Feb 16 17:04:27.026605 master-0 kubenswrapper[15493]: I0216 17:04:27.026593 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Feb 16 17:04:27.028782 master-0 kubenswrapper[15493]: I0216 17:04:27.028543 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-retry-1-master-0"] Feb 16 17:04:27.096608 master-0 kubenswrapper[15493]: I0216 17:04:27.096569 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.096886 master-0 kubenswrapper[15493]: I0216 17:04:27.096866 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.097009 master-0 kubenswrapper[15493]: I0216 17:04:27.096995 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.198378 master-0 kubenswrapper[15493]: I0216 17:04:27.198318 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " 
pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.198378 master-0 kubenswrapper[15493]: I0216 17:04:27.198375 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.198633 master-0 kubenswrapper[15493]: I0216 17:04:27.198402 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.198633 master-0 kubenswrapper[15493]: I0216 17:04:27.198463 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.198633 master-0 kubenswrapper[15493]: I0216 17:04:27.198563 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.213580 master-0 kubenswrapper[15493]: I0216 17:04:27.213534 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.343344 master-0 kubenswrapper[15493]: I0216 17:04:27.343210 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:04:27.768733 master-0 kubenswrapper[15493]: I0216 17:04:27.768676 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-retry-1-master-0"] Feb 16 17:04:28.643814 master-0 kubenswrapper[15493]: I0216 17:04:28.643739 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-retry-1-master-0" event={"ID":"56a53ffd-3f43-41cb-a9a8-23fcac93f49f","Type":"ContainerStarted","Data":"1240451b8d8edbccb06eba1a7befcdaf3de33d2010502cf2ca66305dbdae7fda"} Feb 16 17:04:28.643814 master-0 kubenswrapper[15493]: I0216 17:04:28.643804 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-retry-1-master-0" event={"ID":"56a53ffd-3f43-41cb-a9a8-23fcac93f49f","Type":"ContainerStarted","Data":"68e17ad5378599c0b9f78f26d8bfa0ac4c8e1a45aad43487c9a82787afd7e198"} Feb 16 17:04:28.670999 master-0 kubenswrapper[15493]: I0216 17:04:28.667432 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-retry-1-master-0" podStartSLOduration=1.667413116 podStartE2EDuration="1.667413116s" podCreationTimestamp="2026-02-16 17:04:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:04:28.66556781 +0000 UTC m=+147.815740900" watchObservedRunningTime="2026-02-16 17:04:28.667413116 +0000 UTC m=+147.817586186" Feb 16 17:04:35.208687 master-0 kubenswrapper[15493]: I0216 17:04:35.208618 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:04:35.213177 master-0 kubenswrapper[15493]: I0216 17:04:35.213113 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:04:35.467089 master-0 kubenswrapper[15493]: I0216 17:04:35.466962 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:04:35.866882 master-0 kubenswrapper[15493]: I0216 17:04:35.866725 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qqvg4"] Feb 16 17:04:35.869759 master-0 kubenswrapper[15493]: W0216 17:04:35.869363 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1363cb7b_62cc_497b_af6f_4d5e0eb7f174.slice/crio-2cf0f488afd360ac495e55855e7e7f1de024fedb887be3f6838e29b31ddd9821 WatchSource:0}: Error finding container 2cf0f488afd360ac495e55855e7e7f1de024fedb887be3f6838e29b31ddd9821: Status 404 returned error can't find the container with id 2cf0f488afd360ac495e55855e7e7f1de024fedb887be3f6838e29b31ddd9821 Feb 16 17:04:36.735797 master-0 kubenswrapper[15493]: I0216 17:04:36.735740 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qqvg4" event={"ID":"1363cb7b-62cc-497b-af6f-4d5e0eb7f174","Type":"ContainerStarted","Data":"1f9fdf8ad8b22c269fdbde8bae7ca0001ee8651ea5ecbb2a592ce042830398a8"} Feb 16 17:04:36.735797 master-0 kubenswrapper[15493]: I0216 17:04:36.735791 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qqvg4" event={"ID":"1363cb7b-62cc-497b-af6f-4d5e0eb7f174","Type":"ContainerStarted","Data":"2cf0f488afd360ac495e55855e7e7f1de024fedb887be3f6838e29b31ddd9821"} Feb 16 17:04:36.792855 master-0 kubenswrapper[15493]: I0216 17:04:36.792805 15493 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:04:37.746816 master-0 kubenswrapper[15493]: I0216 17:04:37.746762 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:04:37.750039 master-0 kubenswrapper[15493]: I0216 17:04:37.749993 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:04:37.758197 master-0 kubenswrapper[15493]: I0216 17:04:37.758171 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmzhq" Feb 16 17:04:37.770186 master-0 kubenswrapper[15493]: I0216 17:04:37.770137 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:04:38.149537 master-0 kubenswrapper[15493]: I0216 17:04:38.147915 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qqvg4" podStartSLOduration=67.147893194 podStartE2EDuration="1m7.147893194s" podCreationTimestamp="2026-02-16 17:03:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:04:36.760300675 +0000 UTC m=+155.910473985" watchObservedRunningTime="2026-02-16 17:04:38.147893194 +0000 UTC m=+157.298066274" Feb 16 17:04:38.150235 master-0 kubenswrapper[15493]: I0216 17:04:38.150194 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"] Feb 16 17:04:38.167380 master-0 kubenswrapper[15493]: W0216 17:04:38.167333 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d1636c0_f34d_444c_822d_77f1d203ddc4.slice/crio-edbebe7236caf75b0d8a11ee8b06ba5eb3d7759730805aa1085193b4251ca5a8 WatchSource:0}: Error finding container edbebe7236caf75b0d8a11ee8b06ba5eb3d7759730805aa1085193b4251ca5a8: Status 404 returned error can't find the container with id edbebe7236caf75b0d8a11ee8b06ba5eb3d7759730805aa1085193b4251ca5a8 Feb 16 17:04:38.769368 master-0 kubenswrapper[15493]: I0216 17:04:38.768892 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerStarted","Data":"edbebe7236caf75b0d8a11ee8b06ba5eb3d7759730805aa1085193b4251ca5a8"} Feb 16 17:04:39.224723 master-0 kubenswrapper[15493]: I0216 17:04:39.224446 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2ws9r"] Feb 16 17:04:39.225510 master-0 kubenswrapper[15493]: I0216 17:04:39.225461 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.233218 master-0 kubenswrapper[15493]: I0216 17:04:39.233007 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 17:04:39.233218 master-0 kubenswrapper[15493]: I0216 17:04:39.233032 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 17:04:39.370339 master-0 kubenswrapper[15493]: I0216 17:04:39.370253 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.370339 master-0 kubenswrapper[15493]: I0216 17:04:39.370311 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.370339 master-0 kubenswrapper[15493]: I0216 17:04:39.370333 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.471645 master-0 kubenswrapper[15493]: I0216 17:04:39.471560 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.471645 master-0 kubenswrapper[15493]: I0216 17:04:39.471628 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.472066 master-0 kubenswrapper[15493]: I0216 17:04:39.471749 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.475539 master-0 kubenswrapper[15493]: I0216 17:04:39.475426 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.475697 master-0 
kubenswrapper[15493]: I0216 17:04:39.475558 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.492190 master-0 kubenswrapper[15493]: I0216 17:04:39.492122 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.547431 master-0 kubenswrapper[15493]: I0216 17:04:39.547371 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:04:39.858765 master-0 kubenswrapper[15493]: I0216 17:04:39.858699 15493 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-989b889c9-l264c"] Feb 16 17:04:40.781168 master-0 kubenswrapper[15493]: I0216 17:04:40.781115 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2ws9r" event={"ID":"9c48005e-c4df-4332-87fc-ec028f2c6921","Type":"ContainerStarted","Data":"88fb0564c391b1d841f5663a68574e4b3c75822e3555a9b9f404dbe3fc5c5089"} Feb 16 17:04:40.781168 master-0 kubenswrapper[15493]: I0216 17:04:40.781172 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2ws9r" event={"ID":"9c48005e-c4df-4332-87fc-ec028f2c6921","Type":"ContainerStarted","Data":"684d096e2fd0649303d297e067b3466a6cde14a1b1daeadf38535108ce247496"} Feb 16 17:04:40.783559 master-0 kubenswrapper[15493]: I0216 17:04:40.782799 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerStarted","Data":"5e22f442c57442c5b54cef3353bfe6f252841ff0c85eccba549a06ebef40c806"} Feb 16 17:04:40.783559 master-0 kubenswrapper[15493]: I0216 17:04:40.782851 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerStarted","Data":"e2cd6c58d3deb1bb956b3338bb6da8c859fdee1ec0ad904ec42c12d7238536af"} Feb 16 17:04:40.799281 master-0 kubenswrapper[15493]: I0216 17:04:40.799197 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2ws9r" podStartSLOduration=1.7991804660000001 podStartE2EDuration="1.799180466s" podCreationTimestamp="2026-02-16 17:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:04:40.797195457 +0000 UTC m=+159.947368527" watchObservedRunningTime="2026-02-16 17:04:40.799180466 +0000 UTC m=+159.949353536" Feb 16 17:04:40.826027 master-0 kubenswrapper[15493]: I0216 17:04:40.825381 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podStartSLOduration=65.956235524 podStartE2EDuration="1m7.82535664s" 
podCreationTimestamp="2026-02-16 17:03:33 +0000 UTC" firstStartedPulling="2026-02-16 17:04:38.169685728 +0000 UTC m=+157.319858818" lastFinishedPulling="2026-02-16 17:04:40.038806864 +0000 UTC m=+159.188979934" observedRunningTime="2026-02-16 17:04:40.82135431 +0000 UTC m=+159.971527380" watchObservedRunningTime="2026-02-16 17:04:40.82535664 +0000 UTC m=+159.975529730" Feb 16 17:04:43.154004 master-0 kubenswrapper[15493]: I0216 17:04:43.153900 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"] Feb 16 17:04:43.156892 master-0 kubenswrapper[15493]: I0216 17:04:43.155154 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.159407 master-0 kubenswrapper[15493]: I0216 17:04:43.157527 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 17:04:43.159407 master-0 kubenswrapper[15493]: I0216 17:04:43.157834 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 17:04:43.169353 master-0 kubenswrapper[15493]: I0216 17:04:43.168498 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-8256c"] Feb 16 17:04:43.171859 master-0 kubenswrapper[15493]: I0216 17:04:43.170159 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.171859 master-0 kubenswrapper[15493]: I0216 17:04:43.171523 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"] Feb 16 17:04:43.176956 master-0 kubenswrapper[15493]: I0216 17:04:43.172061 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 17:04:43.176956 master-0 kubenswrapper[15493]: I0216 17:04:43.172387 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 17:04:43.176956 master-0 kubenswrapper[15493]: I0216 17:04:43.172799 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.176956 master-0 kubenswrapper[15493]: I0216 17:04:43.175824 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 17:04:43.176956 master-0 kubenswrapper[15493]: I0216 17:04:43.176046 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 17:04:43.176956 master-0 kubenswrapper[15493]: I0216 17:04:43.176196 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 17:04:43.181525 master-0 kubenswrapper[15493]: I0216 17:04:43.180693 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"] Feb 16 17:04:43.189202 master-0 kubenswrapper[15493]: I0216 17:04:43.188899 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"] Feb 16 17:04:43.235653 master-0 kubenswrapper[15493]: I0216 17:04:43.235572 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.235653 master-0 kubenswrapper[15493]: I0216 17:04:43.235639 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235697 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235728 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235750 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235772 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235792 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235812 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235832 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235850 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235867 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235892 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235914 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.235942 master-0 kubenswrapper[15493]: I0216 17:04:43.235953 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.236458 master-0 kubenswrapper[15493]: I0216 17:04:43.235975 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.236458 master-0 kubenswrapper[15493]: I0216 17:04:43.236000 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.236458 master-0 kubenswrapper[15493]: I0216 17:04:43.236019 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.236458 master-0 kubenswrapper[15493]: I0216 17:04:43.236038 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.338079 master-0 kubenswrapper[15493]: I0216 17:04:43.337426 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.338079 master-0 kubenswrapper[15493]: I0216 17:04:43.337508 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.338079 master-0 kubenswrapper[15493]: I0216 17:04:43.337903 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.338079 master-0 kubenswrapper[15493]: I0216 17:04:43.337964 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.338079 master-0 kubenswrapper[15493]: I0216 17:04:43.337991 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.338079 master-0 kubenswrapper[15493]: I0216 17:04:43.338026 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.338079 master-0 kubenswrapper[15493]: I0216 17:04:43.338056 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.338079 master-0 kubenswrapper[15493]: I0216 17:04:43.338093 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338338 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338420 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338480 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338534 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rtg\" (UniqueName: 
\"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338584 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338633 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338671 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338715 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338758 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.338799 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: E0216 17:04:43.339011 15493 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.340738 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.340756 15493 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.340755 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.340814 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: E0216 17:04:43.340924 15493 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:04:43.839054417 +0000 UTC m=+162.989227507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : secret "kube-state-metrics-tls" not found Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.340976 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.341247 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.341279 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.341247 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: 
\"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.341310 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.341483 master-0 kubenswrapper[15493]: I0216 17:04:43.341311 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.343539 master-0 kubenswrapper[15493]: I0216 17:04:43.341523 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.343539 master-0 kubenswrapper[15493]: I0216 17:04:43.342633 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.347173 master-0 kubenswrapper[15493]: I0216 17:04:43.346002 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.347415 master-0 kubenswrapper[15493]: I0216 17:04:43.347379 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.357609 master-0 kubenswrapper[15493]: I0216 17:04:43.357029 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.358995 master-0 kubenswrapper[15493]: I0216 17:04:43.358930 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.359873 master-0 kubenswrapper[15493]: I0216 
17:04:43.359831 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.475754 master-0 kubenswrapper[15493]: I0216 17:04:43.475623 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:04:43.515129 master-0 kubenswrapper[15493]: I0216 17:04:43.515085 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:04:43.845769 master-0 kubenswrapper[15493]: I0216 17:04:43.845703 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:43.848796 master-0 kubenswrapper[15493]: I0216 17:04:43.848767 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:44.136871 master-0 kubenswrapper[15493]: I0216 17:04:44.136762 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:04:44.209328 master-0 kubenswrapper[15493]: I0216 17:04:44.209271 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:04:44.211790 master-0 kubenswrapper[15493]: I0216 17:04:44.211760 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
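
Note on the kube-state-metrics-tls sequence above: kubelet's volume manager reconciler (reconciler_common.go) issues MountVolume operations for each pod volume, and a failed mount is not retried in a tight loop. When the secret did not exist yet at 17:04:43.339, nestedpendingoperations parked the operation with "No retries permitted until ... (durationBeforeRetry 500ms)"; roughly 500 ms later the secret had been created by the monitoring operator and MountVolume.SetUp succeeded at 17:04:43.848. A minimal sketch of that retry pattern follows; the names are illustrative, the initial 500 ms delay is taken from the log, and the doubling growth and cap are assumptions about kubelet internals rather than a transcription of them.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // errSecretMissing mimics the failure in the log: the TLS secret the pod
    // mounts had not yet been created when the first mount was attempted.
    var errSecretMissing = errors.New(`secret "kube-state-metrics-tls" not found`)

    // mountSecretVolume pretends the secret appears on the third attempt.
    func mountSecretVolume(attempt int) error {
        if attempt < 2 {
            return errSecretMissing
        }
        return nil
    }

    func main() {
        delay := 500 * time.Millisecond // initial durationBeforeRetry seen in the log
        for attempt := 0; ; attempt++ {
            if err := mountSecretVolume(attempt); err == nil {
                fmt.Println("MountVolume.SetUp succeeded")
                return
            }
            fmt.Printf("mount failed; no retries permitted for %v\n", delay)
            time.Sleep(delay)
            delay *= 2 // assumed exponential growth; kubelet caps this internally
        }
    }

The operational takeaway is that a "secret not found" mount error during a rollout is usually a transient ordering gap between the operator creating the secret and the kubelet mounting it, as it was here.
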
Feb 16 17:04:44.214775 master-0 kubenswrapper[15493]: I0216 17:04:44.214733 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 16 17:04:44.214966 master-0 kubenswrapper[15493]: I0216 17:04:44.214773 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 16 17:04:44.214966 master-0 kubenswrapper[15493]: I0216 17:04:44.214746 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 17:04:44.217191 master-0 kubenswrapper[15493]: I0216 17:04:44.217143 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 17:04:44.217373 master-0 kubenswrapper[15493]: I0216 17:04:44.217330 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 17:04:44.217373 master-0 kubenswrapper[15493]: I0216 17:04:44.217339 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 17:04:44.217586 master-0 kubenswrapper[15493]: I0216 17:04:44.217550 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 17:04:44.227123 master-0 kubenswrapper[15493]: I0216 17:04:44.227075 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 17:04:44.239883 master-0 kubenswrapper[15493]: I0216 17:04:44.239833 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:04:44.354881 master-0 kubenswrapper[15493]: I0216 17:04:44.354808 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.354881 master-0 kubenswrapper[15493]: I0216 17:04:44.354880 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.355210 master-0 kubenswrapper[15493]: I0216 17:04:44.355042 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.355261 master-0 kubenswrapper[15493]: I0216 17:04:44.355193 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: 
\"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.355261 master-0 kubenswrapper[15493]: I0216 17:04:44.355246 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.355340 master-0 kubenswrapper[15493]: I0216 17:04:44.355282 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.355678 master-0 kubenswrapper[15493]: I0216 17:04:44.355455 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.355678 master-0 kubenswrapper[15493]: I0216 17:04:44.355546 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjpvn\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.355678 master-0 kubenswrapper[15493]: I0216 17:04:44.355597 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.355821 master-0 kubenswrapper[15493]: I0216 17:04:44.355671 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.356221 master-0 kubenswrapper[15493]: I0216 17:04:44.356165 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.356324 master-0 kubenswrapper[15493]: I0216 17:04:44.356233 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.457551 master-0 kubenswrapper[15493]: I0216 
17:04:44.457430 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.457551 master-0 kubenswrapper[15493]: I0216 17:04:44.457488 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.457551 master-0 kubenswrapper[15493]: I0216 17:04:44.457540 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.457837 master-0 kubenswrapper[15493]: I0216 17:04:44.457684 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.457889 master-0 kubenswrapper[15493]: I0216 17:04:44.457872 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.457979 master-0 kubenswrapper[15493]: I0216 17:04:44.457961 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.458036 master-0 kubenswrapper[15493]: I0216 17:04:44.457997 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpvn\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.458083 master-0 kubenswrapper[15493]: I0216 17:04:44.458045 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.458219 master-0 kubenswrapper[15493]: I0216 17:04:44.458141 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.458219 master-0 kubenswrapper[15493]: I0216 17:04:44.458210 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.458322 master-0 kubenswrapper[15493]: I0216 17:04:44.458300 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.458372 master-0 kubenswrapper[15493]: I0216 17:04:44.458359 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.458891 master-0 kubenswrapper[15493]: I0216 17:04:44.458852 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.459789 master-0 kubenswrapper[15493]: I0216 17:04:44.459751 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.460082 master-0 kubenswrapper[15493]: I0216 17:04:44.460053 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.462342 master-0 kubenswrapper[15493]: I0216 17:04:44.462317 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.463890 master-0 kubenswrapper[15493]: I0216 17:04:44.463862 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.464372 master-0 kubenswrapper[15493]: I0216 17:04:44.464351 15493 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.466428 master-0 kubenswrapper[15493]: I0216 17:04:44.466400 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.469364 master-0 kubenswrapper[15493]: I0216 17:04:44.469317 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.469811 master-0 kubenswrapper[15493]: I0216 17:04:44.469784 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.471662 master-0 kubenswrapper[15493]: I0216 17:04:44.471619 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.471823 master-0 kubenswrapper[15493]: I0216 17:04:44.471789 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.474258 master-0 kubenswrapper[15493]: I0216 17:04:44.474226 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjpvn\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:44.541436 master-0 kubenswrapper[15493]: I0216 17:04:44.541383 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:04:45.111536 master-0 kubenswrapper[15493]: I0216 17:04:45.109370 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"] Feb 16 17:04:45.111536 master-0 kubenswrapper[15493]: I0216 17:04:45.111121 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.112622 master-0 kubenswrapper[15493]: I0216 17:04:45.112514 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 17:04:45.112972 master-0 kubenswrapper[15493]: I0216 17:04:45.112858 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 16 17:04:45.112972 master-0 kubenswrapper[15493]: I0216 17:04:45.112935 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 17:04:45.112972 master-0 kubenswrapper[15493]: I0216 17:04:45.112967 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 16 17:04:45.113165 master-0 kubenswrapper[15493]: I0216 17:04:45.113026 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 16 17:04:45.113256 master-0 kubenswrapper[15493]: I0216 17:04:45.113217 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" Feb 16 17:04:45.130861 master-0 kubenswrapper[15493]: I0216 17:04:45.130800 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"] Feb 16 17:04:45.168798 master-0 kubenswrapper[15493]: I0216 17:04:45.168739 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.169025 master-0 kubenswrapper[15493]: I0216 17:04:45.168819 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.169025 master-0 kubenswrapper[15493]: I0216 17:04:45.168845 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.169174 master-0 kubenswrapper[15493]: I0216 17:04:45.169101 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.169244 master-0 kubenswrapper[15493]: I0216 17:04:45.169175 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.169244 master-0 kubenswrapper[15493]: I0216 17:04:45.169200 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.169319 master-0 kubenswrapper[15493]: I0216 17:04:45.169275 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.169392 master-0 kubenswrapper[15493]: I0216 17:04:45.169364 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.270319 master-0 kubenswrapper[15493]: I0216 17:04:45.270259 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.270796 master-0 kubenswrapper[15493]: I0216 17:04:45.270334 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.270796 master-0 kubenswrapper[15493]: I0216 17:04:45.270357 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.270796 master-0 kubenswrapper[15493]: I0216 17:04:45.270377 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 
17:04:45.270796 master-0 kubenswrapper[15493]: I0216 17:04:45.270443 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.270796 master-0 kubenswrapper[15493]: I0216 17:04:45.270507 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.270796 master-0 kubenswrapper[15493]: I0216 17:04:45.270553 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.270796 master-0 kubenswrapper[15493]: I0216 17:04:45.270573 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.271638 master-0 kubenswrapper[15493]: I0216 17:04:45.271597 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.277848 master-0 kubenswrapper[15493]: I0216 17:04:45.277777 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.283679 master-0 kubenswrapper[15493]: I0216 17:04:45.283640 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.283744 master-0 kubenswrapper[15493]: I0216 17:04:45.283675 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " 
pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.283887 master-0 kubenswrapper[15493]: I0216 17:04:45.283843 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.284029 master-0 kubenswrapper[15493]: I0216 17:04:45.283999 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.286413 master-0 kubenswrapper[15493]: I0216 17:04:45.286379 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.286919 master-0 kubenswrapper[15493]: I0216 17:04:45.286885 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:04:45.432491 master-0 kubenswrapper[15493]: I0216 17:04:45.432202 15493 util.go:30] "No sandbox for pod can be found. 
Feb 16 17:04:45.503481 master-0 kubenswrapper[15493]: I0216 17:04:45.503332 15493 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:04:45.503481 master-0 kubenswrapper[15493]: I0216 17:04:45.503414 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:04:45.503481 master-0 kubenswrapper[15493]: I0216 17:04:45.503482 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:04:45.504275 master-0 kubenswrapper[15493]: I0216 17:04:45.504247 15493 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"df55732c40e933ea372f1214a91cde4306eb5555441b3143bbda5066dd5d87f2"} pod="openshift-machine-config-operator/machine-config-daemon-98q6v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:04:45.504356 master-0 kubenswrapper[15493]: I0216 17:04:45.504305 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" containerID="cri-o://df55732c40e933ea372f1214a91cde4306eb5555441b3143bbda5066dd5d87f2" gracePeriod=600 Feb 16 17:04:45.816011 master-0 kubenswrapper[15493]: I0216 17:04:45.815950 15493 generic.go:334] "Generic (PLEG): container finished" podID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerID="df55732c40e933ea372f1214a91cde4306eb5555441b3143bbda5066dd5d87f2" exitCode=0 Feb 16 17:04:45.816011 master-0 kubenswrapper[15493]: I0216 17:04:45.815998 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerDied","Data":"df55732c40e933ea372f1214a91cde4306eb5555441b3143bbda5066dd5d87f2"} Feb 16 17:04:48.373721 master-0 kubenswrapper[15493]: I0216 17:04:48.373667 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"] Feb 16 17:04:48.375948 master-0 kubenswrapper[15493]: I0216 17:04:48.375862 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
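
The machine-config-daemon entries above show the liveness-restart path end to end: the HTTP probe against 127.0.0.1:8798/health was refused (the daemon was not listening yet), the prober reported the container unhealthy, kubelet marked it "will be restarted" and killed it with the pod's 600-second grace period, and the PLEG ContainerDied event (exit code 0, a clean SIGTERM shutdown) clears the way for the restart. For reference, a probe equivalent to the one failing here, expressed with client-go types; only the port and path come from the log, and the thresholds are illustrative rather than read from the actual DaemonSet.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        probe := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/health",            // endpoint from the log
                    Port: intstr.FromInt(8798), // port from the log
                },
            },
            InitialDelaySeconds: 120, // illustrative; the real DaemonSet may differ
            PeriodSeconds:       30,
            FailureThreshold:    3,
        }
        fmt.Printf("%+v\n", probe)
    }
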
Feb 16 17:04:48.384817 master-0 kubenswrapper[15493]: I0216 17:04:48.384766 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 16 17:04:48.385777 master-0 kubenswrapper[15493]: I0216 17:04:48.385043 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 16 17:04:48.385858 master-0 kubenswrapper[15493]: I0216 17:04:48.384793 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 16 17:04:48.386232 master-0 kubenswrapper[15493]: I0216 17:04:48.386219 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 16 17:04:48.386822 master-0 kubenswrapper[15493]: I0216 17:04:48.386769 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 16 17:04:48.389492 master-0 kubenswrapper[15493]: I0216 17:04:48.389450 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"] Feb 16 17:04:48.403737 master-0 kubenswrapper[15493]: I0216 17:04:48.393521 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 16 17:04:48.474974 master-0 kubenswrapper[15493]: I0216 17:04:48.474799 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-745bd8d89b-qr4zh"] Feb 16 17:04:48.476696 master-0 kubenswrapper[15493]: I0216 17:04:48.476674 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.481042 master-0 kubenswrapper[15493]: I0216 17:04:48.481007 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 17:04:48.481245 master-0 kubenswrapper[15493]: I0216 17:04:48.481159 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 17:04:48.481395 master-0 kubenswrapper[15493]: I0216 17:04:48.481344 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 17:04:48.481830 master-0 kubenswrapper[15493]: I0216 17:04:48.481795 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-3enh2b6fkpcog" Feb 16 17:04:48.482336 master-0 kubenswrapper[15493]: I0216 17:04:48.481891 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 17:04:48.483096 master-0 kubenswrapper[15493]: I0216 17:04:48.483058 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-745bd8d89b-qr4zh"] Feb 16 17:04:48.524499 master-0 kubenswrapper[15493]: I0216 17:04:48.524396 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.524753 master-0 kubenswrapper[15493]: I0216 17:04:48.524552 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.524753 master-0 kubenswrapper[15493]: I0216 17:04:48.524610 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.524753 master-0 kubenswrapper[15493]: I0216 17:04:48.524645 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.524753 master-0 kubenswrapper[15493]: I0216 17:04:48.524688 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " 
pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.524753 master-0 kubenswrapper[15493]: I0216 17:04:48.524717 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.524753 master-0 kubenswrapper[15493]: I0216 17:04:48.524747 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.525052 master-0 kubenswrapper[15493]: I0216 17:04:48.524781 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.626972 master-0 kubenswrapper[15493]: I0216 17:04:48.626825 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.626972 master-0 kubenswrapper[15493]: I0216 17:04:48.626891 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.629022 master-0 kubenswrapper[15493]: I0216 17:04:48.628376 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.629022 master-0 kubenswrapper[15493]: I0216 17:04:48.628458 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.629022 master-0 kubenswrapper[15493]: I0216 17:04:48.628509 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod 
\"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.629022 master-0 kubenswrapper[15493]: I0216 17:04:48.628647 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.629022 master-0 kubenswrapper[15493]: I0216 17:04:48.628685 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.629022 master-0 kubenswrapper[15493]: I0216 17:04:48.628805 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.629022 master-0 kubenswrapper[15493]: I0216 17:04:48.628890 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.629022 master-0 kubenswrapper[15493]: I0216 17:04:48.628971 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.629022 master-0 kubenswrapper[15493]: I0216 17:04:48.628994 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.629618 master-0 kubenswrapper[15493]: I0216 17:04:48.629051 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.629618 master-0 kubenswrapper[15493]: I0216 17:04:48.629089 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod 
\"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.629618 master-0 kubenswrapper[15493]: I0216 17:04:48.629223 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.629618 master-0 kubenswrapper[15493]: I0216 17:04:48.629258 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.630538 master-0 kubenswrapper[15493]: I0216 17:04:48.630482 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.632874 master-0 kubenswrapper[15493]: I0216 17:04:48.632801 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.634420 master-0 kubenswrapper[15493]: I0216 17:04:48.634394 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.635081 master-0 kubenswrapper[15493]: I0216 17:04:48.635027 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.635525 master-0 kubenswrapper[15493]: I0216 17:04:48.635501 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.635576 master-0 kubenswrapper[15493]: I0216 17:04:48.635544 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod 
\"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.639822 master-0 kubenswrapper[15493]: I0216 17:04:48.639784 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.647639 master-0 kubenswrapper[15493]: I0216 17:04:48.647589 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.711286 master-0 kubenswrapper[15493]: I0216 17:04:48.711228 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:04:48.730441 master-0 kubenswrapper[15493]: I0216 17:04:48.730381 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.730441 master-0 kubenswrapper[15493]: I0216 17:04:48.730435 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.730877 master-0 kubenswrapper[15493]: I0216 17:04:48.730843 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.731006 master-0 kubenswrapper[15493]: I0216 17:04:48.730902 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.731073 master-0 kubenswrapper[15493]: I0216 17:04:48.731029 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.731385 master-0 kubenswrapper[15493]: I0216 17:04:48.731345 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.731469 master-0 kubenswrapper[15493]: I0216 17:04:48.731444 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.731524 master-0 kubenswrapper[15493]: I0216 17:04:48.731485 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.733388 master-0 kubenswrapper[15493]: I0216 17:04:48.733342 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.733889 master-0 kubenswrapper[15493]: I0216 17:04:48.733841 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.734230 master-0 kubenswrapper[15493]: I0216 17:04:48.734200 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.734734 master-0 kubenswrapper[15493]: I0216 17:04:48.734704 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.735134 master-0 kubenswrapper[15493]: I0216 17:04:48.735107 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.748872 master-0 kubenswrapper[15493]: I0216 17:04:48.748840 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57455\" (UniqueName: 
\"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.810939 master-0 kubenswrapper[15493]: I0216 17:04:48.810867 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:04:48.920048 master-0 kubenswrapper[15493]: I0216 17:04:48.919881 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-555857f695-nlrnr"] Feb 16 17:04:48.922725 master-0 kubenswrapper[15493]: I0216 17:04:48.922694 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:04:48.927543 master-0 kubenswrapper[15493]: I0216 17:04:48.927493 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 16 17:04:48.927722 master-0 kubenswrapper[15493]: I0216 17:04:48.927693 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-lcpkn" Feb 16 17:04:48.955093 master-0 kubenswrapper[15493]: I0216 17:04:48.937844 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-555857f695-nlrnr"] Feb 16 17:04:49.035958 master-0 kubenswrapper[15493]: I0216 17:04:49.035856 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:04:49.138971 master-0 kubenswrapper[15493]: I0216 17:04:49.137626 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:04:49.141640 master-0 kubenswrapper[15493]: I0216 17:04:49.141606 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:04:49.283427 master-0 kubenswrapper[15493]: I0216 17:04:49.283364 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:04:49.609003 master-0 kubenswrapper[15493]: I0216 17:04:49.608861 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 17:04:49.616991 master-0 kubenswrapper[15493]: I0216 17:04:49.616936 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.620581 master-0 kubenswrapper[15493]: I0216 17:04:49.620533 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 16 17:04:49.620753 master-0 kubenswrapper[15493]: I0216 17:04:49.620728 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 16 17:04:49.620891 master-0 kubenswrapper[15493]: I0216 17:04:49.620857 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 16 17:04:49.620973 master-0 kubenswrapper[15493]: I0216 17:04:49.620898 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 16 17:04:49.621225 master-0 kubenswrapper[15493]: I0216 17:04:49.621206 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 16 17:04:49.623341 master-0 kubenswrapper[15493]: I0216 17:04:49.622856 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" Feb 16 17:04:49.623341 master-0 kubenswrapper[15493]: I0216 17:04:49.622955 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 16 17:04:49.626710 master-0 kubenswrapper[15493]: I0216 17:04:49.626664 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 16 17:04:49.627322 master-0 kubenswrapper[15493]: I0216 17:04:49.627293 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 16 17:04:49.630079 master-0 kubenswrapper[15493]: I0216 17:04:49.629916 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 16 17:04:49.631886 master-0 kubenswrapper[15493]: I0216 17:04:49.631862 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 17:04:49.635826 master-0 kubenswrapper[15493]: I0216 17:04:49.635782 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 17:04:49.639621 master-0 kubenswrapper[15493]: I0216 17:04:49.638900 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 17:04:49.746819 master-0 kubenswrapper[15493]: I0216 17:04:49.746753 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747054 master-0 kubenswrapper[15493]: I0216 17:04:49.746884 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747054 master-0 kubenswrapper[15493]: I0216 17:04:49.746998 15493 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747054 master-0 kubenswrapper[15493]: I0216 17:04:49.747035 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747170 master-0 kubenswrapper[15493]: I0216 17:04:49.747056 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747170 master-0 kubenswrapper[15493]: I0216 17:04:49.747118 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747170 master-0 kubenswrapper[15493]: I0216 17:04:49.747151 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747327 master-0 kubenswrapper[15493]: I0216 17:04:49.747212 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgm4p\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747327 master-0 kubenswrapper[15493]: I0216 17:04:49.747257 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747327 master-0 kubenswrapper[15493]: I0216 17:04:49.747301 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747418 master-0 kubenswrapper[15493]: I0216 17:04:49.747332 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747418 master-0 kubenswrapper[15493]: I0216 17:04:49.747358 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747418 master-0 kubenswrapper[15493]: I0216 17:04:49.747387 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747418 master-0 kubenswrapper[15493]: I0216 17:04:49.747413 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747540 master-0 kubenswrapper[15493]: I0216 17:04:49.747441 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747540 master-0 kubenswrapper[15493]: I0216 17:04:49.747514 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747601 master-0 kubenswrapper[15493]: I0216 17:04:49.747542 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.747633 master-0 kubenswrapper[15493]: I0216 17:04:49.747616 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.849399 master-0 kubenswrapper[15493]: I0216 17:04:49.849357 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod 
\"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.849399 master-0 kubenswrapper[15493]: I0216 17:04:49.849403 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.849644 master-0 kubenswrapper[15493]: I0216 17:04:49.849423 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.849644 master-0 kubenswrapper[15493]: I0216 17:04:49.849439 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.849644 master-0 kubenswrapper[15493]: I0216 17:04:49.849582 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.849644 master-0 kubenswrapper[15493]: I0216 17:04:49.849603 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.849808 master-0 kubenswrapper[15493]: I0216 17:04:49.849674 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgm4p\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.849808 master-0 kubenswrapper[15493]: I0216 17:04:49.849750 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850092 master-0 kubenswrapper[15493]: I0216 17:04:49.850066 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850163 master-0 kubenswrapper[15493]: I0216 17:04:49.850115 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850163 master-0 kubenswrapper[15493]: I0216 17:04:49.850152 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850244 master-0 kubenswrapper[15493]: I0216 17:04:49.850180 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850244 master-0 kubenswrapper[15493]: I0216 17:04:49.850215 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850327 master-0 kubenswrapper[15493]: I0216 17:04:49.850244 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850327 master-0 kubenswrapper[15493]: I0216 17:04:49.850316 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850411 master-0 kubenswrapper[15493]: I0216 17:04:49.850347 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850452 master-0 kubenswrapper[15493]: I0216 17:04:49.850425 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850452 master-0 kubenswrapper[15493]: I0216 17:04:49.850447 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.850867 master-0 kubenswrapper[15493]: 
I0216 17:04:49.850832 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.851887 master-0 kubenswrapper[15493]: I0216 17:04:49.851856 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.852174 master-0 kubenswrapper[15493]: I0216 17:04:49.852147 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.852352 master-0 kubenswrapper[15493]: I0216 17:04:49.852118 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.852608 master-0 kubenswrapper[15493]: I0216 17:04:49.852572 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.854296 master-0 kubenswrapper[15493]: I0216 17:04:49.854268 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.855572 master-0 kubenswrapper[15493]: I0216 17:04:49.855537 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.855572 master-0 kubenswrapper[15493]: I0216 17:04:49.855568 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.857422 master-0 kubenswrapper[15493]: I0216 17:04:49.857387 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.857757 master-0 
kubenswrapper[15493]: I0216 17:04:49.857710 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.858976 master-0 kubenswrapper[15493]: I0216 17:04:49.858728 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.859270 master-0 kubenswrapper[15493]: I0216 17:04:49.859192 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.859327 master-0 kubenswrapper[15493]: I0216 17:04:49.859301 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.859779 master-0 kubenswrapper[15493]: I0216 17:04:49.859739 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.861506 master-0 kubenswrapper[15493]: I0216 17:04:49.861154 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.868182 master-0 kubenswrapper[15493]: I0216 17:04:49.868138 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgm4p\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.872905 master-0 kubenswrapper[15493]: I0216 17:04:49.872809 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.873729 master-0 kubenswrapper[15493]: I0216 17:04:49.873408 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: 
\"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:49.948782 master-0 kubenswrapper[15493]: I0216 17:04:49.948713 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:04:55.249745 master-0 kubenswrapper[15493]: I0216 17:04:55.249670 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"] Feb 16 17:04:55.776990 master-0 kubenswrapper[15493]: I0216 17:04:55.776856 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"] Feb 16 17:04:55.781980 master-0 kubenswrapper[15493]: I0216 17:04:55.781938 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-745bd8d89b-qr4zh"] Feb 16 17:04:55.791022 master-0 kubenswrapper[15493]: I0216 17:04:55.790675 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 17:04:55.796994 master-0 kubenswrapper[15493]: I0216 17:04:55.796821 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"] Feb 16 17:04:55.816942 master-0 kubenswrapper[15493]: W0216 17:04:55.808345 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba37ef0e_373c_4ccc_b082_668630399765.slice/crio-48a6f4b301aa336dd51be0a2dc9935ab26ee4638e511cb6a39d34845e05febb4 WatchSource:0}: Error finding container 48a6f4b301aa336dd51be0a2dc9935ab26ee4638e511cb6a39d34845e05febb4: Status 404 returned error can't find the container with id 48a6f4b301aa336dd51be0a2dc9935ab26ee4638e511cb6a39d34845e05febb4 Feb 16 17:04:55.836842 master-0 kubenswrapper[15493]: W0216 17:04:55.831002 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06067627_6ccf_4cc8_bd20_dabdd776bb46.slice/crio-8a85895229caf9835243513dfb6c9ac92dfc9071663268e6700ff549391d94d1 WatchSource:0}: Error finding container 8a85895229caf9835243513dfb6c9ac92dfc9071663268e6700ff549391d94d1: Status 404 returned error can't find the container with id 8a85895229caf9835243513dfb6c9ac92dfc9071663268e6700ff549391d94d1 Feb 16 17:04:55.836842 master-0 kubenswrapper[15493]: W0216 17:04:55.835284 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe8e8e5d_cebb_4361_b765_5ff737f5e838.slice/crio-1a7530104228dfcd9cf39e02548e7a21bcb499d66ca95afc2dcb21e5dd610518 WatchSource:0}: Error finding container 1a7530104228dfcd9cf39e02548e7a21bcb499d66ca95afc2dcb21e5dd610518: Status 404 returned error can't find the container with id 1a7530104228dfcd9cf39e02548e7a21bcb499d66ca95afc2dcb21e5dd610518 Feb 16 17:04:55.863954 master-0 kubenswrapper[15493]: I0216 17:04:55.862799 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:04:55.882686 master-0 kubenswrapper[15493]: W0216 17:04:55.877562 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55d635cd_1f0d_4086_96f2_9f3524f3f18c.slice/crio-4d417c8038e74749a3ea388a9bb9e1e996a9ff87c04206e6e2dab9ae67184635 WatchSource:0}: Error finding container 4d417c8038e74749a3ea388a9bb9e1e996a9ff87c04206e6e2dab9ae67184635: Status 404 returned error can't find the 
container with id 4d417c8038e74749a3ea388a9bb9e1e996a9ff87c04206e6e2dab9ae67184635 Feb 16 17:04:55.882686 master-0 kubenswrapper[15493]: I0216 17:04:55.879692 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"] Feb 16 17:04:55.882686 master-0 kubenswrapper[15493]: I0216 17:04:55.881094 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerStarted","Data":"20d8f5571a1b1b9c7175f804573996125ab1f7703e75b75958f3fa465875a31d"} Feb 16 17:04:55.886727 master-0 kubenswrapper[15493]: I0216 17:04:55.886561 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" event={"ID":"ba37ef0e-373c-4ccc-b082-668630399765","Type":"ContainerStarted","Data":"48a6f4b301aa336dd51be0a2dc9935ab26ee4638e511cb6a39d34845e05febb4"} Feb 16 17:04:55.893585 master-0 kubenswrapper[15493]: I0216 17:04:55.893547 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-555857f695-nlrnr"] Feb 16 17:04:55.913561 master-0 kubenswrapper[15493]: I0216 17:04:55.913037 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"4080bf53bbe4953d64ca438398da912c4c4c2a5658ee46224801380ebb32b364"} Feb 16 17:04:55.934680 master-0 kubenswrapper[15493]: I0216 17:04:55.934606 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-dhhfh" event={"ID":"08a90dc5-b0d8-4aad-a002-736492b6c1a9","Type":"ContainerStarted","Data":"dcb8ee027102c635f78357f4ce72bd5b7efe3822aa120feb8c3bf58da7f28758"} Feb 16 17:04:55.935002 master-0 kubenswrapper[15493]: I0216 17:04:55.934961 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:04:55.936068 master-0 kubenswrapper[15493]: I0216 17:04:55.936021 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"1a7530104228dfcd9cf39e02548e7a21bcb499d66ca95afc2dcb21e5dd610518"} Feb 16 17:04:55.937210 master-0 kubenswrapper[15493]: I0216 17:04:55.936979 15493 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-dhhfh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Feb 16 17:04:55.937210 master-0 kubenswrapper[15493]: I0216 17:04:55.937020 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Feb 16 17:04:55.942724 master-0 kubenswrapper[15493]: I0216 17:04:55.942672 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"271a64f88c33115a51b95b2f92773598ac51c97a03b1f3cba45b0dab0c8fe865"} Feb 16 17:04:55.943055 master-0 kubenswrapper[15493]: I0216 17:04:55.942729 15493 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"b72dba53719e20d1f39a1a24f3506f415214cc3ac1cfd20c226c7b2f937c48b8"} Feb 16 17:04:55.943055 master-0 kubenswrapper[15493]: I0216 17:04:55.942745 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"10dbb9e01d0a50aea916059328928bbcf3c420380b615467fc12cc3999f4578f"} Feb 16 17:04:55.944185 master-0 kubenswrapper[15493]: I0216 17:04:55.944144 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"8a85895229caf9835243513dfb6c9ac92dfc9071663268e6700ff549391d94d1"} Feb 16 17:04:55.946647 master-0 kubenswrapper[15493]: I0216 17:04:55.946621 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"e14c52dbf9d0263521da55835d9630da4b72192e3d1606e8dd551ca67592feb1"} Feb 16 17:04:55.959724 master-0 kubenswrapper[15493]: I0216 17:04:55.959642 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podStartSLOduration=2.66753649 podStartE2EDuration="37.959614987s" podCreationTimestamp="2026-02-16 17:04:18 +0000 UTC" firstStartedPulling="2026-02-16 17:04:19.727200532 +0000 UTC m=+138.877373612" lastFinishedPulling="2026-02-16 17:04:55.019279039 +0000 UTC m=+174.169452109" observedRunningTime="2026-02-16 17:04:55.953915024 +0000 UTC m=+175.104088114" watchObservedRunningTime="2026-02-16 17:04:55.959614987 +0000 UTC m=+175.109788067" Feb 16 17:04:56.960486 master-0 kubenswrapper[15493]: I0216 17:04:56.960030 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerStarted","Data":"c3d04701b19e2449625d50046aa3d9a7bc6070e2f8b8a4b11366c0cbf1103b1a"} Feb 16 17:04:56.963616 master-0 kubenswrapper[15493]: I0216 17:04:56.963518 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" event={"ID":"54fba066-0e9e-49f6-8a86-34d5b4b660df","Type":"ContainerStarted","Data":"580c3349e66783983316382e688e85aaf107acdabacc36f815648d0f42df13f4"} Feb 16 17:04:56.965775 master-0 kubenswrapper[15493]: I0216 17:04:56.965724 15493 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-dhhfh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Feb 16 17:04:56.965875 master-0 kubenswrapper[15493]: I0216 17:04:56.965784 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Feb 16 17:04:56.965875 master-0 kubenswrapper[15493]: I0216 17:04:56.965733 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" 
event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"4d417c8038e74749a3ea388a9bb9e1e996a9ff87c04206e6e2dab9ae67184635"} Feb 16 17:04:59.309013 master-0 kubenswrapper[15493]: I0216 17:04:59.308965 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:04:59.539338 master-0 kubenswrapper[15493]: I0216 17:04:59.536577 15493 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 17:04:59.539338 master-0 kubenswrapper[15493]: I0216 17:04:59.536814 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" containerID="cri-o://c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841" gracePeriod=30 Feb 16 17:04:59.539338 master-0 kubenswrapper[15493]: I0216 17:04:59.536968 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" containerID="cri-o://eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9" gracePeriod=30 Feb 16 17:04:59.540365 master-0 kubenswrapper[15493]: I0216 17:04:59.540323 15493 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Feb 16 17:04:59.540722 master-0 kubenswrapper[15493]: E0216 17:04:59.540695 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" Feb 16 17:04:59.540803 master-0 kubenswrapper[15493]: I0216 17:04:59.540724 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" Feb 16 17:04:59.540803 master-0 kubenswrapper[15493]: E0216 17:04:59.540742 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" Feb 16 17:04:59.540803 master-0 kubenswrapper[15493]: I0216 17:04:59.540753 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" Feb 16 17:04:59.548143 master-0 kubenswrapper[15493]: I0216 17:04:59.540943 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcd" Feb 16 17:04:59.548143 master-0 kubenswrapper[15493]: I0216 17:04:59.540970 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerName="etcdctl" Feb 16 17:04:59.548143 master-0 kubenswrapper[15493]: I0216 17:04:59.544103 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.628000 master-0 kubenswrapper[15493]: I0216 17:04:59.623042 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.628000 master-0 kubenswrapper[15493]: I0216 17:04:59.623120 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.628000 master-0 kubenswrapper[15493]: I0216 17:04:59.623166 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.628000 master-0 kubenswrapper[15493]: I0216 17:04:59.623205 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.628000 master-0 kubenswrapper[15493]: I0216 17:04:59.623222 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.628000 master-0 kubenswrapper[15493]: I0216 17:04:59.623289 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.724990 master-0 kubenswrapper[15493]: I0216 17:04:59.724893 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.724990 master-0 kubenswrapper[15493]: I0216 17:04:59.725002 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725296 master-0 kubenswrapper[15493]: I0216 17:04:59.725053 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725296 master-0 kubenswrapper[15493]: I0216 17:04:59.725147 15493 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725296 master-0 kubenswrapper[15493]: I0216 17:04:59.725248 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725296 master-0 kubenswrapper[15493]: I0216 17:04:59.725281 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725473 master-0 kubenswrapper[15493]: I0216 17:04:59.725400 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725473 master-0 kubenswrapper[15493]: I0216 17:04:59.725466 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725560 master-0 kubenswrapper[15493]: I0216 17:04:59.725506 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725560 master-0 kubenswrapper[15493]: I0216 17:04:59.725549 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725655 master-0 kubenswrapper[15493]: I0216 17:04:59.725587 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:04:59.725655 master-0 kubenswrapper[15493]: I0216 17:04:59.725627 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:05:04.883915 master-0 kubenswrapper[15493]: I0216 17:05:04.883834 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-989b889c9-l264c" podUID="5985bd5d-ee56-4995-a4d3-cb4fda84ef31" containerName="oauth-openshift" containerID="cri-o://fe5deb4be7c9585b3362450f2ce8ffdcd9334e9025f031fcee47ce8437c2a1fb" gracePeriod=15 Feb 16 17:05:07.034641 master-0 
kubenswrapper[15493]: I0216 17:05:07.034575 15493 generic.go:334] "Generic (PLEG): container finished" podID="5985bd5d-ee56-4995-a4d3-cb4fda84ef31" containerID="fe5deb4be7c9585b3362450f2ce8ffdcd9334e9025f031fcee47ce8437c2a1fb" exitCode=0 Feb 16 17:05:07.034641 master-0 kubenswrapper[15493]: I0216 17:05:07.034620 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-989b889c9-l264c" event={"ID":"5985bd5d-ee56-4995-a4d3-cb4fda84ef31","Type":"ContainerDied","Data":"fe5deb4be7c9585b3362450f2ce8ffdcd9334e9025f031fcee47ce8437c2a1fb"} Feb 16 17:05:09.774850 master-0 kubenswrapper[15493]: E0216 17:05:09.773979 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:12.667510 master-0 kubenswrapper[15493]: E0216 17:05:12.667449 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 17:05:12.668038 master-0 kubenswrapper[15493]: I0216 17:05:12.668024 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 17:05:12.902252 master-0 kubenswrapper[15493]: E0216 17:05:12.902110 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:05:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:05:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:05:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:05:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:13.504913 master-0 kubenswrapper[15493]: W0216 17:05:13.504759 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7adecad495595c43c57c30abd350e987.slice/crio-67b69705a627aa1a9fdbcdc2144e62213e5c2b1d98c7b3a156082a883450cb27 WatchSource:0}: Error finding container 67b69705a627aa1a9fdbcdc2144e62213e5c2b1d98c7b3a156082a883450cb27: Status 404 returned error can't find the container with id 67b69705a627aa1a9fdbcdc2144e62213e5c2b1d98c7b3a156082a883450cb27 Feb 16 17:05:13.763623 master-0 kubenswrapper[15493]: I0216 17:05:13.763583 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839437 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-login\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839483 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-dir\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839551 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-session\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839569 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-provider-selection\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839598 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-router-certs\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839597 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839633 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-error\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839669 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-cliconfig\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839694 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-serving-cert\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839733 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-trusted-ca-bundle\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839750 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk5ht\" (UniqueName: \"kubernetes.io/projected/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-kube-api-access-qk5ht\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839770 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-service-ca\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839788 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-ocp-branding-template\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.839810 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-policies\") pod \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\" (UID: \"5985bd5d-ee56-4995-a4d3-cb4fda84ef31\") " Feb 16 17:05:13.840262 master-0 kubenswrapper[15493]: I0216 17:05:13.840059 15493 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.841462 master-0 
kubenswrapper[15493]: I0216 17:05:13.840668 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:13.841462 master-0 kubenswrapper[15493]: I0216 17:05:13.840987 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:13.841462 master-0 kubenswrapper[15493]: I0216 17:05:13.841154 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:13.841462 master-0 kubenswrapper[15493]: I0216 17:05:13.841242 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:05:13.844340 master-0 kubenswrapper[15493]: I0216 17:05:13.844295 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:13.844598 master-0 kubenswrapper[15493]: I0216 17:05:13.844560 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:13.845432 master-0 kubenswrapper[15493]: I0216 17:05:13.844983 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:13.845432 master-0 kubenswrapper[15493]: I0216 17:05:13.845030 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-kube-api-access-qk5ht" (OuterVolumeSpecName: "kube-api-access-qk5ht") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "kube-api-access-qk5ht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:13.846159 master-0 kubenswrapper[15493]: I0216 17:05:13.846126 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:13.846618 master-0 kubenswrapper[15493]: I0216 17:05:13.846561 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:13.853305 master-0 kubenswrapper[15493]: I0216 17:05:13.852356 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:13.853514 master-0 kubenswrapper[15493]: I0216 17:05:13.853438 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5985bd5d-ee56-4995-a4d3-cb4fda84ef31" (UID: "5985bd5d-ee56-4995-a4d3-cb4fda84ef31"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:05:13.943002 master-0 kubenswrapper[15493]: I0216 17:05:13.942959 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943109 master-0 kubenswrapper[15493]: I0216 17:05:13.943020 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk5ht\" (UniqueName: \"kubernetes.io/projected/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-kube-api-access-qk5ht\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943109 master-0 kubenswrapper[15493]: I0216 17:05:13.943036 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943109 master-0 kubenswrapper[15493]: I0216 17:05:13.943049 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943109 master-0 kubenswrapper[15493]: I0216 17:05:13.943063 15493 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943109 master-0 kubenswrapper[15493]: I0216 17:05:13.943075 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943109 master-0 kubenswrapper[15493]: I0216 17:05:13.943108 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943293 master-0 kubenswrapper[15493]: I0216 17:05:13.943120 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943293 master-0 kubenswrapper[15493]: I0216 17:05:13.943136 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943293 master-0 kubenswrapper[15493]: I0216 17:05:13.943149 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:13.943293 master-0 kubenswrapper[15493]: I0216 17:05:13.943179 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 16 
17:05:13.943293 master-0 kubenswrapper[15493]: I0216 17:05:13.943194 15493 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5985bd5d-ee56-4995-a4d3-cb4fda84ef31-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:14.084255 master-0 kubenswrapper[15493]: I0216 17:05:14.084169 15493 generic.go:334] "Generic (PLEG): container finished" podID="a94f9b8e-b020-4aab-8373-6c056ec07464" containerID="42a024cc11f9402c2308bcc6500638212a7b0764540dfc24ee82e4c33279d303" exitCode=0 Feb 16 17:05:14.084255 master-0 kubenswrapper[15493]: I0216 17:05:14.084232 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerDied","Data":"42a024cc11f9402c2308bcc6500638212a7b0764540dfc24ee82e4c33279d303"} Feb 16 17:05:14.085322 master-0 kubenswrapper[15493]: I0216 17:05:14.085230 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"6648e1eebfd3e88db006d1e478b0438156e9d91120229014683317f4088a677e"} Feb 16 17:05:14.087180 master-0 kubenswrapper[15493]: I0216 17:05:14.087141 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" event={"ID":"ba37ef0e-373c-4ccc-b082-668630399765","Type":"ContainerStarted","Data":"225a709a7c3fdb2318ff46c5c9d434f2034eab675b345be7d46a130d694c6335"} Feb 16 17:05:14.088725 master-0 kubenswrapper[15493]: I0216 17:05:14.088681 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"15735d6e78e08bb21a10fad58fd15d0445bf13026d3ed52d2dca4bfc75e63305"} Feb 16 17:05:14.089788 master-0 kubenswrapper[15493]: I0216 17:05:14.089745 15493 generic.go:334] "Generic (PLEG): container finished" podID="56a53ffd-3f43-41cb-a9a8-23fcac93f49f" containerID="1240451b8d8edbccb06eba1a7befcdaf3de33d2010502cf2ca66305dbdae7fda" exitCode=0 Feb 16 17:05:14.089862 master-0 kubenswrapper[15493]: I0216 17:05:14.089798 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-retry-1-master-0" event={"ID":"56a53ffd-3f43-41cb-a9a8-23fcac93f49f","Type":"ContainerDied","Data":"1240451b8d8edbccb06eba1a7befcdaf3de33d2010502cf2ca66305dbdae7fda"} Feb 16 17:05:14.091570 master-0 kubenswrapper[15493]: I0216 17:05:14.091529 15493 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="2e0f4b578725b1ac1619bac6b9d64a5b0f9ea710344d672aebfe445bd4732c69" exitCode=0 Feb 16 17:05:14.091666 master-0 kubenswrapper[15493]: I0216 17:05:14.091579 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"2e0f4b578725b1ac1619bac6b9d64a5b0f9ea710344d672aebfe445bd4732c69"} Feb 16 17:05:14.091666 master-0 kubenswrapper[15493]: I0216 17:05:14.091599 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"67b69705a627aa1a9fdbcdc2144e62213e5c2b1d98c7b3a156082a883450cb27"} Feb 16 17:05:14.098979 master-0 kubenswrapper[15493]: I0216 17:05:14.096543 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"b1ee61cb8ce7850ad63abcca2d0e9390c50b02337512a9dc946d98bdea931536"} Feb 16 17:05:14.098979 master-0 kubenswrapper[15493]: I0216 17:05:14.098011 15493 generic.go:334] "Generic (PLEG): container finished" podID="e1443fb7-cb1e-4105-b604-b88c749620c4" containerID="fba47eb24f74d8dc16f686568935b61829e866ee98f522e6506264cd0528411b" exitCode=0 Feb 16 17:05:14.098979 master-0 kubenswrapper[15493]: I0216 17:05:14.098055 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerDied","Data":"fba47eb24f74d8dc16f686568935b61829e866ee98f522e6506264cd0528411b"} Feb 16 17:05:14.099819 master-0 kubenswrapper[15493]: I0216 17:05:14.099791 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" event={"ID":"54fba066-0e9e-49f6-8a86-34d5b4b660df","Type":"ContainerStarted","Data":"4525d58974c57fc626b1b1d73b131501d6143b4fb363897d90c509aa694acf7d"} Feb 16 17:05:14.100337 master-0 kubenswrapper[15493]: I0216 17:05:14.100213 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:05:14.118206 master-0 kubenswrapper[15493]: I0216 17:05:14.116754 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-989b889c9-l264c" event={"ID":"5985bd5d-ee56-4995-a4d3-cb4fda84ef31","Type":"ContainerDied","Data":"811ecd50606dddc2e6a8c6214542c8af48017f90ad09bb05c9bf8405f0e0473a"} Feb 16 17:05:14.118206 master-0 kubenswrapper[15493]: I0216 17:05:14.116814 15493 scope.go:117] "RemoveContainer" containerID="fe5deb4be7c9585b3362450f2ce8ffdcd9334e9025f031fcee47ce8437c2a1fb" Feb 16 17:05:14.118206 master-0 kubenswrapper[15493]: I0216 17:05:14.117002 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-989b889c9-l264c" Feb 16 17:05:14.120104 master-0 kubenswrapper[15493]: I0216 17:05:14.120072 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:05:14.121695 master-0 kubenswrapper[15493]: I0216 17:05:14.121666 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"9e1a367362473b50af871a01fc919bca17db857d8bb5ab7a054130ebb39b1a1d"} Feb 16 17:05:14.121889 master-0 kubenswrapper[15493]: I0216 17:05:14.121870 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"5ad40818692ba2cb86d6266c6752da028d7c73dfd4d324ad54f095094ea5a5f2"} Feb 16 17:05:14.125332 master-0 kubenswrapper[15493]: I0216 17:05:14.125259 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"0347cb4112367f34332daa64a019331216bdbbe7b22ed17e57af898d234dfb13"} Feb 16 17:05:14.492996 master-0 kubenswrapper[15493]: E0216 17:05:14.492940 15493 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:05:15.135300 master-0 kubenswrapper[15493]: I0216 17:05:15.135243 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"f85978e0c5382cab2b7bb125b720813a3ff3fb5061caf152aa359221eab49432"} Feb 16 17:05:15.138319 master-0 kubenswrapper[15493]: I0216 17:05:15.138291 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"c01f2942ca834ad5db3b3213b3d7ebcecde2cbfe80384e4ee342a8d3af673c4c"} Feb 16 17:05:15.138423 master-0 kubenswrapper[15493]: I0216 17:05:15.138322 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"d7eaf6b92ba384f68d33b5ffca98ff48955f77faa550df3dcb31018e0a060800"} Feb 16 17:05:15.140809 master-0 kubenswrapper[15493]: I0216 17:05:15.140780 15493 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a" exitCode=1 Feb 16 17:05:15.140908 master-0 kubenswrapper[15493]: I0216 17:05:15.140844 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a"} Feb 16 17:05:15.140908 master-0 kubenswrapper[15493]: I0216 17:05:15.140883 15493 scope.go:117] "RemoveContainer" 
containerID="9563c6ff303edb4e0a6b2f6ce6960067c267be9fe8766c7044d1f1559d05730f" Feb 16 17:05:15.141399 master-0 kubenswrapper[15493]: I0216 17:05:15.141250 15493 scope.go:117] "RemoveContainer" containerID="26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a" Feb 16 17:05:15.141496 master-0 kubenswrapper[15493]: E0216 17:05:15.141479 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:05:15.146077 master-0 kubenswrapper[15493]: I0216 17:05:15.143087 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerStarted","Data":"6dcd5d50bbf027f43829cdb53f16d45fecb46aaa4d161ff0a9649e21a30c1100"} Feb 16 17:05:15.146077 master-0 kubenswrapper[15493]: I0216 17:05:15.143116 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerStarted","Data":"90da39c5be608aef25083568573711edf3f3e1d673fa13869d5f66168a1f89df"} Feb 16 17:05:15.146077 master-0 kubenswrapper[15493]: I0216 17:05:15.145185 15493 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="6648e1eebfd3e88db006d1e478b0438156e9d91120229014683317f4088a677e" exitCode=0 Feb 16 17:05:15.146077 master-0 kubenswrapper[15493]: I0216 17:05:15.145229 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"6648e1eebfd3e88db006d1e478b0438156e9d91120229014683317f4088a677e"} Feb 16 17:05:15.147817 master-0 kubenswrapper[15493]: I0216 17:05:15.147118 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"2c91d4eba32f655f029c6a6984be69760a5965065f6832e40105737c2c813247"} Feb 16 17:05:15.147817 master-0 kubenswrapper[15493]: I0216 17:05:15.147145 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"98e282b9ddd3e1d2ffe040be844462c924b282deedc606e228c31635a2adedd9"} Feb 16 17:05:15.539120 master-0 kubenswrapper[15493]: I0216 17:05:15.539071 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:05:15.668379 master-0 kubenswrapper[15493]: I0216 17:05:15.668299 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kubelet-dir\") pod \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " Feb 16 17:05:15.668595 master-0 kubenswrapper[15493]: I0216 17:05:15.668560 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "56a53ffd-3f43-41cb-a9a8-23fcac93f49f" (UID: "56a53ffd-3f43-41cb-a9a8-23fcac93f49f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:05:15.668630 master-0 kubenswrapper[15493]: I0216 17:05:15.668592 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kube-api-access\") pod \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " Feb 16 17:05:15.668717 master-0 kubenswrapper[15493]: I0216 17:05:15.668692 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-var-lock\") pod \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\" (UID: \"56a53ffd-3f43-41cb-a9a8-23fcac93f49f\") " Feb 16 17:05:15.668796 master-0 kubenswrapper[15493]: I0216 17:05:15.668771 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-var-lock" (OuterVolumeSpecName: "var-lock") pod "56a53ffd-3f43-41cb-a9a8-23fcac93f49f" (UID: "56a53ffd-3f43-41cb-a9a8-23fcac93f49f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:05:15.669154 master-0 kubenswrapper[15493]: I0216 17:05:15.669131 15493 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:15.669211 master-0 kubenswrapper[15493]: I0216 17:05:15.669154 15493 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:15.671484 master-0 kubenswrapper[15493]: I0216 17:05:15.671459 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "56a53ffd-3f43-41cb-a9a8-23fcac93f49f" (UID: "56a53ffd-3f43-41cb-a9a8-23fcac93f49f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:05:15.770565 master-0 kubenswrapper[15493]: I0216 17:05:15.770429 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/56a53ffd-3f43-41cb-a9a8-23fcac93f49f-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:16.220961 master-0 kubenswrapper[15493]: I0216 17:05:16.220829 15493 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:05:16.220961 master-0 kubenswrapper[15493]: I0216 17:05:16.220858 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-retry-1-master-0" event={"ID":"56a53ffd-3f43-41cb-a9a8-23fcac93f49f","Type":"ContainerDied","Data":"68e17ad5378599c0b9f78f26d8bfa0ac4c8e1a45aad43487c9a82787afd7e198"} Feb 16 17:05:16.220961 master-0 kubenswrapper[15493]: I0216 17:05:16.220912 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e17ad5378599c0b9f78f26d8bfa0ac4c8e1a45aad43487c9a82787afd7e198" Feb 16 17:05:18.256047 master-0 kubenswrapper[15493]: I0216 17:05:18.256006 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0"} Feb 16 17:05:18.256437 master-0 kubenswrapper[15493]: I0216 17:05:18.256069 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62"} Feb 16 17:05:18.256437 master-0 kubenswrapper[15493]: I0216 17:05:18.256081 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde"} Feb 16 17:05:18.256437 master-0 kubenswrapper[15493]: I0216 17:05:18.256091 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988"} Feb 16 17:05:18.259187 master-0 kubenswrapper[15493]: I0216 17:05:18.259125 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerStarted","Data":"c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606"} Feb 16 17:05:18.259187 master-0 kubenswrapper[15493]: I0216 17:05:18.259170 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerStarted","Data":"e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b"} Feb 16 17:05:18.259187 master-0 kubenswrapper[15493]: I0216 17:05:18.259182 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerStarted","Data":"3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94"} Feb 16 17:05:18.259336 master-0 kubenswrapper[15493]: I0216 17:05:18.259197 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerStarted","Data":"e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550"} Feb 16 17:05:18.262693 master-0 kubenswrapper[15493]: I0216 17:05:18.262638 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" 
event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"fc8150e0d150326f76d6d1d696f6ff87c41e01a6de9f1e6be4e585c70167c9e4"} Feb 16 17:05:18.262693 master-0 kubenswrapper[15493]: I0216 17:05:18.262690 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"e37d1163dc16a44bc20772528909b747a42e1266d3eae2b1214ad6def8f6ca6c"} Feb 16 17:05:18.262826 master-0 kubenswrapper[15493]: I0216 17:05:18.262701 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"bbe096837bb2071dce0c03fcbc8368a495ddd75aeaf1694f35e02bf2253be8b8"} Feb 16 17:05:18.262878 master-0 kubenswrapper[15493]: I0216 17:05:18.262864 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:05:18.270044 master-0 kubenswrapper[15493]: I0216 17:05:18.270018 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:05:19.275613 master-0 kubenswrapper[15493]: I0216 17:05:19.275571 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588"} Feb 16 17:05:19.275613 master-0 kubenswrapper[15493]: I0216 17:05:19.275618 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240"} Feb 16 17:05:19.278877 master-0 kubenswrapper[15493]: I0216 17:05:19.278835 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerStarted","Data":"9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239"} Feb 16 17:05:19.278987 master-0 kubenswrapper[15493]: I0216 17:05:19.278881 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerStarted","Data":"31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a"} Feb 16 17:05:19.775856 master-0 kubenswrapper[15493]: E0216 17:05:19.775776 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:19.949308 master-0 kubenswrapper[15493]: I0216 17:05:19.949259 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:05:20.294098 master-0 kubenswrapper[15493]: I0216 17:05:20.294009 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:05:20.294979 master-0 kubenswrapper[15493]: I0216 17:05:20.294912 15493 scope.go:117] "RemoveContainer" containerID="26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a" Feb 16 17:05:20.295612 master-0 
kubenswrapper[15493]: E0216 17:05:20.295579 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:05:20.953071 master-0 kubenswrapper[15493]: I0216 17:05:20.952999 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:05:21.292733 master-0 kubenswrapper[15493]: I0216 17:05:21.292644 15493 scope.go:117] "RemoveContainer" containerID="26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a" Feb 16 17:05:21.293214 master-0 kubenswrapper[15493]: E0216 17:05:21.293163 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:05:22.903500 master-0 kubenswrapper[15493]: E0216 17:05:22.903442 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:23.723801 master-0 kubenswrapper[15493]: I0216 17:05:23.723695 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:05:23.725006 master-0 kubenswrapper[15493]: I0216 17:05:23.724910 15493 scope.go:117] "RemoveContainer" containerID="26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a" Feb 16 17:05:23.725355 master-0 kubenswrapper[15493]: E0216 17:05:23.725294 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:05:27.097188 master-0 kubenswrapper[15493]: E0216 17:05:27.097108 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 17:05:27.343740 master-0 kubenswrapper[15493]: I0216 17:05:27.343677 15493 generic.go:334] "Generic (PLEG): container finished" podID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerID="eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9" exitCode=0 Feb 16 17:05:28.353660 master-0 kubenswrapper[15493]: I0216 17:05:28.353587 15493 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="42c32ff24def97563a472e34d5e231cd397ebb778d1fabc4818fc275f6f09d01" exitCode=0 Feb 16 17:05:28.354335 master-0 kubenswrapper[15493]: I0216 17:05:28.353654 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"42c32ff24def97563a472e34d5e231cd397ebb778d1fabc4818fc275f6f09d01"} Feb 16 17:05:28.811292 master-0 kubenswrapper[15493]: I0216 17:05:28.811193 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:05:28.811568 master-0 kubenswrapper[15493]: I0216 17:05:28.811424 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:05:29.669680 master-0 kubenswrapper[15493]: I0216 17:05:29.669635 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log" Feb 16 17:05:29.670552 master-0 kubenswrapper[15493]: I0216 17:05:29.669720 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:05:29.679490 master-0 kubenswrapper[15493]: I0216 17:05:29.679407 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") pod \"400a178a4d5e9a88ba5bbbd1da2ad15e\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " Feb 16 17:05:29.679643 master-0 kubenswrapper[15493]: I0216 17:05:29.679534 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir" (OuterVolumeSpecName: "data-dir") pod "400a178a4d5e9a88ba5bbbd1da2ad15e" (UID: "400a178a4d5e9a88ba5bbbd1da2ad15e"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:05:29.679748 master-0 kubenswrapper[15493]: I0216 17:05:29.679717 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") pod \"400a178a4d5e9a88ba5bbbd1da2ad15e\" (UID: \"400a178a4d5e9a88ba5bbbd1da2ad15e\") " Feb 16 17:05:29.679834 master-0 kubenswrapper[15493]: I0216 17:05:29.679801 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs" (OuterVolumeSpecName: "certs") pod "400a178a4d5e9a88ba5bbbd1da2ad15e" (UID: "400a178a4d5e9a88ba5bbbd1da2ad15e"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:05:29.680182 master-0 kubenswrapper[15493]: I0216 17:05:29.680141 15493 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:29.680182 master-0 kubenswrapper[15493]: I0216 17:05:29.680179 15493 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/400a178a4d5e9a88ba5bbbd1da2ad15e-data-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:05:29.776958 master-0 kubenswrapper[15493]: E0216 17:05:29.776671 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:30.371583 master-0 kubenswrapper[15493]: I0216 17:05:30.371522 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_400a178a4d5e9a88ba5bbbd1da2ad15e/etcdctl/0.log" Feb 16 17:05:30.371809 master-0 kubenswrapper[15493]: I0216 17:05:30.371595 15493 generic.go:334] "Generic (PLEG): container finished" podID="400a178a4d5e9a88ba5bbbd1da2ad15e" containerID="c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841" exitCode=137 Feb 16 17:05:30.371809 master-0 kubenswrapper[15493]: I0216 17:05:30.371672 15493 scope.go:117] "RemoveContainer" containerID="eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9" Feb 16 17:05:30.371809 master-0 kubenswrapper[15493]: I0216 17:05:30.371695 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:05:30.392863 master-0 kubenswrapper[15493]: I0216 17:05:30.390741 15493 scope.go:117] "RemoveContainer" containerID="c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841" Feb 16 17:05:30.410241 master-0 kubenswrapper[15493]: I0216 17:05:30.410194 15493 scope.go:117] "RemoveContainer" containerID="eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9" Feb 16 17:05:30.410640 master-0 kubenswrapper[15493]: E0216 17:05:30.410605 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9\": container with ID starting with eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9 not found: ID does not exist" containerID="eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9" Feb 16 17:05:30.410717 master-0 kubenswrapper[15493]: I0216 17:05:30.410641 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9"} err="failed to get container status \"eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9\": rpc error: code = NotFound desc = could not find container \"eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9\": container with ID starting with eb9629fc9fd47dab2069b3f1a2e6ecff0a928056a010858921adbc2994f281c9 not found: ID does not exist" Feb 16 17:05:30.410717 master-0 kubenswrapper[15493]: I0216 17:05:30.410662 15493 scope.go:117] "RemoveContainer" containerID="c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841" Feb 16 17:05:30.411151 master-0 kubenswrapper[15493]: E0216 
17:05:30.411127 15493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841\": container with ID starting with c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841 not found: ID does not exist" containerID="c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841" Feb 16 17:05:30.411229 master-0 kubenswrapper[15493]: I0216 17:05:30.411149 15493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841"} err="failed to get container status \"c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841\": rpc error: code = NotFound desc = could not find container \"c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841\": container with ID starting with c2663e7b942ddd53a6d4e4473bd497f7b865064936d93f3d18b89ab60572b841 not found: ID does not exist" Feb 16 17:05:31.066461 master-0 kubenswrapper[15493]: I0216 17:05:31.066348 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="400a178a4d5e9a88ba5bbbd1da2ad15e" path="/var/lib/kubelet/pods/400a178a4d5e9a88ba5bbbd1da2ad15e/volumes" Feb 16 17:05:31.067389 master-0 kubenswrapper[15493]: I0216 17:05:31.067340 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 17:05:32.904994 master-0 kubenswrapper[15493]: E0216 17:05:32.904950 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:33.672036 master-0 kubenswrapper[15493]: E0216 17:05:33.671877 15493 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.1894c8f30439de8d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:400a178a4d5e9a88ba5bbbd1da2ad15e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:04:59.536957069 +0000 UTC m=+178.687130159,LastTimestamp:2026-02-16 17:04:59.536957069 +0000 UTC m=+178.687130159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 17:05:39.777619 master-0 kubenswrapper[15493]: E0216 17:05:39.777468 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:41.363320 master-0 kubenswrapper[15493]: E0216 17:05:41.363178 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Feb 16 17:05:42.472057 master-0 kubenswrapper[15493]: I0216 17:05:42.471964 15493 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" 
containerID="3f3ed06afef55b4f67d79b69d14cf21e310e3d93e7708293634b3fabc3a05a24" exitCode=0 Feb 16 17:05:42.906683 master-0 kubenswrapper[15493]: E0216 17:05:42.906599 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:49.779235 master-0 kubenswrapper[15493]: E0216 17:05:49.778491 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:49.779235 master-0 kubenswrapper[15493]: I0216 17:05:49.779215 15493 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 17:05:52.572904 master-0 kubenswrapper[15493]: I0216 17:05:52.572816 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hhcpr_39387549-c636-4bd4-b463-f6a93810f277/approver/1.log" Feb 16 17:05:52.573881 master-0 kubenswrapper[15493]: I0216 17:05:52.573822 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hhcpr_39387549-c636-4bd4-b463-f6a93810f277/approver/0.log" Feb 16 17:05:52.574569 master-0 kubenswrapper[15493]: I0216 17:05:52.574509 15493 generic.go:334] "Generic (PLEG): container finished" podID="39387549-c636-4bd4-b463-f6a93810f277" containerID="cf00a7735d0ab343338acb080927ee517385e8abb1b426c1e996a640ce7fcbfa" exitCode=1 Feb 16 17:05:52.907789 master-0 kubenswrapper[15493]: E0216 17:05:52.907645 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:05:52.907789 master-0 kubenswrapper[15493]: E0216 17:05:52.907718 15493 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:05:59.779824 master-0 kubenswrapper[15493]: E0216 17:05:59.779733 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Feb 16 17:06:01.071994 master-0 kubenswrapper[15493]: I0216 17:06:01.071874 15493 status_manager.go:851] "Failed to get status for pod" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-state-metrics-7cc9598d54-8j5rk)" Feb 16 17:06:05.070901 master-0 kubenswrapper[15493]: E0216 17:06:05.070770 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:06:05.071530 master-0 kubenswrapper[15493]: E0216 17:06:05.071101 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.017s" 
Feb 16 17:06:05.071530 master-0 kubenswrapper[15493]: I0216 17:06:05.071206 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:06:05.071530 master-0 kubenswrapper[15493]: I0216 17:06:05.071247 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:06:05.083309 master-0 kubenswrapper[15493]: I0216 17:06:05.083194 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 17:06:07.675296 master-0 kubenswrapper[15493]: E0216 17:06:07.675060 15493 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{node-exporter-8256c.1894c8f63f61e22c openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:node-exporter-8256c,UID:a94f9b8e-b020-4aab-8373-6c056ec07464,APIVersion:v1,ResourceVersion:13109,FieldPath:spec.initContainers{init-textfile},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f\" in 18.431s (18.431s including waiting). Image size: 412516925 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:05:13.414337068 +0000 UTC m=+192.564510148,LastTimestamp:2026-02-16 17:05:13.414337068 +0000 UTC m=+192.564510148,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 17:06:09.981604 master-0 kubenswrapper[15493]: E0216 17:06:09.981273 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Feb 16 17:06:10.720954 master-0 kubenswrapper[15493]: I0216 17:06:10.720872 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_4e206017-9a4e-4db1-9f43-60db756a022d/installer/0.log" Feb 16 17:06:10.720954 master-0 kubenswrapper[15493]: I0216 17:06:10.720939 15493 generic.go:334] "Generic (PLEG): container finished" podID="4e206017-9a4e-4db1-9f43-60db756a022d" containerID="37e6e23249cca416f9a227102f928e3e16fa858be68bdd8d60d856c492484f5f" exitCode=1 Feb 16 17:06:13.223984 master-0 kubenswrapper[15493]: E0216 17:06:13.223680 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:06:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:06:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:06:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:06:03Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38\\\"],\\\"sizeBytes\\\":2890715256},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e2f869b1c4f98a628b2e54c1516a0d0c09c760c91e0e1a940cb76149217661b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:97930d07a108f20287bd5ceb046a5ab125604b2e3564077db9f7d7c077cc5852\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701129928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2b4063aefbb56035efcf6afc17079b35aab0a5cb6975753fbf6e10285b3b7ebe\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:54361ade94847ff2d99d9ec2248939cc7451a5db4f03ae1e495dfc351bcb48e0\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1232417490},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:06dcb25b4ae74ef159663cc2318f84e4665c7889b38ed62940259e5edd2b576f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a81101fb2bf3c75acf3e62bf09b19b67bccbde0faf09bd379a491f5eadb8afc1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213098166},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b50244ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a7
2c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9\\\"],\\\"sizeBytes\\\":857023173},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e\\\"],\\\"sizeBytes\\\":600528538},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730\\\"],\\\"sizeBytes\\\":507065596},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9\\\"],\\\"sizeBytes\\\":497535620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb\\\"],\\\"sizeBytes\\\":481879166}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:06:13.509346 master-0 kubenswrapper[15493]: I0216 17:06:13.509219 15493 scope.go:117] "RemoveContainer" containerID="500d24f874646514d290aa65da48da18a395647cf9847d120c566c759fe02946" Feb 16 17:06:20.382551 master-0 kubenswrapper[15493]: E0216 17:06:20.382442 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Feb 16 17:06:23.225425 master-0 kubenswrapper[15493]: E0216 17:06:23.224998 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:06:31.184506 master-0 kubenswrapper[15493]: E0216 17:06:31.184408 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Feb 16 17:06:33.226286 master-0 kubenswrapper[15493]: E0216 17:06:33.226205 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:06:39.086375 master-0 kubenswrapper[15493]: E0216 17:06:39.086334 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:06:39.087159 master-0 kubenswrapper[15493]: E0216 17:06:39.087139 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Feb 16 17:06:39.088216 master-0 kubenswrapper[15493]: I0216 17:06:39.088177 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:06:39.090401 master-0 kubenswrapper[15493]: I0216 17:06:39.090346 15493 scope.go:117] "RemoveContainer" containerID="cf00a7735d0ab343338acb080927ee517385e8abb1b426c1e996a640ce7fcbfa" Feb 16 17:06:39.092323 master-0 kubenswrapper[15493]: I0216 17:06:39.092222 15493 scope.go:117] "RemoveContainer" containerID="26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a" Feb 16 17:06:39.094986 master-0 kubenswrapper[15493]: I0216 17:06:39.094953 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:06:39.095096 master-0 kubenswrapper[15493]: I0216 17:06:39.095075 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:06:39.095096 master-0 kubenswrapper[15493]: I0216 17:06:39.095094 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"3f3ed06afef55b4f67d79b69d14cf21e310e3d93e7708293634b3fabc3a05a24"} Feb 16 17:06:39.106394 master-0 kubenswrapper[15493]: I0216 17:06:39.106346 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 17:06:39.923597 master-0 kubenswrapper[15493]: I0216 17:06:39.923532 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hhcpr_39387549-c636-4bd4-b463-f6a93810f277/approver/1.log" Feb 16 17:06:39.924171 master-0 kubenswrapper[15493]: 
I0216 17:06:39.924139 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hhcpr_39387549-c636-4bd4-b463-f6a93810f277/approver/0.log"
Feb 16 17:06:40.311060 master-0 kubenswrapper[15493]: I0216 17:06:40.311003 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_4e206017-9a4e-4db1-9f43-60db756a022d/installer/0.log"
Feb 16 17:06:40.311590 master-0 kubenswrapper[15493]: I0216 17:06:40.311095 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 17:06:40.380061 master-0 kubenswrapper[15493]: I0216 17:06:40.379964 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-kubelet-dir\") pod \"4e206017-9a4e-4db1-9f43-60db756a022d\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") "
Feb 16 17:06:40.380301 master-0 kubenswrapper[15493]: I0216 17:06:40.380081 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-var-lock\") pod \"4e206017-9a4e-4db1-9f43-60db756a022d\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") "
Feb 16 17:06:40.380301 master-0 kubenswrapper[15493]: I0216 17:06:40.380138 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4e206017-9a4e-4db1-9f43-60db756a022d" (UID: "4e206017-9a4e-4db1-9f43-60db756a022d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:06:40.380301 master-0 kubenswrapper[15493]: I0216 17:06:40.380178 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-var-lock" (OuterVolumeSpecName: "var-lock") pod "4e206017-9a4e-4db1-9f43-60db756a022d" (UID: "4e206017-9a4e-4db1-9f43-60db756a022d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:06:40.380301 master-0 kubenswrapper[15493]: I0216 17:06:40.380167 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e206017-9a4e-4db1-9f43-60db756a022d-kube-api-access\") pod \"4e206017-9a4e-4db1-9f43-60db756a022d\" (UID: \"4e206017-9a4e-4db1-9f43-60db756a022d\") "
Feb 16 17:06:40.380857 master-0 kubenswrapper[15493]: I0216 17:06:40.380815 15493 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:06:40.380857 master-0 kubenswrapper[15493]: I0216 17:06:40.380846 15493 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4e206017-9a4e-4db1-9f43-60db756a022d-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 17:06:40.383277 master-0 kubenswrapper[15493]: I0216 17:06:40.383236 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e206017-9a4e-4db1-9f43-60db756a022d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4e206017-9a4e-4db1-9f43-60db756a022d" (UID: "4e206017-9a4e-4db1-9f43-60db756a022d").
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:06:40.482403 master-0 kubenswrapper[15493]: I0216 17:06:40.482332 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4e206017-9a4e-4db1-9f43-60db756a022d-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 17:06:40.936475 master-0 kubenswrapper[15493]: I0216 17:06:40.936406 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_4e206017-9a4e-4db1-9f43-60db756a022d/installer/0.log"
Feb 16 17:06:40.936763 master-0 kubenswrapper[15493]: I0216 17:06:40.936502 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Feb 16 17:06:41.678853 master-0 kubenswrapper[15493]: E0216 17:06:41.678713 15493 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{openshift-state-metrics-546cc7d765-94nfl.1894c8f64430af95 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:openshift-state-metrics-546cc7d765-94nfl,UID:ae20b683-dac8-419e-808a-ddcdb3c564e1,APIVersion:v1,ResourceVersion:13104,FieldPath:spec.containers{openshift-state-metrics},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f08586dd67c2d3d21053a044138f1bbedceb0847f1af8c3aa76127d86135a58\" in 17.882s (17.882s including waiting). Image size: 426804569 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:05:13.494998933 +0000 UTC m=+192.645172013,LastTimestamp:2026-02-16 17:05:13.494998933 +0000 UTC m=+192.645172013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 17:06:42.786175 master-0 kubenswrapper[15493]: E0216 17:06:42.786079 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Feb 16 17:06:43.227469 master-0 kubenswrapper[15493]: E0216 17:06:43.227344 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:06:52.102065 master-0 kubenswrapper[15493]: E0216 17:06:52.101834 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Feb 16 17:06:53.228995 master-0 kubenswrapper[15493]: E0216 17:06:53.228803 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:06:53.228995 master-0 kubenswrapper[15493]: E0216 17:06:53.228988 15493 kubelet_node_status.go:572] "Unable to update node status" err="update node
status exceeds retry count"
Feb 16 17:06:55.986887 master-0 kubenswrapper[15493]: E0216 17:06:55.986784 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 16 17:07:01.074694 master-0 kubenswrapper[15493]: I0216 17:07:01.074553 15493 status_manager.go:851] "Failed to get status for pod" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods monitoring-plugin-555857f695-nlrnr)"
Feb 16 17:07:02.121502 master-0 kubenswrapper[15493]: I0216 17:07:02.121434 15493 generic.go:334] "Generic (PLEG): container finished" podID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerID="05aafc42942edec512935971aa649b31b51a37bb17ad3e45d4a47e5503a28fae" exitCode=0
Feb 16 17:07:05.467333 master-0 kubenswrapper[15493]: I0216 17:07:05.467233 15493 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-s4gp2 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" start-of-body=
Feb 16 17:07:05.467333 master-0 kubenswrapper[15493]: I0216 17:07:05.467308 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused"
Feb 16 17:07:05.467333 master-0 kubenswrapper[15493]: I0216 17:07:05.467326 15493 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-s4gp2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" start-of-body=
Feb 16 17:07:05.468582 master-0 kubenswrapper[15493]: I0216 17:07:05.467420 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused"
Feb 16 17:07:08.166583 master-0 kubenswrapper[15493]: I0216 17:07:08.166382 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-mn6cr_8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/manager/0.log"
Feb 16 17:07:08.167471 master-0 kubenswrapper[15493]: I0216 17:07:08.166992 15493 generic.go:334] "Generic (PLEG): container finished" podID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" containerID="11df536ab46de7aea5d67794ede57f343d242c66232b1933e38f8621505f15f7" exitCode=1
Feb 16 17:07:08.169524 master-0 kubenswrapper[15493]: I0216 17:07:08.169472 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-lj58b_54f29618-42c2-4270-9af7-7d82852d7cec/manager/0.log"
Feb 16 17:07:08.169677 master-0 kubenswrapper[15493]: I0216 17:07:08.169526 15493 generic.go:334] "Generic (PLEG): container finished"
podID="54f29618-42c2-4270-9af7-7d82852d7cec" containerID="af4f06c8656e24dc76c11a21937d73b5e139ad31b06bedcdf3957bacba32069a" exitCode=1 Feb 16 17:07:10.576302 master-0 kubenswrapper[15493]: I0216 17:07:10.576223 15493 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-lj58b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body= Feb 16 17:07:10.576302 master-0 kubenswrapper[15493]: I0216 17:07:10.576296 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" Feb 16 17:07:10.577055 master-0 kubenswrapper[15493]: I0216 17:07:10.576320 15493 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-lj58b container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.38:8081/healthz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body= Feb 16 17:07:10.577055 master-0 kubenswrapper[15493]: I0216 17:07:10.576410 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.38:8081/healthz\": dial tcp 10.128.0.38:8081: connect: connection refused" Feb 16 17:07:11.196776 master-0 kubenswrapper[15493]: I0216 17:07:11.196584 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pfzq2_80d3b238-70c3-4e71-96a1-99405352033f/snapshot-controller/0.log" Feb 16 17:07:11.196776 master-0 kubenswrapper[15493]: I0216 17:07:11.196643 15493 generic.go:334] "Generic (PLEG): container finished" podID="80d3b238-70c3-4e71-96a1-99405352033f" containerID="7573bf948e4a5ccb81f3214838cf4ecabd14ac4f2c4a11558ad134016b1c1851" exitCode=1 Feb 16 17:07:12.388708 master-0 kubenswrapper[15493]: E0216 17:07:12.388331 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 17:07:13.109244 master-0 kubenswrapper[15493]: E0216 17:07:13.109117 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:07:13.109579 master-0 kubenswrapper[15493]: E0216 17:07:13.109515 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.014s" Feb 16 17:07:13.113400 master-0 kubenswrapper[15493]: I0216 17:07:13.113327 15493 scope.go:117] "RemoveContainer" containerID="11df536ab46de7aea5d67794ede57f343d242c66232b1933e38f8621505f15f7" Feb 16 17:07:13.124172 master-0 kubenswrapper[15493]: I0216 17:07:13.124092 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 17:07:13.424177 master-0 
kubenswrapper[15493]: E0216 17:07:13.423553 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:07:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:07:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:07:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:07:03Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7fccb6e19eb4caa16d32f4cf59670c2c741c98b099d1f12368b85aab3f84dc38\\\"],\\\"sizeBytes\\\":2890715256},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:3e2f869b1c4f98a628b2e54c1516a0d0c09c760c91e0e1a940cb76149217661b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:97930d07a108f20287bd5ceb046a5ab125604b2e3564077db9f7d7c077cc5852\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1701129928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\"],\\\"sizeBytes\\\":1631983282},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\"],\\\"sizeBytes\\\":1232696860},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:2b4063aefbb56035efcf6afc17079b35aab0a5cb6975753fbf6e10285b3b7ebe\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:54361ade94847ff2d99d9ec2248939cc7451a5db4f03ae1e495dfc351bcb48e0\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1232417490},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:06dcb25b4ae74ef159663cc2318f84e4665c7889b38ed62940259e5edd2b576f\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:a81101fb2bf3c75acf3e62bf09b19b67bccbde0faf09bd379a491f5eadb8afc1\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1213098166},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:28df36269fc553eb1adba5566d6dfc258a1a74063c4cfe8b5bdd3f202591cf56\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:7fa59a55753e6c646b3b56a1a7080a5d70767fb964f1857c411fdf4e05ad4c71\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1201887930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\"],\\\"sizeBytes\\\":987280724},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\"],\\\"sizeBytes\\\":938665460},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\"],\\\"sizeBytes\\\":913084961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\"],\\\"sizeBytes\\\":875178413},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:df623c15a78ca969fb8ad134bde911c2047bf82b502
44ee8e523763b6587e072\\\"],\\\"sizeBytes\\\":870929735},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\"],\\\"sizeBytes\\\":857432360},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9\\\"],\\\"sizeBytes\\\":857023173},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07093043bca0089b3c56d9e5331e68f549541e5661e2a39a260aa534dc9528bd\\\"],\\\"sizeBytes\\\":767663184},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e30865ea7d55b76cb925c7d26c650f0bc70fd9a02d7d59d0fe1a3024426229ad\\\"],\\\"sizeBytes\\\":682673937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e786e28fbe0b95c4f5723bebc3abde1333b259fd26673716fc5638d88286d8b7\\\"],\\\"sizeBytes\\\":677894171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\"],\\\"sizeBytes\\\":672642165},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\"],\\\"sizeBytes\\\":616473928},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e\\\"],\\\"sizeBytes\\\":600528538},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\"],\\\"sizeBytes\\\":584205881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\"],\\\"sizeBytes\\\":576983707},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f122c11c2f6a10ca150b136f7291d2e135b3a182d67809aa49727da289787cee\\\"],\\\"sizeBytes\\\":553036394},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f2b80358f029728d7f4ce46418bb6859d9ea7365de7b6f97a5f549ed6e77471\\\"],\\\"sizeBytes\\\":552251951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\"],\\\"sizeBytes\\\":543577525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8bed6766df40c0c172611f3e4555cd20db639eb505b2345abed6d5babdcbb5e3\\\"],\\\"sizeBytes\\\":524042902},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfc52d6ca96f377d53757dc437ca720e860e3e016d16c084bd5f6f2e337d3a1d\\\"],\\\"sizeBytes\\\":523760203},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\"],\\\"sizeBytes\\\":513211213},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\"],\\\"sizeBytes\\\":512819769},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\"],\\\"sizeBytes\\\":509806416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\"],\\\"sizeBytes\\\":508404525},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\"],\\\"sizeBytes\\\":508050651},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\"],\\\"sizeBytes\\\":507103881},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730\\\"],\\\"sizeBytes\\\":507065596},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\"],\\\"sizeBytes\\\":506056636},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\"],\\\"sizeBytes\\\":505990615},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\"],\\\"sizeBytes\\\":503717987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\"],\\\"sizeBytes\\\":503374574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\"],\\\"sizeBytes\\\":502798848},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\"],\\\"sizeBytes\\\":501305896},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\"],\\\"sizeBytes\\\":501222351},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\"],\\\"sizeBytes\\\":500175306},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\"],\\\"sizeBytes\\\":500068323},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d6ab8803bac3ebada13e90d9dd6208301b981488277cdeb847c25ff8002f5a30\\\"],\\\"sizeBytes\\\":499489508},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\"],\\\"sizeBytes\\\":499445182},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9\\\"],\\\"sizeBytes\\\":497535620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\"],\\\"sizeBytes\\\":490819380},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\"],\\\"sizeBytes\\\":489891070},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\"],\\\"sizeBytes\\\":481921522},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb\\\"],\\\"sizeBytes\\\":481879166}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:07:14.223101 master-0 kubenswrapper[15493]: I0216 17:07:14.223049 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-mn6cr_8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/manager/0.log"
Feb 16 17:07:15.466681 master-0
kubenswrapper[15493]: I0216 17:07:15.466616 15493 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-s4gp2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" start-of-body=
Feb 16 17:07:15.467490 master-0 kubenswrapper[15493]: I0216 17:07:15.466714 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused"
Feb 16 17:07:15.467490 master-0 kubenswrapper[15493]: I0216 17:07:15.466632 15493 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-s4gp2 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" start-of-body=
Feb 16 17:07:15.467490 master-0 kubenswrapper[15493]: I0216 17:07:15.466806 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused"
Feb 16 17:07:15.681609 master-0 kubenswrapper[15493]: E0216 17:07:15.681449 15493 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{metrics-server-745bd8d89b-qr4zh.1894c8f644696f9a openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:metrics-server-745bd8d89b-qr4zh,UID:ba37ef0e-373c-4ccc-b082-668630399765,APIVersion:v1,ResourceVersion:13383,FieldPath:spec.containers{metrics-server},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692\" in 17.656s (17.656s including waiting).
Image size: 466257032 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:05:13.498718106 +0000 UTC m=+192.648891176,LastTimestamp:2026-02-16 17:05:13.498718106 +0000 UTC m=+192.648891176,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 17:07:15.762446 master-0 kubenswrapper[15493]: I0216 17:07:15.761866 15493 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:07:15.762446 master-0 kubenswrapper[15493]: I0216 17:07:15.762030 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:07:20.575264 master-0 kubenswrapper[15493]: I0216 17:07:20.575183 15493 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-lj58b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body=
Feb 16 17:07:20.575264 master-0 kubenswrapper[15493]: I0216 17:07:20.575250 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused"
Feb 16 17:07:23.295639 master-0 kubenswrapper[15493]: I0216 17:07:23.295569 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler/0.log"
Feb 16 17:07:23.296487 master-0 kubenswrapper[15493]: I0216 17:07:23.296044 15493 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3" exitCode=1
Feb 16 17:07:23.298159 master-0 kubenswrapper[15493]: I0216 17:07:23.298073 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-92rqx_404c402a-705f-4352-b9df-b89562070d9c/machine-api-operator/0.log"
Feb 16 17:07:23.298526 master-0 kubenswrapper[15493]: I0216 17:07:23.298473 15493 generic.go:334] "Generic (PLEG): container finished" podID="404c402a-705f-4352-b9df-b89562070d9c" containerID="8fca295bb1baf8b775d772272c7b49fe8ab92fdfd4a954cb26df77e5bc91d265" exitCode=255
Feb 16 17:07:23.425011 master-0 kubenswrapper[15493]: E0216 17:07:23.424760 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:07:24.306155 master-0 kubenswrapper[15493]: I0216 17:07:24.306077 15493 generic.go:334] "Generic (PLEG): container finished" podID="e1a7c783-2e23-4284-b648-147984cf1022"
containerID="abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad" exitCode=0 Feb 16 17:07:24.308666 master-0 kubenswrapper[15493]: I0216 17:07:24.308624 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-4j7pn_4488757c-f0fd-48fa-a3f9-6373b0bcafe4/cluster-baremetal-operator/0.log" Feb 16 17:07:24.308738 master-0 kubenswrapper[15493]: I0216 17:07:24.308680 15493 generic.go:334] "Generic (PLEG): container finished" podID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" containerID="e33dd133299981fac9e32c9766093c6f93957b6afe0a293539ddeda20c06cf82" exitCode=1 Feb 16 17:07:25.426778 master-0 kubenswrapper[15493]: I0216 17:07:25.426614 15493 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 16 17:07:25.426778 master-0 kubenswrapper[15493]: I0216 17:07:25.426680 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 16 17:07:25.466564 master-0 kubenswrapper[15493]: I0216 17:07:25.466454 15493 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-s4gp2 container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" start-of-body= Feb 16 17:07:25.466564 master-0 kubenswrapper[15493]: I0216 17:07:25.466492 15493 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-s4gp2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" start-of-body= Feb 16 17:07:25.466564 master-0 kubenswrapper[15493]: I0216 17:07:25.466532 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" Feb 16 17:07:25.466564 master-0 kubenswrapper[15493]: I0216 17:07:25.466539 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" Feb 16 17:07:25.486114 master-0 kubenswrapper[15493]: I0216 17:07:25.486037 15493 patch_prober.go:28] interesting pod/controller-manager-7fc9897cf8-9rjwd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" start-of-body= Feb 16 17:07:25.486382 master-0 kubenswrapper[15493]: I0216 17:07:25.486114 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" 
podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" Feb 16 17:07:25.486543 master-0 kubenswrapper[15493]: I0216 17:07:25.486416 15493 patch_prober.go:28] interesting pod/controller-manager-7fc9897cf8-9rjwd container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" start-of-body= Feb 16 17:07:25.486641 master-0 kubenswrapper[15493]: I0216 17:07:25.486584 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" Feb 16 17:07:29.157167 master-0 kubenswrapper[15493]: I0216 17:07:29.157082 15493 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 16 17:07:29.157837 master-0 kubenswrapper[15493]: I0216 17:07:29.157173 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 16 17:07:29.390451 master-0 kubenswrapper[15493]: E0216 17:07:29.390343 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 17:07:30.574895 master-0 kubenswrapper[15493]: I0216 17:07:30.574817 15493 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-lj58b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body= Feb 16 17:07:30.574895 master-0 kubenswrapper[15493]: I0216 17:07:30.574890 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" Feb 16 17:07:30.575465 master-0 kubenswrapper[15493]: I0216 17:07:30.574995 15493 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-lj58b container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.38:8081/healthz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body= Feb 16 17:07:30.575465 master-0 kubenswrapper[15493]: I0216 17:07:30.575071 15493 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.38:8081/healthz\": dial tcp 10.128.0.38:8081: connect: connection refused" Feb 16 17:07:31.366944 master-0 kubenswrapper[15493]: I0216 17:07:31.366854 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/1.log" Feb 16 17:07:31.367748 master-0 kubenswrapper[15493]: I0216 17:07:31.367680 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/0.log" Feb 16 17:07:31.368345 master-0 kubenswrapper[15493]: I0216 17:07:31.368279 15493 generic.go:334] "Generic (PLEG): container finished" podID="702322ac-7610-4568-9a68-b6acbd1f0c12" containerID="30bf73d84862c88d4c4114e9d6fc64cf4ffbe405a1bbd1d4d5e42a328739ac61" exitCode=255 Feb 16 17:07:33.425740 master-0 kubenswrapper[15493]: E0216 17:07:33.425602 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:07:34.391748 master-0 kubenswrapper[15493]: I0216 17:07:34.391697 15493 generic.go:334] "Generic (PLEG): container finished" podID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" containerID="e71887dbe9906e96a642b9c5769b2a44ce06586a2f093a64ec9c69238b229e86" exitCode=0 Feb 16 17:07:35.426729 master-0 kubenswrapper[15493]: I0216 17:07:35.426644 15493 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 16 17:07:35.426729 master-0 kubenswrapper[15493]: I0216 17:07:35.426722 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 16 17:07:35.467626 master-0 kubenswrapper[15493]: I0216 17:07:35.467517 15493 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-s4gp2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" start-of-body= Feb 16 17:07:35.467874 master-0 kubenswrapper[15493]: I0216 17:07:35.467629 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" Feb 16 17:07:35.485806 master-0 kubenswrapper[15493]: I0216 17:07:35.485732 15493 patch_prober.go:28] interesting pod/controller-manager-7fc9897cf8-9rjwd container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure 
output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" start-of-body= Feb 16 17:07:35.485984 master-0 kubenswrapper[15493]: I0216 17:07:35.485822 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" Feb 16 17:07:35.485984 master-0 kubenswrapper[15493]: I0216 17:07:35.485760 15493 patch_prober.go:28] interesting pod/controller-manager-7fc9897cf8-9rjwd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" start-of-body= Feb 16 17:07:35.485984 master-0 kubenswrapper[15493]: I0216 17:07:35.485910 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" Feb 16 17:07:36.409049 master-0 kubenswrapper[15493]: I0216 17:07:36.408983 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-m66tx_642e5115-b7f2-4561-bc6b-1a74b6d891c4/control-plane-machine-set-operator/0.log" Feb 16 17:07:36.409049 master-0 kubenswrapper[15493]: I0216 17:07:36.409043 15493 generic.go:334] "Generic (PLEG): container finished" podID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" containerID="221dc64441e450195317a3ad8eacbbb293523d0726dbd96812217d44d6f1da31" exitCode=1 Feb 16 17:07:39.157875 master-0 kubenswrapper[15493]: I0216 17:07:39.157732 15493 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 16 17:07:39.157875 master-0 kubenswrapper[15493]: I0216 17:07:39.157874 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 16 17:07:40.575349 master-0 kubenswrapper[15493]: I0216 17:07:40.575175 15493 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-lj58b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body= Feb 16 17:07:40.575349 master-0 kubenswrapper[15493]: I0216 17:07:40.575277 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" Feb 16 17:07:42.454529 master-0 kubenswrapper[15493]: I0216 17:07:42.454389 15493 
generic.go:334] "Generic (PLEG): container finished" podID="ab80e0fb-09dd-4c93-b235-1487024105d2" containerID="7c3069013a087b4b128510ad9f826bdcec64055b56ce1f6796106b46734c14be" exitCode=0 Feb 16 17:07:43.426819 master-0 kubenswrapper[15493]: E0216 17:07:43.426705 15493 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:07:45.426644 master-0 kubenswrapper[15493]: I0216 17:07:45.426547 15493 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 16 17:07:45.427432 master-0 kubenswrapper[15493]: I0216 17:07:45.427389 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 16 17:07:45.467213 master-0 kubenswrapper[15493]: I0216 17:07:45.467119 15493 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-s4gp2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" start-of-body= Feb 16 17:07:45.467479 master-0 kubenswrapper[15493]: I0216 17:07:45.467213 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" Feb 16 17:07:45.487670 master-0 kubenswrapper[15493]: I0216 17:07:45.487580 15493 patch_prober.go:28] interesting pod/controller-manager-7fc9897cf8-9rjwd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" start-of-body= Feb 16 17:07:45.487670 master-0 kubenswrapper[15493]: I0216 17:07:45.487639 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" Feb 16 17:07:45.487670 master-0 kubenswrapper[15493]: I0216 17:07:45.487591 15493 patch_prober.go:28] interesting pod/controller-manager-7fc9897cf8-9rjwd container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" start-of-body= Feb 16 17:07:45.488270 master-0 kubenswrapper[15493]: I0216 17:07:45.487689 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" 
probeResult="failure" output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" Feb 16 17:07:45.503537 master-0 kubenswrapper[15493]: I0216 17:07:45.503476 15493 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:07:45.503537 master-0 kubenswrapper[15493]: I0216 17:07:45.503535 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:07:46.391852 master-0 kubenswrapper[15493]: E0216 17:07:46.391772 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 17:07:47.127295 master-0 kubenswrapper[15493]: E0216 17:07:47.127207 15493 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Feb 16 17:07:47.128150 master-0 kubenswrapper[15493]: E0216 17:07:47.127513 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.018s" Feb 16 17:07:47.149870 master-0 kubenswrapper[15493]: I0216 17:07:47.149779 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 17:07:49.158219 master-0 kubenswrapper[15493]: I0216 17:07:49.158080 15493 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 16 17:07:49.159267 master-0 kubenswrapper[15493]: I0216 17:07:49.158296 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 16 17:07:49.683595 master-0 kubenswrapper[15493]: E0216 17:07:49.683425 15493 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{prometheus-k8s-0.1894c8f644728071 openshift-monitoring 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-monitoring,Name:prometheus-k8s-0,UID:1cd29be8-2b2a-49f7-badd-ff53c686a63d,APIVersion:v1,ResourceVersion:13445,FieldPath:spec.initContainers{init-config-reloader},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a\" in 17.637s (17.638s including waiting). 
Image size: 432739783 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:05:13.499312241 +0000 UTC m=+192.649485331,LastTimestamp:2026-02-16 17:05:13.499312241 +0000 UTC m=+192.649485331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 17:07:50.575294 master-0 kubenswrapper[15493]: I0216 17:07:50.575205 15493 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-lj58b container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body= Feb 16 17:07:50.575959 master-0 kubenswrapper[15493]: I0216 17:07:50.575295 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" Feb 16 17:07:50.575959 master-0 kubenswrapper[15493]: I0216 17:07:50.575216 15493 patch_prober.go:28] interesting pod/operator-controller-controller-manager-85c9b89969-lj58b container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.38:8081/healthz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body= Feb 16 17:07:50.575959 master-0 kubenswrapper[15493]: I0216 17:07:50.575454 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.38:8081/healthz\": dial tcp 10.128.0.38:8081: connect: connection refused" Feb 16 17:07:51.277984 master-0 kubenswrapper[15493]: E0216 17:07:51.277771 15493 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.15s" Feb 16 17:07:51.277984 master-0 kubenswrapper[15493]: I0216 17:07:51.277891 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerDied","Data":"cf00a7735d0ab343338acb080927ee517385e8abb1b426c1e996a640ce7fcbfa"} Feb 16 17:07:51.278452 master-0 kubenswrapper[15493]: I0216 17:07:51.278160 15493 scope.go:117] "RemoveContainer" containerID="0003ee69c56b0c73d7d4526fa1f5d5fb937628023fcef99de3436e9f297fc1a8" Feb 16 17:07:51.292319 master-0 kubenswrapper[15493]: I0216 17:07:51.292254 15493 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Feb 16 17:07:51.295180 master-0 kubenswrapper[15493]: I0216 17:07:51.295110 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:07:51.296369 master-0 kubenswrapper[15493]: I0216 17:07:51.296314 15493 scope.go:117] "RemoveContainer" containerID="30bf73d84862c88d4c4114e9d6fc64cf4ffbe405a1bbd1d4d5e42a328739ac61" Feb 16 17:07:51.296477 master-0 kubenswrapper[15493]: I0216 17:07:51.296416 15493 scope.go:117] "RemoveContainer" containerID="05aafc42942edec512935971aa649b31b51a37bb17ad3e45d4a47e5503a28fae" Feb 16 17:07:51.298679 master-0 
kubenswrapper[15493]: I0216 17:07:51.297157 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4e206017-9a4e-4db1-9f43-60db756a022d","Type":"ContainerDied","Data":"37e6e23249cca416f9a227102f928e3e16fa858be68bdd8d60d856c492484f5f"} Feb 16 17:07:51.307078 master-0 kubenswrapper[15493]: I0216 17:07:51.306889 15493 scope.go:117] "RemoveContainer" containerID="af4f06c8656e24dc76c11a21937d73b5e139ad31b06bedcdf3957bacba32069a" Feb 16 17:07:51.310449 master-0 kubenswrapper[15493]: I0216 17:07:51.310389 15493 scope.go:117] "RemoveContainer" containerID="5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3" Feb 16 17:07:51.317166 master-0 kubenswrapper[15493]: I0216 17:07:51.317117 15493 scope.go:117] "RemoveContainer" containerID="7573bf948e4a5ccb81f3214838cf4ecabd14ac4f2c4a11558ad134016b1c1851" Feb 16 17:07:51.317551 master-0 kubenswrapper[15493]: I0216 17:07:51.317508 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:07:51.317551 master-0 kubenswrapper[15493]: I0216 17:07:51.317551 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:07:51.317658 master-0 kubenswrapper[15493]: I0216 17:07:51.317561 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:07:51.317658 master-0 kubenswrapper[15493]: I0216 17:07:51.317569 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:07:51.317658 master-0 kubenswrapper[15493]: I0216 17:07:51.317579 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:07:51.317865 master-0 kubenswrapper[15493]: I0216 17:07:51.317829 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:07:51.317865 master-0 kubenswrapper[15493]: I0216 17:07:51.317851 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:07:51.317974 master-0 kubenswrapper[15493]: I0216 17:07:51.317955 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:07:51.318026 master-0 kubenswrapper[15493]: I0216 17:07:51.318001 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"d03772cee9e1c4250dee38ee136391b1f13825fdf156227a38ae5496af2de176"} Feb 16 17:07:51.318077 master-0 kubenswrapper[15493]: I0216 17:07:51.318038 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:07:51.318077 master-0 kubenswrapper[15493]: I0216 17:07:51.318055 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d"} Feb 16 17:07:51.318160 master-0 kubenswrapper[15493]: I0216 17:07:51.318075 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"4e206017-9a4e-4db1-9f43-60db756a022d","Type":"ContainerDied","Data":"9b4561b52e2ec188d328992c37415bb43461b89a3939aa9d311729544dd52a0c"} Feb 16 17:07:51.318160 master-0 kubenswrapper[15493]: I0216 17:07:51.318096 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b4561b52e2ec188d328992c37415bb43461b89a3939aa9d311729544dd52a0c" Feb 16 17:07:51.318160 master-0 kubenswrapper[15493]: I0216 17:07:51.318111 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"6d952c37f35d4501ad0e0b7175a5e8f1839306c9bac326ebc7eb11692169c1d3"} Feb 16 17:07:51.318160 master-0 kubenswrapper[15493]: I0216 17:07:51.318129 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"128cff88ef56cf543296950508266ca81354faf0b4703951fd7b7e2a57baf467"} Feb 16 17:07:51.318160 master-0 kubenswrapper[15493]: I0216 17:07:51.318145 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"4189c2d6d8ef2623cadd74fd4074654aa0566518da84f032117001d0a4834fca"} Feb 16 17:07:51.318160 master-0 kubenswrapper[15493]: I0216 17:07:51.318158 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"5bce1cf05f07b621a13e12b73d50c64ea3ff3a8f4333028644cef76defec5856"} Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318172 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"c9667853163d6e480abe0bec167f4e23a5014d1bd74d8c2d90aaa40c8e84c78f"} Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318187 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerDied","Data":"05aafc42942edec512935971aa649b31b51a37bb17ad3e45d4a47e5503a28fae"} Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318209 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerDied","Data":"11df536ab46de7aea5d67794ede57f343d242c66232b1933e38f8621505f15f7"} Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318225 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerDied","Data":"af4f06c8656e24dc76c11a21937d73b5e139ad31b06bedcdf3957bacba32069a"} Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318241 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerDied","Data":"7573bf948e4a5ccb81f3214838cf4ecabd14ac4f2c4a11558ad134016b1c1851"} Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318247 15493 scope.go:117] "RemoveContainer" containerID="abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad" Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318258 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"ab850d1aa486cf37c31e8e39f8b3c0e701213d1b68a0b01c8b03ebcdcc0c19a9"} Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318326 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3"} Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318352 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerDied","Data":"8fca295bb1baf8b775d772272c7b49fe8ab92fdfd4a954cb26df77e5bc91d265"} Feb 16 17:07:51.318397 master-0 kubenswrapper[15493]: I0216 17:07:51.318387 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerDied","Data":"abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad"} Feb 16 17:07:51.318907 master-0 kubenswrapper[15493]: I0216 17:07:51.318408 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerDied","Data":"e33dd133299981fac9e32c9766093c6f93957b6afe0a293539ddeda20c06cf82"} Feb 16 17:07:51.318907 master-0 kubenswrapper[15493]: I0216 17:07:51.318535 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerDied","Data":"30bf73d84862c88d4c4114e9d6fc64cf4ffbe405a1bbd1d4d5e42a328739ac61"} Feb 16 17:07:51.318907 master-0 kubenswrapper[15493]: I0216 17:07:51.318560 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerDied","Data":"e71887dbe9906e96a642b9c5769b2a44ce06586a2f093a64ec9c69238b229e86"} Feb 16 17:07:51.318907 master-0 kubenswrapper[15493]: I0216 17:07:51.318726 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerDied","Data":"221dc64441e450195317a3ad8eacbbb293523d0726dbd96812217d44d6f1da31"} Feb 16 17:07:51.318907 master-0 kubenswrapper[15493]: I0216 17:07:51.318775 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" 
event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerDied","Data":"7c3069013a087b4b128510ad9f826bdcec64055b56ce1f6796106b46734c14be"} Feb 16 17:07:51.318907 master-0 kubenswrapper[15493]: I0216 17:07:51.318863 15493 scope.go:117] "RemoveContainer" containerID="8fca295bb1baf8b775d772272c7b49fe8ab92fdfd4a954cb26df77e5bc91d265" Feb 16 17:07:51.319192 master-0 kubenswrapper[15493]: I0216 17:07:51.319005 15493 scope.go:117] "RemoveContainer" containerID="e71887dbe9906e96a642b9c5769b2a44ce06586a2f093a64ec9c69238b229e86" Feb 16 17:07:51.319326 master-0 kubenswrapper[15493]: I0216 17:07:51.319290 15493 scope.go:117] "RemoveContainer" containerID="7c3069013a087b4b128510ad9f826bdcec64055b56ce1f6796106b46734c14be" Feb 16 17:07:51.319374 master-0 kubenswrapper[15493]: I0216 17:07:51.319356 15493 scope.go:117] "RemoveContainer" containerID="e33dd133299981fac9e32c9766093c6f93957b6afe0a293539ddeda20c06cf82" Feb 16 17:07:51.319490 master-0 kubenswrapper[15493]: I0216 17:07:51.319448 15493 scope.go:117] "RemoveContainer" containerID="221dc64441e450195317a3ad8eacbbb293523d0726dbd96812217d44d6f1da31" Feb 16 17:07:51.320714 master-0 kubenswrapper[15493]: I0216 17:07:51.320680 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:07:51.341893 master-0 kubenswrapper[15493]: I0216 17:07:51.341849 15493 scope.go:117] "RemoveContainer" containerID="e89782b445c861c527809a14cee2fd06738fb3945a0c6a3181ecbf78934d2bda" Feb 16 17:07:51.381211 master-0 kubenswrapper[15493]: I0216 17:07:51.381151 15493 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 17:07:51.381211 master-0 kubenswrapper[15493]: I0216 17:07:51.381202 15493 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="a03eb44a-7698-4fdc-8248-218aed4af174" Feb 16 17:07:51.381211 master-0 kubenswrapper[15493]: I0216 17:07:51.381229 15493 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Feb 16 17:07:51.381505 master-0 kubenswrapper[15493]: I0216 17:07:51.381242 15493 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="a03eb44a-7698-4fdc-8248-218aed4af174" Feb 16 17:07:51.537528 master-0 kubenswrapper[15493]: I0216 17:07:51.537473 15493 scope.go:117] "RemoveContainer" containerID="9713e0568adf454e7586d1d021067b3f58ea3654d5eca48f5359291f1475c373" Feb 16 17:07:51.544670 master-0 kubenswrapper[15493]: I0216 17:07:51.544624 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hhcpr_39387549-c636-4bd4-b463-f6a93810f277/approver/1.log" Feb 16 17:07:51.555894 master-0 kubenswrapper[15493]: I0216 17:07:51.555789 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podStartSLOduration=165.899377787 podStartE2EDuration="3m3.555745785s" podCreationTimestamp="2026-02-16 17:04:48 +0000 UTC" firstStartedPulling="2026-02-16 17:04:55.842336327 +0000 UTC m=+174.992509407" lastFinishedPulling="2026-02-16 17:05:13.498704335 +0000 UTC m=+192.648877405" observedRunningTime="2026-02-16 17:07:51.549384065 +0000 UTC m=+350.699557145" watchObservedRunningTime="2026-02-16 17:07:51.555745785 +0000 UTC m=+350.705918875" Feb 16 17:07:51.753576 master-0 kubenswrapper[15493]: I0216 17:07:51.753481 
15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-8256c" podStartSLOduration=170.322378538 podStartE2EDuration="3m8.753456347s" podCreationTimestamp="2026-02-16 17:04:43 +0000 UTC" firstStartedPulling="2026-02-16 17:04:54.983245279 +0000 UTC m=+174.133418349" lastFinishedPulling="2026-02-16 17:05:13.414323078 +0000 UTC m=+192.564496158" observedRunningTime="2026-02-16 17:07:51.737138925 +0000 UTC m=+350.887312015" watchObservedRunningTime="2026-02-16 17:07:51.753456347 +0000 UTC m=+350.903629417" Feb 16 17:07:51.845771 master-0 kubenswrapper[15493]: I0216 17:07:51.845701 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podStartSLOduration=170.963604065 podStartE2EDuration="3m8.8456825s" podCreationTimestamp="2026-02-16 17:04:43 +0000 UTC" firstStartedPulling="2026-02-16 17:04:55.612903267 +0000 UTC m=+174.763076327" lastFinishedPulling="2026-02-16 17:05:13.494981682 +0000 UTC m=+192.645154762" observedRunningTime="2026-02-16 17:07:51.845443603 +0000 UTC m=+350.995616683" watchObservedRunningTime="2026-02-16 17:07:51.8456825 +0000 UTC m=+350.995855570" Feb 16 17:07:51.879690 master-0 kubenswrapper[15493]: I0216 17:07:51.879596 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=166.288378522 podStartE2EDuration="3m7.879571993s" podCreationTimestamp="2026-02-16 17:04:44 +0000 UTC" firstStartedPulling="2026-02-16 17:04:55.87728774 +0000 UTC m=+175.027460810" lastFinishedPulling="2026-02-16 17:05:17.468481211 +0000 UTC m=+196.618654281" observedRunningTime="2026-02-16 17:07:51.878048195 +0000 UTC m=+351.028221285" watchObservedRunningTime="2026-02-16 17:07:51.879571993 +0000 UTC m=+351.029745063" Feb 16 17:07:51.900735 master-0 kubenswrapper[15493]: I0216 17:07:51.900566 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podStartSLOduration=171.314480989 podStartE2EDuration="3m8.900549471s" podCreationTimestamp="2026-02-16 17:04:43 +0000 UTC" firstStartedPulling="2026-02-16 17:04:55.914578292 +0000 UTC m=+175.064751362" lastFinishedPulling="2026-02-16 17:05:13.500646764 +0000 UTC m=+192.650819844" observedRunningTime="2026-02-16 17:07:51.895152872 +0000 UTC m=+351.045325952" watchObservedRunningTime="2026-02-16 17:07:51.900549471 +0000 UTC m=+351.050722541" Feb 16 17:07:51.916118 master-0 kubenswrapper[15493]: I0216 17:07:51.914408 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podStartSLOduration=166.337628536 podStartE2EDuration="3m3.914391826s" podCreationTimestamp="2026-02-16 17:04:48 +0000 UTC" firstStartedPulling="2026-02-16 17:04:55.922566171 +0000 UTC m=+175.072739251" lastFinishedPulling="2026-02-16 17:05:13.499329471 +0000 UTC m=+192.649502541" observedRunningTime="2026-02-16 17:07:51.913179547 +0000 UTC m=+351.063352617" watchObservedRunningTime="2026-02-16 17:07:51.914391826 +0000 UTC m=+351.064564896" Feb 16 17:07:51.972290 master-0 kubenswrapper[15493]: I0216 17:07:51.969726 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=161.342683655 podStartE2EDuration="3m2.969710181s" podCreationTimestamp="2026-02-16 17:04:49 +0000 UTC" firstStartedPulling="2026-02-16 17:04:55.860991923 
+0000 UTC m=+175.011164993" lastFinishedPulling="2026-02-16 17:05:17.488018449 +0000 UTC m=+196.638191519" observedRunningTime="2026-02-16 17:07:51.967720288 +0000 UTC m=+351.117893378" watchObservedRunningTime="2026-02-16 17:07:51.969710181 +0000 UTC m=+351.119883251" Feb 16 17:07:52.035829 master-0 kubenswrapper[15493]: I0216 17:07:52.035592 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podStartSLOduration=166.378323707 podStartE2EDuration="3m4.035561777s" podCreationTimestamp="2026-02-16 17:04:48 +0000 UTC" firstStartedPulling="2026-02-16 17:04:55.842095191 +0000 UTC m=+174.992268261" lastFinishedPulling="2026-02-16 17:05:13.499333251 +0000 UTC m=+192.649506331" observedRunningTime="2026-02-16 17:07:52.030217179 +0000 UTC m=+351.180390269" watchObservedRunningTime="2026-02-16 17:07:52.035561777 +0000 UTC m=+351.185734867" Feb 16 17:07:52.110182 master-0 kubenswrapper[15493]: I0216 17:07:52.110112 15493 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-989b889c9-l264c"] Feb 16 17:07:52.123914 master-0 kubenswrapper[15493]: I0216 17:07:52.123629 15493 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-989b889c9-l264c"] Feb 16 17:07:52.145085 master-0 kubenswrapper[15493]: I0216 17:07:52.145011 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podStartSLOduration=165.52100212 podStartE2EDuration="3m7.144991449s" podCreationTimestamp="2026-02-16 17:04:45 +0000 UTC" firstStartedPulling="2026-02-16 17:04:55.845185059 +0000 UTC m=+174.995358129" lastFinishedPulling="2026-02-16 17:05:17.469174388 +0000 UTC m=+196.619347458" observedRunningTime="2026-02-16 17:07:52.139020262 +0000 UTC m=+351.289193342" watchObservedRunningTime="2026-02-16 17:07:52.144991449 +0000 UTC m=+351.295164519" Feb 16 17:07:52.556609 master-0 kubenswrapper[15493]: I0216 17:07:52.556530 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-92rqx_404c402a-705f-4352-b9df-b89562070d9c/machine-api-operator/0.log" Feb 16 17:07:52.557387 master-0 kubenswrapper[15493]: I0216 17:07:52.557306 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"90cbf2116f3e25b749b07925ed91de371ea44d17a0d85fece1ea2429638db035"} Feb 16 17:07:52.560550 master-0 kubenswrapper[15493]: I0216 17:07:52.560398 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"c35c88702bd8a621849f8082fe4f63e6885d894b4f65ad5d41bfc83d85f97dcf"} Feb 16 17:07:52.560766 master-0 kubenswrapper[15493]: I0216 17:07:52.560712 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:07:52.563731 master-0 kubenswrapper[15493]: I0216 17:07:52.563624 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:07:52.565354 master-0 kubenswrapper[15493]: I0216 17:07:52.565296 15493 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/1.log" Feb 16 17:07:52.566161 master-0 kubenswrapper[15493]: I0216 17:07:52.566086 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"59609566742bf5ead9458043d860c5784e5a3eba6e4f48cd4386c000dcf98a3f"} Feb 16 17:07:52.569839 master-0 kubenswrapper[15493]: I0216 17:07:52.569814 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pfzq2_80d3b238-70c3-4e71-96a1-99405352033f/snapshot-controller/0.log" Feb 16 17:07:52.570131 master-0 kubenswrapper[15493]: I0216 17:07:52.570072 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerStarted","Data":"d33f6a3f5621bc1476673c5054e5f7762d9e97c50291405678774a966801267f"} Feb 16 17:07:52.573985 master-0 kubenswrapper[15493]: I0216 17:07:52.573946 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"9f6816b707db3ee7a3ce4874d11d951c9bf04af4e701546c6792509a97799184"} Feb 16 17:07:52.579087 master-0 kubenswrapper[15493]: I0216 17:07:52.579030 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-lj58b_54f29618-42c2-4270-9af7-7d82852d7cec/manager/0.log" Feb 16 17:07:52.579250 master-0 kubenswrapper[15493]: I0216 17:07:52.579210 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"8be786986ba9a6b558bcc95a5388404ffe3d212fd816f257969a4588305eb16d"} Feb 16 17:07:52.579429 master-0 kubenswrapper[15493]: I0216 17:07:52.579389 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:07:52.593812 master-0 kubenswrapper[15493]: I0216 17:07:52.593707 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerStarted","Data":"2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3"} Feb 16 17:07:52.594819 master-0 kubenswrapper[15493]: I0216 17:07:52.594560 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:07:52.597814 master-0 kubenswrapper[15493]: I0216 17:07:52.597759 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"20009358c0be2bc1329d86a0acf4dfbd84a7369e251f3fe1202732e17f06df3a"} Feb 16 17:07:52.600472 master-0 kubenswrapper[15493]: I0216 17:07:52.600420 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:07:52.602280 master-0 kubenswrapper[15493]: I0216 17:07:52.602239 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-4j7pn_4488757c-f0fd-48fa-a3f9-6373b0bcafe4/cluster-baremetal-operator/0.log" Feb 16 17:07:52.602394 master-0 kubenswrapper[15493]: I0216 17:07:52.602354 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"cf7896ba8c2bed977d056d90bc44e7d6d6d178fe65604d077d79101df877c0f7"} Feb 16 17:07:52.613215 master-0 kubenswrapper[15493]: I0216 17:07:52.613162 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler/0.log" Feb 16 17:07:52.614041 master-0 kubenswrapper[15493]: I0216 17:07:52.614000 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d"} Feb 16 17:07:52.614290 master-0 kubenswrapper[15493]: I0216 17:07:52.614268 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:07:52.618786 master-0 kubenswrapper[15493]: I0216 17:07:52.618735 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-m66tx_642e5115-b7f2-4561-bc6b-1a74b6d891c4/control-plane-machine-set-operator/0.log" Feb 16 17:07:52.618999 master-0 kubenswrapper[15493]: I0216 17:07:52.618887 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerStarted","Data":"0378dec77c7a2ef4be2733e7030469db31488b7e185a34697e5785f613bb63ff"} Feb 16 17:07:52.668910 master-0 kubenswrapper[15493]: I0216 17:07:52.668788 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 16 17:07:52.668910 master-0 kubenswrapper[15493]: I0216 17:07:52.668868 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 16 17:07:52.699737 master-0 kubenswrapper[15493]: I0216 17:07:52.699608 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 17:07:53.067093 master-0 kubenswrapper[15493]: I0216 17:07:53.067023 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5985bd5d-ee56-4995-a4d3-cb4fda84ef31" path="/var/lib/kubelet/pods/5985bd5d-ee56-4995-a4d3-cb4fda84ef31/volumes" Feb 16 17:07:53.723856 master-0 kubenswrapper[15493]: I0216 17:07:53.723769 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:07:53.731682 master-0 kubenswrapper[15493]: I0216 17:07:53.731591 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:07:53.929553 master-0 kubenswrapper[15493]: I0216 17:07:53.929479 15493 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-etcd/etcd-master-0"] Feb 16 17:07:54.633913 master-0 kubenswrapper[15493]: I0216 17:07:54.633677 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:07:54.637826 master-0 kubenswrapper[15493]: I0216 17:07:54.637804 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:07:54.645558 master-0 kubenswrapper[15493]: I0216 17:07:54.645510 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 16 17:07:54.647003 master-0 kubenswrapper[15493]: E0216 17:07:54.646533 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 16 17:07:54.689886 master-0 kubenswrapper[15493]: I0216 17:07:54.689802 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.6897808269999999 podStartE2EDuration="1.689780827s" podCreationTimestamp="2026-02-16 17:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:07:54.689403375 +0000 UTC m=+353.839576455" watchObservedRunningTime="2026-02-16 17:07:54.689780827 +0000 UTC m=+353.839953907" Feb 16 17:07:58.216338 master-0 kubenswrapper[15493]: I0216 17:07:58.216178 15493 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 16 17:07:58.217286 master-0 kubenswrapper[15493]: E0216 17:07:58.216716 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e206017-9a4e-4db1-9f43-60db756a022d" containerName="installer" Feb 16 17:07:58.217286 master-0 kubenswrapper[15493]: I0216 17:07:58.216759 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e206017-9a4e-4db1-9f43-60db756a022d" containerName="installer" Feb 16 17:07:58.217286 master-0 kubenswrapper[15493]: E0216 17:07:58.216782 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56a53ffd-3f43-41cb-a9a8-23fcac93f49f" containerName="installer" Feb 16 17:07:58.217286 master-0 kubenswrapper[15493]: I0216 17:07:58.216791 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="56a53ffd-3f43-41cb-a9a8-23fcac93f49f" containerName="installer" Feb 16 17:07:58.217286 master-0 kubenswrapper[15493]: E0216 17:07:58.216844 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5985bd5d-ee56-4995-a4d3-cb4fda84ef31" containerName="oauth-openshift" Feb 16 17:07:58.217286 master-0 kubenswrapper[15493]: I0216 17:07:58.216855 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="5985bd5d-ee56-4995-a4d3-cb4fda84ef31" containerName="oauth-openshift" Feb 16 17:07:58.217286 master-0 kubenswrapper[15493]: I0216 17:07:58.217082 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e206017-9a4e-4db1-9f43-60db756a022d" containerName="installer" Feb 16 17:07:58.217286 master-0 kubenswrapper[15493]: I0216 17:07:58.217143 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="5985bd5d-ee56-4995-a4d3-cb4fda84ef31" containerName="oauth-openshift" Feb 16 17:07:58.217286 master-0 kubenswrapper[15493]: I0216 17:07:58.217169 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="56a53ffd-3f43-41cb-a9a8-23fcac93f49f" containerName="installer" Feb 16 17:07:58.217840 master-0 
kubenswrapper[15493]: I0216 17:07:58.217715 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:07:58.220416 master-0 kubenswrapper[15493]: I0216 17:07:58.220320 15493 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 17:07:58.222095 master-0 kubenswrapper[15493]: I0216 17:07:58.220636 15493 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-z9qtm" Feb 16 17:07:58.240497 master-0 kubenswrapper[15493]: I0216 17:07:58.240421 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 16 17:07:58.306363 master-0 kubenswrapper[15493]: I0216 17:07:58.306307 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:07:58.306605 master-0 kubenswrapper[15493]: I0216 17:07:58.306393 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:07:58.306605 master-0 kubenswrapper[15493]: I0216 17:07:58.306429 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:07:58.408127 master-0 kubenswrapper[15493]: I0216 17:07:58.407619 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:07:58.408127 master-0 kubenswrapper[15493]: I0216 17:07:58.407702 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:07:58.408127 master-0 kubenswrapper[15493]: I0216 17:07:58.407874 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:07:58.408127 master-0 kubenswrapper[15493]: I0216 17:07:58.407955 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " 
pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:07:58.408127 master-0 kubenswrapper[15493]: I0216 17:07:58.408009 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:07:59.805262 master-0 kubenswrapper[15493]: I0216 17:07:59.805184 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:08:00.049641 master-0 kubenswrapper[15493]: I0216 17:08:00.049565 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:08:00.452628 master-0 kubenswrapper[15493]: I0216 17:08:00.452574 15493 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Feb 16 17:08:00.578898 master-0 kubenswrapper[15493]: I0216 17:08:00.577452 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:08:00.678708 master-0 kubenswrapper[15493]: I0216 17:08:00.678647 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"1ea5bf67-1fd1-488a-a440-00bb9a8533d0","Type":"ContainerStarted","Data":"476c1ab5895246950ff0af3a254d45f710acb8d5fa693b21601ad92d3de01336"} Feb 16 17:08:01.685414 master-0 kubenswrapper[15493]: I0216 17:08:01.685343 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"1ea5bf67-1fd1-488a-a440-00bb9a8533d0","Type":"ContainerStarted","Data":"f6c9fdfd97c165e8ebd7f3a9510d3689be682907a4ea0b0e75b885d792411309"} Feb 16 17:08:01.708295 master-0 kubenswrapper[15493]: I0216 17:08:01.708187 15493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=3.708160949 podStartE2EDuration="3.708160949s" podCreationTimestamp="2026-02-16 17:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:08:01.703145721 +0000 UTC m=+360.853318801" watchObservedRunningTime="2026-02-16 17:08:01.708160949 +0000 UTC m=+360.858334049" Feb 16 17:08:03.394378 master-0 kubenswrapper[15493]: E0216 17:08:03.394300 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Feb 16 17:08:15.503004 master-0 kubenswrapper[15493]: I0216 17:08:15.502151 15493 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:08:15.503004 master-0 kubenswrapper[15493]: I0216 17:08:15.502215 15493 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:08:15.503004 master-0 kubenswrapper[15493]: I0216 17:08:15.502259 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:08:15.503004 master-0 kubenswrapper[15493]: I0216 17:08:15.502905 15493 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e14c52dbf9d0263521da55835d9630da4b72192e3d1606e8dd551ca67592feb1"} pod="openshift-machine-config-operator/machine-config-daemon-98q6v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:08:15.503004 master-0 kubenswrapper[15493]: I0216 17:08:15.502986 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" containerID="cri-o://e14c52dbf9d0263521da55835d9630da4b72192e3d1606e8dd551ca67592feb1" gracePeriod=600 Feb 16 17:08:15.791457 master-0 kubenswrapper[15493]: I0216 17:08:15.791338 15493 generic.go:334] "Generic (PLEG): container finished" podID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerID="e14c52dbf9d0263521da55835d9630da4b72192e3d1606e8dd551ca67592feb1" exitCode=0 Feb 16 17:08:15.791457 master-0 kubenswrapper[15493]: I0216 17:08:15.791408 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerDied","Data":"e14c52dbf9d0263521da55835d9630da4b72192e3d1606e8dd551ca67592feb1"} Feb 16 17:08:15.791457 master-0 kubenswrapper[15493]: I0216 17:08:15.791452 15493 scope.go:117] "RemoveContainer" containerID="df55732c40e933ea372f1214a91cde4306eb5555441b3143bbda5066dd5d87f2" Feb 16 17:08:16.803944 master-0 kubenswrapper[15493]: I0216 17:08:16.803852 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"0a4e3758232c1b4d1cd852c6c0d2cb896a9be7004e29268b474e13c843b389c0"} Feb 16 17:08:38.487589 master-0 kubenswrapper[15493]: E0216 17:08:38.487442 15493 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Feb 16 17:08:38.489950 master-0 kubenswrapper[15493]: I0216 17:08:38.489890 15493 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 17:08:38.491444 master-0 kubenswrapper[15493]: I0216 17:08:38.491416 15493 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.509312 master-0 kubenswrapper[15493]: I0216 17:08:38.509241 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.509572 master-0 kubenswrapper[15493]: I0216 17:08:38.509364 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.509572 master-0 kubenswrapper[15493]: I0216 17:08:38.509390 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.509572 master-0 kubenswrapper[15493]: I0216 17:08:38.509426 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.509572 master-0 kubenswrapper[15493]: I0216 17:08:38.509511 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.534299 master-0 kubenswrapper[15493]: E0216 17:08:38.534250 15493 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Feb 16 17:08:38.534800 master-0 kubenswrapper[15493]: I0216 17:08:38.534757 15493 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 17:08:38.535326 master-0 kubenswrapper[15493]: I0216 17:08:38.535269 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver" containerID="cri-o://01f4f89971ebb359e0eeec52882d07d21354b8e08fc6c1173fde440ef3e5d38b" gracePeriod=15
Feb 16 17:08:38.535491 master-0 kubenswrapper[15493]: I0216 17:08:38.535392 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://fe91129f000124aca4ffcb46fef9c002b10b433ceec06a4c01ffe2fc33ca2b2a" gracePeriod=15
Feb 16 17:08:38.535556 master-0 kubenswrapper[15493]: I0216 17:08:38.535312 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ab4cf257e6e0f29ed254052561254039fa1b8a8f9b4ce54fa741917d9a4c1648" gracePeriod=15
Feb 16 17:08:38.535601 master-0 kubenswrapper[15493]: I0216 17:08:38.535320 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://cd7ca8585e9770f668282d25d327e2a91ec2db08fd3ae45538225dc8a5ab9091" gracePeriod=15
Feb 16 17:08:38.535656 master-0 kubenswrapper[15493]: I0216 17:08:38.535619 15493 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints" containerID="cri-o://85003325a476039d5eb44432fdcf7d2532212a32580f381790355fd21b4f3730" gracePeriod=15
Feb 16 17:08:38.538060 master-0 kubenswrapper[15493]: I0216 17:08:38.538032 15493 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 17:08:38.538322 master-0 kubenswrapper[15493]: E0216 17:08:38.538302 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-insecure-readyz"
Feb 16 17:08:38.538322 master-0 kubenswrapper[15493]: I0216 17:08:38.538322 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-insecure-readyz"
Feb 16 17:08:38.538398 master-0 kubenswrapper[15493]: E0216 17:08:38.538332 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="setup"
Feb 16 17:08:38.538398 master-0 kubenswrapper[15493]: I0216 17:08:38.538342 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="setup"
Feb 16 17:08:38.538398 master-0 kubenswrapper[15493]: E0216 17:08:38.538358 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints"
Feb 16 17:08:38.538398 master-0 kubenswrapper[15493]: I0216 17:08:38.538366 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints"
Feb 16 17:08:38.538398 master-0 kubenswrapper[15493]: E0216 17:08:38.538386 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 17:08:38.538398 master-0 kubenswrapper[15493]: I0216 17:08:38.538395 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 17:08:38.538603 master-0 kubenswrapper[15493]: E0216 17:08:38.538407 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver"
Feb 16 17:08:38.538603 master-0 kubenswrapper[15493]: I0216 17:08:38.538415 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver"
Feb 16 17:08:38.538603 master-0 kubenswrapper[15493]: E0216 17:08:38.538440 15493 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-syncer"
Feb 16 17:08:38.538603 master-0 kubenswrapper[15493]: I0216 17:08:38.538449 15493 state_mem.go:107] "Deleted CPUSet assignment" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-syncer"
Feb 16 17:08:38.538603 master-0 kubenswrapper[15493]: I0216 17:08:38.538600 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-insecure-readyz"
Feb 16 17:08:38.538741 master-0 kubenswrapper[15493]: I0216 17:08:38.538641 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 17:08:38.538741 master-0 kubenswrapper[15493]: I0216 17:08:38.538654 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-check-endpoints"
Feb 16 17:08:38.538741 master-0 kubenswrapper[15493]: I0216 17:08:38.538664 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver-cert-syncer"
Feb 16 17:08:38.538741 master-0 kubenswrapper[15493]: I0216 17:08:38.538688 15493 memory_manager.go:354] "RemoveStaleState removing state" podUID="e300ec3a145c1339a627607b3c84b99d" containerName="kube-apiserver"
Feb 16 17:08:38.616245 master-0 kubenswrapper[15493]: I0216 17:08:38.616162 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.620040 master-0 kubenswrapper[15493]: I0216 17:08:38.611993 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.620040 master-0 kubenswrapper[15493]: I0216 17:08:38.619236 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.620040 master-0 kubenswrapper[15493]: I0216 17:08:38.619352 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.620040 master-0 kubenswrapper[15493]: I0216 17:08:38.619525 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:38.620040 master-0 kubenswrapper[15493]: I0216 17:08:38.619616 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.620040 master-0 kubenswrapper[15493]: I0216 17:08:38.619647 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:38.620040 master-0 kubenswrapper[15493]: I0216 17:08:38.619743 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.620040 master-0 kubenswrapper[15493]: I0216 17:08:38.619831 15493 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:38.620555 master-0 kubenswrapper[15493]: I0216 17:08:38.620151 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.620555 master-0 kubenswrapper[15493]: I0216 17:08:38.620193 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.620555 master-0 kubenswrapper[15493]: I0216 17:08:38.620217 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.620555 master-0 kubenswrapper[15493]: I0216 17:08:38.620246 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:38.721123 master-0 kubenswrapper[15493]: I0216 17:08:38.721062 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:38.721295 master-0 kubenswrapper[15493]: I0216 17:08:38.721169 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:38.721295 master-0 kubenswrapper[15493]: I0216 17:08:38.721231 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:38.721295 master-0 kubenswrapper[15493]: I0216 17:08:38.721263 15493 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:38.721406 master-0 kubenswrapper[15493]: I0216 17:08:38.721283 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:38.721810 master-0 kubenswrapper[15493]: I0216 17:08:38.721779 15493 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:38.954318 master-0 kubenswrapper[15493]: I0216 17:08:38.954248 15493 generic.go:334] "Generic (PLEG): container finished" podID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" containerID="f6c9fdfd97c165e8ebd7f3a9510d3689be682907a4ea0b0e75b885d792411309" exitCode=0
Feb 16 17:08:38.954550 master-0 kubenswrapper[15493]: I0216 17:08:38.954355 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"1ea5bf67-1fd1-488a-a440-00bb9a8533d0","Type":"ContainerDied","Data":"f6c9fdfd97c165e8ebd7f3a9510d3689be682907a4ea0b0e75b885d792411309"}
Feb 16 17:08:38.955381 master-0 kubenswrapper[15493]: I0216 17:08:38.955342 15493 status_manager.go:851] "Failed to get status for pod" podUID="e300ec3a145c1339a627607b3c84b99d" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:38.956186 master-0 kubenswrapper[15493]: I0216 17:08:38.956137 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:38.957662 master-0 kubenswrapper[15493]: I0216 17:08:38.957623 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-cert-syncer/0.log"
Feb 16 17:08:38.958912 master-0 kubenswrapper[15493]: I0216 17:08:38.958855 15493 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="85003325a476039d5eb44432fdcf7d2532212a32580f381790355fd21b4f3730" exitCode=0
Feb 16 17:08:38.958912 master-0 kubenswrapper[15493]: I0216 17:08:38.958895 15493 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="fe91129f000124aca4ffcb46fef9c002b10b433ceec06a4c01ffe2fc33ca2b2a" exitCode=0
Feb 16 17:08:38.958912 master-0 kubenswrapper[15493]: I0216 17:08:38.958911 15493 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="cd7ca8585e9770f668282d25d327e2a91ec2db08fd3ae45538225dc8a5ab9091" exitCode=0
Feb 16 17:08:38.958912 master-0 kubenswrapper[15493]: I0216 17:08:38.958949 15493 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="ab4cf257e6e0f29ed254052561254039fa1b8a8f9b4ce54fa741917d9a4c1648" exitCode=2
Feb 16 17:08:40.846613 master-0 kubenswrapper[15493]: E0216 17:08:40.846524 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:40.847230 master-0 kubenswrapper[15493]: E0216 17:08:40.847193 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:40.847812 master-0 kubenswrapper[15493]: E0216 17:08:40.847768 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:40.848495 master-0 kubenswrapper[15493]: E0216 17:08:40.848416 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:40.849043 master-0 kubenswrapper[15493]: E0216 17:08:40.849014 15493 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:40.849085 master-0 kubenswrapper[15493]: I0216 17:08:40.849043 15493 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 16 17:08:40.849522 master-0 kubenswrapper[15493]: E0216 17:08:40.849486 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 16 17:08:40.977016 master-0 kubenswrapper[15493]: I0216 17:08:40.976952 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"1ea5bf67-1fd1-488a-a440-00bb9a8533d0","Type":"ContainerDied","Data":"476c1ab5895246950ff0af3a254d45f710acb8d5fa693b21601ad92d3de01336"}
Feb 16 17:08:40.977016 master-0 kubenswrapper[15493]: I0216 17:08:40.976993 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476c1ab5895246950ff0af3a254d45f710acb8d5fa693b21601ad92d3de01336"
Feb 16 17:08:40.980657 master-0 kubenswrapper[15493]: I0216 17:08:40.980607 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-cert-syncer/0.log"
Feb 16 17:08:40.981593 master-0 kubenswrapper[15493]: I0216 17:08:40.981550 15493 generic.go:334] "Generic (PLEG): container finished" podID="e300ec3a145c1339a627607b3c84b99d" containerID="01f4f89971ebb359e0eeec52882d07d21354b8e08fc6c1173fde440ef3e5d38b" exitCode=0
Feb 16 17:08:40.981593 master-0 kubenswrapper[15493]: I0216 17:08:40.981584 15493 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55448d8bea6b7d300f8becd37c0b5654a24938ecf842378babc2a1e0bcb81d5b"
Feb 16 17:08:41.051624 master-0 kubenswrapper[15493]: E0216 17:08:41.051495 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Feb 16 17:08:41.062512 master-0 kubenswrapper[15493]: I0216 17:08:41.062420 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:41.452474 master-0 kubenswrapper[15493]: E0216 17:08:41.452374 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 16 17:08:42.253949 master-0 kubenswrapper[15493]: E0216 17:08:42.253835 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 16 17:08:43.070129 master-0 kubenswrapper[15493]: E0216 17:08:43.070058 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:43.070501 master-0 kubenswrapper[15493]: I0216 17:08:43.070472 15493 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:43.075574 master-0 kubenswrapper[15493]: I0216 17:08:43.075535 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 17:08:43.076428 master-0 kubenswrapper[15493]: I0216 17:08:43.076382 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:43.101148 master-0 kubenswrapper[15493]: W0216 17:08:43.101080 15493 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32286c81635de6de1cf7f328273c1a49.slice/crio-fb8e6c20437263cdd7c1b3ec81c35824763837446eb013bf230f57c37a5c7d4c WatchSource:0}: Error finding container fb8e6c20437263cdd7c1b3ec81c35824763837446eb013bf230f57c37a5c7d4c: Status 404 returned error can't find the container with id fb8e6c20437263cdd7c1b3ec81c35824763837446eb013bf230f57c37a5c7d4c
Feb 16 17:08:43.103958 master-0 kubenswrapper[15493]: E0216 17:08:43.103828 15493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1894c92711cf73af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:32286c81635de6de1cf7f328273c1a49,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:08:43.103163311 +0000 UTC m=+402.253336421,LastTimestamp:2026-02-16 17:08:43.103163311 +0000 UTC m=+402.253336421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 17:08:43.123971 master-0 kubenswrapper[15493]: I0216 17:08:43.123913 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") pod \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") "
Feb 16 17:08:43.124069 master-0 kubenswrapper[15493]: I0216 17:08:43.123982 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1ea5bf67-1fd1-488a-a440-00bb9a8533d0" (UID: "1ea5bf67-1fd1-488a-a440-00bb9a8533d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:08:43.124069 master-0 kubenswrapper[15493]: I0216 17:08:43.123995 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kube-api-access\") pod \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") "
Feb 16 17:08:43.124149 master-0 kubenswrapper[15493]: I0216 17:08:43.124116 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") pod \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") "
Feb 16 17:08:43.124399 master-0 kubenswrapper[15493]: I0216 17:08:43.124348 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock" (OuterVolumeSpecName: "var-lock") pod "1ea5bf67-1fd1-488a-a440-00bb9a8533d0" (UID: "1ea5bf67-1fd1-488a-a440-00bb9a8533d0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:08:43.124666 master-0 kubenswrapper[15493]: I0216 17:08:43.124641 15493 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:08:43.124666 master-0 kubenswrapper[15493]: I0216 17:08:43.124664 15493 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 17:08:43.127547 master-0 kubenswrapper[15493]: I0216 17:08:43.127513 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1ea5bf67-1fd1-488a-a440-00bb9a8533d0" (UID: "1ea5bf67-1fd1-488a-a440-00bb9a8533d0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:08:43.226463 master-0 kubenswrapper[15493]: I0216 17:08:43.226414 15493 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Feb 16 17:08:43.854738 master-0 kubenswrapper[15493]: E0216 17:08:43.854570 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Feb 16 17:08:44.011889 master-0 kubenswrapper[15493]: I0216 17:08:44.011442 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Feb 16 17:08:44.011889 master-0 kubenswrapper[15493]: I0216 17:08:44.011435 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"fb8e6c20437263cdd7c1b3ec81c35824763837446eb013bf230f57c37a5c7d4c"}
Feb 16 17:08:44.750968 master-0 kubenswrapper[15493]: I0216 17:08:44.750910 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_e300ec3a145c1339a627607b3c84b99d/kube-apiserver-cert-syncer/0.log"
Feb 16 17:08:44.751578 master-0 kubenswrapper[15493]: I0216 17:08:44.751547 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:44.752387 master-0 kubenswrapper[15493]: I0216 17:08:44.752318 15493 status_manager.go:851] "Failed to get status for pod" podUID="e300ec3a145c1339a627607b3c84b99d" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:44.752885 master-0 kubenswrapper[15493]: I0216 17:08:44.752853 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:44.772340 master-0 kubenswrapper[15493]: I0216 17:08:44.772279 15493 status_manager.go:851] "Failed to get status for pod" podUID="e300ec3a145c1339a627607b3c84b99d" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:44.773016 master-0 kubenswrapper[15493]: I0216 17:08:44.772911 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:44.851491 master-0 kubenswrapper[15493]: I0216 17:08:44.851432 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") pod \"e300ec3a145c1339a627607b3c84b99d\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") "
Feb 16 17:08:44.851671 master-0 kubenswrapper[15493]: I0216 17:08:44.851506 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") pod \"e300ec3a145c1339a627607b3c84b99d\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") "
Feb 16 17:08:44.851671 master-0 kubenswrapper[15493]: I0216 17:08:44.851589 15493 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") pod \"e300ec3a145c1339a627607b3c84b99d\" (UID: \"e300ec3a145c1339a627607b3c84b99d\") "
Feb 16 17:08:44.852991 master-0 kubenswrapper[15493]: I0216 17:08:44.852098 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "e300ec3a145c1339a627607b3c84b99d" (UID: "e300ec3a145c1339a627607b3c84b99d"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:08:44.852991 master-0 kubenswrapper[15493]: I0216 17:08:44.852108 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "e300ec3a145c1339a627607b3c84b99d" (UID: "e300ec3a145c1339a627607b3c84b99d"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:08:44.852991 master-0 kubenswrapper[15493]: I0216 17:08:44.852221 15493 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "e300ec3a145c1339a627607b3c84b99d" (UID: "e300ec3a145c1339a627607b3c84b99d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:08:44.953212 master-0 kubenswrapper[15493]: I0216 17:08:44.953153 15493 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-cert-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:08:44.953212 master-0 kubenswrapper[15493]: I0216 17:08:44.953205 15493 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-audit-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:08:44.953862 master-0 kubenswrapper[15493]: I0216 17:08:44.953222 15493 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e300ec3a145c1339a627607b3c84b99d-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:08:45.019041 master-0 kubenswrapper[15493]: I0216 17:08:45.018880 15493 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:08:45.063607 master-0 kubenswrapper[15493]: I0216 17:08:45.063541 15493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e300ec3a145c1339a627607b3c84b99d" path="/var/lib/kubelet/pods/e300ec3a145c1339a627607b3c84b99d/volumes"
Feb 16 17:08:45.430080 master-0 kubenswrapper[15493]: I0216 17:08:45.429915 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:08:45.430698 master-0 kubenswrapper[15493]: I0216 17:08:45.430640 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:45.431247 master-0 kubenswrapper[15493]: I0216 17:08:45.431193 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:46.436988 master-0 kubenswrapper[15493]: E0216 17:08:46.436820 15493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1894c92711cf73af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:32286c81635de6de1cf7f328273c1a49,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:08:43.103163311 +0000 UTC m=+402.253336421,LastTimestamp:2026-02-16 17:08:43.103163311 +0000 UTC m=+402.253336421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 17:08:47.034411 master-0 kubenswrapper[15493]: I0216 17:08:47.034331 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff"}
Feb 16 17:08:47.035409 master-0 kubenswrapper[15493]: I0216 17:08:47.035356 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:47.035520 master-0 kubenswrapper[15493]: E0216 17:08:47.035361 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:47.035990 master-0 kubenswrapper[15493]: I0216 17:08:47.035896 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:47.056480 master-0 kubenswrapper[15493]: E0216 17:08:47.056376 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Feb 16 17:08:48.046567 master-0 kubenswrapper[15493]: E0216 17:08:48.046507 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:08:50.954051 master-0 kubenswrapper[15493]: I0216 17:08:50.953806 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Feb 16 17:08:51.059711 master-0 kubenswrapper[15493]: I0216 17:08:51.059577 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:51.064106 master-0 kubenswrapper[15493]: I0216 17:08:51.060486 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:51.068618 master-0 kubenswrapper[15493]: I0216 17:08:51.068553 15493 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d" exitCode=1
Feb 16 17:08:51.068618 master-0 kubenswrapper[15493]: I0216 17:08:51.068601 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d"}
Feb 16 17:08:51.068618 master-0 kubenswrapper[15493]: I0216 17:08:51.068629 15493 scope.go:117] "RemoveContainer" containerID="26fb7956e8f3c69eb64ff1fc06e8f70aea162bbaa7e679a2b8dbe11e568d160a"
Feb 16 17:08:51.069312 master-0 kubenswrapper[15493]: I0216 17:08:51.069258 15493 scope.go:117] "RemoveContainer" containerID="64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d"
Feb 16 17:08:51.069584 master-0 kubenswrapper[15493]: E0216 17:08:51.069538 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 17:08:51.069911 master-0 kubenswrapper[15493]: I0216 17:08:51.069836 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:51.070836 master-0 kubenswrapper[15493]: I0216 17:08:51.070762 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:51.071807 master-0 kubenswrapper[15493]: I0216 17:08:51.071745 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:53.458428 master-0 kubenswrapper[15493]: E0216 17:08:53.458318 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s"
Feb 16 17:08:53.723096 master-0 kubenswrapper[15493]: I0216 17:08:53.722956 15493 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:08:53.723507 master-0 kubenswrapper[15493]: I0216 17:08:53.723483 15493 scope.go:117] "RemoveContainer" containerID="64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d"
Feb 16 17:08:53.723735 master-0 kubenswrapper[15493]: E0216 17:08:53.723698 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
Feb 16 17:08:53.724516 master-0 kubenswrapper[15493]: I0216 17:08:53.724435 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:53.725434 master-0 kubenswrapper[15493]: I0216 17:08:53.725375 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:53.726595 master-0 kubenswrapper[15493]: I0216 17:08:53.726553 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:56.438657 master-0 kubenswrapper[15493]: E0216 17:08:56.438548 15493 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1894c92711cf73af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:32286c81635de6de1cf7f328273c1a49,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:08:43.103163311 +0000 UTC m=+402.253336421,LastTimestamp:2026-02-16 17:08:43.103163311 +0000 UTC m=+402.253336421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 17:08:58.971257 master-0 kubenswrapper[15493]: I0216 17:08:58.971201 15493 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:08:58.971257 master-0 kubenswrapper[15493]: [+]has-synced ok
Feb 16 17:08:58.971257 master-0 kubenswrapper[15493]: [-]process-running failed: reason withheld
Feb 16 17:08:58.971257 master-0 kubenswrapper[15493]: healthz check failed
Feb 16 17:08:58.971699 master-0 kubenswrapper[15493]: I0216 17:08:58.971533 15493 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]controller ok
Feb 16 17:08:58.971699 master-0 kubenswrapper[15493]: [-]backend-http failed: reason withheld
Feb 16 17:08:58.971699 master-0 kubenswrapper[15493]: healthz check failed
Feb 16 17:08:58.971699 master-0 kubenswrapper[15493]: I0216 17:08:58.971594 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:08:58.972023 master-0 kubenswrapper[15493]: I0216 17:08:58.971975 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:08:59.126223 master-0 kubenswrapper[15493]: I0216 17:08:59.126177 15493 generic.go:334] "Generic (PLEG): container finished" podID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerID="6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484" exitCode=0
Feb 16 17:08:59.126333 master-0 kubenswrapper[15493]: I0216 17:08:59.126255 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerDied","Data":"6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484"}
Feb 16 17:08:59.126766 master-0 kubenswrapper[15493]: I0216 17:08:59.126747 15493 scope.go:117] "RemoveContainer" containerID="6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484"
Feb 16 17:08:59.127262 master-0 kubenswrapper[15493]: I0216 17:08:59.127214 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.127822 master-0 kubenswrapper[15493]: I0216 17:08:59.127780 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.128584 master-0 kubenswrapper[15493]: I0216 17:08:59.128531 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.129059 master-0 kubenswrapper[15493]: I0216 17:08:59.129029 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.129223 master-0 kubenswrapper[15493]: I0216 17:08:59.129198 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/etcd-rev/0.log"
Feb 16 17:08:59.132825 master-0 kubenswrapper[15493]: I0216 17:08:59.132789 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/etcd-metrics/0.log"
Feb 16 17:08:59.134141 master-0 kubenswrapper[15493]: I0216 17:08:59.134115 15493 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="4189c2d6d8ef2623cadd74fd4074654aa0566518da84f032117001d0a4834fca" exitCode=2
Feb 16 17:08:59.134141 master-0 kubenswrapper[15493]: I0216 17:08:59.134137 15493 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="5bce1cf05f07b621a13e12b73d50c64ea3ff3a8f4333028644cef76defec5856" exitCode=0
Feb 16 17:08:59.134141 master-0 kubenswrapper[15493]: I0216 17:08:59.134145 15493 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="c9667853163d6e480abe0bec167f4e23a5014d1bd74d8c2d90aaa40c8e84c78f" exitCode=2
Feb 16 17:08:59.134314 master-0 kubenswrapper[15493]: I0216 17:08:59.134154 15493 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="128cff88ef56cf543296950508266ca81354faf0b4703951fd7b7e2a57baf467" exitCode=0
Feb 16 17:08:59.134314 master-0 kubenswrapper[15493]: I0216 17:08:59.134206 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"4189c2d6d8ef2623cadd74fd4074654aa0566518da84f032117001d0a4834fca"}
Feb 16 17:08:59.134314 master-0 kubenswrapper[15493]: I0216 17:08:59.134250 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"5bce1cf05f07b621a13e12b73d50c64ea3ff3a8f4333028644cef76defec5856"}
Feb 16 17:08:59.134314 master-0 kubenswrapper[15493]: I0216 17:08:59.134271 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"c9667853163d6e480abe0bec167f4e23a5014d1bd74d8c2d90aaa40c8e84c78f"}
Feb 16 17:08:59.134314 master-0 kubenswrapper[15493]: I0216 17:08:59.134282 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"128cff88ef56cf543296950508266ca81354faf0b4703951fd7b7e2a57baf467"}
Feb 16 17:08:59.134822 master-0 kubenswrapper[15493]: I0216 17:08:59.134791 15493 scope.go:117] "RemoveContainer" containerID="128cff88ef56cf543296950508266ca81354faf0b4703951fd7b7e2a57baf467"
Feb 16 17:08:59.134899 master-0 kubenswrapper[15493]: I0216 17:08:59.134860 15493 scope.go:117] "RemoveContainer" containerID="c9667853163d6e480abe0bec167f4e23a5014d1bd74d8c2d90aaa40c8e84c78f"
Feb 16 17:08:59.134899 master-0 kubenswrapper[15493]: I0216 17:08:59.134875 15493 scope.go:117] "RemoveContainer" containerID="5bce1cf05f07b621a13e12b73d50c64ea3ff3a8f4333028644cef76defec5856"
Feb 16 17:08:59.134899 master-0 kubenswrapper[15493]: I0216 17:08:59.134887 15493 scope.go:117] "RemoveContainer" containerID="4189c2d6d8ef2623cadd74fd4074654aa0566518da84f032117001d0a4834fca"
Feb 16 17:08:59.135667 master-0 kubenswrapper[15493]: I0216 17:08:59.135527 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.135799 master-0 kubenswrapper[15493]: I0216 17:08:59.135772 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rjdlk_ab5760f1-b2e0-4138-9383-e4827154ac50/kube-multus-additional-cni-plugins/0.log"
Feb 16 17:08:59.136191 master-0 kubenswrapper[15493]: I0216 17:08:59.136152 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.136604 master-0 kubenswrapper[15493]: I0216 17:08:59.136556 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.137029 master-0 kubenswrapper[15493]: I0216 17:08:59.136996 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.137496 master-0 kubenswrapper[15493]: I0216 17:08:59.137423 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.138172 master-0 kubenswrapper[15493]: I0216 17:08:59.138135 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="71373993bd8fa85e34385967dc668cef9cf33a45809ff033e291394c3abdeb57" exitCode=143
Feb 16 17:08:59.138172 master-0 kubenswrapper[15493]: I0216 17:08:59.138155 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"71373993bd8fa85e34385967dc668cef9cf33a45809ff033e291394c3abdeb57"}
Feb 16 17:08:59.138635 master-0 kubenswrapper[15493]: I0216 17:08:59.138601 15493 scope.go:117] "RemoveContainer" containerID="71373993bd8fa85e34385967dc668cef9cf33a45809ff033e291394c3abdeb57"
Feb 16 17:08:59.139491 master-0 kubenswrapper[15493]: I0216 17:08:59.139465 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.140121 master-0 kubenswrapper[15493]: I0216 17:08:59.140081 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.140664 master-0 kubenswrapper[15493]: I0216 17:08:59.140578 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.141113 master-0 kubenswrapper[15493]: I0216 17:08:59.141067 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.141698 master-0 kubenswrapper[15493]: I0216 17:08:59.141657 15493 generic.go:334] "Generic (PLEG): container finished" podID="cc9a20f4-255a-4312-8f43-174a28c06340" containerID="915f8db950ac7ab932dfa55756083249cd00e3b20e2ab5de6ceb63fdfe934d23" exitCode=0
Feb 16 17:08:59.141767 master-0 kubenswrapper[15493]: I0216 17:08:59.141698 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.141767 master-0 kubenswrapper[15493]: I0216 17:08:59.141720 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerDied","Data":"915f8db950ac7ab932dfa55756083249cd00e3b20e2ab5de6ceb63fdfe934d23"}
Feb 16 17:08:59.142142 master-0 kubenswrapper[15493]: I0216 17:08:59.142045 15493 scope.go:117] "RemoveContainer" containerID="915f8db950ac7ab932dfa55756083249cd00e3b20e2ab5de6ceb63fdfe934d23"
Feb 16 17:08:59.143719 master-0 kubenswrapper[15493]: I0216 17:08:59.143603 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.144536 master-0 kubenswrapper[15493]: I0216 17:08:59.144523 15493 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 17:08:59.145846 master-0 kubenswrapper[15493]: I0216 17:08:59.145782 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.146531 master-0 kubenswrapper[15493]: I0216 17:08:59.146475 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.146607 master-0 kubenswrapper[15493]: I0216 17:08:59.146579 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerDied","Data":"321cf557aeb107d8d573f4ad125d9c41970fc9988ae80bf9900e02e207922125"}
Feb 16 17:08:59.146869 master-0 kubenswrapper[15493]: I0216 17:08:59.146562 15493 generic.go:334] "Generic (PLEG): container finished" podID="f3beb7bf-922f-425d-8a19-fd407a7153a8" containerID="321cf557aeb107d8d573f4ad125d9c41970fc9988ae80bf9900e02e207922125" exitCode=0
Feb 16 17:08:59.147000 master-0 kubenswrapper[15493]: I0216 17:08:59.146972 15493 scope.go:117] "RemoveContainer" containerID="321cf557aeb107d8d573f4ad125d9c41970fc9988ae80bf9900e02e207922125"
Feb 16 17:08:59.147268 master-0 kubenswrapper[15493]: I0216 17:08:59.147222 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.147716 master-0 kubenswrapper[15493]: I0216 17:08:59.147679 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.148200 master-0 kubenswrapper[15493]: I0216 17:08:59.148160 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.148821 master-0 kubenswrapper[15493]: I0216 17:08:59.148769 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.149300 master-0 kubenswrapper[15493]: I0216 17:08:59.149262 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.149599 master-0 kubenswrapper[15493]: I0216 17:08:59.149572 15493 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f" exitCode=0
Feb 16 17:08:59.149635 master-0 kubenswrapper[15493]: I0216 17:08:59.149623 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f"}
Feb 16 17:08:59.149773 master-0 kubenswrapper[15493]: I0216 17:08:59.149739 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.149946 master-0 kubenswrapper[15493]: I0216 17:08:59.149903 15493 scope.go:117] "RemoveContainer" containerID="64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d"
Feb 16 17:08:59.149946 master-0 kubenswrapper[15493]: I0216 17:08:59.149934 15493 scope.go:117] "RemoveContainer" containerID="67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f"
Feb 16 17:08:59.150219 master-0 kubenswrapper[15493]: I0216 17:08:59.150180 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.150603 master-0 kubenswrapper[15493]: I0216 17:08:59.150563 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" pod="openshift-marketplace/certified-operators-z69zq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z69zq\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.151101 master-0 kubenswrapper[15493]: I0216 17:08:59.151049 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.151375 master-0 kubenswrapper[15493]: I0216 17:08:59.151342 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-2ws9r_9c48005e-c4df-4332-87fc-ec028f2c6921/machine-config-server/0.log"
Feb 16 17:08:59.151418 master-0 kubenswrapper[15493]: I0216 17:08:59.151391 15493 generic.go:334] "Generic (PLEG): container finished" podID="9c48005e-c4df-4332-87fc-ec028f2c6921" containerID="88fb0564c391b1d841f5663a68574e4b3c75822e3555a9b9f404dbe3fc5c5089" exitCode=2
Feb 16 17:08:59.151487 master-0 kubenswrapper[15493]: I0216 17:08:59.151457 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2ws9r" event={"ID":"9c48005e-c4df-4332-87fc-ec028f2c6921","Type":"ContainerDied","Data":"88fb0564c391b1d841f5663a68574e4b3c75822e3555a9b9f404dbe3fc5c5089"}
Feb 16 17:08:59.152194 master-0 kubenswrapper[15493]: I0216 17:08:59.152162 15493 scope.go:117] "RemoveContainer" containerID="88fb0564c391b1d841f5663a68574e4b3c75822e3555a9b9f404dbe3fc5c5089"
Feb 16 17:08:59.152622 master-0 kubenswrapper[15493]: I0216 17:08:59.152557 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:08:59.153131 master-0 kubenswrapper[15493]: I0216 17:08:59.153099 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection
refused" Feb 16 17:08:59.153816 master-0 kubenswrapper[15493]: I0216 17:08:59.153763 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.154155 master-0 kubenswrapper[15493]: I0216 17:08:59.154123 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-qqvg4_1363cb7b-62cc-497b-af6f-4d5e0eb7f174/serve-healthcheck-canary/0.log" Feb 16 17:08:59.154194 master-0 kubenswrapper[15493]: I0216 17:08:59.154163 15493 generic.go:334] "Generic (PLEG): container finished" podID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" containerID="1f9fdf8ad8b22c269fdbde8bae7ca0001ee8651ea5ecbb2a592ce042830398a8" exitCode=2 Feb 16 17:08:59.154226 master-0 kubenswrapper[15493]: I0216 17:08:59.154207 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qqvg4" event={"ID":"1363cb7b-62cc-497b-af6f-4d5e0eb7f174","Type":"ContainerDied","Data":"1f9fdf8ad8b22c269fdbde8bae7ca0001ee8651ea5ecbb2a592ce042830398a8"} Feb 16 17:08:59.154525 master-0 kubenswrapper[15493]: I0216 17:08:59.154489 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.154614 master-0 kubenswrapper[15493]: I0216 17:08:59.154498 15493 scope.go:117] "RemoveContainer" containerID="1f9fdf8ad8b22c269fdbde8bae7ca0001ee8651ea5ecbb2a592ce042830398a8" Feb 16 17:08:59.155052 master-0 kubenswrapper[15493]: I0216 17:08:59.155017 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.155407 master-0 kubenswrapper[15493]: I0216 17:08:59.155380 15493 status_manager.go:851] "Failed to get status for pod" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" pod="openshift-ingress-canary/ingress-canary-qqvg4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-qqvg4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.156028 master-0 kubenswrapper[15493]: I0216 17:08:59.155993 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.156092 master-0 kubenswrapper[15493]: I0216 17:08:59.156072 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-olm-operator_cluster-olm-operator-55b69c6c48-7chjv_4e51bba5-0ebe-4e55-a588-38b71548c605/cluster-olm-operator/0.log" Feb 16 17:08:59.156450 master-0 kubenswrapper[15493]: I0216 17:08:59.156417 15493 status_manager.go:851] 
"Failed to get status for pod" podUID="9c48005e-c4df-4332-87fc-ec028f2c6921" pod="openshift-machine-config-operator/machine-config-server-2ws9r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-2ws9r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.156738 master-0 kubenswrapper[15493]: I0216 17:08:59.156704 15493 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body= Feb 16 17:08:59.156777 master-0 kubenswrapper[15493]: I0216 17:08:59.156735 15493 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="079e840529eb6d74a125e4d8873e01bd5f48d0a6e891c798f77f912c0e2b6249" exitCode=0 Feb 16 17:08:59.156777 master-0 kubenswrapper[15493]: I0216 17:08:59.156757 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"079e840529eb6d74a125e4d8873e01bd5f48d0a6e891c798f77f912c0e2b6249"} Feb 16 17:08:59.156777 master-0 kubenswrapper[15493]: I0216 17:08:59.156738 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" Feb 16 17:08:59.156865 master-0 kubenswrapper[15493]: I0216 17:08:59.156784 15493 scope.go:117] "RemoveContainer" containerID="d0f7e8be40545fa33b748eaa6f879efc2d956e86b6534dcac117b6e66db8cbc2" Feb 16 17:08:59.157339 master-0 kubenswrapper[15493]: I0216 17:08:59.157308 15493 scope.go:117] "RemoveContainer" containerID="079e840529eb6d74a125e4d8873e01bd5f48d0a6e891c798f77f912c0e2b6249" Feb 16 17:08:59.157783 master-0 kubenswrapper[15493]: I0216 17:08:59.157745 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.158281 master-0 kubenswrapper[15493]: I0216 17:08:59.158232 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" pod="openshift-marketplace/certified-operators-z69zq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z69zq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.160242 master-0 kubenswrapper[15493]: I0216 17:08:59.160143 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.160587 master-0 kubenswrapper[15493]: I0216 17:08:59.160548 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.161709 master-0 kubenswrapper[15493]: I0216 17:08:59.161681 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.162090 master-0 kubenswrapper[15493]: I0216 17:08:59.162067 15493 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="fc8150e0d150326f76d6d1d696f6ff87c41e01a6de9f1e6be4e585c70167c9e4" exitCode=0 Feb 16 17:08:59.162090 master-0 kubenswrapper[15493]: I0216 17:08:59.162088 15493 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="e37d1163dc16a44bc20772528909b747a42e1266d3eae2b1214ad6def8f6ca6c" exitCode=0 Feb 16 17:08:59.162200 master-0 kubenswrapper[15493]: I0216 17:08:59.162097 15493 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="bbe096837bb2071dce0c03fcbc8368a495ddd75aeaf1694f35e02bf2253be8b8" exitCode=0 Feb 16 17:08:59.162200 master-0 kubenswrapper[15493]: I0216 17:08:59.162104 15493 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="f85978e0c5382cab2b7bb125b720813a3ff3fb5061caf152aa359221eab49432" exitCode=0 Feb 16 17:08:59.162200 master-0 kubenswrapper[15493]: I0216 17:08:59.162111 15493 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="9e1a367362473b50af871a01fc919bca17db857d8bb5ab7a054130ebb39b1a1d" exitCode=0 Feb 16 17:08:59.162200 master-0 kubenswrapper[15493]: I0216 17:08:59.162118 15493 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="5ad40818692ba2cb86d6266c6752da028d7c73dfd4d324ad54f095094ea5a5f2" exitCode=0 Feb 16 17:08:59.162200 master-0 kubenswrapper[15493]: I0216 17:08:59.162145 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"fc8150e0d150326f76d6d1d696f6ff87c41e01a6de9f1e6be4e585c70167c9e4"} Feb 16 17:08:59.162200 master-0 kubenswrapper[15493]: I0216 17:08:59.162171 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"e37d1163dc16a44bc20772528909b747a42e1266d3eae2b1214ad6def8f6ca6c"} Feb 16 17:08:59.162200 master-0 kubenswrapper[15493]: I0216 17:08:59.162200 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"bbe096837bb2071dce0c03fcbc8368a495ddd75aeaf1694f35e02bf2253be8b8"} Feb 16 17:08:59.162460 master-0 kubenswrapper[15493]: I0216 17:08:59.162211 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" 
event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"f85978e0c5382cab2b7bb125b720813a3ff3fb5061caf152aa359221eab49432"} Feb 16 17:08:59.162460 master-0 kubenswrapper[15493]: I0216 17:08:59.162220 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"9e1a367362473b50af871a01fc919bca17db857d8bb5ab7a054130ebb39b1a1d"} Feb 16 17:08:59.162460 master-0 kubenswrapper[15493]: I0216 17:08:59.162231 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"5ad40818692ba2cb86d6266c6752da028d7c73dfd4d324ad54f095094ea5a5f2"} Feb 16 17:08:59.162592 master-0 kubenswrapper[15493]: I0216 17:08:59.162404 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.163253 master-0 kubenswrapper[15493]: I0216 17:08:59.163235 15493 scope.go:117] "RemoveContainer" containerID="5ad40818692ba2cb86d6266c6752da028d7c73dfd4d324ad54f095094ea5a5f2" Feb 16 17:08:59.163299 master-0 kubenswrapper[15493]: I0216 17:08:59.163258 15493 scope.go:117] "RemoveContainer" containerID="9e1a367362473b50af871a01fc919bca17db857d8bb5ab7a054130ebb39b1a1d" Feb 16 17:08:59.163403 master-0 kubenswrapper[15493]: I0216 17:08:59.163367 15493 generic.go:334] "Generic (PLEG): container finished" podID="0ff68421-1741-41c1-93d5-5c722dfd295e" containerID="a3effd6b237c4893b5a519d4b8fb5bea7b5a96d22cc8bc7d99b660e1adef87fb" exitCode=0 Feb 16 17:08:59.163521 master-0 kubenswrapper[15493]: I0216 17:08:59.163490 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.163604 master-0 kubenswrapper[15493]: I0216 17:08:59.163270 15493 scope.go:117] "RemoveContainer" containerID="f85978e0c5382cab2b7bb125b720813a3ff3fb5061caf152aa359221eab49432" Feb 16 17:08:59.163666 master-0 kubenswrapper[15493]: I0216 17:08:59.163609 15493 scope.go:117] "RemoveContainer" containerID="bbe096837bb2071dce0c03fcbc8368a495ddd75aeaf1694f35e02bf2253be8b8" Feb 16 17:08:59.163666 master-0 kubenswrapper[15493]: I0216 17:08:59.163620 15493 scope.go:117] "RemoveContainer" containerID="e37d1163dc16a44bc20772528909b747a42e1266d3eae2b1214ad6def8f6ca6c" Feb 16 17:08:59.164417 master-0 kubenswrapper[15493]: I0216 17:08:59.163434 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" event={"ID":"0ff68421-1741-41c1-93d5-5c722dfd295e","Type":"ContainerDied","Data":"a3effd6b237c4893b5a519d4b8fb5bea7b5a96d22cc8bc7d99b660e1adef87fb"} Feb 16 17:08:59.164471 master-0 kubenswrapper[15493]: I0216 17:08:59.164427 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.164503 master-0 kubenswrapper[15493]: I0216 17:08:59.164484 15493 scope.go:117] "RemoveContainer" containerID="fc8150e0d150326f76d6d1d696f6ff87c41e01a6de9f1e6be4e585c70167c9e4" Feb 16 17:08:59.164833 master-0 kubenswrapper[15493]: I0216 17:08:59.164803 15493 scope.go:117] "RemoveContainer" containerID="a3effd6b237c4893b5a519d4b8fb5bea7b5a96d22cc8bc7d99b660e1adef87fb" Feb 16 17:08:59.165323 master-0 kubenswrapper[15493]: I0216 17:08:59.165257 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.165820 master-0 kubenswrapper[15493]: I0216 17:08:59.165774 15493 status_manager.go:851] "Failed to get status for pod" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-64bf6cdbbc-tpd6h\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.165953 master-0 kubenswrapper[15493]: I0216 17:08:59.165929 15493 generic.go:334] "Generic (PLEG): container finished" podID="ee84198d-6357-4429-a90c-455c3850a788" containerID="4a3fbb1a388ca141e061ddd3f456a30e0ea19e4b3d5d971ef21b891853ddad88" exitCode=0 Feb 16 17:08:59.165953 master-0 kubenswrapper[15493]: I0216 17:08:59.165951 15493 generic.go:334] "Generic (PLEG): container finished" podID="ee84198d-6357-4429-a90c-455c3850a788" containerID="510990a72db12a97eef2b9c9fbdaec55abf5d52c68ce419a7f5a87a3062f73f1" exitCode=0 Feb 16 17:08:59.166028 master-0 kubenswrapper[15493]: I0216 17:08:59.165995 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerDied","Data":"4a3fbb1a388ca141e061ddd3f456a30e0ea19e4b3d5d971ef21b891853ddad88"} Feb 16 17:08:59.166028 master-0 kubenswrapper[15493]: I0216 17:08:59.166013 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerDied","Data":"510990a72db12a97eef2b9c9fbdaec55abf5d52c68ce419a7f5a87a3062f73f1"} Feb 16 17:08:59.166296 master-0 kubenswrapper[15493]: I0216 17:08:59.166268 15493 scope.go:117] "RemoveContainer" containerID="510990a72db12a97eef2b9c9fbdaec55abf5d52c68ce419a7f5a87a3062f73f1" Feb 16 17:08:59.166296 master-0 kubenswrapper[15493]: I0216 17:08:59.166289 15493 scope.go:117] "RemoveContainer" containerID="4a3fbb1a388ca141e061ddd3f456a30e0ea19e4b3d5d971ef21b891853ddad88" Feb 16 17:08:59.166362 master-0 kubenswrapper[15493]: I0216 17:08:59.166332 15493 status_manager.go:851] "Failed to get status for pod" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" pod="openshift-ingress-canary/ingress-canary-qqvg4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-qqvg4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.166729 master-0 kubenswrapper[15493]: I0216 17:08:59.166689 15493 
status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.167187 master-0 kubenswrapper[15493]: I0216 17:08:59.167152 15493 status_manager.go:851] "Failed to get status for pod" podUID="9c48005e-c4df-4332-87fc-ec028f2c6921" pod="openshift-machine-config-operator/machine-config-server-2ws9r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-2ws9r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.167671 master-0 kubenswrapper[15493]: I0216 17:08:59.167638 15493 status_manager.go:851] "Failed to get status for pod" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/pods/cluster-olm-operator-55b69c6c48-7chjv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.168066 master-0 kubenswrapper[15493]: I0216 17:08:59.168028 15493 generic.go:334] "Generic (PLEG): container finished" podID="78be97a3-18d1-4962-804f-372974dc8ccc" containerID="7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9" exitCode=0 Feb 16 17:08:59.168129 master-0 kubenswrapper[15493]: I0216 17:08:59.168067 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.168129 master-0 kubenswrapper[15493]: I0216 17:08:59.168101 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerDied","Data":"7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9"} Feb 16 17:08:59.168583 master-0 kubenswrapper[15493]: I0216 17:08:59.168555 15493 scope.go:117] "RemoveContainer" containerID="7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9" Feb 16 17:08:59.168583 master-0 kubenswrapper[15493]: I0216 17:08:59.168564 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.169285 master-0 kubenswrapper[15493]: I0216 17:08:59.169242 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" pod="openshift-marketplace/certified-operators-z69zq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z69zq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.169696 master-0 kubenswrapper[15493]: I0216 17:08:59.169654 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.170392 master-0 kubenswrapper[15493]: I0216 17:08:59.170356 15493 status_manager.go:851] "Failed to get status for pod" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-7d8f4c8c66-qjq9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.170845 master-0 kubenswrapper[15493]: I0216 17:08:59.170813 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.171381 master-0 kubenswrapper[15493]: I0216 17:08:59.171343 15493 status_manager.go:851] "Failed to get status for pod" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-64bf6cdbbc-tpd6h\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.171911 master-0 kubenswrapper[15493]: I0216 17:08:59.171880 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.172521 master-0 kubenswrapper[15493]: I0216 17:08:59.172482 15493 status_manager.go:851] "Failed to get status for pod" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" pod="openshift-ingress-canary/ingress-canary-qqvg4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-qqvg4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.172904 master-0 kubenswrapper[15493]: I0216 17:08:59.172867 15493 status_manager.go:851] "Failed to get status for pod" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dcdb76cc6-5rcvl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.174048 master-0 kubenswrapper[15493]: I0216 17:08:59.174014 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.174156 master-0 kubenswrapper[15493]: I0216 17:08:59.174127 15493 generic.go:334] "Generic (PLEG): container finished" podID="e1443fb7-cb1e-4105-b604-b88c749620c4" containerID="9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239" exitCode=0 Feb 16 17:08:59.174156 master-0 kubenswrapper[15493]: 
I0216 17:08:59.174149 15493 generic.go:334] "Generic (PLEG): container finished" podID="e1443fb7-cb1e-4105-b604-b88c749620c4" containerID="31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a" exitCode=0 Feb 16 17:08:59.174254 master-0 kubenswrapper[15493]: I0216 17:08:59.174237 15493 generic.go:334] "Generic (PLEG): container finished" podID="e1443fb7-cb1e-4105-b604-b88c749620c4" containerID="c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606" exitCode=0 Feb 16 17:08:59.174254 master-0 kubenswrapper[15493]: I0216 17:08:59.174253 15493 generic.go:334] "Generic (PLEG): container finished" podID="e1443fb7-cb1e-4105-b604-b88c749620c4" containerID="e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b" exitCode=0 Feb 16 17:08:59.174310 master-0 kubenswrapper[15493]: I0216 17:08:59.174260 15493 generic.go:334] "Generic (PLEG): container finished" podID="e1443fb7-cb1e-4105-b604-b88c749620c4" containerID="3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94" exitCode=0 Feb 16 17:08:59.174310 master-0 kubenswrapper[15493]: I0216 17:08:59.174273 15493 generic.go:334] "Generic (PLEG): container finished" podID="e1443fb7-cb1e-4105-b604-b88c749620c4" containerID="e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550" exitCode=0 Feb 16 17:08:59.174373 master-0 kubenswrapper[15493]: I0216 17:08:59.174323 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerDied","Data":"9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239"} Feb 16 17:08:59.174373 master-0 kubenswrapper[15493]: I0216 17:08:59.174345 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerDied","Data":"31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a"} Feb 16 17:08:59.174373 master-0 kubenswrapper[15493]: I0216 17:08:59.174346 15493 status_manager.go:851] "Failed to get status for pod" podUID="9c48005e-c4df-4332-87fc-ec028f2c6921" pod="openshift-machine-config-operator/machine-config-server-2ws9r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-2ws9r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.174484 master-0 kubenswrapper[15493]: I0216 17:08:59.174358 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerDied","Data":"c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606"} Feb 16 17:08:59.174520 master-0 kubenswrapper[15493]: I0216 17:08:59.174502 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerDied","Data":"e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b"} Feb 16 17:08:59.174554 master-0 kubenswrapper[15493]: I0216 17:08:59.174518 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerDied","Data":"3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94"} Feb 16 17:08:59.174554 master-0 kubenswrapper[15493]: I0216 17:08:59.174530 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"e1443fb7-cb1e-4105-b604-b88c749620c4","Type":"ContainerDied","Data":"e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550"} Feb 16 17:08:59.174851 master-0 kubenswrapper[15493]: I0216 17:08:59.174831 15493 scope.go:117] "RemoveContainer" containerID="e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550" Feb 16 17:08:59.174900 master-0 kubenswrapper[15493]: I0216 17:08:59.174857 15493 scope.go:117] "RemoveContainer" containerID="3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94" Feb 16 17:08:59.174900 master-0 kubenswrapper[15493]: I0216 17:08:59.174851 15493 status_manager.go:851] "Failed to get status for pod" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/pods/cluster-olm-operator-55b69c6c48-7chjv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.175038 master-0 kubenswrapper[15493]: I0216 17:08:59.174868 15493 scope.go:117] "RemoveContainer" containerID="e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b" Feb 16 17:08:59.175038 master-0 kubenswrapper[15493]: I0216 17:08:59.174967 15493 scope.go:117] "RemoveContainer" containerID="c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606" Feb 16 17:08:59.175038 master-0 kubenswrapper[15493]: I0216 17:08:59.174978 15493 scope.go:117] "RemoveContainer" containerID="31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a" Feb 16 17:08:59.175038 master-0 kubenswrapper[15493]: I0216 17:08:59.174989 15493 scope.go:117] "RemoveContainer" containerID="9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239" Feb 16 17:08:59.175332 master-0 kubenswrapper[15493]: I0216 17:08:59.175302 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.175771 master-0 kubenswrapper[15493]: I0216 17:08:59.175742 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" pod="openshift-marketplace/certified-operators-z69zq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z69zq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.176276 master-0 kubenswrapper[15493]: I0216 17:08:59.176251 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.176636 master-0 kubenswrapper[15493]: I0216 17:08:59.176601 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.177163 master-0 kubenswrapper[15493]: I0216 17:08:59.177120 15493 
status_manager.go:851] "Failed to get status for pod" podUID="ee84198d-6357-4429-a90c-455c3850a788" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-zcwwd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.177499 master-0 kubenswrapper[15493]: I0216 17:08:59.177464 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.177838 master-0 kubenswrapper[15493]: I0216 17:08:59.177816 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-flr86_9f9bf4ab-5415-4616-aa36-ea387c699ea9/ovn-acl-logging/0.log" Feb 16 17:08:59.178022 master-0 kubenswrapper[15493]: I0216 17:08:59.177983 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.178310 master-0 kubenswrapper[15493]: I0216 17:08:59.178285 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-flr86_9f9bf4ab-5415-4616-aa36-ea387c699ea9/ovn-controller/0.log" Feb 16 17:08:59.178494 master-0 kubenswrapper[15493]: I0216 17:08:59.178459 15493 status_manager.go:851] "Failed to get status for pod" podUID="9c48005e-c4df-4332-87fc-ec028f2c6921" pod="openshift-machine-config-operator/machine-config-server-2ws9r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-2ws9r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.179090 master-0 kubenswrapper[15493]: I0216 17:08:59.179062 15493 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="9f614b14cbff08be0e14be8cba5e89de122b81583a34321af46bbe62e5a802b3" exitCode=0 Feb 16 17:08:59.179165 master-0 kubenswrapper[15493]: I0216 17:08:59.179094 15493 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="e9b033b48182246ed491c211e63d13c81386e7e5e19d72d1dd3822fc6dd2d4e4" exitCode=0 Feb 16 17:08:59.179165 master-0 kubenswrapper[15493]: I0216 17:08:59.179108 15493 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="6964245d58587f66b762b5ac2d9e1b1dc13364bf0c4f27c746f3f696d56a4d52" exitCode=0 Feb 16 17:08:59.179165 master-0 kubenswrapper[15493]: I0216 17:08:59.179107 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"9f614b14cbff08be0e14be8cba5e89de122b81583a34321af46bbe62e5a802b3"} Feb 16 17:08:59.179165 master-0 kubenswrapper[15493]: I0216 17:08:59.179118 15493 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" 
containerID="80a684ef556f1e87f3c9e02305940474602ddfe3de8f6beeb708e0f676fea206" exitCode=0 Feb 16 17:08:59.179165 master-0 kubenswrapper[15493]: I0216 17:08:59.179152 15493 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="75939ba6ce47e33cbb255166206afbbb5bb2eddc8618e626a18427d506fc7a2f" exitCode=0 Feb 16 17:08:59.179165 master-0 kubenswrapper[15493]: I0216 17:08:59.179165 15493 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="21bc05ac92fb28745962add720939690ac3c68281bef41a2c339dfc844b33eb9" exitCode=0 Feb 16 17:08:59.179411 master-0 kubenswrapper[15493]: I0216 17:08:59.179174 15493 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="9c906689512d1a1264797a823e480178e96aca8c88376bbe95cad584cee2c02c" exitCode=143 Feb 16 17:08:59.179411 master-0 kubenswrapper[15493]: I0216 17:08:59.179184 15493 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="bfff95a0d14f0841a22b2fd65881101b798827da455a93e9bb8b076c265fc42a" exitCode=143 Feb 16 17:08:59.179411 master-0 kubenswrapper[15493]: I0216 17:08:59.179134 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"e9b033b48182246ed491c211e63d13c81386e7e5e19d72d1dd3822fc6dd2d4e4"} Feb 16 17:08:59.179411 master-0 kubenswrapper[15493]: I0216 17:08:59.179252 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"6964245d58587f66b762b5ac2d9e1b1dc13364bf0c4f27c746f3f696d56a4d52"} Feb 16 17:08:59.179411 master-0 kubenswrapper[15493]: I0216 17:08:59.179266 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"80a684ef556f1e87f3c9e02305940474602ddfe3de8f6beeb708e0f676fea206"} Feb 16 17:08:59.179411 master-0 kubenswrapper[15493]: I0216 17:08:59.179277 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"75939ba6ce47e33cbb255166206afbbb5bb2eddc8618e626a18427d506fc7a2f"} Feb 16 17:08:59.179411 master-0 kubenswrapper[15493]: I0216 17:08:59.179288 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"21bc05ac92fb28745962add720939690ac3c68281bef41a2c339dfc844b33eb9"} Feb 16 17:08:59.179411 master-0 kubenswrapper[15493]: I0216 17:08:59.179298 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"9c906689512d1a1264797a823e480178e96aca8c88376bbe95cad584cee2c02c"} Feb 16 17:08:59.179411 master-0 kubenswrapper[15493]: I0216 17:08:59.179312 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"bfff95a0d14f0841a22b2fd65881101b798827da455a93e9bb8b076c265fc42a"} Feb 16 17:08:59.179880 master-0 kubenswrapper[15493]: I0216 
17:08:59.179559 15493 scope.go:117] "RemoveContainer" containerID="bfff95a0d14f0841a22b2fd65881101b798827da455a93e9bb8b076c265fc42a" Feb 16 17:08:59.179880 master-0 kubenswrapper[15493]: I0216 17:08:59.179575 15493 scope.go:117] "RemoveContainer" containerID="9c906689512d1a1264797a823e480178e96aca8c88376bbe95cad584cee2c02c" Feb 16 17:08:59.179880 master-0 kubenswrapper[15493]: I0216 17:08:59.179581 15493 scope.go:117] "RemoveContainer" containerID="21bc05ac92fb28745962add720939690ac3c68281bef41a2c339dfc844b33eb9" Feb 16 17:08:59.179880 master-0 kubenswrapper[15493]: I0216 17:08:59.179590 15493 scope.go:117] "RemoveContainer" containerID="75939ba6ce47e33cbb255166206afbbb5bb2eddc8618e626a18427d506fc7a2f" Feb 16 17:08:59.179880 master-0 kubenswrapper[15493]: I0216 17:08:59.179597 15493 scope.go:117] "RemoveContainer" containerID="80a684ef556f1e87f3c9e02305940474602ddfe3de8f6beeb708e0f676fea206" Feb 16 17:08:59.179880 master-0 kubenswrapper[15493]: I0216 17:08:59.179603 15493 scope.go:117] "RemoveContainer" containerID="6964245d58587f66b762b5ac2d9e1b1dc13364bf0c4f27c746f3f696d56a4d52" Feb 16 17:08:59.179880 master-0 kubenswrapper[15493]: I0216 17:08:59.179610 15493 scope.go:117] "RemoveContainer" containerID="e9b033b48182246ed491c211e63d13c81386e7e5e19d72d1dd3822fc6dd2d4e4" Feb 16 17:08:59.179880 master-0 kubenswrapper[15493]: I0216 17:08:59.179619 15493 scope.go:117] "RemoveContainer" containerID="9f614b14cbff08be0e14be8cba5e89de122b81583a34321af46bbe62e5a802b3" Feb 16 17:08:59.180341 master-0 kubenswrapper[15493]: I0216 17:08:59.180116 15493 status_manager.go:851] "Failed to get status for pod" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/pods/cluster-olm-operator-55b69c6c48-7chjv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.180561 master-0 kubenswrapper[15493]: I0216 17:08:59.180535 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.181086 master-0 kubenswrapper[15493]: I0216 17:08:59.181051 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" pod="openshift-marketplace/certified-operators-z69zq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z69zq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.181735 master-0 kubenswrapper[15493]: I0216 17:08:59.181662 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.182391 master-0 kubenswrapper[15493]: I0216 17:08:59.182348 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6bbd87b65b-mt2mz_06067627-6ccf-4cc8-bd20-dabdd776bb46/telemeter-client/0.log" Feb 16 17:08:59.182451 master-0 kubenswrapper[15493]: I0216 17:08:59.182392 15493 generic.go:334] "Generic (PLEG): container finished" 
podID="06067627-6ccf-4cc8-bd20-dabdd776bb46" containerID="c01f2942ca834ad5db3b3213b3d7ebcecde2cbfe80384e4ee342a8d3af673c4c" exitCode=0 Feb 16 17:08:59.182451 master-0 kubenswrapper[15493]: I0216 17:08:59.182403 15493 generic.go:334] "Generic (PLEG): container finished" podID="06067627-6ccf-4cc8-bd20-dabdd776bb46" containerID="d7eaf6b92ba384f68d33b5ffca98ff48955f77faa550df3dcb31018e0a060800" exitCode=0 Feb 16 17:08:59.182451 master-0 kubenswrapper[15493]: I0216 17:08:59.182412 15493 generic.go:334] "Generic (PLEG): container finished" podID="06067627-6ccf-4cc8-bd20-dabdd776bb46" containerID="b1ee61cb8ce7850ad63abcca2d0e9390c50b02337512a9dc946d98bdea931536" exitCode=2 Feb 16 17:08:59.182770 master-0 kubenswrapper[15493]: I0216 17:08:59.182453 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerDied","Data":"c01f2942ca834ad5db3b3213b3d7ebcecde2cbfe80384e4ee342a8d3af673c4c"} Feb 16 17:08:59.182770 master-0 kubenswrapper[15493]: I0216 17:08:59.182472 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerDied","Data":"d7eaf6b92ba384f68d33b5ffca98ff48955f77faa550df3dcb31018e0a060800"} Feb 16 17:08:59.182770 master-0 kubenswrapper[15493]: I0216 17:08:59.182481 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerDied","Data":"b1ee61cb8ce7850ad63abcca2d0e9390c50b02337512a9dc946d98bdea931536"} Feb 16 17:08:59.182940 master-0 kubenswrapper[15493]: I0216 17:08:59.182798 15493 scope.go:117] "RemoveContainer" containerID="b1ee61cb8ce7850ad63abcca2d0e9390c50b02337512a9dc946d98bdea931536" Feb 16 17:08:59.183001 master-0 kubenswrapper[15493]: I0216 17:08:59.182986 15493 scope.go:117] "RemoveContainer" containerID="d7eaf6b92ba384f68d33b5ffca98ff48955f77faa550df3dcb31018e0a060800" Feb 16 17:08:59.183001 master-0 kubenswrapper[15493]: I0216 17:08:59.182999 15493 scope.go:117] "RemoveContainer" containerID="c01f2942ca834ad5db3b3213b3d7ebcecde2cbfe80384e4ee342a8d3af673c4c" Feb 16 17:08:59.183546 master-0 kubenswrapper[15493]: I0216 17:08:59.183115 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.183598 master-0 kubenswrapper[15493]: I0216 17:08:59.183576 15493 status_manager.go:851] "Failed to get status for pod" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.184303 master-0 kubenswrapper[15493]: I0216 17:08:59.184166 15493 status_manager.go:851] "Failed to get status for pod" podUID="ee84198d-6357-4429-a90c-455c3850a788" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-zcwwd\": dial tcp 192.168.32.10:6443: 
connect: connection refused" Feb 16 17:08:59.184901 master-0 kubenswrapper[15493]: I0216 17:08:59.184717 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.186841 master-0 kubenswrapper[15493]: I0216 17:08:59.186115 15493 status_manager.go:851] "Failed to get status for pod" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-7d8f4c8c66-qjq9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.187148 master-0 kubenswrapper[15493]: I0216 17:08:59.187093 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.187193 master-0 kubenswrapper[15493]: I0216 17:08:59.187150 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-service-ca-operator_service-ca-operator-5dc4688546-pl7r5_29402454-a920-471e-895e-764235d16eb4/service-ca-operator/1.log" Feb 16 17:08:59.187222 master-0 kubenswrapper[15493]: I0216 17:08:59.187191 15493 generic.go:334] "Generic (PLEG): container finished" podID="29402454-a920-471e-895e-764235d16eb4" containerID="3a45efa110b434ba3eaa66dc54c0ad512d611956acbf896c50fe7ddda2a43beb" exitCode=0 Feb 16 17:08:59.187265 master-0 kubenswrapper[15493]: I0216 17:08:59.187249 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerDied","Data":"3a45efa110b434ba3eaa66dc54c0ad512d611956acbf896c50fe7ddda2a43beb"} Feb 16 17:08:59.187644 master-0 kubenswrapper[15493]: I0216 17:08:59.187574 15493 scope.go:117] "RemoveContainer" containerID="3a45efa110b434ba3eaa66dc54c0ad512d611956acbf896c50fe7ddda2a43beb" Feb 16 17:08:59.187696 master-0 kubenswrapper[15493]: I0216 17:08:59.187665 15493 status_manager.go:851] "Failed to get status for pod" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-64bf6cdbbc-tpd6h\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.187780 master-0 kubenswrapper[15493]: E0216 17:08:59.187759 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=service-ca-operator pod=service-ca-operator-5dc4688546-pl7r5_openshift-service-ca-operator(29402454-a920-471e-895e-764235d16eb4)\"" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:08:59.189976 master-0 kubenswrapper[15493]: I0216 17:08:59.189887 15493 status_manager.go:851] "Failed to get status for pod" 
podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.190610 master-0 kubenswrapper[15493]: I0216 17:08:59.190581 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pfzq2_80d3b238-70c3-4e71-96a1-99405352033f/snapshot-controller/1.log" Feb 16 17:08:59.190688 master-0 kubenswrapper[15493]: I0216 17:08:59.190659 15493 status_manager.go:851] "Failed to get status for pod" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dcdb76cc6-5rcvl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.191084 master-0 kubenswrapper[15493]: I0216 17:08:59.191047 15493 status_manager.go:851] "Failed to get status for pod" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" pod="openshift-ingress-canary/ingress-canary-qqvg4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-qqvg4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.191531 master-0 kubenswrapper[15493]: I0216 17:08:59.191496 15493 status_manager.go:851] "Failed to get status for pod" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/pods/cluster-olm-operator-55b69c6c48-7chjv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.191847 master-0 kubenswrapper[15493]: I0216 17:08:59.191812 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.192178 master-0 kubenswrapper[15493]: I0216 17:08:59.192144 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" pod="openshift-marketplace/certified-operators-z69zq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z69zq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.192483 master-0 kubenswrapper[15493]: I0216 17:08:59.192452 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.192777 master-0 kubenswrapper[15493]: I0216 17:08:59.192747 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Feb 16 17:08:59.193069 master-0 kubenswrapper[15493]: I0216 17:08:59.193039 15493 status_manager.go:851] "Failed to get status for pod" podUID="29402454-a920-471e-895e-764235d16eb4" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-5dc4688546-pl7r5\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.193521 master-0 kubenswrapper[15493]: I0216 17:08:59.193487 15493 status_manager.go:851] "Failed to get status for pod" podUID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-flr86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.193988 master-0 kubenswrapper[15493]: I0216 17:08:59.193961 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pfzq2_80d3b238-70c3-4e71-96a1-99405352033f/snapshot-controller/0.log" Feb 16 17:08:59.194031 master-0 kubenswrapper[15493]: I0216 17:08:59.193995 15493 generic.go:334] "Generic (PLEG): container finished" podID="80d3b238-70c3-4e71-96a1-99405352033f" containerID="d33f6a3f5621bc1476673c5054e5f7762d9e97c50291405678774a966801267f" exitCode=2 Feb 16 17:08:59.194072 master-0 kubenswrapper[15493]: I0216 17:08:59.194036 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerDied","Data":"d33f6a3f5621bc1476673c5054e5f7762d9e97c50291405678774a966801267f"} Feb 16 17:08:59.194368 master-0 kubenswrapper[15493]: I0216 17:08:59.194341 15493 scope.go:117] "RemoveContainer" containerID="d33f6a3f5621bc1476673c5054e5f7762d9e97c50291405678774a966801267f" Feb 16 17:08:59.194540 master-0 kubenswrapper[15493]: E0216 17:08:59.194513 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-74b6595c6d-pfzq2_openshift-cluster-storage-operator(80d3b238-70c3-4e71-96a1-99405352033f)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:08:59.199301 master-0 kubenswrapper[15493]: I0216 17:08:59.199263 15493 generic.go:334] "Generic (PLEG): container finished" podID="4549ea98-7379-49e1-8452-5efb643137ca" containerID="8928e3bf46f9c2e9543fd483f5a6715160d68a0e0514884803acf476ebf5679a" exitCode=0 Feb 16 17:08:59.199360 master-0 kubenswrapper[15493]: I0216 17:08:59.199322 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerDied","Data":"8928e3bf46f9c2e9543fd483f5a6715160d68a0e0514884803acf476ebf5679a"} Feb 16 17:08:59.199831 master-0 kubenswrapper[15493]: I0216 17:08:59.199803 15493 scope.go:117] "RemoveContainer" containerID="8928e3bf46f9c2e9543fd483f5a6715160d68a0e0514884803acf476ebf5679a" Feb 16 17:08:59.200029 master-0 kubenswrapper[15493]: E0216 17:08:59.200002 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CrashLoopBackOff: 
\"back-off 10s restarting failed container=network-operator pod=network-operator-6fcf4c966-6bmf9_openshift-network-operator(4549ea98-7379-49e1-8452-5efb643137ca)\"" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" podUID="4549ea98-7379-49e1-8452-5efb643137ca" Feb 16 17:08:59.200369 master-0 kubenswrapper[15493]: I0216 17:08:59.200339 15493 status_manager.go:851] "Failed to get status for pod" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.201177 master-0 kubenswrapper[15493]: I0216 17:08:59.201108 15493 status_manager.go:851] "Failed to get status for pod" podUID="ee84198d-6357-4429-a90c-455c3850a788" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-zcwwd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.201695 master-0 kubenswrapper[15493]: I0216 17:08:59.201658 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.202132 master-0 kubenswrapper[15493]: I0216 17:08:59.202099 15493 status_manager.go:851] "Failed to get status for pod" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-7d8f4c8c66-qjq9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.202469 master-0 kubenswrapper[15493]: I0216 17:08:59.202439 15493 status_manager.go:851] "Failed to get status for pod" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-6bbd87b65b-mt2mz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.202768 master-0 kubenswrapper[15493]: I0216 17:08:59.202735 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.203282 master-0 kubenswrapper[15493]: I0216 17:08:59.203250 15493 status_manager.go:851] "Failed to get status for pod" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-64bf6cdbbc-tpd6h\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.203563 master-0 kubenswrapper[15493]: I0216 17:08:59.203529 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" 
pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.203896 master-0 kubenswrapper[15493]: I0216 17:08:59.203863 15493 status_manager.go:851] "Failed to get status for pod" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" pod="openshift-ingress-canary/ingress-canary-qqvg4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-qqvg4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.204262 master-0 kubenswrapper[15493]: I0216 17:08:59.204228 15493 status_manager.go:851] "Failed to get status for pod" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dcdb76cc6-5rcvl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.204553 master-0 kubenswrapper[15493]: I0216 17:08:59.204520 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.205362 master-0 kubenswrapper[15493]: I0216 17:08:59.205331 15493 status_manager.go:851] "Failed to get status for pod" podUID="9c48005e-c4df-4332-87fc-ec028f2c6921" pod="openshift-machine-config-operator/machine-config-server-2ws9r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-2ws9r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.206037 master-0 kubenswrapper[15493]: I0216 17:08:59.205911 15493 status_manager.go:851] "Failed to get status for pod" podUID="4549ea98-7379-49e1-8452-5efb643137ca" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6fcf4c966-6bmf9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.207038 master-0 kubenswrapper[15493]: I0216 17:08:59.207006 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.211634 master-0 kubenswrapper[15493]: I0216 17:08:59.211596 15493 generic.go:334] "Generic (PLEG): container finished" podID="e1a7c783-2e23-4284-b648-147984cf1022" containerID="2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3" exitCode=0 Feb 16 17:08:59.211701 master-0 kubenswrapper[15493]: I0216 17:08:59.211648 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerDied","Data":"2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3"} Feb 16 17:08:59.212173 master-0 kubenswrapper[15493]: 
I0216 17:08:59.212144 15493 scope.go:117] "RemoveContainer" containerID="2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3" Feb 16 17:08:59.212359 master-0 kubenswrapper[15493]: E0216 17:08:59.212328 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-7fc9897cf8-9rjwd_openshift-controller-manager(e1a7c783-2e23-4284-b648-147984cf1022)\"" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:08:59.214964 master-0 kubenswrapper[15493]: I0216 17:08:59.214899 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-54984b6678-gp8gv_d020c902-2adb-4919-8dd9-0c2109830580/kube-apiserver-operator/1.log" Feb 16 17:08:59.214964 master-0 kubenswrapper[15493]: I0216 17:08:59.214951 15493 generic.go:334] "Generic (PLEG): container finished" podID="d020c902-2adb-4919-8dd9-0c2109830580" containerID="58ea7e7597fd84e0ed74580e261b589711b1b87586741887fc2593fecc63262c" exitCode=0 Feb 16 17:08:59.215083 master-0 kubenswrapper[15493]: I0216 17:08:59.214989 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerDied","Data":"58ea7e7597fd84e0ed74580e261b589711b1b87586741887fc2593fecc63262c"} Feb 16 17:08:59.215658 master-0 kubenswrapper[15493]: I0216 17:08:59.215466 15493 scope.go:117] "RemoveContainer" containerID="58ea7e7597fd84e0ed74580e261b589711b1b87586741887fc2593fecc63262c" Feb 16 17:08:59.215658 master-0 kubenswrapper[15493]: E0216 17:08:59.215627 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-54984b6678-gp8gv_openshift-kube-apiserver-operator(d020c902-2adb-4919-8dd9-0c2109830580)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:08:59.215774 master-0 kubenswrapper[15493]: I0216 17:08:59.215707 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:08:59.217825 master-0 kubenswrapper[15493]: I0216 17:08:59.217786 15493 generic.go:334] "Generic (PLEG): container finished" podID="0393fe12-2533-4c9c-a8e4-a58003c88f36" containerID="80e725c54b230c9d93fa31b6c0bcfa809e24c7926e13502059b3983fcd1b3d79" exitCode=0 Feb 16 17:08:59.217946 master-0 kubenswrapper[15493]: I0216 17:08:59.217838 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerDied","Data":"80e725c54b230c9d93fa31b6c0bcfa809e24c7926e13502059b3983fcd1b3d79"} Feb 16 17:08:59.218909 master-0 kubenswrapper[15493]: I0216 17:08:59.218110 15493 scope.go:117] "RemoveContainer" containerID="80e725c54b230c9d93fa31b6c0bcfa809e24c7926e13502059b3983fcd1b3d79" Feb 16 17:08:59.219910 master-0 kubenswrapper[15493]: I0216 17:08:59.219874 15493 generic.go:334] "Generic (PLEG): container finished" podID="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" containerID="f10912c30fd4a11ea42c60b953841baf59f4219d858a735d1f2aa7871453e0dd" 
exitCode=0 Feb 16 17:08:59.219910 master-0 kubenswrapper[15493]: I0216 17:08:59.219913 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vfxj4" event={"ID":"a6fe41b0-1a42-4f07-8220-d9aaa50788ad","Type":"ContainerDied","Data":"f10912c30fd4a11ea42c60b953841baf59f4219d858a735d1f2aa7871453e0dd"} Feb 16 17:08:59.220211 master-0 kubenswrapper[15493]: I0216 17:08:59.220175 15493 scope.go:117] "RemoveContainer" containerID="f10912c30fd4a11ea42c60b953841baf59f4219d858a735d1f2aa7871453e0dd" Feb 16 17:08:59.221867 master-0 kubenswrapper[15493]: I0216 17:08:59.221828 15493 generic.go:334] "Generic (PLEG): container finished" podID="e73ee493-de15-44c2-bd51-e12fcbb27a15" containerID="7ac85a9051d1d62aecdb0aea9f364c312e41f09a8b1c3d2e9cdedd31994406f5" exitCode=0 Feb 16 17:08:59.221954 master-0 kubenswrapper[15493]: I0216 17:08:59.221868 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerDied","Data":"7ac85a9051d1d62aecdb0aea9f364c312e41f09a8b1c3d2e9cdedd31994406f5"} Feb 16 17:08:59.222146 master-0 kubenswrapper[15493]: I0216 17:08:59.222115 15493 scope.go:117] "RemoveContainer" containerID="7ac85a9051d1d62aecdb0aea9f364c312e41f09a8b1c3d2e9cdedd31994406f5" Feb 16 17:08:59.223444 master-0 kubenswrapper[15493]: I0216 17:08:59.223411 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-686c884b4d-ksx48_c8729b1a-e365-4cf7-8a05-91a9987dabe9/machine-config-controller/1.log" Feb 16 17:08:59.224221 master-0 kubenswrapper[15493]: I0216 17:08:59.224204 15493 generic.go:334] "Generic (PLEG): container finished" podID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" containerID="9f6816b707db3ee7a3ce4874d11d951c9bf04af4e701546c6792509a97799184" exitCode=2 Feb 16 17:08:59.224304 master-0 kubenswrapper[15493]: I0216 17:08:59.224222 15493 generic.go:334] "Generic (PLEG): container finished" podID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" containerID="86f6bc50c80ff4e338be6c35167f66696d920488cfb3363a969927c949d6b7a8" exitCode=0 Feb 16 17:08:59.224304 master-0 kubenswrapper[15493]: I0216 17:08:59.224247 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerDied","Data":"9f6816b707db3ee7a3ce4874d11d951c9bf04af4e701546c6792509a97799184"} Feb 16 17:08:59.224446 master-0 kubenswrapper[15493]: I0216 17:08:59.224316 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerDied","Data":"86f6bc50c80ff4e338be6c35167f66696d920488cfb3363a969927c949d6b7a8"} Feb 16 17:08:59.225131 master-0 kubenswrapper[15493]: I0216 17:08:59.225063 15493 scope.go:117] "RemoveContainer" containerID="9f6816b707db3ee7a3ce4874d11d951c9bf04af4e701546c6792509a97799184" Feb 16 17:08:59.225219 master-0 kubenswrapper[15493]: I0216 17:08:59.225130 15493 scope.go:117] "RemoveContainer" containerID="86f6bc50c80ff4e338be6c35167f66696d920488cfb3363a969927c949d6b7a8" Feb 16 17:08:59.227609 master-0 kubenswrapper[15493]: I0216 17:08:59.227543 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7cc9598d54-8j5rk_55d635cd-1f0d-4086-96f2-9f3524f3f18c/kube-state-metrics/0.log" Feb 16 
17:08:59.227609 master-0 kubenswrapper[15493]: I0216 17:08:59.227586 15493 generic.go:334] "Generic (PLEG): container finished" podID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" containerID="2c91d4eba32f655f029c6a6984be69760a5965065f6832e40105737c2c813247" exitCode=0 Feb 16 17:08:59.227609 master-0 kubenswrapper[15493]: I0216 17:08:59.227603 15493 generic.go:334] "Generic (PLEG): container finished" podID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" containerID="98e282b9ddd3e1d2ffe040be844462c924b282deedc606e228c31635a2adedd9" exitCode=0 Feb 16 17:08:59.227854 master-0 kubenswrapper[15493]: I0216 17:08:59.227615 15493 generic.go:334] "Generic (PLEG): container finished" podID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" containerID="15735d6e78e08bb21a10fad58fd15d0445bf13026d3ed52d2dca4bfc75e63305" exitCode=2 Feb 16 17:08:59.227854 master-0 kubenswrapper[15493]: I0216 17:08:59.227622 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerDied","Data":"2c91d4eba32f655f029c6a6984be69760a5965065f6832e40105737c2c813247"} Feb 16 17:08:59.227854 master-0 kubenswrapper[15493]: I0216 17:08:59.227675 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerDied","Data":"98e282b9ddd3e1d2ffe040be844462c924b282deedc606e228c31635a2adedd9"} Feb 16 17:08:59.227854 master-0 kubenswrapper[15493]: I0216 17:08:59.227696 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerDied","Data":"15735d6e78e08bb21a10fad58fd15d0445bf13026d3ed52d2dca4bfc75e63305"} Feb 16 17:08:59.231182 master-0 kubenswrapper[15493]: I0216 17:08:59.228331 15493 scope.go:117] "RemoveContainer" containerID="15735d6e78e08bb21a10fad58fd15d0445bf13026d3ed52d2dca4bfc75e63305" Feb 16 17:08:59.231182 master-0 kubenswrapper[15493]: I0216 17:08:59.228339 15493 status_manager.go:851] "Failed to get status for pod" podUID="9c48005e-c4df-4332-87fc-ec028f2c6921" pod="openshift-machine-config-operator/machine-config-server-2ws9r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-2ws9r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.231182 master-0 kubenswrapper[15493]: I0216 17:08:59.228365 15493 scope.go:117] "RemoveContainer" containerID="98e282b9ddd3e1d2ffe040be844462c924b282deedc606e228c31635a2adedd9" Feb 16 17:08:59.231182 master-0 kubenswrapper[15493]: I0216 17:08:59.228489 15493 scope.go:117] "RemoveContainer" containerID="2c91d4eba32f655f029c6a6984be69760a5965065f6832e40105737c2c813247" Feb 16 17:08:59.231182 master-0 kubenswrapper[15493]: I0216 17:08:59.229725 15493 generic.go:334] "Generic (PLEG): container finished" podID="62220aa5-4065-472c-8a17-c0a58942ab8a" containerID="976b039c9b06af0f3723d83d4469ee022692218ee590a3983454ac89413005ba" exitCode=0 Feb 16 17:08:59.231182 master-0 kubenswrapper[15493]: I0216 17:08:59.229805 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerDied","Data":"976b039c9b06af0f3723d83d4469ee022692218ee590a3983454ac89413005ba"} Feb 16 17:08:59.231182 master-0 kubenswrapper[15493]: I0216 
17:08:59.230379 15493 scope.go:117] "RemoveContainer" containerID="976b039c9b06af0f3723d83d4469ee022692218ee590a3983454ac89413005ba" Feb 16 17:08:59.232387 master-0 kubenswrapper[15493]: I0216 17:08:59.232353 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-c588d8cb4-wjr7d_9609a4f3-b947-47af-a685-baae26c50fa3/ingress-operator/0.log" Feb 16 17:08:59.232451 master-0 kubenswrapper[15493]: I0216 17:08:59.232400 15493 generic.go:334] "Generic (PLEG): container finished" podID="9609a4f3-b947-47af-a685-baae26c50fa3" containerID="d3da8ba42d4b0c66c19212e6b0d32e25aca9d72e06c94a833859dc0c4a30c389" exitCode=0 Feb 16 17:08:59.232451 master-0 kubenswrapper[15493]: I0216 17:08:59.232414 15493 generic.go:334] "Generic (PLEG): container finished" podID="9609a4f3-b947-47af-a685-baae26c50fa3" containerID="f3e3e8e94dc6c217da7c3312700e3c981cf01212e798fb2c9ea5fc2b31f6b8aa" exitCode=0 Feb 16 17:08:59.232572 master-0 kubenswrapper[15493]: I0216 17:08:59.232464 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerDied","Data":"d3da8ba42d4b0c66c19212e6b0d32e25aca9d72e06c94a833859dc0c4a30c389"} Feb 16 17:08:59.232572 master-0 kubenswrapper[15493]: I0216 17:08:59.232484 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerDied","Data":"f3e3e8e94dc6c217da7c3312700e3c981cf01212e798fb2c9ea5fc2b31f6b8aa"} Feb 16 17:08:59.232781 master-0 kubenswrapper[15493]: I0216 17:08:59.232746 15493 scope.go:117] "RemoveContainer" containerID="d3da8ba42d4b0c66c19212e6b0d32e25aca9d72e06c94a833859dc0c4a30c389" Feb 16 17:08:59.232781 master-0 kubenswrapper[15493]: I0216 17:08:59.232767 15493 scope.go:117] "RemoveContainer" containerID="f3e3e8e94dc6c217da7c3312700e3c981cf01212e798fb2c9ea5fc2b31f6b8aa" Feb 16 17:08:59.235070 master-0 kubenswrapper[15493]: I0216 17:08:59.235041 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/0.log" Feb 16 17:08:59.235563 master-0 kubenswrapper[15493]: I0216 17:08:59.235532 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler/0.log" Feb 16 17:08:59.235841 master-0 kubenswrapper[15493]: I0216 17:08:59.235810 15493 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d" exitCode=0 Feb 16 17:08:59.235841 master-0 kubenswrapper[15493]: I0216 17:08:59.235831 15493 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a" exitCode=0 Feb 16 17:08:59.235841 master-0 kubenswrapper[15493]: I0216 17:08:59.235839 15493 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc" exitCode=2 Feb 16 17:08:59.235999 master-0 kubenswrapper[15493]: I0216 17:08:59.235868 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d"} Feb 16 17:08:59.235999 master-0 kubenswrapper[15493]: I0216 17:08:59.235883 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a"} Feb 16 17:08:59.235999 master-0 kubenswrapper[15493]: I0216 17:08:59.235892 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc"} Feb 16 17:08:59.236182 master-0 kubenswrapper[15493]: I0216 17:08:59.236152 15493 scope.go:117] "RemoveContainer" containerID="3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d" Feb 16 17:08:59.236182 master-0 kubenswrapper[15493]: I0216 17:08:59.236174 15493 scope.go:117] "RemoveContainer" containerID="3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc" Feb 16 17:08:59.236182 master-0 kubenswrapper[15493]: I0216 17:08:59.236184 15493 scope.go:117] "RemoveContainer" containerID="144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a" Feb 16 17:08:59.238499 master-0 kubenswrapper[15493]: I0216 17:08:59.238465 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-8256c_a94f9b8e-b020-4aab-8373-6c056ec07464/node-exporter/0.log" Feb 16 17:08:59.238800 master-0 kubenswrapper[15493]: I0216 17:08:59.238779 15493 generic.go:334] "Generic (PLEG): container finished" podID="a94f9b8e-b020-4aab-8373-6c056ec07464" containerID="6dcd5d50bbf027f43829cdb53f16d45fecb46aaa4d161ff0a9649e21a30c1100" exitCode=0 Feb 16 17:08:59.238800 master-0 kubenswrapper[15493]: I0216 17:08:59.238797 15493 generic.go:334] "Generic (PLEG): container finished" podID="a94f9b8e-b020-4aab-8373-6c056ec07464" containerID="90da39c5be608aef25083568573711edf3f3e1d673fa13869d5f66168a1f89df" exitCode=143 Feb 16 17:08:59.239078 master-0 kubenswrapper[15493]: I0216 17:08:59.238827 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerDied","Data":"6dcd5d50bbf027f43829cdb53f16d45fecb46aaa4d161ff0a9649e21a30c1100"} Feb 16 17:08:59.239078 master-0 kubenswrapper[15493]: I0216 17:08:59.238843 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerDied","Data":"90da39c5be608aef25083568573711edf3f3e1d673fa13869d5f66168a1f89df"} Feb 16 17:08:59.239418 master-0 kubenswrapper[15493]: I0216 17:08:59.239099 15493 scope.go:117] "RemoveContainer" containerID="90da39c5be608aef25083568573711edf3f3e1d673fa13869d5f66168a1f89df" Feb 16 17:08:59.239418 master-0 kubenswrapper[15493]: I0216 17:08:59.239114 15493 scope.go:117] "RemoveContainer" containerID="6dcd5d50bbf027f43829cdb53f16d45fecb46aaa4d161ff0a9649e21a30c1100" Feb 16 17:08:59.240820 master-0 kubenswrapper[15493]: I0216 17:08:59.240790 15493 generic.go:334] "Generic (PLEG): container finished" podID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" containerID="dcb8ee027102c635f78357f4ce72bd5b7efe3822aa120feb8c3bf58da7f28758" exitCode=0 Feb 16 17:08:59.240905 master-0 
kubenswrapper[15493]: I0216 17:08:59.240832 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-dhhfh" event={"ID":"08a90dc5-b0d8-4aad-a002-736492b6c1a9","Type":"ContainerDied","Data":"dcb8ee027102c635f78357f4ce72bd5b7efe3822aa120feb8c3bf58da7f28758"} Feb 16 17:08:59.241110 master-0 kubenswrapper[15493]: I0216 17:08:59.241070 15493 scope.go:117] "RemoveContainer" containerID="dcb8ee027102c635f78357f4ce72bd5b7efe3822aa120feb8c3bf58da7f28758" Feb 16 17:08:59.243056 master-0 kubenswrapper[15493]: I0216 17:08:59.242965 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-lj58b_54f29618-42c2-4270-9af7-7d82852d7cec/manager/1.log" Feb 16 17:08:59.244122 master-0 kubenswrapper[15493]: I0216 17:08:59.244065 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-85c9b89969-lj58b_54f29618-42c2-4270-9af7-7d82852d7cec/manager/0.log" Feb 16 17:08:59.244122 master-0 kubenswrapper[15493]: I0216 17:08:59.244109 15493 generic.go:334] "Generic (PLEG): container finished" podID="54f29618-42c2-4270-9af7-7d82852d7cec" containerID="8be786986ba9a6b558bcc95a5388404ffe3d212fd816f257969a4588305eb16d" exitCode=1 Feb 16 17:08:59.244458 master-0 kubenswrapper[15493]: I0216 17:08:59.244131 15493 generic.go:334] "Generic (PLEG): container finished" podID="54f29618-42c2-4270-9af7-7d82852d7cec" containerID="e843fe6093fddb4f0608997e3c887c540c5790a52102b7ba7e769e8ae9904f7d" exitCode=0 Feb 16 17:08:59.244458 master-0 kubenswrapper[15493]: I0216 17:08:59.244186 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerDied","Data":"8be786986ba9a6b558bcc95a5388404ffe3d212fd816f257969a4588305eb16d"} Feb 16 17:08:59.244458 master-0 kubenswrapper[15493]: I0216 17:08:59.244217 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerDied","Data":"e843fe6093fddb4f0608997e3c887c540c5790a52102b7ba7e769e8ae9904f7d"} Feb 16 17:08:59.244687 master-0 kubenswrapper[15493]: I0216 17:08:59.244665 15493 scope.go:117] "RemoveContainer" containerID="8be786986ba9a6b558bcc95a5388404ffe3d212fd816f257969a4588305eb16d" Feb 16 17:08:59.244687 master-0 kubenswrapper[15493]: I0216 17:08:59.244686 15493 scope.go:117] "RemoveContainer" containerID="e843fe6093fddb4f0608997e3c887c540c5790a52102b7ba7e769e8ae9904f7d" Feb 16 17:08:59.245739 master-0 kubenswrapper[15493]: I0216 17:08:59.245712 15493 generic.go:334] "Generic (PLEG): container finished" podID="18e9a9d3-9b18-4c19-9558-f33c68101922" containerID="179f3f8e9463125ded1c5a4f832192a17edba6e13a5506acf48e86abcd40cda7" exitCode=0 Feb 16 17:08:59.245821 master-0 kubenswrapper[15493]: I0216 17:08:59.245739 15493 generic.go:334] "Generic (PLEG): container finished" podID="18e9a9d3-9b18-4c19-9558-f33c68101922" containerID="be513303bf4ed878c6c5f6ef9c7437f58c4a298f57e9e8964fc17527ef538c38" exitCode=0 Feb 16 17:08:59.245821 master-0 kubenswrapper[15493]: I0216 17:08:59.245777 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" 
event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerDied","Data":"179f3f8e9463125ded1c5a4f832192a17edba6e13a5506acf48e86abcd40cda7"} Feb 16 17:08:59.245821 master-0 kubenswrapper[15493]: I0216 17:08:59.245794 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerDied","Data":"be513303bf4ed878c6c5f6ef9c7437f58c4a298f57e9e8964fc17527ef538c38"} Feb 16 17:08:59.246132 master-0 kubenswrapper[15493]: I0216 17:08:59.246107 15493 scope.go:117] "RemoveContainer" containerID="be513303bf4ed878c6c5f6ef9c7437f58c4a298f57e9e8964fc17527ef538c38" Feb 16 17:08:59.246132 master-0 kubenswrapper[15493]: I0216 17:08:59.246131 15493 scope.go:117] "RemoveContainer" containerID="179f3f8e9463125ded1c5a4f832192a17edba6e13a5506acf48e86abcd40cda7" Feb 16 17:08:59.247566 master-0 kubenswrapper[15493]: I0216 17:08:59.247440 15493 status_manager.go:851] "Failed to get status for pod" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/pods/cluster-olm-operator-55b69c6c48-7chjv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.249366 master-0 kubenswrapper[15493]: I0216 17:08:59.249143 15493 generic.go:334] "Generic (PLEG): container finished" podID="2d96ccdc-0b09-437d-bfca-1958af5d9953" containerID="0bee485c8968ce0e68dba41fcbcee4d323847661d4d7322172f3a42844676150" exitCode=0 Feb 16 17:08:59.249366 master-0 kubenswrapper[15493]: I0216 17:08:59.249197 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerDied","Data":"0bee485c8968ce0e68dba41fcbcee4d323847661d4d7322172f3a42844676150"} Feb 16 17:08:59.249798 master-0 kubenswrapper[15493]: I0216 17:08:59.249549 15493 scope.go:117] "RemoveContainer" containerID="0bee485c8968ce0e68dba41fcbcee4d323847661d4d7322172f3a42844676150" Feb 16 17:08:59.250843 master-0 kubenswrapper[15493]: I0216 17:08:59.250810 15493 generic.go:334] "Generic (PLEG): container finished" podID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" containerID="22751fbbdf7aa3224dae4e546afa76f6812f3b8e22c34ed3ba395d1643038f1f" exitCode=0 Feb 16 17:08:59.251008 master-0 kubenswrapper[15493]: I0216 17:08:59.250869 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerDied","Data":"22751fbbdf7aa3224dae4e546afa76f6812f3b8e22c34ed3ba395d1643038f1f"} Feb 16 17:08:59.251239 master-0 kubenswrapper[15493]: I0216 17:08:59.251213 15493 scope.go:117] "RemoveContainer" containerID="22751fbbdf7aa3224dae4e546afa76f6812f3b8e22c34ed3ba395d1643038f1f" Feb 16 17:08:59.253376 master-0 kubenswrapper[15493]: I0216 17:08:59.253338 15493 generic.go:334] "Generic (PLEG): container finished" podID="2d1636c0-f34d-444c-822d-77f1d203ddc4" containerID="5e22f442c57442c5b54cef3353bfe6f252841ff0c85eccba549a06ebef40c806" exitCode=0 Feb 16 17:08:59.253376 master-0 kubenswrapper[15493]: I0216 17:08:59.253358 15493 generic.go:334] "Generic (PLEG): container finished" podID="2d1636c0-f34d-444c-822d-77f1d203ddc4" containerID="e2cd6c58d3deb1bb956b3338bb6da8c859fdee1ec0ad904ec42c12d7238536af" exitCode=0 Feb 16 17:08:59.253649 master-0 
kubenswrapper[15493]: I0216 17:08:59.253396 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerDied","Data":"5e22f442c57442c5b54cef3353bfe6f252841ff0c85eccba549a06ebef40c806"} Feb 16 17:08:59.253649 master-0 kubenswrapper[15493]: I0216 17:08:59.253420 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerDied","Data":"e2cd6c58d3deb1bb956b3338bb6da8c859fdee1ec0ad904ec42c12d7238536af"} Feb 16 17:08:59.253974 master-0 kubenswrapper[15493]: I0216 17:08:59.253800 15493 scope.go:117] "RemoveContainer" containerID="e2cd6c58d3deb1bb956b3338bb6da8c859fdee1ec0ad904ec42c12d7238536af" Feb 16 17:08:59.253974 master-0 kubenswrapper[15493]: I0216 17:08:59.253820 15493 scope.go:117] "RemoveContainer" containerID="5e22f442c57442c5b54cef3353bfe6f252841ff0c85eccba549a06ebef40c806" Feb 16 17:08:59.255260 master-0 kubenswrapper[15493]: I0216 17:08:59.255238 15493 generic.go:334] "Generic (PLEG): container finished" podID="ad805251-19d0-4d2f-b741-7d11158f1f03" containerID="41145f961148dffbd55b7be77a9591605ef99767213da81b0ba442326c4b3012" exitCode=0 Feb 16 17:08:59.255260 master-0 kubenswrapper[15493]: I0216 17:08:59.255257 15493 generic.go:334] "Generic (PLEG): container finished" podID="ad805251-19d0-4d2f-b741-7d11158f1f03" containerID="8621c35772b0fa0c74746882d26cde088c3ee0e7e232d2738c23c769fa66118c" exitCode=0 Feb 16 17:08:59.255406 master-0 kubenswrapper[15493]: I0216 17:08:59.255290 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerDied","Data":"41145f961148dffbd55b7be77a9591605ef99767213da81b0ba442326c4b3012"} Feb 16 17:08:59.255406 master-0 kubenswrapper[15493]: I0216 17:08:59.255305 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerDied","Data":"8621c35772b0fa0c74746882d26cde088c3ee0e7e232d2738c23c769fa66118c"} Feb 16 17:08:59.255530 master-0 kubenswrapper[15493]: I0216 17:08:59.255522 15493 scope.go:117] "RemoveContainer" containerID="8621c35772b0fa0c74746882d26cde088c3ee0e7e232d2738c23c769fa66118c" Feb 16 17:08:59.255603 master-0 kubenswrapper[15493]: I0216 17:08:59.255535 15493 scope.go:117] "RemoveContainer" containerID="41145f961148dffbd55b7be77a9591605ef99767213da81b0ba442326c4b3012" Feb 16 17:08:59.258182 master-0 kubenswrapper[15493]: I0216 17:08:59.258153 15493 generic.go:334] "Generic (PLEG): container finished" podID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" containerID="81ddf55b61540f7b5e030d229eea51d26c8a5bda0650c33851cbe3fbbeefd261" exitCode=0 Feb 16 17:08:59.258296 master-0 kubenswrapper[15493]: I0216 17:08:59.258227 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerDied","Data":"81ddf55b61540f7b5e030d229eea51d26c8a5bda0650c33851cbe3fbbeefd261"} Feb 16 17:08:59.258637 master-0 kubenswrapper[15493]: I0216 17:08:59.258466 15493 scope.go:117] "RemoveContainer" containerID="81ddf55b61540f7b5e030d229eea51d26c8a5bda0650c33851cbe3fbbeefd261" Feb 16 17:08:59.260049 master-0 kubenswrapper[15493]: I0216 17:08:59.260028 15493 generic.go:334] 
"Generic (PLEG): container finished" podID="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" containerID="2dafd39a483160a1d39cbe9a3a9409c939da33f2a648ec553387255240b550e9" exitCode=0 Feb 16 17:08:59.260155 master-0 kubenswrapper[15493]: I0216 17:08:59.260072 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" event={"ID":"b6ad958f-25e4-40cb-89ec-5da9cb6395c7","Type":"ContainerDied","Data":"2dafd39a483160a1d39cbe9a3a9409c939da33f2a648ec553387255240b550e9"} Feb 16 17:08:59.260318 master-0 kubenswrapper[15493]: I0216 17:08:59.260299 15493 scope.go:117] "RemoveContainer" containerID="2dafd39a483160a1d39cbe9a3a9409c939da33f2a648ec553387255240b550e9" Feb 16 17:08:59.272897 master-0 kubenswrapper[15493]: I0216 17:08:59.271310 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.274669 master-0 kubenswrapper[15493]: I0216 17:08:59.274592 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-vwvwx_c303189e-adae-4fe2-8dd7-cc9b80f73e66/network-check-target-container/0.log" Feb 16 17:08:59.274669 master-0 kubenswrapper[15493]: I0216 17:08:59.274654 15493 generic.go:334] "Generic (PLEG): container finished" podID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" containerID="ade1880aca33a2a6fecd8de7a6fb9caa6cf30a4d0a9280f0ea929a2643dc290b" exitCode=2 Feb 16 17:08:59.274792 master-0 kubenswrapper[15493]: I0216 17:08:59.274723 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerDied","Data":"ade1880aca33a2a6fecd8de7a6fb9caa6cf30a4d0a9280f0ea929a2643dc290b"} Feb 16 17:08:59.275452 master-0 kubenswrapper[15493]: I0216 17:08:59.275329 15493 scope.go:117] "RemoveContainer" containerID="ade1880aca33a2a6fecd8de7a6fb9caa6cf30a4d0a9280f0ea929a2643dc290b" Feb 16 17:08:59.278710 master-0 kubenswrapper[15493]: I0216 17:08:59.278678 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-d8bf84b88-m66tx_642e5115-b7f2-4561-bc6b-1a74b6d891c4/control-plane-machine-set-operator/0.log" Feb 16 17:08:59.278777 master-0 kubenswrapper[15493]: I0216 17:08:59.278726 15493 generic.go:334] "Generic (PLEG): container finished" podID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" containerID="0378dec77c7a2ef4be2733e7030469db31488b7e185a34697e5785f613bb63ff" exitCode=0 Feb 16 17:08:59.278832 master-0 kubenswrapper[15493]: I0216 17:08:59.278772 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerDied","Data":"0378dec77c7a2ef4be2733e7030469db31488b7e185a34697e5785f613bb63ff"} Feb 16 17:08:59.279180 master-0 kubenswrapper[15493]: I0216 17:08:59.279154 15493 scope.go:117] "RemoveContainer" containerID="0378dec77c7a2ef4be2733e7030469db31488b7e185a34697e5785f613bb63ff" Feb 16 17:08:59.279389 master-0 kubenswrapper[15493]: E0216 17:08:59.279362 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"control-plane-machine-set-operator\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=control-plane-machine-set-operator pod=control-plane-machine-set-operator-d8bf84b88-m66tx_openshift-machine-api(642e5115-b7f2-4561-bc6b-1a74b6d891c4)\"" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:08:59.281084 master-0 kubenswrapper[15493]: I0216 17:08:59.281028 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_32286c81635de6de1cf7f328273c1a49/startup-monitor/0.log" Feb 16 17:08:59.281160 master-0 kubenswrapper[15493]: I0216 17:08:59.281092 15493 generic.go:334] "Generic (PLEG): container finished" podID="32286c81635de6de1cf7f328273c1a49" containerID="3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff" exitCode=255 Feb 16 17:08:59.281160 master-0 kubenswrapper[15493]: I0216 17:08:59.281141 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerDied","Data":"3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff"} Feb 16 17:08:59.292693 master-0 kubenswrapper[15493]: I0216 17:08:59.292653 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-cb4f7b4cf-6qrw5_c2511146-1d04-4ecd-a28e-79662ef7b9d3/insights-operator/3.log" Feb 16 17:08:59.293215 master-0 kubenswrapper[15493]: I0216 17:08:59.293181 15493 generic.go:334] "Generic (PLEG): container finished" podID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" containerID="5d5a5bf8842891b7c39944888923705610466aef86b87837b0015a5336b1bc63" exitCode=2 Feb 16 17:08:59.293280 master-0 kubenswrapper[15493]: I0216 17:08:59.293234 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerDied","Data":"5d5a5bf8842891b7c39944888923705610466aef86b87837b0015a5336b1bc63"} Feb 16 17:08:59.293575 master-0 kubenswrapper[15493]: I0216 17:08:59.293550 15493 scope.go:117] "RemoveContainer" containerID="5d5a5bf8842891b7c39944888923705610466aef86b87837b0015a5336b1bc63" Feb 16 17:08:59.293736 master-0 kubenswrapper[15493]: E0216 17:08:59.293712 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=insights-operator pod=insights-operator-cb4f7b4cf-6qrw5_openshift-insights(c2511146-1d04-4ecd-a28e-79662ef7b9d3)\"" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:08:59.293797 master-0 kubenswrapper[15493]: I0216 17:08:59.293773 15493 patch_prober.go:28] interesting pod/monitoring-plugin-555857f695-nlrnr container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.128.0.88:9443/health\": dial tcp 10.128.0.88:9443: connect: connection refused" start-of-body= Feb 16 17:08:59.293839 master-0 kubenswrapper[15493]: I0216 17:08:59.293809 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.128.0.88:9443/health\": dial tcp 10.128.0.88:9443: connect: connection refused" Feb 16 17:08:59.294458 master-0 
kubenswrapper[15493]: I0216 17:08:59.294415 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.295050 master-0 kubenswrapper[15493]: I0216 17:08:59.295020 15493 generic.go:334] "Generic (PLEG): container finished" podID="d1524fc1-d157-435a-8bf8-7e877c45909d" containerID="c7a678a1566dce1a83b3b33b3d0dd73aa2c7ba1c17bac97e5cf444e5f241b28a" exitCode=0 Feb 16 17:08:59.295050 master-0 kubenswrapper[15493]: I0216 17:08:59.295044 15493 generic.go:334] "Generic (PLEG): container finished" podID="d1524fc1-d157-435a-8bf8-7e877c45909d" containerID="836a8b0540247df6e45c7363dec062ae3f1c759c61215fa36d1a8c35a0e755fb" exitCode=0 Feb 16 17:08:59.295142 master-0 kubenswrapper[15493]: I0216 17:08:59.295089 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerDied","Data":"c7a678a1566dce1a83b3b33b3d0dd73aa2c7ba1c17bac97e5cf444e5f241b28a"} Feb 16 17:08:59.295142 master-0 kubenswrapper[15493]: I0216 17:08:59.295115 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerDied","Data":"836a8b0540247df6e45c7363dec062ae3f1c759c61215fa36d1a8c35a0e755fb"} Feb 16 17:08:59.295569 master-0 kubenswrapper[15493]: I0216 17:08:59.295501 15493 scope.go:117] "RemoveContainer" containerID="836a8b0540247df6e45c7363dec062ae3f1c759c61215fa36d1a8c35a0e755fb" Feb 16 17:08:59.295569 master-0 kubenswrapper[15493]: I0216 17:08:59.295528 15493 scope.go:117] "RemoveContainer" containerID="c7a678a1566dce1a83b3b33b3d0dd73aa2c7ba1c17bac97e5cf444e5f241b28a" Feb 16 17:08:59.298705 master-0 kubenswrapper[15493]: I0216 17:08:59.298677 15493 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588" exitCode=0 Feb 16 17:08:59.298795 master-0 kubenswrapper[15493]: I0216 17:08:59.298709 15493 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240" exitCode=0 Feb 16 17:08:59.298795 master-0 kubenswrapper[15493]: I0216 17:08:59.298726 15493 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0" exitCode=0 Feb 16 17:08:59.298795 master-0 kubenswrapper[15493]: I0216 17:08:59.298739 15493 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62" exitCode=0 Feb 16 17:08:59.298795 master-0 kubenswrapper[15493]: I0216 17:08:59.298752 15493 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde" exitCode=0 Feb 16 17:08:59.298795 master-0 kubenswrapper[15493]: I0216 17:08:59.298767 15493 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" 
containerID="d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988" exitCode=0 Feb 16 17:08:59.299008 master-0 kubenswrapper[15493]: I0216 17:08:59.298816 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588"} Feb 16 17:08:59.299008 master-0 kubenswrapper[15493]: I0216 17:08:59.298843 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240"} Feb 16 17:08:59.299008 master-0 kubenswrapper[15493]: I0216 17:08:59.298860 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0"} Feb 16 17:08:59.299008 master-0 kubenswrapper[15493]: I0216 17:08:59.298876 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62"} Feb 16 17:08:59.299008 master-0 kubenswrapper[15493]: I0216 17:08:59.298891 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde"} Feb 16 17:08:59.299008 master-0 kubenswrapper[15493]: I0216 17:08:59.298903 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988"} Feb 16 17:08:59.299371 master-0 kubenswrapper[15493]: I0216 17:08:59.299352 15493 scope.go:117] "RemoveContainer" containerID="d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988" Feb 16 17:08:59.299444 master-0 kubenswrapper[15493]: I0216 17:08:59.299379 15493 scope.go:117] "RemoveContainer" containerID="15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde" Feb 16 17:08:59.299444 master-0 kubenswrapper[15493]: I0216 17:08:59.299392 15493 scope.go:117] "RemoveContainer" containerID="1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62" Feb 16 17:08:59.299444 master-0 kubenswrapper[15493]: I0216 17:08:59.299406 15493 scope.go:117] "RemoveContainer" containerID="4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0" Feb 16 17:08:59.299444 master-0 kubenswrapper[15493]: I0216 17:08:59.299417 15493 scope.go:117] "RemoveContainer" containerID="88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240" Feb 16 17:08:59.299444 master-0 kubenswrapper[15493]: I0216 17:08:59.299429 15493 scope.go:117] "RemoveContainer" containerID="a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588" Feb 16 17:08:59.301768 master-0 kubenswrapper[15493]: I0216 17:08:59.301749 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:08:59.301861 master-0 kubenswrapper[15493]: I0216 17:08:59.301769 15493 generic.go:334] "Generic (PLEG): container finished" 
podID="d9859457-f0d1-4754-a6c5-cf05d5abf447" containerID="06940a658879a063b012c2bf76a3258fbdd61e5203f5587e2a2a955dfa358b02" exitCode=0 Feb 16 17:08:59.301861 master-0 kubenswrapper[15493]: I0216 17:08:59.301787 15493 generic.go:334] "Generic (PLEG): container finished" podID="d9859457-f0d1-4754-a6c5-cf05d5abf447" containerID="fa96b9440dbcead07a8e8a2883de97575b011436686f4fab2170bdfcc0a3f79e" exitCode=0 Feb 16 17:08:59.301861 master-0 kubenswrapper[15493]: I0216 17:08:59.301775 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerDied","Data":"06940a658879a063b012c2bf76a3258fbdd61e5203f5587e2a2a955dfa358b02"} Feb 16 17:08:59.301861 master-0 kubenswrapper[15493]: I0216 17:08:59.301848 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:08:59.302068 master-0 kubenswrapper[15493]: I0216 17:08:59.301863 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerDied","Data":"fa96b9440dbcead07a8e8a2883de97575b011436686f4fab2170bdfcc0a3f79e"} Feb 16 17:08:59.302348 master-0 kubenswrapper[15493]: I0216 17:08:59.302198 15493 scope.go:117] "RemoveContainer" containerID="fa96b9440dbcead07a8e8a2883de97575b011436686f4fab2170bdfcc0a3f79e" Feb 16 17:08:59.302348 master-0 kubenswrapper[15493]: I0216 17:08:59.302215 15493 scope.go:117] "RemoveContainer" containerID="06940a658879a063b012c2bf76a3258fbdd61e5203f5587e2a2a955dfa358b02" Feb 16 17:08:59.306217 master-0 kubenswrapper[15493]: I0216 17:08:59.306184 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-czzz2_b3fa6ac1-781f-446c-b6b4-18bdb7723c23/iptables-alerter/0.log" Feb 16 17:08:59.306321 master-0 kubenswrapper[15493]: I0216 17:08:59.306241 15493 generic.go:334] "Generic (PLEG): container finished" podID="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" containerID="1dd5d8988b37bdb2482d281fd59a39049b27b81843f30e0726690490865aefa6" exitCode=143 Feb 16 17:08:59.306448 master-0 kubenswrapper[15493]: I0216 17:08:59.306356 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-czzz2" event={"ID":"b3fa6ac1-781f-446c-b6b4-18bdb7723c23","Type":"ContainerDied","Data":"1dd5d8988b37bdb2482d281fd59a39049b27b81843f30e0726690490865aefa6"} Feb 16 17:08:59.307887 master-0 kubenswrapper[15493]: I0216 17:08:59.307298 15493 scope.go:117] "RemoveContainer" containerID="1dd5d8988b37bdb2482d281fd59a39049b27b81843f30e0726690490865aefa6" Feb 16 17:08:59.307887 master-0 kubenswrapper[15493]: E0216 17:08:59.307401 15493 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:08:59.307887 master-0 kubenswrapper[15493]: I0216 17:08:59.307551 15493 scope.go:117] "RemoveContainer" containerID="3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff" Feb 16 17:08:59.308534 master-0 kubenswrapper[15493]: I0216 17:08:59.308251 15493 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9_edbaac23-11f0-4bc7-a7ce-b593c774c0fa/openshift-controller-manager-operator/2.log" Feb 16 17:08:59.308863 master-0 kubenswrapper[15493]: I0216 17:08:59.308796 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-5f5f84757d-ktmm9_edbaac23-11f0-4bc7-a7ce-b593c774c0fa/openshift-controller-manager-operator/1.log" Feb 16 17:08:59.309801 master-0 kubenswrapper[15493]: I0216 17:08:59.308855 15493 generic.go:334] "Generic (PLEG): container finished" podID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" containerID="8683030b69ae10922c055c637d624b11c398b29a58d7bc6013013bff4035d97c" exitCode=1 Feb 16 17:08:59.309801 master-0 kubenswrapper[15493]: I0216 17:08:59.308954 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerDied","Data":"8683030b69ae10922c055c637d624b11c398b29a58d7bc6013013bff4035d97c"} Feb 16 17:08:59.309801 master-0 kubenswrapper[15493]: I0216 17:08:59.309341 15493 scope.go:117] "RemoveContainer" containerID="8683030b69ae10922c055c637d624b11c398b29a58d7bc6013013bff4035d97c" Feb 16 17:08:59.309801 master-0 kubenswrapper[15493]: E0216 17:08:59.309551 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-5f5f84757d-ktmm9_openshift-controller-manager-operator(edbaac23-11f0-4bc7-a7ce-b593c774c0fa)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:08:59.310994 master-0 kubenswrapper[15493]: I0216 17:08:59.310898 15493 generic.go:334] "Generic (PLEG): container finished" podID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" containerID="12b0103601aa9d452d5c380b8174f625698cf75c8ec9ba10415964e9b65d2f4f" exitCode=0 Feb 16 17:08:59.311108 master-0 kubenswrapper[15493]: I0216 17:08:59.310981 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerDied","Data":"12b0103601aa9d452d5c380b8174f625698cf75c8ec9ba10415964e9b65d2f4f"} Feb 16 17:08:59.311392 master-0 kubenswrapper[15493]: I0216 17:08:59.311353 15493 scope.go:117] "RemoveContainer" containerID="12b0103601aa9d452d5c380b8174f625698cf75c8ec9ba10415964e9b65d2f4f" Feb 16 17:08:59.312992 master-0 kubenswrapper[15493]: I0216 17:08:59.312905 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-555857f695-nlrnr_54fba066-0e9e-49f6-8a86-34d5b4b660df/monitoring-plugin/0.log" Feb 16 17:08:59.312992 master-0 kubenswrapper[15493]: I0216 17:08:59.312975 15493 generic.go:334] "Generic (PLEG): container finished" podID="54fba066-0e9e-49f6-8a86-34d5b4b660df" containerID="4525d58974c57fc626b1b1d73b131501d6143b4fb363897d90c509aa694acf7d" exitCode=2 Feb 16 17:08:59.313877 master-0 kubenswrapper[15493]: I0216 17:08:59.313073 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" 
event={"ID":"54fba066-0e9e-49f6-8a86-34d5b4b660df","Type":"ContainerDied","Data":"4525d58974c57fc626b1b1d73b131501d6143b4fb363897d90c509aa694acf7d"} Feb 16 17:08:59.313877 master-0 kubenswrapper[15493]: I0216 17:08:59.313437 15493 scope.go:117] "RemoveContainer" containerID="4525d58974c57fc626b1b1d73b131501d6143b4fb363897d90c509aa694acf7d" Feb 16 17:08:59.315994 master-0 kubenswrapper[15493]: I0216 17:08:59.315966 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-7bc947fc7d-4j7pn_4488757c-f0fd-48fa-a3f9-6373b0bcafe4/cluster-baremetal-operator/0.log" Feb 16 17:08:59.316083 master-0 kubenswrapper[15493]: I0216 17:08:59.316016 15493 generic.go:334] "Generic (PLEG): container finished" podID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" containerID="cf7896ba8c2bed977d056d90bc44e7d6d6d178fe65604d077d79101df877c0f7" exitCode=0 Feb 16 17:08:59.316083 master-0 kubenswrapper[15493]: I0216 17:08:59.316035 15493 generic.go:334] "Generic (PLEG): container finished" podID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" containerID="32282e3210e204263457f22f7fb6c9b2c61db1832f983d1236a1034b1a5140d4" exitCode=0 Feb 16 17:08:59.316083 master-0 kubenswrapper[15493]: I0216 17:08:59.316090 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerDied","Data":"cf7896ba8c2bed977d056d90bc44e7d6d6d178fe65604d077d79101df877c0f7"} Feb 16 17:08:59.316371 master-0 kubenswrapper[15493]: I0216 17:08:59.316112 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerDied","Data":"32282e3210e204263457f22f7fb6c9b2c61db1832f983d1236a1034b1a5140d4"} Feb 16 17:08:59.316469 master-0 kubenswrapper[15493]: I0216 17:08:59.316445 15493 scope.go:117] "RemoveContainer" containerID="cf7896ba8c2bed977d056d90bc44e7d6d6d178fe65604d077d79101df877c0f7" Feb 16 17:08:59.316546 master-0 kubenswrapper[15493]: I0216 17:08:59.316474 15493 scope.go:117] "RemoveContainer" containerID="32282e3210e204263457f22f7fb6c9b2c61db1832f983d1236a1034b1a5140d4" Feb 16 17:08:59.319211 master-0 kubenswrapper[15493]: I0216 17:08:59.318592 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-7485d55966-sgmpf_eaf7edff-0a89-4ac0-b9dd-511e098b5434/kube-scheduler-operator-container/1.log" Feb 16 17:08:59.319211 master-0 kubenswrapper[15493]: I0216 17:08:59.318627 15493 generic.go:334] "Generic (PLEG): container finished" podID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" containerID="6417c6ab09f776c6cd5527ce7ed0c693dfb9915491fd7480b2522f60cdf3a710" exitCode=0 Feb 16 17:08:59.319211 master-0 kubenswrapper[15493]: I0216 17:08:59.318676 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerDied","Data":"6417c6ab09f776c6cd5527ce7ed0c693dfb9915491fd7480b2522f60cdf3a710"} Feb 16 17:08:59.319211 master-0 kubenswrapper[15493]: I0216 17:08:59.318976 15493 scope.go:117] "RemoveContainer" containerID="6417c6ab09f776c6cd5527ce7ed0c693dfb9915491fd7480b2522f60cdf3a710" Feb 16 17:08:59.319211 master-0 kubenswrapper[15493]: E0216 17:08:59.319171 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-scheduler-operator-container\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler-operator-container pod=openshift-kube-scheduler-operator-7485d55966-sgmpf_openshift-kube-scheduler-operator(eaf7edff-0a89-4ac0-b9dd-511e098b5434)\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:08:59.324453 master-0 kubenswrapper[15493]: I0216 17:08:59.324343 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-mn6cr_8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/manager/1.log" Feb 16 17:08:59.326057 master-0 kubenswrapper[15493]: I0216 17:08:59.325891 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-67bc7c997f-mn6cr_8e90be63-ff6c-4e9e-8b9e-1ad9cf941845/manager/0.log" Feb 16 17:08:59.326480 master-0 kubenswrapper[15493]: I0216 17:08:59.326434 15493 generic.go:334] "Generic (PLEG): container finished" podID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" containerID="ab850d1aa486cf37c31e8e39f8b3c0e701213d1b68a0b01c8b03ebcdcc0c19a9" exitCode=1 Feb 16 17:08:59.326480 master-0 kubenswrapper[15493]: I0216 17:08:59.326473 15493 generic.go:334] "Generic (PLEG): container finished" podID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" containerID="8e6333d17c854be811265371ff3fa3a77118514f88a15fbd08c26eea148ad400" exitCode=0 Feb 16 17:08:59.326581 master-0 kubenswrapper[15493]: I0216 17:08:59.326520 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerDied","Data":"ab850d1aa486cf37c31e8e39f8b3c0e701213d1b68a0b01c8b03ebcdcc0c19a9"} Feb 16 17:08:59.326581 master-0 kubenswrapper[15493]: I0216 17:08:59.326549 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerDied","Data":"8e6333d17c854be811265371ff3fa3a77118514f88a15fbd08c26eea148ad400"} Feb 16 17:08:59.327269 master-0 kubenswrapper[15493]: I0216 17:08:59.327232 15493 scope.go:117] "RemoveContainer" containerID="8e6333d17c854be811265371ff3fa3a77118514f88a15fbd08c26eea148ad400" Feb 16 17:08:59.327269 master-0 kubenswrapper[15493]: I0216 17:08:59.327253 15493 scope.go:117] "RemoveContainer" containerID="ab850d1aa486cf37c31e8e39f8b3c0e701213d1b68a0b01c8b03ebcdcc0c19a9" Feb 16 17:08:59.328717 master-0 kubenswrapper[15493]: I0216 17:08:59.328672 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" pod="openshift-marketplace/certified-operators-z69zq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z69zq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.334496 master-0 kubenswrapper[15493]: I0216 17:08:59.334453 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/2.log" Feb 16 17:08:59.335422 master-0 kubenswrapper[15493]: I0216 17:08:59.335375 15493 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="d85f4bae9120dd5571ac4aef5b4bc508cd0c2e61ac41e2e016d2fca33cf2c0df" exitCode=0 Feb 16 17:08:59.335487 master-0 
kubenswrapper[15493]: I0216 17:08:59.335449 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"d85f4bae9120dd5571ac4aef5b4bc508cd0c2e61ac41e2e016d2fca33cf2c0df"} Feb 16 17:08:59.336055 master-0 kubenswrapper[15493]: I0216 17:08:59.336026 15493 scope.go:117] "RemoveContainer" containerID="d85f4bae9120dd5571ac4aef5b4bc508cd0c2e61ac41e2e016d2fca33cf2c0df" Feb 16 17:08:59.340172 master-0 kubenswrapper[15493]: I0216 17:08:59.340130 15493 generic.go:334] "Generic (PLEG): container finished" podID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerID="0a4e3758232c1b4d1cd852c6c0d2cb896a9be7004e29268b474e13c843b389c0" exitCode=0 Feb 16 17:08:59.340172 master-0 kubenswrapper[15493]: I0216 17:08:59.340163 15493 generic.go:334] "Generic (PLEG): container finished" podID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerID="d333f86f0a8ab06d569bfb3d4f4ee86bbc505f7ff52162d4fe6868c5e30caf74" exitCode=0 Feb 16 17:08:59.340262 master-0 kubenswrapper[15493]: I0216 17:08:59.340206 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerDied","Data":"0a4e3758232c1b4d1cd852c6c0d2cb896a9be7004e29268b474e13c843b389c0"} Feb 16 17:08:59.340262 master-0 kubenswrapper[15493]: I0216 17:08:59.340236 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerDied","Data":"d333f86f0a8ab06d569bfb3d4f4ee86bbc505f7ff52162d4fe6868c5e30caf74"} Feb 16 17:08:59.340621 master-0 kubenswrapper[15493]: I0216 17:08:59.340590 15493 scope.go:117] "RemoveContainer" containerID="0a4e3758232c1b4d1cd852c6c0d2cb896a9be7004e29268b474e13c843b389c0" Feb 16 17:08:59.340621 master-0 kubenswrapper[15493]: I0216 17:08:59.340619 15493 scope.go:117] "RemoveContainer" containerID="d333f86f0a8ab06d569bfb3d4f4ee86bbc505f7ff52162d4fe6868c5e30caf74" Feb 16 17:08:59.343832 master-0 kubenswrapper[15493]: I0216 17:08:59.343788 15493 generic.go:334] "Generic (PLEG): container finished" podID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerID="9b7fb081aed84bf24e9c173f0f69fce2b7aba0738037ca960752a4a6a87b8388" exitCode=0 Feb 16 17:08:59.343895 master-0 kubenswrapper[15493]: I0216 17:08:59.343853 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerDied","Data":"9b7fb081aed84bf24e9c173f0f69fce2b7aba0738037ca960752a4a6a87b8388"} Feb 16 17:08:59.344361 master-0 kubenswrapper[15493]: I0216 17:08:59.344335 15493 scope.go:117] "RemoveContainer" containerID="9b7fb081aed84bf24e9c173f0f69fce2b7aba0738037ca960752a4a6a87b8388" Feb 16 17:08:59.347145 master-0 kubenswrapper[15493]: I0216 17:08:59.347094 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.347570 master-0 kubenswrapper[15493]: I0216 17:08:59.347532 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab80e0fb-09dd-4c93-b235-1487024105d2" 
containerID="20009358c0be2bc1329d86a0acf4dfbd84a7369e251f3fe1202732e17f06df3a" exitCode=0 Feb 16 17:08:59.347570 master-0 kubenswrapper[15493]: I0216 17:08:59.347561 15493 generic.go:334] "Generic (PLEG): container finished" podID="ab80e0fb-09dd-4c93-b235-1487024105d2" containerID="6f66af8b0562664573bf8d9a4bb0da731f2d18edeb2c73c463d4bf0acaedcb60" exitCode=0 Feb 16 17:08:59.347655 master-0 kubenswrapper[15493]: I0216 17:08:59.347604 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerDied","Data":"20009358c0be2bc1329d86a0acf4dfbd84a7369e251f3fe1202732e17f06df3a"} Feb 16 17:08:59.347655 master-0 kubenswrapper[15493]: I0216 17:08:59.347630 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerDied","Data":"6f66af8b0562664573bf8d9a4bb0da731f2d18edeb2c73c463d4bf0acaedcb60"} Feb 16 17:08:59.349268 master-0 kubenswrapper[15493]: I0216 17:08:59.349233 15493 scope.go:117] "RemoveContainer" containerID="6f66af8b0562664573bf8d9a4bb0da731f2d18edeb2c73c463d4bf0acaedcb60" Feb 16 17:08:59.349268 master-0 kubenswrapper[15493]: I0216 17:08:59.349257 15493 scope.go:117] "RemoveContainer" containerID="20009358c0be2bc1329d86a0acf4dfbd84a7369e251f3fe1202732e17f06df3a" Feb 16 17:08:59.351004 master-0 kubenswrapper[15493]: I0216 17:08:59.350961 15493 generic.go:334] "Generic (PLEG): container finished" podID="48801344-a48a-493e-aea4-19d998d0b708" containerID="554bf64355b5b3eed04f706e68cd50dcf6f9b6576e2e066858b9fbe0374728cf" exitCode=0 Feb 16 17:08:59.351086 master-0 kubenswrapper[15493]: I0216 17:08:59.351025 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" event={"ID":"48801344-a48a-493e-aea4-19d998d0b708","Type":"ContainerDied","Data":"554bf64355b5b3eed04f706e68cd50dcf6f9b6576e2e066858b9fbe0374728cf"} Feb 16 17:08:59.351399 master-0 kubenswrapper[15493]: I0216 17:08:59.351318 15493 scope.go:117] "RemoveContainer" containerID="554bf64355b5b3eed04f706e68cd50dcf6f9b6576e2e066858b9fbe0374728cf" Feb 16 17:08:59.359009 master-0 kubenswrapper[15493]: I0216 17:08:59.358969 15493 generic.go:334] "Generic (PLEG): container finished" podID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" containerID="4b036f06ab4b931412c4f49164cb23548d94c16700a2c873ec169ead4f77b13a" exitCode=0 Feb 16 17:08:59.359076 master-0 kubenswrapper[15493]: I0216 17:08:59.359036 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" event={"ID":"544c6815-81d7-422a-9e4a-5fcbfabe8da8","Type":"ContainerDied","Data":"4b036f06ab4b931412c4f49164cb23548d94c16700a2c873ec169ead4f77b13a"} Feb 16 17:08:59.359517 master-0 kubenswrapper[15493]: I0216 17:08:59.359486 15493 scope.go:117] "RemoveContainer" containerID="4b036f06ab4b931412c4f49164cb23548d94c16700a2c873ec169ead4f77b13a" Feb 16 17:08:59.368509 master-0 kubenswrapper[15493]: I0216 17:08:59.368207 15493 status_manager.go:851] "Failed to get status for pod" podUID="29402454-a920-471e-895e-764235d16eb4" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-5dc4688546-pl7r5\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.370329 
master-0 kubenswrapper[15493]: I0216 17:08:59.370242 15493 generic.go:334] "Generic (PLEG): container finished" podID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" containerID="ebcc47375c8090ea868a5deccf7dc1e91eebca2d21948753da2f002b09800231" exitCode=0 Feb 16 17:08:59.370781 master-0 kubenswrapper[15493]: I0216 17:08:59.370295 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerDied","Data":"ebcc47375c8090ea868a5deccf7dc1e91eebca2d21948753da2f002b09800231"} Feb 16 17:08:59.371847 master-0 kubenswrapper[15493]: I0216 17:08:59.371788 15493 scope.go:117] "RemoveContainer" containerID="ebcc47375c8090ea868a5deccf7dc1e91eebca2d21948753da2f002b09800231" Feb 16 17:08:59.379527 master-0 kubenswrapper[15493]: I0216 17:08:59.375908 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-7777d5cc66-64vhv_0517b180-00ee-47fe-a8e7-36a3931b7e72/console-operator/2.log" Feb 16 17:08:59.380173 master-0 kubenswrapper[15493]: I0216 17:08:59.380146 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-7777d5cc66-64vhv_0517b180-00ee-47fe-a8e7-36a3931b7e72/console-operator/1.log" Feb 16 17:08:59.380290 master-0 kubenswrapper[15493]: I0216 17:08:59.380269 15493 generic.go:334] "Generic (PLEG): container finished" podID="0517b180-00ee-47fe-a8e7-36a3931b7e72" containerID="5280a9e5fa47ae992e070a527ddf28952c9ffc9ee73154415f61b91183dd9a89" exitCode=1 Feb 16 17:08:59.380402 master-0 kubenswrapper[15493]: I0216 17:08:59.380355 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerDied","Data":"5280a9e5fa47ae992e070a527ddf28952c9ffc9ee73154415f61b91183dd9a89"} Feb 16 17:08:59.380815 master-0 kubenswrapper[15493]: I0216 17:08:59.380792 15493 scope.go:117] "RemoveContainer" containerID="5280a9e5fa47ae992e070a527ddf28952c9ffc9ee73154415f61b91183dd9a89" Feb 16 17:08:59.381101 master-0 kubenswrapper[15493]: E0216 17:08:59.381069 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-7777d5cc66-64vhv_openshift-console-operator(0517b180-00ee-47fe-a8e7-36a3931b7e72)\"" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:08:59.384745 master-0 kubenswrapper[15493]: I0216 17:08:59.384706 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-92rqx_404c402a-705f-4352-b9df-b89562070d9c/machine-api-operator/1.log" Feb 16 17:08:59.385706 master-0 kubenswrapper[15493]: I0216 17:08:59.385673 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-92rqx_404c402a-705f-4352-b9df-b89562070d9c/machine-api-operator/0.log" Feb 16 17:08:59.386213 master-0 kubenswrapper[15493]: I0216 17:08:59.386188 15493 generic.go:334] "Generic (PLEG): container finished" podID="404c402a-705f-4352-b9df-b89562070d9c" containerID="90cbf2116f3e25b749b07925ed91de371ea44d17a0d85fece1ea2429638db035" exitCode=2 Feb 16 17:08:59.386309 master-0 kubenswrapper[15493]: I0216 17:08:59.386294 15493 generic.go:334] "Generic (PLEG): 
container finished" podID="404c402a-705f-4352-b9df-b89562070d9c" containerID="b0576ce377a5cee2ae182a3190bd7d01c4057d29cbcd5c8c32f7d95440a684f0" exitCode=0 Feb 16 17:08:59.386434 master-0 kubenswrapper[15493]: I0216 17:08:59.386250 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerDied","Data":"90cbf2116f3e25b749b07925ed91de371ea44d17a0d85fece1ea2429638db035"} Feb 16 17:08:59.386500 master-0 kubenswrapper[15493]: I0216 17:08:59.386449 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerDied","Data":"b0576ce377a5cee2ae182a3190bd7d01c4057d29cbcd5c8c32f7d95440a684f0"} Feb 16 17:08:59.386783 master-0 kubenswrapper[15493]: I0216 17:08:59.386756 15493 scope.go:117] "RemoveContainer" containerID="b0576ce377a5cee2ae182a3190bd7d01c4057d29cbcd5c8c32f7d95440a684f0" Feb 16 17:08:59.386783 master-0 kubenswrapper[15493]: I0216 17:08:59.386775 15493 scope.go:117] "RemoveContainer" containerID="90cbf2116f3e25b749b07925ed91de371ea44d17a0d85fece1ea2429638db035" Feb 16 17:08:59.387190 master-0 kubenswrapper[15493]: I0216 17:08:59.387156 15493 status_manager.go:851] "Failed to get status for pod" podUID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-flr86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.388868 master-0 kubenswrapper[15493]: I0216 17:08:59.388843 15493 generic.go:334] "Generic (PLEG): container finished" podID="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" containerID="b5d57d5fbcb5111bf8621480b2ca2d7036238ef5e1ed6356c78b025bd5430216" exitCode=0 Feb 16 17:08:59.388868 master-0 kubenswrapper[15493]: I0216 17:08:59.388865 15493 generic.go:334] "Generic (PLEG): container finished" podID="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" containerID="6f91303a217830df2deeb75c5b85b160be832bd4796840ca479eb3bf0757299c" exitCode=0 Feb 16 17:08:59.389066 master-0 kubenswrapper[15493]: I0216 17:08:59.388873 15493 generic.go:334] "Generic (PLEG): container finished" podID="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" containerID="49c7bd744b82a93bdf93e0a45df427c7a02a45864796806bad6477a11f0df882" exitCode=0 Feb 16 17:08:59.389066 master-0 kubenswrapper[15493]: I0216 17:08:59.388915 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerDied","Data":"b5d57d5fbcb5111bf8621480b2ca2d7036238ef5e1ed6356c78b025bd5430216"} Feb 16 17:08:59.389066 master-0 kubenswrapper[15493]: I0216 17:08:59.388953 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerDied","Data":"6f91303a217830df2deeb75c5b85b160be832bd4796840ca479eb3bf0757299c"} Feb 16 17:08:59.389066 master-0 kubenswrapper[15493]: I0216 17:08:59.388964 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" 
event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerDied","Data":"49c7bd744b82a93bdf93e0a45df427c7a02a45864796806bad6477a11f0df882"} Feb 16 17:08:59.389441 master-0 kubenswrapper[15493]: I0216 17:08:59.389420 15493 scope.go:117] "RemoveContainer" containerID="49c7bd744b82a93bdf93e0a45df427c7a02a45864796806bad6477a11f0df882" Feb 16 17:08:59.389441 master-0 kubenswrapper[15493]: I0216 17:08:59.389442 15493 scope.go:117] "RemoveContainer" containerID="6f91303a217830df2deeb75c5b85b160be832bd4796840ca479eb3bf0757299c" Feb 16 17:08:59.389526 master-0 kubenswrapper[15493]: I0216 17:08:59.389452 15493 scope.go:117] "RemoveContainer" containerID="b5d57d5fbcb5111bf8621480b2ca2d7036238ef5e1ed6356c78b025bd5430216" Feb 16 17:08:59.390736 master-0 kubenswrapper[15493]: I0216 17:08:59.390718 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-hhcpr_39387549-c636-4bd4-b463-f6a93810f277/approver/1.log" Feb 16 17:08:59.391139 master-0 kubenswrapper[15493]: I0216 17:08:59.391116 15493 generic.go:334] "Generic (PLEG): container finished" podID="39387549-c636-4bd4-b463-f6a93810f277" containerID="d03772cee9e1c4250dee38ee136391b1f13825fdf156227a38ae5496af2de176" exitCode=0 Feb 16 17:08:59.391203 master-0 kubenswrapper[15493]: I0216 17:08:59.391163 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerDied","Data":"d03772cee9e1c4250dee38ee136391b1f13825fdf156227a38ae5496af2de176"} Feb 16 17:08:59.391442 master-0 kubenswrapper[15493]: I0216 17:08:59.391414 15493 scope.go:117] "RemoveContainer" containerID="d03772cee9e1c4250dee38ee136391b1f13825fdf156227a38ae5496af2de176" Feb 16 17:08:59.391575 master-0 kubenswrapper[15493]: E0216 17:08:59.391554 15493 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=approver pod=network-node-identity-hhcpr_openshift-network-node-identity(39387549-c636-4bd4-b463-f6a93810f277)\"" pod="openshift-network-node-identity/network-node-identity-hhcpr" podUID="39387549-c636-4bd4-b463-f6a93810f277" Feb 16 17:08:59.399047 master-0 kubenswrapper[15493]: I0216 17:08:59.399012 15493 generic.go:334] "Generic (PLEG): container finished" podID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerID="461a2f0f61f0fcc0eb519485188a2e4212d395f0c1a67321cce2d8f4b7ef3e1c" exitCode=0 Feb 16 17:08:59.399144 master-0 kubenswrapper[15493]: I0216 17:08:59.399074 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerDied","Data":"461a2f0f61f0fcc0eb519485188a2e4212d395f0c1a67321cce2d8f4b7ef3e1c"} Feb 16 17:08:59.399559 master-0 kubenswrapper[15493]: I0216 17:08:59.399528 15493 scope.go:117] "RemoveContainer" containerID="461a2f0f61f0fcc0eb519485188a2e4212d395f0c1a67321cce2d8f4b7ef3e1c" Feb 16 17:08:59.401798 master-0 kubenswrapper[15493]: I0216 17:08:59.401780 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-546cc7d765-94nfl_ae20b683-dac8-419e-808a-ddcdb3c564e1/openshift-state-metrics/0.log" Feb 16 17:08:59.402527 master-0 kubenswrapper[15493]: I0216 17:08:59.402509 15493 generic.go:334] "Generic (PLEG): container finished" podID="ae20b683-dac8-419e-808a-ddcdb3c564e1" 
containerID="0347cb4112367f34332daa64a019331216bdbbe7b22ed17e57af898d234dfb13" exitCode=2 Feb 16 17:08:59.402527 master-0 kubenswrapper[15493]: I0216 17:08:59.402525 15493 generic.go:334] "Generic (PLEG): container finished" podID="ae20b683-dac8-419e-808a-ddcdb3c564e1" containerID="271a64f88c33115a51b95b2f92773598ac51c97a03b1f3cba45b0dab0c8fe865" exitCode=0 Feb 16 17:08:59.402527 master-0 kubenswrapper[15493]: I0216 17:08:59.402533 15493 generic.go:334] "Generic (PLEG): container finished" podID="ae20b683-dac8-419e-808a-ddcdb3c564e1" containerID="b72dba53719e20d1f39a1a24f3506f415214cc3ac1cfd20c226c7b2f937c48b8" exitCode=0 Feb 16 17:08:59.402666 master-0 kubenswrapper[15493]: I0216 17:08:59.402553 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerDied","Data":"0347cb4112367f34332daa64a019331216bdbbe7b22ed17e57af898d234dfb13"} Feb 16 17:08:59.402666 master-0 kubenswrapper[15493]: I0216 17:08:59.402609 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerDied","Data":"271a64f88c33115a51b95b2f92773598ac51c97a03b1f3cba45b0dab0c8fe865"} Feb 16 17:08:59.402666 master-0 kubenswrapper[15493]: I0216 17:08:59.402622 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerDied","Data":"b72dba53719e20d1f39a1a24f3506f415214cc3ac1cfd20c226c7b2f937c48b8"} Feb 16 17:08:59.403149 master-0 kubenswrapper[15493]: I0216 17:08:59.403120 15493 scope.go:117] "RemoveContainer" containerID="b72dba53719e20d1f39a1a24f3506f415214cc3ac1cfd20c226c7b2f937c48b8" Feb 16 17:08:59.403149 master-0 kubenswrapper[15493]: I0216 17:08:59.403143 15493 scope.go:117] "RemoveContainer" containerID="271a64f88c33115a51b95b2f92773598ac51c97a03b1f3cba45b0dab0c8fe865" Feb 16 17:08:59.403149 master-0 kubenswrapper[15493]: I0216 17:08:59.403152 15493 scope.go:117] "RemoveContainer" containerID="0347cb4112367f34332daa64a019331216bdbbe7b22ed17e57af898d234dfb13" Feb 16 17:08:59.406094 master-0 kubenswrapper[15493]: I0216 17:08:59.406073 15493 generic.go:334] "Generic (PLEG): container finished" podID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" containerID="4680b8c2d5e31d1d35cc0e3e5320c2ad6ac1474aaaf6f05440e71e203962ad7d" exitCode=0 Feb 16 17:08:59.406199 master-0 kubenswrapper[15493]: I0216 17:08:59.406180 15493 generic.go:334] "Generic (PLEG): container finished" podID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" containerID="a4258db92ccb0e5bfb5051d02b4ac371ae71dd3a55d7950001a7b771cb5d1c29" exitCode=0 Feb 16 17:08:59.406315 master-0 kubenswrapper[15493]: I0216 17:08:59.406136 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerDied","Data":"4680b8c2d5e31d1d35cc0e3e5320c2ad6ac1474aaaf6f05440e71e203962ad7d"} Feb 16 17:08:59.406428 master-0 kubenswrapper[15493]: I0216 17:08:59.406406 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerDied","Data":"a4258db92ccb0e5bfb5051d02b4ac371ae71dd3a55d7950001a7b771cb5d1c29"} Feb 16 17:08:59.406719 master-0 
kubenswrapper[15493]: I0216 17:08:59.406696 15493 scope.go:117] "RemoveContainer" containerID="a4258db92ccb0e5bfb5051d02b4ac371ae71dd3a55d7950001a7b771cb5d1c29" Feb 16 17:08:59.406719 master-0 kubenswrapper[15493]: I0216 17:08:59.406714 15493 scope.go:117] "RemoveContainer" containerID="4680b8c2d5e31d1d35cc0e3e5320c2ad6ac1474aaaf6f05440e71e203962ad7d" Feb 16 17:08:59.407334 master-0 kubenswrapper[15493]: I0216 17:08:59.407289 15493 status_manager.go:851] "Failed to get status for pod" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.408306 master-0 kubenswrapper[15493]: I0216 17:08:59.408289 15493 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/1.log" Feb 16 17:08:59.408792 master-0 kubenswrapper[15493]: I0216 17:08:59.408775 15493 generic.go:334] "Generic (PLEG): container finished" podID="702322ac-7610-4568-9a68-b6acbd1f0c12" containerID="59609566742bf5ead9458043d860c5784e5a3eba6e4f48cd4386c000dcf98a3f" exitCode=0 Feb 16 17:08:59.408872 master-0 kubenswrapper[15493]: I0216 17:08:59.408860 15493 generic.go:334] "Generic (PLEG): container finished" podID="702322ac-7610-4568-9a68-b6acbd1f0c12" containerID="f0fc172e061d9b845719aaf0e0bb5928b1dd8b2b359ec58034b976a3ab24fcfb" exitCode=0 Feb 16 17:08:59.408952 master-0 kubenswrapper[15493]: I0216 17:08:59.408817 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerDied","Data":"59609566742bf5ead9458043d860c5784e5a3eba6e4f48cd4386c000dcf98a3f"} Feb 16 17:08:59.409052 master-0 kubenswrapper[15493]: I0216 17:08:59.409039 15493 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerDied","Data":"f0fc172e061d9b845719aaf0e0bb5928b1dd8b2b359ec58034b976a3ab24fcfb"} Feb 16 17:08:59.409381 master-0 kubenswrapper[15493]: I0216 17:08:59.409353 15493 scope.go:117] "RemoveContainer" containerID="f0fc172e061d9b845719aaf0e0bb5928b1dd8b2b359ec58034b976a3ab24fcfb" Feb 16 17:08:59.409381 master-0 kubenswrapper[15493]: I0216 17:08:59.409375 15493 scope.go:117] "RemoveContainer" containerID="59609566742bf5ead9458043d860c5784e5a3eba6e4f48cd4386c000dcf98a3f" Feb 16 17:08:59.432470 master-0 kubenswrapper[15493]: I0216 17:08:59.432401 15493 status_manager.go:851] "Failed to get status for pod" podUID="ee84198d-6357-4429-a90c-455c3850a788" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-zcwwd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.447727 master-0 kubenswrapper[15493]: I0216 17:08:59.447666 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: 
connect: connection refused" Feb 16 17:08:59.467290 master-0 kubenswrapper[15493]: I0216 17:08:59.467219 15493 status_manager.go:851] "Failed to get status for pod" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-7d8f4c8c66-qjq9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.478059 master-0 kubenswrapper[15493]: E0216 17:08:59.477761 15493 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4549ea98_7379_49e1_8452_5efb643137ca.slice/crio-8928e3bf46f9c2e9543fd483f5a6715160d68a0e0514884803acf476ebf5679a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55d635cd_1f0d_4086_96f2_9f3524f3f18c.slice/crio-conmon-98e282b9ddd3e1d2ffe040be844462c924b282deedc606e228c31635a2adedd9.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1443fb7_cb1e_4105_b604_b88c749620c4.slice/crio-conmon-3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54f29618_42c2_4270_9af7_7d82852d7cec.slice/crio-conmon-e843fe6093fddb4f0608997e3c887c540c5790a52102b7ba7e769e8ae9904f7d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedbaac23_11f0_4bc7_a7ce_b593c774c0fa.slice/crio-conmon-8683030b69ae10922c055c637d624b11c398b29a58d7bc6013013bff4035d97c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae20b683_dac8_419e_808a_ddcdb3c564e1.slice/crio-conmon-b72dba53719e20d1f39a1a24f3506f415214cc3ac1cfd20c226c7b2f937c48b8.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:08:59.490302 master-0 kubenswrapper[15493]: I0216 17:08:59.490245 15493 status_manager.go:851] "Failed to get status for pod" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-6bbd87b65b-mt2mz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.508018 master-0 kubenswrapper[15493]: I0216 17:08:59.507962 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.527747 master-0 kubenswrapper[15493]: I0216 17:08:59.527686 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.548080 master-0 kubenswrapper[15493]: I0216 17:08:59.547959 15493 status_manager.go:851] "Failed to get status for pod" 
podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-64bf6cdbbc-tpd6h\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.567950 master-0 kubenswrapper[15493]: I0216 17:08:59.567867 15493 status_manager.go:851] "Failed to get status for pod" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dcdb76cc6-5rcvl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.587678 master-0 kubenswrapper[15493]: I0216 17:08:59.587604 15493 status_manager.go:851] "Failed to get status for pod" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" pod="openshift-ingress-canary/ingress-canary-qqvg4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-qqvg4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.608584 master-0 kubenswrapper[15493]: I0216 17:08:59.608499 15493 status_manager.go:851] "Failed to get status for pod" podUID="80d3b238-70c3-4e71-96a1-99405352033f" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-74b6595c6d-pfzq2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.627772 master-0 kubenswrapper[15493]: I0216 17:08:59.627676 15493 status_manager.go:851] "Failed to get status for pod" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-546cc7d765-94nfl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.648347 master-0 kubenswrapper[15493]: I0216 17:08:59.648223 15493 status_manager.go:851] "Failed to get status for pod" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.668060 master-0 kubenswrapper[15493]: I0216 17:08:59.667988 15493 status_manager.go:851] "Failed to get status for pod" podUID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-flr86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.687976 master-0 kubenswrapper[15493]: I0216 17:08:59.687880 15493 status_manager.go:851] "Failed to get status for pod" podUID="ee84198d-6357-4429-a90c-455c3850a788" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-zcwwd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.707228 master-0 kubenswrapper[15493]: I0216 17:08:59.707154 15493 status_manager.go:851] "Failed to get status for pod" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" 
pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-6d5d8c8c95-kzfjw\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.727470 master-0 kubenswrapper[15493]: I0216 17:08:59.727398 15493 status_manager.go:851] "Failed to get status for pod" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-7cc9598d54-8j5rk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.748048 master-0 kubenswrapper[15493]: I0216 17:08:59.747975 15493 status_manager.go:851] "Failed to get status for pod" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-756d64c8c4-ln4wm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.768276 master-0 kubenswrapper[15493]: I0216 17:08:59.768164 15493 status_manager.go:851] "Failed to get status for pod" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-555857f695-nlrnr\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.788038 master-0 kubenswrapper[15493]: I0216 17:08:59.787972 15493 status_manager.go:851] "Failed to get status for pod" podUID="b3322fd3717f4aec0d8f54ec7862c07e" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/kube-rbac-proxy-crio-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.808170 master-0 kubenswrapper[15493]: I0216 17:08:59.808105 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.827424 master-0 kubenswrapper[15493]: I0216 17:08:59.827349 15493 status_manager.go:851] "Failed to get status for pod" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/pods/dns-operator-86b8869b79-nhxlp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.848022 master-0 kubenswrapper[15493]: I0216 17:08:59.847961 15493 status_manager.go:851] "Failed to get status for pod" podUID="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" pod="openshift-network-operator/iptables-alerter-czzz2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-czzz2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.867885 master-0 kubenswrapper[15493]: I0216 17:08:59.867818 15493 status_manager.go:851] "Failed to get status for pod" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" 
pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-fc4bf7f79-tqnlw\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.887799 master-0 kubenswrapper[15493]: I0216 17:08:59.887724 15493 status_manager.go:851] "Failed to get status for pod" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" pod="openshift-marketplace/redhat-operators-lnzfx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lnzfx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.907684 master-0 kubenswrapper[15493]: I0216 17:08:59.907620 15493 status_manager.go:851] "Failed to get status for pod" podUID="d020c902-2adb-4919-8dd9-0c2109830580" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-54984b6678-gp8gv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.928058 master-0 kubenswrapper[15493]: I0216 17:08:59.927988 15493 status_manager.go:851] "Failed to get status for pod" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" pod="openshift-console/downloads-dcd7b7d95-dhhfh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-dcd7b7d95-dhhfh\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.948362 master-0 kubenswrapper[15493]: I0216 17:08:59.948276 15493 status_manager.go:851] "Failed to get status for pod" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.949336 master-0 kubenswrapper[15493]: I0216 17:08:59.949283 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:08:59.949336 master-0 kubenswrapper[15493]: I0216 17:08:59.949330 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:08:59.968108 master-0 kubenswrapper[15493]: I0216 17:08:59.968042 15493 status_manager.go:851] "Failed to get status for pod" podUID="702322ac-7610-4568-9a68-b6acbd1f0c12" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/pods/machine-approver-8569dd85ff-4vxmz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:08:59.988234 master-0 kubenswrapper[15493]: I0216 17:08:59.988184 15493 status_manager.go:851] "Failed to get status for pod" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-7485d645b8-zxxwd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.007827 master-0 kubenswrapper[15493]: I0216 17:09:00.007772 15493 status_manager.go:851] "Failed to get status for pod" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-7d8f4c8c66-qjq9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.028945 master-0 kubenswrapper[15493]: I0216 17:09:00.028288 15493 status_manager.go:851] "Failed to get status for pod" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-7bc947fc7d-4j7pn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.047938 master-0 kubenswrapper[15493]: I0216 17:09:00.047871 15493 status_manager.go:851] "Failed to get status for pod" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-588944557d-5drhs\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.067938 master-0 kubenswrapper[15493]: I0216 17:09:00.067863 15493 status_manager.go:851] "Failed to get status for pod" podUID="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/pods/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.087638 master-0 kubenswrapper[15493]: I0216 17:09:00.087457 15493 status_manager.go:851] "Failed to get status for pod" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" pod="openshift-multus/network-metrics-daemon-279g6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-279g6\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.107614 master-0 kubenswrapper[15493]: I0216 17:09:00.107550 15493 status_manager.go:851] "Failed to get status for pod" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-cb4f7b4cf-6qrw5\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.127369 master-0 kubenswrapper[15493]: I0216 17:09:00.127324 15493 status_manager.go:851] "Failed to get status for pod" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-6bbd87b65b-mt2mz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.147856 master-0 kubenswrapper[15493]: I0216 17:09:00.147805 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.167944 master-0 kubenswrapper[15493]: I0216 17:09:00.167864 15493 status_manager.go:851] "Failed to get status for pod" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" 
pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-7777d5cc66-64vhv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.188311 master-0 kubenswrapper[15493]: I0216 17:09:00.188238 15493 status_manager.go:851] "Failed to get status for pod" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c696dbdcd-qrrc6\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.207370 master-0 kubenswrapper[15493]: I0216 17:09:00.207243 15493 status_manager.go:851] "Failed to get status for pod" podUID="39387549-c636-4bd4-b463-f6a93810f277" pod="openshift-network-node-identity/network-node-identity-hhcpr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-hhcpr\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.227575 master-0 kubenswrapper[15493]: I0216 17:09:00.227504 15493 status_manager.go:851] "Failed to get status for pod" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-695b766898-h94zg\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.248514 master-0 kubenswrapper[15493]: I0216 17:09:00.248452 15493 status_manager.go:851] "Failed to get status for pod" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-64bf6cdbbc-tpd6h\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.267807 master-0 kubenswrapper[15493]: I0216 17:09:00.267754 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.287844 master-0 kubenswrapper[15493]: I0216 17:09:00.287799 15493 status_manager.go:851] "Failed to get status for pod" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dcdb76cc6-5rcvl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.293861 master-0 kubenswrapper[15493]: I0216 17:09:00.293805 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:09:00.307688 master-0 kubenswrapper[15493]: I0216 17:09:00.307626 15493 status_manager.go:851] "Failed to get status for pod" podUID="404c402a-705f-4352-b9df-b89562070d9c" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-92rqx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.327664 master-0 kubenswrapper[15493]: I0216 17:09:00.327605 15493 status_manager.go:851] "Failed to get status for pod" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-686c884b4d-ksx48\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.347841 master-0 kubenswrapper[15493]: I0216 17:09:00.347744 15493 status_manager.go:851] "Failed to get status for pod" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" pod="openshift-ingress-canary/ingress-canary-qqvg4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-qqvg4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.367545 master-0 kubenswrapper[15493]: I0216 17:09:00.367456 15493 status_manager.go:851] "Failed to get status for pod" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" pod="openshift-network-diagnostics/network-check-target-vwvwx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-vwvwx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.387718 master-0 kubenswrapper[15493]: I0216 17:09:00.387594 15493 status_manager.go:851] "Failed to get status for pod" podUID="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-649c4f5445-vt6wb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.407205 master-0 kubenswrapper[15493]: I0216 17:09:00.407132 15493 status_manager.go:851] "Failed to get status for pod" podUID="80d3b238-70c3-4e71-96a1-99405352033f" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-74b6595c6d-pfzq2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.428325 master-0 kubenswrapper[15493]: I0216 17:09:00.428206 15493 status_manager.go:851] "Failed to get status for pod" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/pods/catalogd-controller-manager-67bc7c997f-mn6cr\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.433206 master-0 kubenswrapper[15493]: I0216 17:09:00.433143 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:09:00.448134 master-0 kubenswrapper[15493]: I0216 17:09:00.448001 15493 status_manager.go:851] "Failed to get status for pod" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-d8bf84b88-m66tx\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Feb 16 17:09:00.459802 master-0 kubenswrapper[15493]: E0216 17:09:00.459688 15493 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="7s" Feb 16 17:09:00.467381 master-0 kubenswrapper[15493]: I0216 17:09:00.467298 15493 status_manager.go:851] "Failed to get status for pod" podUID="4549ea98-7379-49e1-8452-5efb643137ca" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6fcf4c966-6bmf9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.471905 master-0 kubenswrapper[15493]: I0216 17:09:00.471822 15493 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-v8dr8 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Feb 16 17:09:00.471905 master-0 kubenswrapper[15493]: I0216 17:09:00.471830 15493 patch_prober.go:28] interesting pod/openshift-config-operator-7c6bdb986f-v8dr8 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" start-of-body= Feb 16 17:09:00.472135 master-0 kubenswrapper[15493]: I0216 17:09:00.471902 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Feb 16 17:09:00.472135 master-0 kubenswrapper[15493]: I0216 17:09:00.471963 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.8:8443/healthz\": dial tcp 10.128.0.8:8443: connect: connection refused" Feb 16 17:09:00.474323 master-0 kubenswrapper[15493]: I0216 17:09:00.474261 15493 patch_prober.go:28] interesting pod/dns-default-qcgxx container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.128.0.32:8181/ready\": dial tcp 10.128.0.32:8181: connect: connection refused" start-of-body= Feb 16 17:09:00.474468 master-0 kubenswrapper[15493]: I0216 17:09:00.474336 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" containerName="dns" probeResult="failure" output="Get \"http://10.128.0.32:8181/ready\": dial tcp 10.128.0.32:8181: connect: connection refused" Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: I0216 17:09:00.479050 15493 patch_prober.go:28] interesting pod/apiserver-fc4bf7f79-tqnlw container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]log ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]etcd 
excluded: ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]etcd-readiness excluded: ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]informer-sync ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/max-in-flight-filter ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/project.openshift.io-projectcache ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/openshift.io-startinformers ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: [-]shutdown failed: reason withheld Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: readyz check failed Feb 16 17:09:00.479120 master-0 kubenswrapper[15493]: I0216 17:09:00.479092 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: I0216 17:09:00.483988 15493 patch_prober.go:28] interesting pod/apiserver-66788cb45c-dp9bc container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]log ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]etcd excluded: ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]etcd-readiness excluded: ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]informer-sync ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]poststarthook/max-in-flight-filter ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: [-]shutdown failed: reason withheld Feb 16 17:09:00.484056 master-0 kubenswrapper[15493]: readyz check failed Feb 16 17:09:00.484750 master-0 
kubenswrapper[15493]: I0216 17:09:00.484078 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:09:00.487687 master-0 kubenswrapper[15493]: I0216 17:09:00.487606 15493 status_manager.go:851] "Failed to get status for pod" podUID="48801344-a48a-493e-aea4-19d998d0b708" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-676cd8b9b5-cp9rb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.508547 master-0 kubenswrapper[15493]: I0216 17:09:00.508431 15493 status_manager.go:851] "Failed to get status for pod" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-755d954778-lf4cb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.528384 master-0 kubenswrapper[15493]: I0216 17:09:00.528269 15493 status_manager.go:851] "Failed to get status for pod" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-5f5f84757d-ktmm9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.548009 master-0 kubenswrapper[15493]: I0216 17:09:00.547948 15493 status_manager.go:851] "Failed to get status for pod" podUID="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" pod="openshift-dns/node-resolver-vfxj4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-vfxj4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.568185 master-0 kubenswrapper[15493]: I0216 17:09:00.568066 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84976bb859-rsnqc\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.575402 master-0 kubenswrapper[15493]: I0216 17:09:00.575314 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:09:00.587939 master-0 kubenswrapper[15493]: I0216 17:09:00.587740 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.608385 master-0 kubenswrapper[15493]: I0216 17:09:00.608274 15493 status_manager.go:851] "Failed to get status for pod" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-c588d8cb4-wjr7d\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.628259 master-0 kubenswrapper[15493]: I0216 17:09:00.628174 15493 status_manager.go:851] "Failed to get status for pod" podUID="9c48005e-c4df-4332-87fc-ec028f2c6921" pod="openshift-machine-config-operator/machine-config-server-2ws9r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-2ws9r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.648574 master-0 kubenswrapper[15493]: I0216 17:09:00.648461 15493 status_manager.go:851] "Failed to get status for pod" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" pod="openshift-marketplace/redhat-marketplace-4kd66" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4kd66\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.667725 master-0 kubenswrapper[15493]: I0216 17:09:00.667622 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab80e0fb-09dd-4c93-b235-1487024105d2" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-control-plane-bb7ffbb8d-lzgs9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.688195 master-0 kubenswrapper[15493]: I0216 17:09:00.688068 15493 status_manager.go:851] "Failed to get status for pod" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" pod="openshift-dns/dns-default-qcgxx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/dns-default-qcgxx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.708104 master-0 kubenswrapper[15493]: I0216 17:09:00.707971 15493 status_manager.go:851] "Failed to get status for pod" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b56bd877c-p7k2k\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.728521 master-0 kubenswrapper[15493]: I0216 17:09:00.728400 15493 status_manager.go:851] "Failed to get status for pod" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/pods/cluster-olm-operator-55b69c6c48-7chjv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.748577 master-0 kubenswrapper[15493]: I0216 17:09:00.748473 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.768286 master-0 kubenswrapper[15493]: I0216 17:09:00.768193 15493 status_manager.go:851] "Failed to get status for pod" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/pods/operator-controller-controller-manager-85c9b89969-lj58b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.788084 master-0 kubenswrapper[15493]: I0216 17:09:00.788013 15493 status_manager.go:851] "Failed to get status for pod" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-f8cbff74c-spxm9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.808412 master-0 kubenswrapper[15493]: I0216 17:09:00.808341 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" pod="openshift-marketplace/certified-operators-z69zq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z69zq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.827888 master-0 kubenswrapper[15493]: I0216 17:09:00.827812 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.848364 master-0 kubenswrapper[15493]: I0216 17:09:00.848290 15493 status_manager.go:851] "Failed to get status for pod" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-7485d55966-sgmpf\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.868396 master-0 kubenswrapper[15493]: I0216 17:09:00.868324 15493 status_manager.go:851] "Failed to get status for pod" podUID="a94f9b8e-b020-4aab-8373-6c056ec07464" pod="openshift-monitoring/node-exporter-8256c" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/node-exporter-8256c\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.888776 master-0 kubenswrapper[15493]: I0216 17:09:00.888655 15493 status_manager.go:851] "Failed to get status for pod" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-595c8f9ff-b9nvq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.908238 master-0 kubenswrapper[15493]: I0216 17:09:00.908166 15493 status_manager.go:851] "Failed to get status for pod" podUID="e1a7c783-2e23-4284-b648-147984cf1022" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7fc9897cf8-9rjwd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.928033 master-0 kubenswrapper[15493]: I0216 17:09:00.927910 15493 status_manager.go:851] "Failed to get status for pod" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-98q6v\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.948317 master-0 kubenswrapper[15493]: I0216 17:09:00.948227 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:00.953477 master-0 kubenswrapper[15493]: I0216 17:09:00.953417 15493 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:09:00.968084 master-0 kubenswrapper[15493]: I0216 17:09:00.967976 15493 status_manager.go:851] "Failed to get status for pod" podUID="29402454-a920-471e-895e-764235d16eb4" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-5dc4688546-pl7r5\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.056817 master-0 kubenswrapper[15493]: I0216 17:09:01.056691 15493 status_manager.go:851] "Failed to get status for pod" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-7485d55966-sgmpf\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.058638 master-0 kubenswrapper[15493]: I0216 17:09:01.057684 15493 status_manager.go:851] "Failed to get status for pod" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-controller/pods/operator-controller-controller-manager-85c9b89969-lj58b\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.058638 master-0 kubenswrapper[15493]: I0216 17:09:01.058552 15493 status_manager.go:851] "Failed to get status for pod" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-f8cbff74c-spxm9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.059519 master-0 kubenswrapper[15493]: I0216 17:09:01.059433 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" pod="openshift-marketplace/certified-operators-z69zq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-z69zq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.067885 master-0 kubenswrapper[15493]: I0216 17:09:01.067600 15493 status_manager.go:851] "Failed to get status for pod" podUID="7adecad495595c43c57c30abd350e987" pod="openshift-etcd/etcd-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-etcd/pods/etcd-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 
16 17:09:01.087989 master-0 kubenswrapper[15493]: I0216 17:09:01.087847 15493 status_manager.go:851] "Failed to get status for pod" podUID="a94f9b8e-b020-4aab-8373-6c056ec07464" pod="openshift-monitoring/node-exporter-8256c" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/node-exporter-8256c\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.108838 master-0 kubenswrapper[15493]: I0216 17:09:01.108699 15493 status_manager.go:851] "Failed to get status for pod" podUID="29402454-a920-471e-895e-764235d16eb4" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca-operator/pods/service-ca-operator-5dc4688546-pl7r5\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.128827 master-0 kubenswrapper[15493]: I0216 17:09:01.128691 15493 status_manager.go:851] "Failed to get status for pod" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-595c8f9ff-b9nvq\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.148471 master-0 kubenswrapper[15493]: I0216 17:09:01.148357 15493 status_manager.go:851] "Failed to get status for pod" podUID="e1a7c783-2e23-4284-b648-147984cf1022" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7fc9897cf8-9rjwd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.167907 master-0 kubenswrapper[15493]: I0216 17:09:01.167799 15493 status_manager.go:851] "Failed to get status for pod" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-98q6v\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.188359 master-0 kubenswrapper[15493]: I0216 17:09:01.188264 15493 status_manager.go:851] "Failed to get status for pod" podUID="b8fa563c7331931f00ce0006e522f0f1" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.207961 master-0 kubenswrapper[15493]: I0216 17:09:01.207863 15493 status_manager.go:851] "Failed to get status for pod" podUID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-node-flr86\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.228284 master-0 kubenswrapper[15493]: I0216 17:09:01.228151 15493 status_manager.go:851] "Failed to get status for pod" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-546cc7d765-94nfl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.247735 
master-0 kubenswrapper[15493]: I0216 17:09:01.247588 15493 status_manager.go:851] "Failed to get status for pod" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" pod="openshift-monitoring/alertmanager-main-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.268647 master-0 kubenswrapper[15493]: I0216 17:09:01.268467 15493 status_manager.go:851] "Failed to get status for pod" podUID="ee84198d-6357-4429-a90c-455c3850a788" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-67fd9768b5-zcwwd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.287844 master-0 kubenswrapper[15493]: I0216 17:09:01.287725 15493 status_manager.go:851] "Failed to get status for pod" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-6d5d8c8c95-kzfjw\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.308022 master-0 kubenswrapper[15493]: I0216 17:09:01.307899 15493 status_manager.go:851] "Failed to get status for pod" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-7cc9598d54-8j5rk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.328334 master-0 kubenswrapper[15493]: I0216 17:09:01.328229 15493 status_manager.go:851] "Failed to get status for pod" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/cluster-monitoring-operator-756d64c8c4-ln4wm\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.348353 master-0 kubenswrapper[15493]: I0216 17:09:01.348203 15493 status_manager.go:851] "Failed to get status for pod" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-admission-controller-6d678b8d67-5n9cl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.368314 master-0 kubenswrapper[15493]: I0216 17:09:01.368238 15493 status_manager.go:851] "Failed to get status for pod" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-555857f695-nlrnr\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.388344 master-0 kubenswrapper[15493]: I0216 17:09:01.388245 15493 status_manager.go:851] "Failed to get status for pod" podUID="b3322fd3717f4aec0d8f54ec7862c07e" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/kube-rbac-proxy-crio-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.407664 master-0 
kubenswrapper[15493]: I0216 17:09:01.407564 15493 status_manager.go:851] "Failed to get status for pod" podUID="d020c902-2adb-4919-8dd9-0c2109830580" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-54984b6678-gp8gv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.428386 master-0 kubenswrapper[15493]: I0216 17:09:01.428315 15493 status_manager.go:851] "Failed to get status for pod" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/pods/dns-operator-86b8869b79-nhxlp\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.447786 master-0 kubenswrapper[15493]: I0216 17:09:01.447695 15493 status_manager.go:851] "Failed to get status for pod" podUID="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" pod="openshift-network-operator/iptables-alerter-czzz2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/iptables-alerter-czzz2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.467910 master-0 kubenswrapper[15493]: I0216 17:09:01.467791 15493 status_manager.go:851] "Failed to get status for pod" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-fc4bf7f79-tqnlw\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.488273 master-0 kubenswrapper[15493]: I0216 17:09:01.488155 15493 status_manager.go:851] "Failed to get status for pod" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" pod="openshift-marketplace/redhat-operators-lnzfx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-lnzfx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.507762 master-0 kubenswrapper[15493]: I0216 17:09:01.507667 15493 status_manager.go:851] "Failed to get status for pod" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-7485d645b8-zxxwd\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.527901 master-0 kubenswrapper[15493]: I0216 17:09:01.527802 15493 status_manager.go:851] "Failed to get status for pod" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" pod="openshift-console/downloads-dcd7b7d95-dhhfh" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-dcd7b7d95-dhhfh\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.548390 master-0 kubenswrapper[15493]: I0216 17:09:01.548299 15493 status_manager.go:851] "Failed to get status for pod" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.568390 master-0 kubenswrapper[15493]: I0216 17:09:01.568293 15493 status_manager.go:851] "Failed to get status for pod" podUID="702322ac-7610-4568-9a68-b6acbd1f0c12" 
pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-machine-approver/pods/machine-approver-8569dd85ff-4vxmz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.588421 master-0 kubenswrapper[15493]: I0216 17:09:01.588299 15493 status_manager.go:851] "Failed to get status for pod" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" pod="openshift-multus/network-metrics-daemon-279g6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-279g6\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.609067 master-0 kubenswrapper[15493]: I0216 17:09:01.608914 15493 status_manager.go:851] "Failed to get status for pod" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-7d8f4c8c66-qjq9w\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.628189 master-0 kubenswrapper[15493]: I0216 17:09:01.628085 15493 status_manager.go:851] "Failed to get status for pod" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-7bc947fc7d-4j7pn\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.648374 master-0 kubenswrapper[15493]: I0216 17:09:01.648271 15493 status_manager.go:851] "Failed to get status for pod" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-588944557d-5drhs\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.668380 master-0 kubenswrapper[15493]: I0216 17:09:01.668244 15493 status_manager.go:851] "Failed to get status for pod" podUID="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/pods/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.688559 master-0 kubenswrapper[15493]: I0216 17:09:01.688430 15493 status_manager.go:851] "Failed to get status for pod" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-5c696dbdcd-qrrc6\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.708106 master-0 kubenswrapper[15493]: I0216 17:09:01.707915 15493 status_manager.go:851] "Failed to get status for pod" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-cb4f7b4cf-6qrw5\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.728453 master-0 kubenswrapper[15493]: 
I0216 17:09:01.728357 15493 status_manager.go:851] "Failed to get status for pod" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-6bbd87b65b-mt2mz\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.748299 master-0 kubenswrapper[15493]: I0216 17:09:01.748178 15493 status_manager.go:851] "Failed to get status for pod" podUID="1ea5bf67-1fd1-488a-a440-00bb9a8533d0" pod="openshift-kube-apiserver/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.768939 master-0 kubenswrapper[15493]: I0216 17:09:01.768839 15493 status_manager.go:851] "Failed to get status for pod" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console-operator/pods/console-operator-7777d5cc66-64vhv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.788822 master-0 kubenswrapper[15493]: I0216 17:09:01.788719 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab5760f1-b2e0-4138-9383-e4827154ac50" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/pods/multus-additional-cni-plugins-rjdlk\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.808323 master-0 kubenswrapper[15493]: I0216 17:09:01.808189 15493 status_manager.go:851] "Failed to get status for pod" podUID="39387549-c636-4bd4-b463-f6a93810f277" pod="openshift-network-node-identity/network-node-identity-hhcpr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/pods/network-node-identity-hhcpr\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.828093 master-0 kubenswrapper[15493]: I0216 17:09:01.828010 15493 status_manager.go:851] "Failed to get status for pod" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-695b766898-h94zg\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.848104 master-0 kubenswrapper[15493]: I0216 17:09:01.847993 15493 status_manager.go:851] "Failed to get status for pod" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-64bf6cdbbc-tpd6h\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.868249 master-0 kubenswrapper[15493]: I0216 17:09:01.868063 15493 status_manager.go:851] "Failed to get status for pod" podUID="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/pods/cluster-version-operator-649c4f5445-vt6wb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.887982 master-0 kubenswrapper[15493]: I0216 17:09:01.887839 15493 
status_manager.go:851] "Failed to get status for pod" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dcdb76cc6-5rcvl\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.907727 master-0 kubenswrapper[15493]: I0216 17:09:01.907614 15493 status_manager.go:851] "Failed to get status for pod" podUID="404c402a-705f-4352-b9df-b89562070d9c" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/machine-api-operator-bd7dd5c46-92rqx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.928691 master-0 kubenswrapper[15493]: I0216 17:09:01.928535 15493 status_manager.go:851] "Failed to get status for pod" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-controller-686c884b4d-ksx48\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.948410 master-0 kubenswrapper[15493]: I0216 17:09:01.948144 15493 status_manager.go:851] "Failed to get status for pod" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" pod="openshift-ingress-canary/ingress-canary-qqvg4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-qqvg4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.967806 master-0 kubenswrapper[15493]: I0216 17:09:01.967711 15493 status_manager.go:851] "Failed to get status for pod" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" pod="openshift-network-diagnostics/network-check-target-vwvwx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-target-vwvwx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:01.988488 master-0 kubenswrapper[15493]: I0216 17:09:01.988372 15493 status_manager.go:851] "Failed to get status for pod" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-catalogd/pods/catalogd-controller-manager-67bc7c997f-mn6cr\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.009190 master-0 kubenswrapper[15493]: I0216 17:09:02.009030 15493 status_manager.go:851] "Failed to get status for pod" podUID="80d3b238-70c3-4e71-96a1-99405352033f" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/csi-snapshot-controller-74b6595c6d-pfzq2\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.028139 master-0 kubenswrapper[15493]: I0216 17:09:02.028023 15493 status_manager.go:851] "Failed to get status for pod" podUID="48801344-a48a-493e-aea4-19d998d0b708" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-service-ca/pods/service-ca-676cd8b9b5-cp9rb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.048277 master-0 
kubenswrapper[15493]: I0216 17:09:02.048177 15493 status_manager.go:851] "Failed to get status for pod" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-d8bf84b88-m66tx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.068404 master-0 kubenswrapper[15493]: I0216 17:09:02.068328 15493 status_manager.go:851] "Failed to get status for pod" podUID="4549ea98-7379-49e1-8452-5efb643137ca" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-6fcf4c966-6bmf9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.088735 master-0 kubenswrapper[15493]: I0216 17:09:02.088626 15493 status_manager.go:851] "Failed to get status for pod" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress-operator/pods/ingress-operator-c588d8cb4-wjr7d\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.108260 master-0 kubenswrapper[15493]: I0216 17:09:02.108184 15493 status_manager.go:851] "Failed to get status for pod" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/pods/authentication-operator-755d954778-lf4cb\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.128222 master-0 kubenswrapper[15493]: I0216 17:09:02.128089 15493 status_manager.go:851] "Failed to get status for pod" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-5f5f84757d-ktmm9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.148666 master-0 kubenswrapper[15493]: I0216 17:09:02.148548 15493 status_manager.go:851] "Failed to get status for pod" podUID="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" pod="openshift-dns/node-resolver-vfxj4" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/node-resolver-vfxj4\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.167993 master-0 kubenswrapper[15493]: I0216 17:09:02.167892 15493 status_manager.go:851] "Failed to get status for pod" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-operator-84976bb859-rsnqc\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.188072 master-0 kubenswrapper[15493]: I0216 17:09:02.187970 15493 status_manager.go:851] "Failed to get status for pod" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.208158 master-0 kubenswrapper[15493]: I0216 17:09:02.208035 15493 status_manager.go:851] "Failed to get status for pod" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" pod="openshift-marketplace/redhat-marketplace-4kd66" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-4kd66\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.228076 master-0 kubenswrapper[15493]: I0216 17:09:02.227989 15493 status_manager.go:851] "Failed to get status for pod" podUID="9c48005e-c4df-4332-87fc-ec028f2c6921" pod="openshift-machine-config-operator/machine-config-server-2ws9r" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-server-2ws9r\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.248575 master-0 kubenswrapper[15493]: I0216 17:09:02.248473 15493 status_manager.go:851] "Failed to get status for pod" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-olm-operator/pods/cluster-olm-operator-55b69c6c48-7chjv\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.268326 master-0 kubenswrapper[15493]: I0216 17:09:02.268200 15493 status_manager.go:851] "Failed to get status for pod" podUID="ab80e0fb-09dd-4c93-b235-1487024105d2" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ovn-kubernetes/pods/ovnkube-control-plane-bb7ffbb8d-lzgs9\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.287650 master-0 kubenswrapper[15493]: I0216 17:09:02.287576 15493 status_manager.go:851] "Failed to get status for pod" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" pod="openshift-dns/dns-default-qcgxx" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns/pods/dns-default-qcgxx\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.308348 master-0 kubenswrapper[15493]: I0216 17:09:02.308265 15493 status_manager.go:851] "Failed to get status for pod" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/olm-operator-6b56bd877c-p7k2k\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.328906 master-0 kubenswrapper[15493]: I0216 17:09:02.328768 15493 status_manager.go:851] "Failed to get status for pod" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" pod="openshift-marketplace/community-operators-7w4km" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-7w4km\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:09:02.391948 master-0 kubenswrapper[15493]: I0216 17:09:02.391847 15493 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:09:02.669026 master-0 kubenswrapper[15493]: I0216 17:09:02.668946 15493 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Liveness probe status=failure output="Get \"https://192.168.32.10:9980/healthz\": dial 
tcp 192.168.32.10:9980: connect: connection refused" start-of-body= Feb 16 17:09:02.669026 master-0 kubenswrapper[15493]: I0216 17:09:02.669015 15493 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-master-0" podUID="7adecad495595c43c57c30abd350e987" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/healthz\": dial tcp 192.168.32.10:9980: connect: connection refused" Feb 16 17:09:02.669281 master-0 kubenswrapper[15493]: I0216 17:09:02.669051 15493 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body= Feb 16 17:09:02.669281 master-0 kubenswrapper[15493]: I0216 17:09:02.669152 15493 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="7adecad495595c43c57c30abd350e987" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" Feb 16 17:09:03.023521 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 17:09:03.059261 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 17:09:03.059671 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 17:09:03.064190 master-0 systemd[1]: kubelet.service: Consumed 1min 19.042s CPU time. -- Boot 16009b8c65114dd49a27539c3ce647e4 -- Feb 16 17:13:50.154538 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 17:13:50.754665 master-0 kubenswrapper[3171]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:13:50.754665 master-0 kubenswrapper[3171]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 17:13:50.754665 master-0 kubenswrapper[3171]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:13:50.754665 master-0 kubenswrapper[3171]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:13:50.754665 master-0 kubenswrapper[3171]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 17:13:50.754665 master-0 kubenswrapper[3171]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: I0216 17:13:50.755647 3171 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763723 3171 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763744 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763754 3171 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763763 3171 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763773 3171 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763782 3171 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763792 3171 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763801 3171 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763809 3171 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763818 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763826 3171 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763834 3171 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763842 3171 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763851 3171 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763859 3171 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763867 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763876 3171 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:13:50.765593 master-0 kubenswrapper[3171]: W0216 17:13:50.763884 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763892 3171 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763900 3171 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763908 3171 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763917 3171 feature_gate.go:330] unrecognized feature gate: ManagedBootImages 
Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763926 3171 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763936 3171 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763944 3171 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763952 3171 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763989 3171 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.763998 3171 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764009 3171 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764020 3171 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764029 3171 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764038 3171 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764053 3171 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764064 3171 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764075 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764083 3171 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764091 3171 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:13:50.766185 master-0 kubenswrapper[3171]: W0216 17:13:50.764099 3171 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764108 3171 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764115 3171 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764123 3171 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764134 3171 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764143 3171 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764152 3171 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764160 3171 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764168 3171 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764175 3171 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764183 3171 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764191 3171 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764199 3171 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764206 3171 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764214 3171 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764222 3171 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764236 3171 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764247 3171 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764256 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:13:50.766790 master-0 kubenswrapper[3171]: W0216 17:13:50.764264 3171 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764273 3171 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764282 3171 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764291 3171 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764299 3171 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764306 3171 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764314 3171 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764325 3171 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764338 3171 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764346 3171 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764355 3171 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764364 3171 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764373 3171 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764382 3171 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764390 3171 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: W0216 17:13:50.764398 3171 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: I0216 17:13:50.764593 3171 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: I0216 17:13:50.764616 3171 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: I0216 17:13:50.766246 3171 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: I0216 17:13:50.766262 3171 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: I0216 17:13:50.766275 3171 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 17:13:50.767362 master-0 kubenswrapper[3171]: I0216 17:13:50.766286 3171 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766299 3171 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766311 3171 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766322 3171 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766371 3171 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766381 3171 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766391 3171 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766401 3171 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766411 3171 flags.go:64] FLAG: --cgroup-root=""
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766421 3171 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766430 3171 flags.go:64] FLAG: --client-ca-file=""
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766439 3171 flags.go:64] FLAG: --cloud-config=""
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766448 3171 flags.go:64] FLAG: --cloud-provider=""
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766458 3171 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766472 3171 flags.go:64] FLAG: --cluster-domain=""
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766481 3171 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766490 3171 flags.go:64] FLAG: --config-dir=""
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766499 3171 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766509 3171 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766523 3171 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766533 3171 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766543 3171 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766553 3171 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766563 3171 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 17:13:50.768886 master-0 kubenswrapper[3171]: I0216 17:13:50.766573 3171 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766582 3171 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766592 3171 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766601 3171 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766613 3171 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766622 3171 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766632 3171 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766641 3171 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766650 3171 flags.go:64] FLAG: --enable-server="true"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766659 3171 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766674 3171 flags.go:64] FLAG: --event-burst="100"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766684 3171 flags.go:64] FLAG: --event-qps="50"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766693 3171 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766703 3171 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766713 3171 flags.go:64] FLAG: --eviction-hard=""
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766724 3171 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766734 3171 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766744 3171 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766753 3171 flags.go:64] FLAG: --eviction-soft=""
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766762 3171 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766771 3171 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766781 3171 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766792 3171 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766802 3171 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766811 3171 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 17:13:50.769596 master-0 kubenswrapper[3171]: I0216 17:13:50.766821 3171 flags.go:64] FLAG: --feature-gates=""
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766833 3171 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766844 3171 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766854 3171 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766864 3171 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766874 3171 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766883 3171 flags.go:64] FLAG: --help="false"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766893 3171 flags.go:64] FLAG: --hostname-override=""
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766902 3171 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766911 3171 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766921 3171 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766930 3171 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766939 3171 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766948 3171 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766983 3171 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.766993 3171 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767003 3171 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767012 3171 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767021 3171 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767031 3171 flags.go:64] FLAG: --kube-reserved=""
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767040 3171 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767059 3171 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767069 3171 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767080 3171 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767089 3171 flags.go:64] FLAG: --lock-file=""
Feb 16 17:13:50.770324 master-0 kubenswrapper[3171]: I0216 17:13:50.767098 3171 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767108 3171 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767117 3171 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767131 3171 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767140 3171 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767150 3171 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767159 3171 flags.go:64] FLAG: --logging-format="text"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767169 3171 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767179 3171 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767189 3171 flags.go:64] FLAG: --manifest-url=""
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767198 3171 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767211 3171 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767221 3171 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767231 3171 flags.go:64] FLAG: --max-pods="110"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767241 3171 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767250 3171 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767259 3171 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767269 3171 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767278 3171 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767288 3171 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767297 3171 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767318 3171 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767327 3171 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767337 3171 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 17:13:50.771062 master-0 kubenswrapper[3171]: I0216 17:13:50.767346 3171 flags.go:64] FLAG: --pod-cidr=""
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767355 3171 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767371 3171 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767382 3171 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767392 3171 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767401 3171 flags.go:64] FLAG: --port="10250"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767411 3171 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767421 3171 flags.go:64] FLAG: --provider-id=""
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767442 3171 flags.go:64] FLAG: --qos-reserved=""
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767452 3171 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767462 3171 flags.go:64] FLAG: --register-node="true"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767471 3171 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767481 3171 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767496 3171 flags.go:64] FLAG: --registry-burst="10"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767506 3171 flags.go:64] FLAG: --registry-qps="5"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767515 3171 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767524 3171 flags.go:64] FLAG: --reserved-memory=""
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767536 3171 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767545 3171 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767555 3171 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767565 3171 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767574 3171 flags.go:64] FLAG: --runonce="false"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767583 3171 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767592 3171 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767602 3171 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 17:13:50.772671 master-0 kubenswrapper[3171]: I0216 17:13:50.767612 3171 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767620 3171 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767630 3171 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767639 3171 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767651 3171 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767663 3171 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767674 3171 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767685 3171 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767696 3171 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767708 3171 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767721 3171 flags.go:64] FLAG: --system-cgroups=""
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767732 3171 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767751 3171 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767763 3171 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767774 3171 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767791 3171 flags.go:64] FLAG: --tls-min-version=""
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767802 3171 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767811 3171 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767820 3171 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767830 3171 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767840 3171 flags.go:64] FLAG: --v="2"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767856 3171 flags.go:64] FLAG: --version="false"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767867 3171 flags.go:64] FLAG: --vmodule=""
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767878 3171 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: I0216 17:13:50.767888 3171 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 17:13:50.774283 master-0 kubenswrapper[3171]: W0216 17:13:50.768155 3171 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768167 3171 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768177 3171 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768187 3171 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768196 3171 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768204 3171 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768213 3171 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768221 3171 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768229 3171 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768237 3171 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768244 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768252 3171 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768260 3171 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768268 3171 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768276 3171 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768283 3171 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768295 3171 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768305 3171 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768315 3171 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768324 3171 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:13:50.774912 master-0 kubenswrapper[3171]: W0216 17:13:50.768333 3171 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768342 3171 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768351 3171 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768360 3171 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768368 3171 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768377 3171 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768386 3171 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768394 3171 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768407 3171 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768415 3171 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768427 3171 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768436 3171 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768444 3171 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768451 3171 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768459 3171 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768467 3171 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768475 3171 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768485 3171 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768495 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768509 3171 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:13:50.775537 master-0 kubenswrapper[3171]: W0216 17:13:50.768526 3171 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768536 3171 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768547 3171 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768556 3171 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768568 3171 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768577 3171 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768588 3171 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768598 3171 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768608 3171 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768618 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768626 3171 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768634 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768642 3171 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768650 3171 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768657 3171 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768665 3171 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768673 3171 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768681 3171 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768690 3171 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768701 3171 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:13:50.776144 master-0 kubenswrapper[3171]: W0216 17:13:50.768714 3171 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768724 3171 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768734 3171 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768744 3171 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768752 3171 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768760 3171 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768770 3171 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768778 3171 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768786 3171 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768794 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768802 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: W0216 17:13:50.768809 3171 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:13:50.776615 master-0 kubenswrapper[3171]: I0216 17:13:50.768835 3171 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:13:50.778529 master-0 kubenswrapper[3171]: I0216 17:13:50.777983 3171 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 16 17:13:50.778529 master-0 kubenswrapper[3171]: I0216 17:13:50.778529 3171 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 17:13:50.778668 master-0 kubenswrapper[3171]: W0216 17:13:50.778650 3171 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:13:50.778668 master-0 kubenswrapper[3171]: W0216 17:13:50.778662 3171 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:13:50.778668 master-0 kubenswrapper[3171]: W0216 17:13:50.778666 3171 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778670 3171 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778674 3171 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778678 3171 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778683 3171 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778687 3171 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778690 3171 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778694 3171 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778697 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778701 3171 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778705 3171 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778709 3171 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778712 3171 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778716 3171 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778720 3171 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778725 3171 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778733 3171 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778737 3171 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778741 3171 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778746 3171 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:13:50.778762 master-0 kubenswrapper[3171]: W0216 17:13:50.778750 3171 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778753 3171 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778757 3171 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778761 3171 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778765 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778768 3171 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778772 3171 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778775 3171 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778780 3171 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778783 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778787 3171 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778791 3171 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778798 3171 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778802 3171 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778806 3171 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778809 3171 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778812 3171 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778816 3171 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778820 3171 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778823 3171 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:13:50.779317 master-0 kubenswrapper[3171]: W0216 17:13:50.778827 3171 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778830 3171 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778834 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778837 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778841 3171 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778845 3171 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778848 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778852 3171 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778855 3171 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778859 3171 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778863 3171 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778866 3171 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778869 3171 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778873 3171 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778878 3171 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778882 3171 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778886 3171 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778890 3171 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778894 3171 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778897 3171 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:13:50.779805 master-0 kubenswrapper[3171]: W0216 17:13:50.778901 3171 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.778906 3171 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.778910 3171 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.778915 3171 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.778920 3171 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.778924 3171 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.778928 3171 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.778933 3171 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.778937 3171 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.778942 3171 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: I0216 17:13:50.778950 3171 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.779087 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.779096 3171 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.779101 3171 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:13:50.780347 master-0 kubenswrapper[3171]: W0216 17:13:50.779105 3171 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779109 3171 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779112 3171 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779116 3171 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779120 3171 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779123 3171 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779127 3171 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779130 3171 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779134 3171 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779137 3171 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779141 3171 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779145 3171 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779149 3171 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779153 3171 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779157 3171 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779161 3171 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779165 3171 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779168 3171 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779172 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779176 3171 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:13:50.780699 master-0 kubenswrapper[3171]: W0216 17:13:50.779179 3171 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779183 3171 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779187 3171 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779191 3171 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779198 3171 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779203 3171 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779207 3171 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779210 3171 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779214 3171 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779218 3171 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779222 3171 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779226 3171 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779230 3171 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779235 3171 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779239 3171 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779243 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779248 3171 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779252 3171 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:13:50.781199 master-0 kubenswrapper[3171]: W0216 17:13:50.779256 3171 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779260 3171 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779264 3171 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779267 3171 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779271 3171 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779275 3171 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779278 3171 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779282 3171 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779286 3171 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779289 3171 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779293 3171 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779296 3171 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779300 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779304 3171 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779308 3171 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779311 3171 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779315 3171 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779318 3171 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779322 3171 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779325 3171 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:13:50.781665 master-0 kubenswrapper[3171]: W0216 17:13:50.779329 3171 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779332 3171 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779336 3171 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779339 3171 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779343 3171 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779346 3171 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779350 3171 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779354 3171 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779358 3171 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779362 3171 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: W0216 17:13:50.779365 3171 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: I0216 17:13:50.779372 3171 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:13:50.782184 master-0 kubenswrapper[3171]: I0216 17:13:50.780552 3171 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 17:13:50.783882 master-0 kubenswrapper[3171]: I0216 17:13:50.783846 3171 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 17:13:50.784006 master-0 kubenswrapper[3171]: I0216 17:13:50.783984 3171 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
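The certificate lines just above, and the certificate_manager lines that follow, show the kubelet loading its current client certificate and then scheduling a renewal well before expiry, with a per-node random offset so a fleet does not hit the CSR API at the same instant. A minimal sketch of that scheduling policy in Go, assuming the jitter lands at roughly 70-90% of the validity window and assuming a 24-hour certificate (only the expiry time appears in this log; the function name is illustrative, not the upstream API):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a renewal time at a random point in roughly the
// 70-90% region of the certificate's validity window. This mirrors the
// jittered-deadline idea behind the "rotation deadline is ..." log line;
// the exact upstream constants are an assumption here.
func rotationDeadline(notBefore, notAfter time.Time, r *rand.Rand) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*r.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry copied from the log below; issuance assumed 24h earlier.
	notBefore := time.Date(2026, 2, 16, 16, 50, 49, 0, time.UTC)
	notAfter := time.Date(2026, 2, 17, 16, 50, 49, 0, time.UTC)
	r := rand.New(rand.NewSource(time.Now().UnixNano()))

	deadline := rotationDeadline(notBefore, notAfter, r)
	fmt.Println("rotation deadline:", deadline)
	// The kubelet then sleeps until the deadline, which is what the
	// "Waiting 19h37m33s for next certificate rotation" line reports.
	fmt.Println("waiting:", time.Until(deadline))
}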
Feb 16 17:13:50.785849 master-0 kubenswrapper[3171]: I0216 17:13:50.785821 3171 server.go:997] "Starting client certificate rotation" Feb 16 17:13:50.785849 master-0 kubenswrapper[3171]: I0216 17:13:50.785841 3171 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 17:13:50.786435 master-0 kubenswrapper[3171]: I0216 17:13:50.786270 3171 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 12:51:23.926351832 +0000 UTC Feb 16 17:13:50.786491 master-0 kubenswrapper[3171]: I0216 17:13:50.786423 3171 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h37m33.139934028s for next certificate rotation Feb 16 17:13:50.819363 master-0 kubenswrapper[3171]: I0216 17:13:50.819267 3171 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:13:50.822609 master-0 kubenswrapper[3171]: I0216 17:13:50.822526 3171 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:13:50.848173 master-0 kubenswrapper[3171]: I0216 17:13:50.848044 3171 log.go:25] "Validated CRI v1 runtime API" Feb 16 17:13:50.896220 master-0 kubenswrapper[3171]: I0216 17:13:50.896171 3171 log.go:25] "Validated CRI v1 image API" Feb 16 17:13:50.898536 master-0 kubenswrapper[3171]: I0216 17:13:50.898501 3171 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 17:13:50.904383 master-0 kubenswrapper[3171]: I0216 17:13:50.904335 3171 fs.go:135] Filesystem UUIDs: map[35a0b0cc-84b1-4374-a18a-0f49ad7a8333:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 16 17:13:50.904383 master-0 kubenswrapper[3171]: I0216 17:13:50.904364 3171 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Feb 16 17:13:50.922461 master-0 kubenswrapper[3171]: I0216 17:13:50.922150 3171 manager.go:217] Machine: {Timestamp:2026-02-16 17:13:50.919760432 +0000 UTC m=+0.588615708 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f BootID:16009b8c-6511-4dd4-9a27-539c3ce647e4 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 
Feb 16 17:13:50.922461 master-0 kubenswrapper[3171]: I0216 17:13:50.922150 3171 manager.go:217] Machine: {Timestamp:2026-02-16 17:13:50.919760432 +0000 UTC m=+0.588615708 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f BootID:16009b8c-6511-4dd4-9a27-539c3ce647e4 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:52:47:03:db:66:8a Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:b9:e2 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:4a:2e:ce Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:02:c0:82:fb:4a:f4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 17:13:50.922461 master-0 kubenswrapper[3171]: I0216 17:13:50.922410 3171 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 16 17:13:50.922679 master-0 kubenswrapper[3171]: I0216 17:13:50.922597 3171 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 17:13:50.922938 master-0 kubenswrapper[3171]: I0216 17:13:50.922913 3171 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 17:13:50.923235 master-0 kubenswrapper[3171]: I0216 17:13:50.923191 3171 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 17:13:50.923515 master-0 kubenswrapper[3171]: I0216 17:13:50.923232 3171 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 17:13:50.923566 master-0 kubenswrapper[3171]: I0216 17:13:50.923539 3171 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 17:13:50.923566 master-0 kubenswrapper[3171]: I0216 17:13:50.923548 3171 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 17:13:50.924778 master-0 kubenswrapper[3171]: I0216 17:13:50.924751 3171 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 17:13:50.924831 master-0 kubenswrapper[3171]: I0216 17:13:50.924784 3171 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 17:13:50.926277 master-0 kubenswrapper[3171]: I0216 17:13:50.926254 3171 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 17:13:50.926375 master-0 kubenswrapper[3171]: I0216 17:13:50.926356 3171 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 17:13:50.929586 master-0 kubenswrapper[3171]: I0216 17:13:50.929559 3171 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 17:13:50.929656 master-0 kubenswrapper[3171]: I0216 17:13:50.929589 3171 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 17:13:50.929656 master-0 kubenswrapper[3171]: I0216 17:13:50.929610 3171 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 17:13:50.929656 master-0 kubenswrapper[3171]: I0216 17:13:50.929626 3171 kubelet.go:324] "Adding apiserver pod source"
Feb 16 17:13:50.929656 master-0 kubenswrapper[3171]: I0216 17:13:50.929640 3171 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 17:13:50.934204 master-0 kubenswrapper[3171]: I0216 17:13:50.934172 3171 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1"
Feb 16 17:13:50.935221 master-0 kubenswrapper[3171]: I0216 17:13:50.935193 3171 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 16 17:13:50.937693 master-0 kubenswrapper[3171]: I0216 17:13:50.937652 3171 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 17:13:50.938581 master-0 kubenswrapper[3171]: I0216 17:13:50.938558 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 17:13:50.938805 master-0 kubenswrapper[3171]: I0216 17:13:50.938722 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 17:13:50.938805 master-0 kubenswrapper[3171]: I0216 17:13:50.938738 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 17:13:50.938805 master-0 kubenswrapper[3171]: I0216 17:13:50.938747 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 17:13:50.938805 master-0 kubenswrapper[3171]: I0216 17:13:50.938756 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 17:13:50.938805 master-0 kubenswrapper[3171]: I0216 17:13:50.938765 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 17:13:50.938805 master-0 kubenswrapper[3171]: I0216 17:13:50.938775 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 17:13:50.938805 master-0 kubenswrapper[3171]: I0216 17:13:50.938783 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 17:13:50.938805 master-0 kubenswrapper[3171]: I0216 17:13:50.938796 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 17:13:50.938805 master-0 kubenswrapper[3171]: I0216 17:13:50.938805 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 17:13:50.939326 master-0 kubenswrapper[3171]: I0216 17:13:50.938841 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 17:13:50.939326 master-0 kubenswrapper[3171]: I0216 17:13:50.938859 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 17:13:50.939326 master-0 kubenswrapper[3171]: W0216 17:13:50.938833 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:50.939326 master-0 kubenswrapper[3171]: E0216 17:13:50.938991 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:13:50.939326 master-0 kubenswrapper[3171]: W0216 17:13:50.939058 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:50.939326 master-0 kubenswrapper[3171]: E0216 17:13:50.939127 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:13:50.940021 master-0 kubenswrapper[3171]: I0216 17:13:50.939994 3171 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 17:13:50.940809 master-0 kubenswrapper[3171]: I0216 17:13:50.940781 3171 server.go:1280] "Started kubelet"
Feb 16 17:13:50.945323 master-0 kubenswrapper[3171]: I0216 17:13:50.945201 3171 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 17:13:50.945442 master-0 kubenswrapper[3171]: I0216 17:13:50.945387 3171 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 16 17:13:50.945442 master-0 kubenswrapper[3171]: I0216 17:13:50.944938 3171 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 17:13:50.946591 master-0 systemd[1]: Started Kubernetes Kubelet.
Feb 16 17:13:50.948302 master-0 kubenswrapper[3171]: I0216 17:13:50.948019 3171 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 17:13:50.949049 master-0 kubenswrapper[3171]: I0216 17:13:50.948926 3171 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:50.951464 master-0 kubenswrapper[3171]: I0216 17:13:50.951034 3171 server.go:449] "Adding debug handlers to kubelet server"
Feb 16 17:13:50.956032 master-0 kubenswrapper[3171]: E0216 17:13:50.954417 3171 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1894c96ebe5c1464 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:13:50.94073866 +0000 UTC m=+0.609593926,LastTimestamp:2026-02-16 17:13:50.94073866 +0000 UTC m=+0.609593926,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 17:13:50.957367 master-0 kubenswrapper[3171]: I0216 17:13:50.957315 3171 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 17:13:50.957437 master-0 kubenswrapper[3171]: I0216 17:13:50.957391 3171 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 17:13:50.957516 master-0 kubenswrapper[3171]: I0216 17:13:50.957466 3171 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 12:35:26.285862125 +0000 UTC
Feb 16 17:13:50.957516 master-0 kubenswrapper[3171]: I0216 17:13:50.957510 3171 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h21m35.328353571s for next certificate rotation
Feb 16 17:13:50.957571 master-0 kubenswrapper[3171]: E0216 17:13:50.957521 3171 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:13:50.958046 master-0 kubenswrapper[3171]: I0216 17:13:50.957999 3171 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 17:13:50.958046 master-0 kubenswrapper[3171]: I0216 17:13:50.958042 3171 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 17:13:50.958149 master-0 kubenswrapper[3171]: I0216 17:13:50.958089 3171 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 16 17:13:50.959356 master-0 kubenswrapper[3171]: E0216 17:13:50.959047 3171 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Feb 16 17:13:50.959356 master-0 kubenswrapper[3171]: W0216 17:13:50.959050 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:50.959356 master-0 kubenswrapper[3171]: E0216 17:13:50.959128 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:13:50.962329 master-0 kubenswrapper[3171]: I0216 17:13:50.962032 3171 factory.go:55] Registering systemd factory
Feb 16 17:13:50.962329 master-0 kubenswrapper[3171]: I0216 17:13:50.962054 3171 factory.go:221] Registration of the systemd container factory successfully
Feb 16 17:13:50.962571 master-0 kubenswrapper[3171]: I0216 17:13:50.962350 3171 factory.go:153] Registering CRI-O factory
Feb 16 17:13:50.962571 master-0 kubenswrapper[3171]: I0216 17:13:50.962364 3171 factory.go:221] Registration of the crio container factory successfully
Feb 16 17:13:50.962571 master-0 kubenswrapper[3171]: I0216 17:13:50.962442 3171 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 17:13:50.962571 master-0 kubenswrapper[3171]: I0216 17:13:50.962468 3171 factory.go:103] Registering Raw factory
Feb 16 17:13:50.962571 master-0 kubenswrapper[3171]: I0216 17:13:50.962490 3171 manager.go:1196] Started watching for new ooms in manager
Feb 16 17:13:50.963907 master-0 kubenswrapper[3171]: I0216 17:13:50.963862 3171 manager.go:319] Starting recovery of all containers
Feb 16 17:13:50.981384 master-0 kubenswrapper[3171]: I0216 17:13:50.981321 3171 manager.go:324] Recovery completed
Feb 16 17:13:50.991079 master-0 kubenswrapper[3171]: I0216 17:13:50.991020 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:50.994012 master-0 kubenswrapper[3171]: I0216 17:13:50.993893 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs" seLinuxMountContext=""
Feb 16 17:13:50.994012 master-0 kubenswrapper[3171]: I0216 17:13:50.993987 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the
actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle" seLinuxMountContext="" Feb 16 17:13:50.994012 master-0 kubenswrapper[3171]: I0216 17:13:50.994001 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config" seLinuxMountContext="" Feb 16 17:13:50.994012 master-0 kubenswrapper[3171]: I0216 17:13:50.994014 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls" seLinuxMountContext="" Feb 16 17:13:50.994012 master-0 kubenswrapper[3171]: I0216 17:13:50.994026 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994038 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54fba066-0e9e-49f6-8a86-34d5b4b660df" volumeName="kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994049 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994060 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994073 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994084 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994094 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994106 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd" seLinuxMountContext="" Feb 16 
17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994116 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994127 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994139 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994148 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994161 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994173 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="970d4376-f299-412c-a8ee-90aa980c689e" volumeName="kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994184 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994184 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994195 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994208 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994213 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994222 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994235 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994247 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994225 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:50.994417 master-0 kubenswrapper[3171]: I0216 17:13:50.994260 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994293 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994333 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994348 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994360 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994372 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994384 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc29f4-557f-4a75-8b78-6ca425c81b81" volumeName="kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994396 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994408 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994419 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994432 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994444 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994455 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994464 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994475 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994482 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994491 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994499 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 
kubenswrapper[3171]: I0216 17:13:50.994510 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994521 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994532 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g" seLinuxMountContext="" Feb 16 17:13:50.996510 master-0 kubenswrapper[3171]: I0216 17:13:50.994544 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994597 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994624 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994638 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994651 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994666 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994679 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994697 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" 
volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994712 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994740 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994759 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994772 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994782 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994795 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994804 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994813 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994823 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994834 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994844 3171 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994854 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994864 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles" seLinuxMountContext="" Feb 16 17:13:50.998248 master-0 kubenswrapper[3171]: I0216 17:13:50.994874 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.994884 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.994895 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.994905 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.994914 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.994924 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.994935 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.994948 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 
kubenswrapper[3171]: I0216 17:13:50.994974 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.994990 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995002 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995008 3171 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995015 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995021 3171 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995027 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995041 3171 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995040 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995155 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995172 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995184 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995195 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995206 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995215 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995229 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52" seLinuxMountContext="" Feb 16 17:13:50.999547 master-0 kubenswrapper[3171]: I0216 17:13:50.995240 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995253 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995264 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995278 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995291 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995303 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995313 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 
17:13:50.995325 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995336 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995364 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995378 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995389 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995402 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995414 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995423 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995433 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995442 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995455 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" 
volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995466 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995475 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995484 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key" seLinuxMountContext="" Feb 16 17:13:51.000933 master-0 kubenswrapper[3171]: I0216 17:13:50.995493 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995503 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995516 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995527 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995536 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995545 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995555 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995566 3171 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995578 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="544c6815-81d7-422a-9e4a-5fcbfabe8da8" volumeName="kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995592 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995604 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995615 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995628 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995645 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995659 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995671 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995681 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995690 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" 
volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995700 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995708 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995718 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config" seLinuxMountContext="" Feb 16 17:13:51.002372 master-0 kubenswrapper[3171]: I0216 17:13:50.995726 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995734 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995744 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995753 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995761 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995770 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c303189e-adae-4fe2-8dd7-cc9b80f73e66" volumeName="kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995780 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995788 3171 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995797 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995806 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995814 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995822 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995831 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995840 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995849 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995858 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995867 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995876 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap" 
seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995885 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995893 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995901 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2" seLinuxMountContext="" Feb 16 17:13:51.005143 master-0 kubenswrapper[3171]: I0216 17:13:50.995910 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.995918 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.995928 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.995936 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08a90dc5-b0d8-4aad-a002-736492b6c1a9" volumeName="kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.995946 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.995954 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.995999 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996008 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" 
volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996017 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996027 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996036 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996045 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996054 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996062 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996072 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996082 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996092 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996100 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996109 3171 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996117 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996126 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config" seLinuxMountContext="" Feb 16 17:13:51.010108 master-0 kubenswrapper[3171]: I0216 17:13:50.996134 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996144 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996153 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996162 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996170 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996178 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996186 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996195 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" 
volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996204 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996212 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996220 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996229 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996238 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996246 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996254 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996264 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996272 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996280 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996288 3171 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996296 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996303 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x" seLinuxMountContext="" Feb 16 17:13:51.013721 master-0 kubenswrapper[3171]: I0216 17:13:50.996311 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80d3b238-70c3-4e71-96a1-99405352033f" volumeName="kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996319 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996327 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996335 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996343 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996353 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996362 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996370 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" 
volumeName="kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996378 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996386 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996395 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996403 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996413 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996421 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996429 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996438 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996451 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996459 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996469 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996478 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996487 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics" seLinuxMountContext="" Feb 16 17:13:51.017146 master-0 kubenswrapper[3171]: I0216 17:13:50.996495 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996507 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996516 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996526 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996535 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996546 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996555 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996563 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc" seLinuxMountContext="" Feb 16 
17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996572 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996581 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996590 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996598 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996607 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996615 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996624 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996632 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996641 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996652 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996662 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996670 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996679 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:13:51.035009 master-0 kubenswrapper[3171]: I0216 17:13:50.996689 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996698 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996706 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996716 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996724 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996732 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996741 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996749 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996757 3171 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996765 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996773 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996782 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996791 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996800 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996808 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996818 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996828 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996836 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996845 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow" 
seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996854 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996861 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996870 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc" seLinuxMountContext="" Feb 16 17:13:51.036680 master-0 kubenswrapper[3171]: I0216 17:13:50.996878 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.996888 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.996897 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.996905 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.996915 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.996925 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.996934 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.996943 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.996952 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997053 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997063 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997072 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997081 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997089 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997098 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997107 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997116 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997125 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 
kubenswrapper[3171]: I0216 17:13:50.997133 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997142 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997151 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ff68421-1741-41c1-93d5-5c722dfd295e" volumeName="kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz" seLinuxMountContext="" Feb 16 17:13:51.039793 master-0 kubenswrapper[3171]: I0216 17:13:50.997161 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997171 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997180 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997189 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997197 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997207 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997216 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997225 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" 
volumeName="kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997235 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997244 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997253 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997263 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997274 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997288 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997301 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997313 3171 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config" seLinuxMountContext="" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997327 3171 reconstruct.go:97] "Volume reconstruction finished" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:50.997336 3171 reconciler.go:26] "Reconciler: start to sync state" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:51.000205 3171 policy_none.go:49] "None policy: Start" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:51.000782 3171 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 17:13:51.041093 master-0 kubenswrapper[3171]: I0216 17:13:51.000802 3171 state_mem.go:35] "Initializing new in-memory state store" Feb 16 17:13:51.058161 master-0 kubenswrapper[3171]: E0216 17:13:51.058113 3171 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Feb 16 17:13:51.077366 master-0 kubenswrapper[3171]: I0216 17:13:51.077242 3171 manager.go:334] "Starting Device Plugin manager" Feb 16 17:13:51.077366 master-0 kubenswrapper[3171]: I0216 17:13:51.077294 3171 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 17:13:51.077366 master-0 kubenswrapper[3171]: I0216 17:13:51.077308 3171 server.go:79] "Starting device plugin registration server" Feb 16 17:13:51.077890 master-0 kubenswrapper[3171]: I0216 17:13:51.077873 3171 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 17:13:51.077950 master-0 kubenswrapper[3171]: I0216 17:13:51.077889 3171 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 17:13:51.078306 master-0 kubenswrapper[3171]: I0216 17:13:51.078239 3171 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 17:13:51.078631 master-0 kubenswrapper[3171]: I0216 17:13:51.078585 3171 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 17:13:51.078631 master-0 kubenswrapper[3171]: I0216 17:13:51.078615 3171 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 17:13:51.089752 master-0 kubenswrapper[3171]: E0216 17:13:51.089712 3171 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 17:13:51.130329 master-0 kubenswrapper[3171]: I0216 17:13:51.130222 3171 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: I0216 17:13:51.132010 3171 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: I0216 17:13:51.132064 3171 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: I0216 17:13:51.132100 3171 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: E0216 17:13:51.132161 3171 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: W0216 17:13:51.133208 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: E0216 17:13:51.133279 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: E0216 17:13:51.160618 3171 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: I0216 17:13:51.178721 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: I0216 17:13:51.179921 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: I0216 17:13:51.179974 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: I0216 17:13:51.179986 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: I0216 17:13:51.180008 3171 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:13:51.218590 master-0 kubenswrapper[3171]: E0216 17:13:51.180988 3171 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 17:13:51.232350 master-0 kubenswrapper[3171]: I0216 17:13:51.232263 3171 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","kube-system/bootstrap-kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0"] Feb 16 17:13:51.232497 master-0 kubenswrapper[3171]: I0216 17:13:51.232409 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.233709 master-0 kubenswrapper[3171]: I0216 17:13:51.233637 3171 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.233709 master-0 kubenswrapper[3171]: I0216 17:13:51.233707 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.233709 master-0 kubenswrapper[3171]: I0216 17:13:51.233720 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.235205 master-0 kubenswrapper[3171]: I0216 17:13:51.233894 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.235205 master-0 kubenswrapper[3171]: I0216 17:13:51.234190 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.235205 master-0 kubenswrapper[3171]: I0216 17:13:51.234264 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.235205 master-0 kubenswrapper[3171]: I0216 17:13:51.235057 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.235205 master-0 kubenswrapper[3171]: I0216 17:13:51.235092 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.235205 master-0 kubenswrapper[3171]: I0216 17:13:51.235108 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.235556 master-0 kubenswrapper[3171]: I0216 17:13:51.235242 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.235556 master-0 kubenswrapper[3171]: I0216 17:13:51.235327 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.235556 master-0 kubenswrapper[3171]: I0216 17:13:51.235358 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.235725 master-0 kubenswrapper[3171]: I0216 17:13:51.235553 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.235725 master-0 kubenswrapper[3171]: I0216 17:13:51.235594 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.235725 master-0 kubenswrapper[3171]: I0216 17:13:51.235605 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.236558 master-0 kubenswrapper[3171]: I0216 17:13:51.236504 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.236558 master-0 kubenswrapper[3171]: I0216 17:13:51.236528 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.236558 master-0 kubenswrapper[3171]: I0216 17:13:51.236539 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.236912 master-0 kubenswrapper[3171]: I0216 17:13:51.236651 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.236912 master-0 kubenswrapper[3171]: I0216 17:13:51.236666 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.236912 master-0 kubenswrapper[3171]: I0216 17:13:51.236675 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.236912 master-0 kubenswrapper[3171]: I0216 17:13:51.236692 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.236912 master-0 kubenswrapper[3171]: I0216 17:13:51.236848 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.236912 master-0 kubenswrapper[3171]: I0216 17:13:51.236886 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.237403 master-0 kubenswrapper[3171]: I0216 17:13:51.237368 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.237403 master-0 kubenswrapper[3171]: I0216 17:13:51.237403 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.237515 master-0 kubenswrapper[3171]: I0216 17:13:51.237419 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.237604 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.237773 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.237851 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.237853 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.237919 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.237869 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.238258 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.238286 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.238299 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.238486 master-0 kubenswrapper[3171]: I0216 17:13:51.238421 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.239809 master-0 kubenswrapper[3171]: I0216 17:13:51.238548 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:13:51.239809 master-0 kubenswrapper[3171]: I0216 17:13:51.238587 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.239809 master-0 kubenswrapper[3171]: I0216 17:13:51.239038 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.239809 master-0 kubenswrapper[3171]: I0216 17:13:51.239084 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.239809 master-0 kubenswrapper[3171]: I0216 17:13:51.239106 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.240745 master-0 kubenswrapper[3171]: I0216 17:13:51.240342 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.240745 master-0 kubenswrapper[3171]: I0216 17:13:51.240381 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.240745 master-0 kubenswrapper[3171]: I0216 17:13:51.240397 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.240745 master-0 kubenswrapper[3171]: I0216 17:13:51.240615 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.240745 master-0 kubenswrapper[3171]: I0216 17:13:51.240645 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.240745 master-0 kubenswrapper[3171]: I0216 17:13:51.240661 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.240745 master-0 kubenswrapper[3171]: I0216 17:13:51.240715 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.241062 master-0 kubenswrapper[3171]: I0216 17:13:51.240765 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:51.241651 master-0 kubenswrapper[3171]: I0216 17:13:51.241546 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.241651 master-0 kubenswrapper[3171]: I0216 17:13:51.241593 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.241651 master-0 kubenswrapper[3171]: I0216 17:13:51.241610 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.304005 master-0 kubenswrapper[3171]: I0216 17:13:51.303921 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.304005 master-0 kubenswrapper[3171]: I0216 17:13:51.304000 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.304359 master-0 kubenswrapper[3171]: I0216 17:13:51.304050 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.304359 master-0 kubenswrapper[3171]: I0216 17:13:51.304085 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.304359 master-0 kubenswrapper[3171]: I0216 17:13:51.304118 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.304359 master-0 kubenswrapper[3171]: I0216 17:13:51.304151 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.304359 master-0 kubenswrapper[3171]: I0216 17:13:51.304189 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.304359 master-0 kubenswrapper[3171]: I0216 17:13:51.304228 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.304359 master-0 kubenswrapper[3171]: I0216 17:13:51.304260 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.304359 master-0 kubenswrapper[3171]: I0216 17:13:51.304335 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:13:51.304681 master-0 kubenswrapper[3171]: I0216 17:13:51.304424 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:13:51.304681 master-0 kubenswrapper[3171]: I0216 17:13:51.304489 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.304681 master-0 kubenswrapper[3171]: I0216 17:13:51.304538 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.304681 master-0 kubenswrapper[3171]: I0216 17:13:51.304572 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.304681 master-0 kubenswrapper[3171]: I0216 17:13:51.304606 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.304681 master-0 kubenswrapper[3171]: I0216 17:13:51.304638 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:13:51.304681 master-0 kubenswrapper[3171]: I0216 17:13:51.304673 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:13:51.304882 master-0 kubenswrapper[3171]: I0216 17:13:51.304704 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.304882 master-0 kubenswrapper[3171]: I0216 17:13:51.304738 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.304882 master-0 kubenswrapper[3171]: I0216 17:13:51.304769 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.304882 master-0 kubenswrapper[3171]: I0216 17:13:51.304800 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.304882 master-0 kubenswrapper[3171]: I0216 17:13:51.304836 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.304882 master-0 kubenswrapper[3171]: I0216 17:13:51.304866 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.381327 master-0 kubenswrapper[3171]: I0216 17:13:51.381262 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 16 17:13:51.382899 master-0 kubenswrapper[3171]: I0216 17:13:51.382847 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:51.382899 master-0 kubenswrapper[3171]: I0216 17:13:51.382898 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:51.383056 master-0 kubenswrapper[3171]: I0216 17:13:51.382916 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:51.383056 master-0 kubenswrapper[3171]: I0216 17:13:51.383023 3171 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:13:51.384197 master-0 kubenswrapper[3171]: E0216 17:13:51.384150 3171 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 17:13:51.406455 master-0 kubenswrapper[3171]: I0216 17:13:51.406363 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.406455 master-0 kubenswrapper[3171]: I0216 17:13:51.406426 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.406768 master-0 kubenswrapper[3171]: I0216 17:13:51.406717 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.406862 master-0 kubenswrapper[3171]: I0216 17:13:51.406764 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.406862 master-0 kubenswrapper[3171]: I0216 17:13:51.406805 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.406862 master-0 kubenswrapper[3171]: I0216 17:13:51.406753 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.407014 master-0 kubenswrapper[3171]: I0216 17:13:51.406870 3171 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.407014 master-0 kubenswrapper[3171]: I0216 17:13:51.406922 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.407086 master-0 kubenswrapper[3171]: I0216 17:13:51.406815 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.407086 master-0 kubenswrapper[3171]: I0216 17:13:51.407022 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.407086 master-0 kubenswrapper[3171]: I0216 17:13:51.407051 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.407181 master-0 kubenswrapper[3171]: I0216 17:13:51.407119 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.407181 master-0 kubenswrapper[3171]: I0216 17:13:51.406941 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.407251 master-0 kubenswrapper[3171]: I0216 17:13:51.407049 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.407291 master-0 kubenswrapper[3171]: I0216 17:13:51.407241 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.407291 
master-0 kubenswrapper[3171]: I0216 17:13:51.407283 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.407352 master-0 kubenswrapper[3171]: I0216 17:13:51.407319 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.407386 master-0 kubenswrapper[3171]: I0216 17:13:51.407357 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.407423 master-0 kubenswrapper[3171]: I0216 17:13:51.407386 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.407423 master-0 kubenswrapper[3171]: I0216 17:13:51.407408 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.407486 master-0 kubenswrapper[3171]: I0216 17:13:51.407455 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.407486 master-0 kubenswrapper[3171]: I0216 17:13:51.407466 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.407548 master-0 kubenswrapper[3171]: I0216 17:13:51.407462 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.407548 master-0 kubenswrapper[3171]: I0216 17:13:51.407526 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.407548 master-0 kubenswrapper[3171]: I0216 17:13:51.407544 3171 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:13:51.407646 master-0 kubenswrapper[3171]: I0216 17:13:51.407529 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.407646 master-0 kubenswrapper[3171]: I0216 17:13:51.407575 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.407646 master-0 kubenswrapper[3171]: I0216 17:13:51.407580 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:13:51.407772 master-0 kubenswrapper[3171]: I0216 17:13:51.407648 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:13:51.407772 master-0 kubenswrapper[3171]: I0216 17:13:51.407660 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:13:51.407772 master-0 kubenswrapper[3171]: I0216 17:13:51.407700 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:13:51.407772 master-0 kubenswrapper[3171]: I0216 17:13:51.407714 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.407892 master-0 kubenswrapper[3171]: I0216 17:13:51.407789 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:13:51.407892 master-0 kubenswrapper[3171]: I0216 17:13:51.407823 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.407892 master-0 kubenswrapper[3171]: I0216 17:13:51.407857 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:13:51.408010 master-0 kubenswrapper[3171]: I0216 17:13:51.407927 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.408010 master-0 kubenswrapper[3171]: I0216 17:13:51.407978 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:13:51.408073 master-0 kubenswrapper[3171]: I0216 17:13:51.407934 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:51.408073 master-0 kubenswrapper[3171]: I0216 17:13:51.408019 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.408146 master-0 kubenswrapper[3171]: I0216 17:13:51.408075 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.408146 master-0 kubenswrapper[3171]: I0216 17:13:51.408081 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:51.408146 master-0 kubenswrapper[3171]: I0216 17:13:51.408138 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:13:51.408262 master-0 
Feb 16 17:13:51.408262 master-0 kubenswrapper[3171]: I0216 17:13:51.408166 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:13:51.408328 master-0 kubenswrapper[3171]: I0216 17:13:51.408248 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:13:51.408328 master-0 kubenswrapper[3171]: I0216 17:13:51.408287 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:13:51.562932 master-0 kubenswrapper[3171]: E0216 17:13:51.562637 3171 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 16 17:13:51.572387 master-0 kubenswrapper[3171]: I0216 17:13:51.572318 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:13:51.593075 master-0 kubenswrapper[3171]: I0216 17:13:51.593014 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:13:51.601758 master-0 kubenswrapper[3171]: W0216 17:13:51.601254 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e298020284b0e8ffa6a0bc184059d9.slice/crio-c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77 WatchSource:0}: Error finding container c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77: Status 404 returned error can't find the container with id c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77
Feb 16 17:13:51.620626 master-0 kubenswrapper[3171]: I0216 17:13:51.620581 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:13:51.636043 master-0 kubenswrapper[3171]: W0216 17:13:51.635933 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65 WatchSource:0}: Error finding container 4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65: Status 404 returned error can't find the container with id 4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65
Feb 16 17:13:51.642043 master-0 kubenswrapper[3171]: I0216 17:13:51.641948 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:13:51.660704 master-0 kubenswrapper[3171]: W0216 17:13:51.660644 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8fa563c7331931f00ce0006e522f0f1.slice/crio-2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c WatchSource:0}: Error finding container 2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c: Status 404 returned error can't find the container with id 2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c
Feb 16 17:13:51.666761 master-0 kubenswrapper[3171]: I0216 17:13:51.666710 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:13:51.679391 master-0 kubenswrapper[3171]: I0216 17:13:51.679343 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 16 17:13:51.703422 master-0 kubenswrapper[3171]: W0216 17:13:51.703368 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7adecad495595c43c57c30abd350e987.slice/crio-b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d WatchSource:0}: Error finding container b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d: Status 404 returned error can't find the container with id b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d
Feb 16 17:13:51.774063 master-0 kubenswrapper[3171]: W0216 17:13:51.773986 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:51.776018 master-0 kubenswrapper[3171]: E0216 17:13:51.774063 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:13:51.785323 master-0 kubenswrapper[3171]: I0216 17:13:51.785251 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:51.786529 master-0 kubenswrapper[3171]: I0216 17:13:51.786483 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:13:51.786590 master-0 kubenswrapper[3171]: I0216 17:13:51.786532 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:13:51.786590 master-0 kubenswrapper[3171]: I0216 17:13:51.786545 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:13:51.786590 master-0 kubenswrapper[3171]: I0216 17:13:51.786575 3171 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:13:51.787534 master-0 kubenswrapper[3171]: E0216 17:13:51.787453 3171 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 17:13:51.950030 master-0 kubenswrapper[3171]: I0216 17:13:51.949982 3171 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:52.061457 master-0 kubenswrapper[3171]: W0216 17:13:52.061393 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:52.061457 master-0 kubenswrapper[3171]: E0216 17:13:52.061457 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:13:52.136265 master-0 kubenswrapper[3171]: I0216 17:13:52.136132 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77"}
Feb 16 17:13:52.137407 master-0 kubenswrapper[3171]: I0216 17:13:52.137355 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d"}
Feb 16 17:13:52.138130 master-0 kubenswrapper[3171]: I0216 17:13:52.138101 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"d0c55c98db8491069414beee715fb0df1d28a36c886f9583fb6e5f20a3fd1076"}
Feb 16 17:13:52.138840 master-0 kubenswrapper[3171]: I0216 17:13:52.138805 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c"}
Feb 16 17:13:52.139666 master-0 kubenswrapper[3171]: I0216 17:13:52.139600 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65"}
Feb 16 17:13:52.140341 master-0 kubenswrapper[3171]: I0216 17:13:52.140308 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"06ddc2da13c0775c0e8f0acf19c817f8072a1fcf961d84d30040d9ab97e3ada6"}
Feb 16 17:13:52.169206 master-0 kubenswrapper[3171]: W0216 17:13:52.169152 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:52.169297 master-0 kubenswrapper[3171]: E0216 17:13:52.169225 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:13:52.304037 master-0 kubenswrapper[3171]: W0216 17:13:52.303937 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:52.304037 master-0 kubenswrapper[3171]: E0216 17:13:52.304025 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:13:52.364254 master-0 kubenswrapper[3171]: E0216 17:13:52.364000 3171 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Feb 16 17:13:52.587892 master-0 kubenswrapper[3171]: I0216 17:13:52.587779 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:52.589570 master-0 kubenswrapper[3171]: I0216 17:13:52.589517 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:13:52.589682 master-0 kubenswrapper[3171]: I0216 17:13:52.589578 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:13:52.589682 master-0 kubenswrapper[3171]: I0216 17:13:52.589596 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:13:52.589682 master-0 kubenswrapper[3171]: I0216 17:13:52.589629 3171 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:13:52.590616 master-0 kubenswrapper[3171]: E0216 17:13:52.590559 3171 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 17:13:52.950387 master-0 kubenswrapper[3171]: I0216 17:13:52.950010 3171 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:53.145655 master-0 kubenswrapper[3171]: I0216 17:13:53.145525 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"d7c38e55f71867938246c19521c872dc2168e928e2d36640288dfca85978e020"}
Feb 16 17:13:53.145813 master-0 kubenswrapper[3171]: I0216 17:13:53.145650 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:53.147576 master-0 kubenswrapper[3171]: I0216 17:13:53.147545 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:13:53.147661 master-0 kubenswrapper[3171]: I0216 17:13:53.147581 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:13:53.147661 master-0 kubenswrapper[3171]: I0216 17:13:53.147593 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:13:53.148374 master-0 kubenswrapper[3171]: I0216 17:13:53.148348 3171 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08" exitCode=0
Feb 16 17:13:53.148427 master-0 kubenswrapper[3171]: I0216 17:13:53.148401 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08"}
Feb 16 17:13:53.148522 master-0 kubenswrapper[3171]: I0216 17:13:53.148484 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:53.149532 master-0 kubenswrapper[3171]: I0216 17:13:53.149507 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:13:53.149593 master-0 kubenswrapper[3171]: I0216 17:13:53.149537 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:13:53.149593 master-0 kubenswrapper[3171]: I0216 17:13:53.149547 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:13:53.151675 master-0 kubenswrapper[3171]: I0216 17:13:53.151662 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:53.152222 master-0 kubenswrapper[3171]: I0216 17:13:53.152206 3171 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="113e4c68695ecb798e25862ac1123977a0b09805e3743cfdc83f64ff3d629fe8" exitCode=0
Feb 16 17:13:53.152326 master-0 kubenswrapper[3171]: I0216 17:13:53.152312 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"113e4c68695ecb798e25862ac1123977a0b09805e3743cfdc83f64ff3d629fe8"}
Feb 16 17:13:53.152464 master-0 kubenswrapper[3171]: I0216 17:13:53.152452 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:53.152826 master-0 kubenswrapper[3171]: I0216 17:13:53.152813 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:13:53.152902 master-0 kubenswrapper[3171]: I0216 17:13:53.152893 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:13:53.152976 master-0 kubenswrapper[3171]: I0216 17:13:53.152953 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:13:53.153464 master-0 kubenswrapper[3171]: I0216 17:13:53.153417 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:13:53.153464 master-0 kubenswrapper[3171]: I0216 17:13:53.153470 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:13:53.153564 master-0 kubenswrapper[3171]: I0216 17:13:53.153487 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:13:53.155254 master-0 kubenswrapper[3171]: I0216 17:13:53.155180 3171 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="a2626c4f0d02f1b08ed65857f9aece164559757f234922a3555b40b68623d959" exitCode=0
Feb 16 17:13:53.155254 master-0 kubenswrapper[3171]: I0216 17:13:53.155224 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"a2626c4f0d02f1b08ed65857f9aece164559757f234922a3555b40b68623d959"}
Feb 16 17:13:53.155355 master-0 kubenswrapper[3171]: I0216 17:13:53.155275 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:53.157322 master-0 kubenswrapper[3171]: I0216 17:13:53.156887 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:13:53.157322 master-0 kubenswrapper[3171]: I0216 17:13:53.156901 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:13:53.157322 master-0 kubenswrapper[3171]: I0216 17:13:53.156908 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:13:53.159054 master-0 kubenswrapper[3171]: I0216 17:13:53.159023 3171 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee" exitCode=0
Feb 16 17:13:53.159201 master-0 kubenswrapper[3171]: I0216 17:13:53.159092 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee"}
Feb 16 17:13:53.159201 master-0 kubenswrapper[3171]: I0216 17:13:53.159168 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:53.160625 master-0 kubenswrapper[3171]: I0216 17:13:53.160554 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:13:53.160625 master-0 kubenswrapper[3171]: I0216 17:13:53.160613 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:13:53.160625 master-0 kubenswrapper[3171]: I0216 17:13:53.160625 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:13:53.162935 master-0 kubenswrapper[3171]: I0216 17:13:53.162891 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4"}
Feb 16 17:13:53.163218 master-0 kubenswrapper[3171]: I0216 17:13:53.163186 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"d7a0bf294837ce8749df8c7d4ac2693e3859d287573df80f623b996335f525a5"}
Feb 16 17:13:53.163851 master-0 kubenswrapper[3171]: I0216 17:13:53.162944 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:13:53.166394 master-0 kubenswrapper[3171]: I0216 17:13:53.166362 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:13:53.166394 master-0 kubenswrapper[3171]: I0216 17:13:53.166394 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:13:53.166555 master-0 kubenswrapper[3171]: I0216 17:13:53.166406 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:13:53.951037 master-0 kubenswrapper[3171]: I0216 17:13:53.950936 3171 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:53.952498 master-0 kubenswrapper[3171]: W0216 17:13:53.952430 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:13:53.952560 master-0 kubenswrapper[3171]: E0216 17:13:53.952505 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:13:53.965719 master-0 kubenswrapper[3171]: E0216 17:13:53.965666 3171 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Feb 16 17:13:54.167055 master-0 kubenswrapper[3171]: I0216 17:13:54.166976 3171 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3"
containerID="d7a0bf294837ce8749df8c7d4ac2693e3859d287573df80f623b996335f525a5" exitCode=1 Feb 16 17:13:54.167238 master-0 kubenswrapper[3171]: I0216 17:13:54.167049 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"d7a0bf294837ce8749df8c7d4ac2693e3859d287573df80f623b996335f525a5"} Feb 16 17:13:54.167238 master-0 kubenswrapper[3171]: I0216 17:13:54.167082 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:54.167874 master-0 kubenswrapper[3171]: I0216 17:13:54.167845 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:54.167874 master-0 kubenswrapper[3171]: I0216 17:13:54.167872 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:54.168062 master-0 kubenswrapper[3171]: I0216 17:13:54.167887 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:54.168288 master-0 kubenswrapper[3171]: I0216 17:13:54.168261 3171 scope.go:117] "RemoveContainer" containerID="d7a0bf294837ce8749df8c7d4ac2693e3859d287573df80f623b996335f525a5" Feb 16 17:13:54.171297 master-0 kubenswrapper[3171]: I0216 17:13:54.171264 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a"} Feb 16 17:13:54.171297 master-0 kubenswrapper[3171]: I0216 17:13:54.171291 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48"} Feb 16 17:13:54.171411 master-0 kubenswrapper[3171]: I0216 17:13:54.171304 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"1f9ccdae401f5d6081eacec1d36e73c0010b8afd14d410faf565b70a3ef4d59d"} Feb 16 17:13:54.171411 master-0 kubenswrapper[3171]: I0216 17:13:54.171316 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1"} Feb 16 17:13:54.171411 master-0 kubenswrapper[3171]: I0216 17:13:54.171326 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"4992c22e831121a95bc649c439bd4c2706d23e304f9780a4ff56be58ca6cb875"} Feb 16 17:13:54.171411 master-0 kubenswrapper[3171]: I0216 17:13:54.171328 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:54.174042 master-0 kubenswrapper[3171]: I0216 17:13:54.172509 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:54.174042 master-0 kubenswrapper[3171]: I0216 17:13:54.172541 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Feb 16 17:13:54.174042 master-0 kubenswrapper[3171]: I0216 17:13:54.172553 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:54.174526 master-0 kubenswrapper[3171]: I0216 17:13:54.174493 3171 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="29b286fe19ef6218902f26309c43d4a3f1b4a5c65b84d854a9532c01548c0948" exitCode=0 Feb 16 17:13:54.174689 master-0 kubenswrapper[3171]: I0216 17:13:54.174668 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"29b286fe19ef6218902f26309c43d4a3f1b4a5c65b84d854a9532c01548c0948"} Feb 16 17:13:54.174763 master-0 kubenswrapper[3171]: I0216 17:13:54.174749 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:54.176108 master-0 kubenswrapper[3171]: I0216 17:13:54.176078 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"9cac97f2a7ed5b660f1fe9defbb77e1823cc7917bedfcd9a1ee2cf3d27a5413c"} Feb 16 17:13:54.176167 master-0 kubenswrapper[3171]: I0216 17:13:54.176159 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:54.176210 master-0 kubenswrapper[3171]: I0216 17:13:54.176177 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:54.176210 master-0 kubenswrapper[3171]: I0216 17:13:54.176188 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:54.176817 master-0 kubenswrapper[3171]: I0216 17:13:54.176767 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:54.177716 master-0 kubenswrapper[3171]: I0216 17:13:54.177677 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:54.177716 master-0 kubenswrapper[3171]: I0216 17:13:54.177706 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:54.177716 master-0 kubenswrapper[3171]: I0216 17:13:54.177717 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:54.181074 master-0 kubenswrapper[3171]: I0216 17:13:54.181062 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:54.181144 master-0 kubenswrapper[3171]: I0216 17:13:54.181094 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:54.182164 master-0 kubenswrapper[3171]: I0216 17:13:54.181360 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93"} Feb 16 17:13:54.182164 master-0 kubenswrapper[3171]: I0216 17:13:54.181387 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56"} Feb 16 17:13:54.182164 master-0 kubenswrapper[3171]: I0216 17:13:54.181397 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"f79c4f95d84447d2060b3a9bdc40c88a369d3407543549b9827cfbca809475bb"} Feb 16 17:13:54.182164 master-0 kubenswrapper[3171]: I0216 17:13:54.181745 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:54.182164 master-0 kubenswrapper[3171]: I0216 17:13:54.181763 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:54.182164 master-0 kubenswrapper[3171]: I0216 17:13:54.181770 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:54.182751 master-0 kubenswrapper[3171]: I0216 17:13:54.182421 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:54.182751 master-0 kubenswrapper[3171]: I0216 17:13:54.182476 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:54.182751 master-0 kubenswrapper[3171]: I0216 17:13:54.182486 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:54.201160 master-0 kubenswrapper[3171]: I0216 17:13:54.201106 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:54.202074 master-0 kubenswrapper[3171]: I0216 17:13:54.202045 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:54.202074 master-0 kubenswrapper[3171]: I0216 17:13:54.202072 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:54.202144 master-0 kubenswrapper[3171]: I0216 17:13:54.202081 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:54.202144 master-0 kubenswrapper[3171]: I0216 17:13:54.202101 3171 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:13:54.202671 master-0 kubenswrapper[3171]: E0216 17:13:54.202635 3171 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 17:13:54.300144 master-0 kubenswrapper[3171]: I0216 17:13:54.300097 3171 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:55.188547 master-0 kubenswrapper[3171]: I0216 17:13:55.188451 3171 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="6ac11f88c295e3e046b26ff9ae40dab9b0eceda0cac3f32bc505dc4f4fee5314" exitCode=0 Feb 16 17:13:55.188547 master-0 kubenswrapper[3171]: I0216 17:13:55.188549 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"6ac11f88c295e3e046b26ff9ae40dab9b0eceda0cac3f32bc505dc4f4fee5314"} Feb 16 17:13:55.189621 master-0 kubenswrapper[3171]: I0216 17:13:55.188659 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:55.189991 master-0 kubenswrapper[3171]: I0216 17:13:55.189891 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:55.189991 master-0 kubenswrapper[3171]: I0216 17:13:55.189986 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:55.190219 master-0 kubenswrapper[3171]: I0216 17:13:55.190013 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:55.196950 master-0 kubenswrapper[3171]: I0216 17:13:55.196018 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:55.196950 master-0 kubenswrapper[3171]: I0216 17:13:55.196261 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e"} Feb 16 17:13:55.196950 master-0 kubenswrapper[3171]: I0216 17:13:55.196708 3171 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:13:55.196950 master-0 kubenswrapper[3171]: I0216 17:13:55.196800 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:55.197471 master-0 kubenswrapper[3171]: I0216 17:13:55.197223 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:55.197471 master-0 kubenswrapper[3171]: I0216 17:13:55.197226 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:55.198574 master-0 kubenswrapper[3171]: I0216 17:13:55.198521 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:55.199446 master-0 kubenswrapper[3171]: I0216 17:13:55.199417 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:55.199446 master-0 kubenswrapper[3171]: I0216 17:13:55.199438 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:55.199697 master-0 kubenswrapper[3171]: I0216 17:13:55.199451 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:55.199697 master-0 kubenswrapper[3171]: I0216 17:13:55.199661 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:55.199697 master-0 kubenswrapper[3171]: I0216 17:13:55.199702 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:55.200022 master-0 kubenswrapper[3171]: I0216 17:13:55.199714 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:55.200022 master-0 kubenswrapper[3171]: I0216 17:13:55.199771 3171 kubelet_node_status.go:724] "Recording event message 
for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:55.200022 master-0 kubenswrapper[3171]: I0216 17:13:55.199815 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:55.200022 master-0 kubenswrapper[3171]: I0216 17:13:55.199831 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:55.200022 master-0 kubenswrapper[3171]: I0216 17:13:55.199943 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:55.200022 master-0 kubenswrapper[3171]: I0216 17:13:55.199977 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:55.200022 master-0 kubenswrapper[3171]: I0216 17:13:55.199986 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:56.200396 master-0 kubenswrapper[3171]: I0216 17:13:56.200364 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.200689 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"04067ed4187486197b26e1ae13951b566ae8dc6eabd9679686fe0234c3137a4b"} Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.200714 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"482b77f255da8dbdc1be0e1707334e04261602c426501a77264ce29439b9c9bd"} Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.200724 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"21bbf2dd3dd984198b802df6fcf5c5633dc7d3f5d39543c470dd12c4b0604853"} Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.200732 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"c7588158dd7d7dabaa4f447c2b9bdd6aa5e276f1eca5073f8e69a9eb9b31cfba"} Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.200792 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.201259 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.201287 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.201298 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.201325 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.201345 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" 
Feb 16 17:13:56.208243 master-0 kubenswrapper[3171]: I0216 17:13:56.201352 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:56.496117 master-0 kubenswrapper[3171]: I0216 17:13:56.495875 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:57.207151 master-0 kubenswrapper[3171]: I0216 17:13:57.207086 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:57.207151 master-0 kubenswrapper[3171]: I0216 17:13:57.207090 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"aedfff1b9149a45317431b294572c56015ded146f7b3ff9ec7a263bc47a383b3"} Feb 16 17:13:57.207787 master-0 kubenswrapper[3171]: I0216 17:13:57.207220 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:57.208022 master-0 kubenswrapper[3171]: I0216 17:13:57.207938 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:57.208111 master-0 kubenswrapper[3171]: I0216 17:13:57.208044 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:57.208111 master-0 kubenswrapper[3171]: I0216 17:13:57.208099 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:57.208261 master-0 kubenswrapper[3171]: I0216 17:13:57.208234 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:57.208261 master-0 kubenswrapper[3171]: I0216 17:13:57.208258 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:57.208319 master-0 kubenswrapper[3171]: I0216 17:13:57.208266 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:57.403093 master-0 kubenswrapper[3171]: I0216 17:13:57.402850 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:57.404761 master-0 kubenswrapper[3171]: I0216 17:13:57.404662 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:57.404761 master-0 kubenswrapper[3171]: I0216 17:13:57.404733 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:57.404761 master-0 kubenswrapper[3171]: I0216 17:13:57.404756 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:57.405252 master-0 kubenswrapper[3171]: I0216 17:13:57.404796 3171 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:13:57.444928 master-0 kubenswrapper[3171]: I0216 17:13:57.444832 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:57.445207 master-0 kubenswrapper[3171]: I0216 17:13:57.445124 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:57.446601 master-0 kubenswrapper[3171]: I0216 17:13:57.446542 
3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:57.446745 master-0 kubenswrapper[3171]: I0216 17:13:57.446605 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:57.446745 master-0 kubenswrapper[3171]: I0216 17:13:57.446627 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:58.210204 master-0 kubenswrapper[3171]: I0216 17:13:58.210090 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:58.211283 master-0 kubenswrapper[3171]: I0216 17:13:58.211225 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:58.211283 master-0 kubenswrapper[3171]: I0216 17:13:58.211268 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:58.211283 master-0 kubenswrapper[3171]: I0216 17:13:58.211276 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:58.243009 master-0 kubenswrapper[3171]: I0216 17:13:58.242862 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:13:58.243372 master-0 kubenswrapper[3171]: I0216 17:13:58.243100 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:58.244337 master-0 kubenswrapper[3171]: I0216 17:13:58.244261 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:58.244483 master-0 kubenswrapper[3171]: I0216 17:13:58.244345 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:58.244483 master-0 kubenswrapper[3171]: I0216 17:13:58.244371 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:58.874032 master-0 kubenswrapper[3171]: I0216 17:13:58.873920 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 16 17:13:59.012872 master-0 kubenswrapper[3171]: I0216 17:13:59.012770 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 16 17:13:59.212608 master-0 kubenswrapper[3171]: I0216 17:13:59.212409 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:59.213785 master-0 kubenswrapper[3171]: I0216 17:13:59.213704 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:59.213910 master-0 kubenswrapper[3171]: I0216 17:13:59.213789 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:59.213910 master-0 kubenswrapper[3171]: I0216 17:13:59.213814 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:59.271203 master-0 kubenswrapper[3171]: I0216 17:13:59.271110 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:13:59.272203 master-0 kubenswrapper[3171]: 
I0216 17:13:59.271331 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:59.273071 master-0 kubenswrapper[3171]: I0216 17:13:59.272930 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:59.273071 master-0 kubenswrapper[3171]: I0216 17:13:59.273024 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:59.273071 master-0 kubenswrapper[3171]: I0216 17:13:59.273035 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:59.496217 master-0 kubenswrapper[3171]: I0216 17:13:59.496003 3171 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:13:59.689302 master-0 kubenswrapper[3171]: I0216 17:13:59.689198 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:13:59.689571 master-0 kubenswrapper[3171]: I0216 17:13:59.689462 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:13:59.690635 master-0 kubenswrapper[3171]: I0216 17:13:59.690573 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:13:59.690635 master-0 kubenswrapper[3171]: I0216 17:13:59.690619 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:13:59.690635 master-0 kubenswrapper[3171]: I0216 17:13:59.690634 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:13:59.727360 master-0 kubenswrapper[3171]: I0216 17:13:59.727286 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:00.215116 master-0 kubenswrapper[3171]: I0216 17:14:00.215050 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:00.215608 master-0 kubenswrapper[3171]: I0216 17:14:00.215153 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:00.216277 master-0 kubenswrapper[3171]: I0216 17:14:00.216232 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:00.216385 master-0 kubenswrapper[3171]: I0216 17:14:00.216282 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:00.216385 master-0 kubenswrapper[3171]: I0216 17:14:00.216298 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:00.216385 master-0 kubenswrapper[3171]: I0216 17:14:00.216366 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:00.216511 master-0 kubenswrapper[3171]: I0216 17:14:00.216407 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Feb 16 17:14:00.216511 master-0 kubenswrapper[3171]: I0216 17:14:00.216428 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:01.089941 master-0 kubenswrapper[3171]: E0216 17:14:01.089876 3171 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 17:14:01.963501 master-0 kubenswrapper[3171]: I0216 17:14:01.963466 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:01.964151 master-0 kubenswrapper[3171]: I0216 17:14:01.964123 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:01.965040 master-0 kubenswrapper[3171]: I0216 17:14:01.965013 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:01.965107 master-0 kubenswrapper[3171]: I0216 17:14:01.965047 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:01.965107 master-0 kubenswrapper[3171]: I0216 17:14:01.965059 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:02.728312 master-0 kubenswrapper[3171]: I0216 17:14:02.728157 3171 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:14:04.937503 master-0 kubenswrapper[3171]: W0216 17:14:04.937363 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:14:04.937503 master-0 kubenswrapper[3171]: I0216 17:14:04.937510 3171 trace.go:236] Trace[924659776]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:13:54.935) (total time: 10001ms): Feb 16 17:14:04.937503 master-0 kubenswrapper[3171]: Trace[924659776]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:14:04.937) Feb 16 17:14:04.937503 master-0 kubenswrapper[3171]: Trace[924659776]: [10.001853444s] [10.001853444s] END Feb 16 17:14:04.938759 master-0 kubenswrapper[3171]: E0216 17:14:04.937542 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 17:14:04.950803 master-0 kubenswrapper[3171]: I0216 17:14:04.950747 3171 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:14:05.159230 master-0 kubenswrapper[3171]: W0216 17:14:05.159082 3171 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:14:05.159230 master-0 kubenswrapper[3171]: I0216 17:14:05.159246 3171 trace.go:236] Trace[361310982]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:13:55.155) (total time: 10003ms): Feb 16 17:14:05.159230 master-0 kubenswrapper[3171]: Trace[361310982]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10003ms (17:14:05.159) Feb 16 17:14:05.159230 master-0 kubenswrapper[3171]: Trace[361310982]: [10.003690554s] [10.003690554s] END Feb 16 17:14:05.159798 master-0 kubenswrapper[3171]: E0216 17:14:05.159285 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 17:14:05.230854 master-0 kubenswrapper[3171]: I0216 17:14:05.230717 3171 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-check-endpoints/0.log" Feb 16 17:14:05.232777 master-0 kubenswrapper[3171]: I0216 17:14:05.232742 3171 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a" exitCode=255 Feb 16 17:14:05.232866 master-0 kubenswrapper[3171]: I0216 17:14:05.232823 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a"} Feb 16 17:14:05.233119 master-0 kubenswrapper[3171]: I0216 17:14:05.233092 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:05.233976 master-0 kubenswrapper[3171]: I0216 17:14:05.233927 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:05.233976 master-0 kubenswrapper[3171]: I0216 17:14:05.233970 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:05.234093 master-0 kubenswrapper[3171]: I0216 17:14:05.233982 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:05.234546 master-0 kubenswrapper[3171]: I0216 17:14:05.234508 3171 scope.go:117] "RemoveContainer" containerID="15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a" Feb 16 17:14:05.235502 master-0 kubenswrapper[3171]: I0216 17:14:05.235143 3171 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" exitCode=1 Feb 16 17:14:05.235502 master-0 kubenswrapper[3171]: I0216 17:14:05.235181 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e"} Feb 16 17:14:05.235502 master-0 kubenswrapper[3171]: I0216 17:14:05.235230 3171 scope.go:117] "RemoveContainer" containerID="d7a0bf294837ce8749df8c7d4ac2693e3859d287573df80f623b996335f525a5" Feb 16 17:14:05.235502 master-0 kubenswrapper[3171]: I0216 17:14:05.235314 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:05.236112 master-0 kubenswrapper[3171]: I0216 17:14:05.235996 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:05.236112 master-0 kubenswrapper[3171]: I0216 17:14:05.236040 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:05.236112 master-0 kubenswrapper[3171]: I0216 17:14:05.236058 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:05.236662 master-0 kubenswrapper[3171]: I0216 17:14:05.236582 3171 scope.go:117] "RemoveContainer" containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" Feb 16 17:14:05.236947 master-0 kubenswrapper[3171]: E0216 17:14:05.236884 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:14:05.348872 master-0 kubenswrapper[3171]: W0216 17:14:05.348802 3171 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:14:05.348872 master-0 kubenswrapper[3171]: I0216 17:14:05.348887 3171 trace.go:236] Trace[956687378]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:13:55.347) (total time: 10001ms): Feb 16 17:14:05.348872 master-0 kubenswrapper[3171]: Trace[956687378]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:14:05.348) Feb 16 17:14:05.348872 master-0 kubenswrapper[3171]: Trace[956687378]: [10.001419634s] [10.001419634s] END Feb 16 17:14:05.349170 master-0 kubenswrapper[3171]: E0216 17:14:05.348906 3171 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 17:14:06.130235 master-0 kubenswrapper[3171]: I0216 17:14:06.130180 3171 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 
17:14:06.131012 master-0 kubenswrapper[3171]: I0216 17:14:06.130251 3171 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 17:14:06.134392 master-0 kubenswrapper[3171]: I0216 17:14:06.134345 3171 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 17:14:06.134496 master-0 kubenswrapper[3171]: I0216 17:14:06.134407 3171 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 17:14:06.239261 master-0 kubenswrapper[3171]: I0216 17:14:06.239215 3171 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-check-endpoints/0.log" Feb 16 17:14:06.241088 master-0 kubenswrapper[3171]: I0216 17:14:06.241057 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793"} Feb 16 17:14:06.241190 master-0 kubenswrapper[3171]: I0216 17:14:06.241171 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:06.241823 master-0 kubenswrapper[3171]: I0216 17:14:06.241793 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:06.241823 master-0 kubenswrapper[3171]: I0216 17:14:06.241825 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:06.241947 master-0 kubenswrapper[3171]: I0216 17:14:06.241834 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:06.496602 master-0 kubenswrapper[3171]: I0216 17:14:06.496456 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:06.496765 master-0 kubenswrapper[3171]: I0216 17:14:06.496648 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:06.497699 master-0 kubenswrapper[3171]: I0216 17:14:06.497675 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:06.497765 master-0 kubenswrapper[3171]: I0216 17:14:06.497718 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:06.497765 master-0 kubenswrapper[3171]: I0216 17:14:06.497737 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:06.498289 master-0 kubenswrapper[3171]: I0216 17:14:06.498266 3171 scope.go:117] "RemoveContainer" 
containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" Feb 16 17:14:06.501733 master-0 kubenswrapper[3171]: E0216 17:14:06.500593 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: I0216 17:14:07.450304 3171 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]log ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]etcd ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/priority-and-fairness-filter ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-apiextensions-informers ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-apiextensions-controllers ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/crd-informer-synced ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-system-namespaces-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/rbac/bootstrap-roles ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/bootstrap-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: 
[+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/start-kube-aggregator-informers ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/apiservice-registration-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]autoregister-completion ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/apiservice-openapi-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: livez check failed Feb 16 17:14:07.450429 master-0 kubenswrapper[3171]: I0216 17:14:07.450391 3171 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:09.041194 master-0 kubenswrapper[3171]: I0216 17:14:09.041120 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 17:14:09.041760 master-0 kubenswrapper[3171]: I0216 17:14:09.041429 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:09.042678 master-0 kubenswrapper[3171]: I0216 17:14:09.042624 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:09.042678 master-0 kubenswrapper[3171]: I0216 17:14:09.042679 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:09.042855 master-0 kubenswrapper[3171]: I0216 17:14:09.042690 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:09.056137 master-0 kubenswrapper[3171]: I0216 17:14:09.056073 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 16 17:14:09.253403 master-0 kubenswrapper[3171]: I0216 17:14:09.253347 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:09.254413 master-0 kubenswrapper[3171]: I0216 17:14:09.254383 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:09.254466 master-0 kubenswrapper[3171]: I0216 17:14:09.254426 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:09.254466 master-0 kubenswrapper[3171]: I0216 17:14:09.254437 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:11.090299 master-0 kubenswrapper[3171]: E0216 17:14:11.090172 3171 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed 
to get node info: node \"master-0\" not found" Feb 16 17:14:11.120441 master-0 kubenswrapper[3171]: E0216 17:14:11.120311 3171 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 17:14:11.562835 master-0 kubenswrapper[3171]: I0216 17:14:11.562771 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:11.563150 master-0 kubenswrapper[3171]: I0216 17:14:11.562893 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:11.564054 master-0 kubenswrapper[3171]: I0216 17:14:11.563952 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:11.564054 master-0 kubenswrapper[3171]: I0216 17:14:11.564017 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:11.564054 master-0 kubenswrapper[3171]: I0216 17:14:11.564030 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:11.564631 master-0 kubenswrapper[3171]: I0216 17:14:11.564572 3171 scope.go:117] "RemoveContainer" containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" Feb 16 17:14:11.564922 master-0 kubenswrapper[3171]: E0216 17:14:11.564862 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:14:11.570513 master-0 kubenswrapper[3171]: I0216 17:14:11.570451 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:12.260803 master-0 kubenswrapper[3171]: I0216 17:14:12.260737 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:12.262201 master-0 kubenswrapper[3171]: I0216 17:14:12.262176 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:12.262360 master-0 kubenswrapper[3171]: I0216 17:14:12.262344 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:12.262485 master-0 kubenswrapper[3171]: I0216 17:14:12.262472 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:12.263182 master-0 kubenswrapper[3171]: I0216 17:14:12.263164 3171 scope.go:117] "RemoveContainer" containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" Feb 16 17:14:12.263619 master-0 kubenswrapper[3171]: E0216 17:14:12.263589 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:14:12.454316 master-0 kubenswrapper[3171]: I0216 17:14:12.454230 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:12.454640 master-0 kubenswrapper[3171]: I0216 17:14:12.454458 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:12.454857 master-0 kubenswrapper[3171]: I0216 17:14:12.454792 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:12.456121 master-0 kubenswrapper[3171]: I0216 17:14:12.456060 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:12.456281 master-0 kubenswrapper[3171]: I0216 17:14:12.456126 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:12.456281 master-0 kubenswrapper[3171]: I0216 17:14:12.456149 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:12.461647 master-0 kubenswrapper[3171]: I0216 17:14:12.461613 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:13.262795 master-0 kubenswrapper[3171]: I0216 17:14:13.262732 3171 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:13.263756 master-0 kubenswrapper[3171]: I0216 17:14:13.263699 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:13.263838 master-0 kubenswrapper[3171]: I0216 17:14:13.263763 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:13.263838 master-0 kubenswrapper[3171]: I0216 17:14:13.263783 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:14.012269 master-0 kubenswrapper[3171]: I0216 17:14:14.012174 3171 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:14.292015 master-0 kubenswrapper[3171]: E0216 17:14:14.291802 3171 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:14.300998 master-0 kubenswrapper[3171]: I0216 17:14:14.300925 3171 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:14.315383 master-0 kubenswrapper[3171]: E0216 17:14:14.315299 3171 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:14.315546 master-0 kubenswrapper[3171]: I0216 17:14:14.315513 3171 scope.go:117] "RemoveContainer" containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" Feb 16 17:14:14.315838 master-0 kubenswrapper[3171]: E0216 17:14:14.315798 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:14:14.941998 master-0 kubenswrapper[3171]: I0216 17:14:14.941864 3171 apiserver.go:52] "Watching apiserver" Feb 16 17:14:14.978405 master-0 kubenswrapper[3171]: I0216 17:14:14.978338 3171 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 17:14:14.981126 master-0 kubenswrapper[3171]: I0216 17:14:14.981046 3171 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-755d954778-lf4cb","openshift-etcd/etcd-master-0","openshift-kube-apiserver/installer-1-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6","openshift-network-node-identity/network-node-identity-hhcpr","openshift-etcd/installer-2-retry-1-master-0","openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc","openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2","openshift-network-diagnostics/network-check-target-vwvwx","openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b","openshift-monitoring/metrics-server-745bd8d89b-qr4zh","openshift-monitoring/prometheus-k8s-0","kube-system/bootstrap-kube-controller-manager-master-0","openshift-apiserver/apiserver-fc4bf7f79-tqnlw","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv","openshift-console-operator/console-operator-7777d5cc66-64vhv","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9","openshift-console/downloads-dcd7b7d95-dhhfh","openshift-kube-apiserver/installer-3-master-0","openshift-multus/multus-6r7wj","openshift-multus/multus-additional-cni-plugins-rjdlk","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm","openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl","openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf","openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd","openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx","openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48","openshift-machine-config-operator/machine-config-daemon-98q6v","openshift-multus/multus-admission-controller-6d678b8d67-5n9cl","openshift-network-operator/iptables-alerter-czzz2","openshift-ovn-kubernetes/ovnkube-node-flr86","assisted-installer/assisted-installer-controller-thhq2","openshift-dns-operator/dns-operator-86b8869b79-nhxlp","openshift-ingress-canary/ingress-canary-qqvg4","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k","openshift-marketplace/certified-operators-z69zq","openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz","openshift-dns/node-resolver-vfxj4","openshift-kube-apiserver/kube-apiserver-master-0","openshift-ingress/router-default-864ddd5f56-pm4rt","openshift-insights/insights-operator-cb4f7b4cf-6qrw5","openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw","openshift-network-operator/network-operator-6fcf4c966-6bmf9","openshift-cluster-node-tuning-operator/tuned-l5kbz","openshift-kube-controller-manager/installer-2-master-0","openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn","openshift-monitoring/alertmanager-main-0","openshift-monitoring/monitoring-plugin-555857f695-nlrnr","openshift-monitoring/prometheus-operator-7485d645b8-zxxwd","openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl","openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb","openshift-kube-apiserver/installer-4-master-0","openshift-marketplace/redhat-marketplace-4kd66","openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk","openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k","openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz","openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d","openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h","openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf","openshift-service-ca/service-ca-676cd8b9b5-cp9rb","openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6","openshift-machine-config-operator/machine-config-server-2ws9r","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v","openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq","openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9","openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2","openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd","openshift-kube-scheduler/installer-4-master-0","openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w","openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz","openshift-dns/dns-default-qcgxx","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf","openshift-monitoring/node-exporter-8256c","openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9","openshift-etcd/installer-2-master-0","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6","openshift-multus/network-metrics-daemon-279g6","openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5","openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp","openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8","openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk","openshift-marketplace/community-operators-7w4km","openshift-marketplace/redhat-operators-lnzfx"]
Feb 16 17:14:14.981541 master-0 kubenswrapper[3171]: I0216 17:14:14.981501 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:14.981646 master-0 kubenswrapper[3171]: I0216 17:14:14.981622 3171 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 17:14:14.982227 master-0 kubenswrapper[3171]: E0216 17:14:14.981634 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:14:14.983259 master-0 kubenswrapper[3171]: I0216 17:14:14.983218 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:14.983699 master-0 kubenswrapper[3171]: I0216 17:14:14.983648 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:14.983781 master-0 kubenswrapper[3171]: E0216 17:14:14.983361 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:14.984802 master-0 kubenswrapper[3171]: I0216 17:14:14.984692 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:14.984802 master-0 kubenswrapper[3171]: E0216 17:14:14.984779 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:14:14.985083 master-0 kubenswrapper[3171]: I0216 17:14:14.984839 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:14.985083 master-0 kubenswrapper[3171]: E0216 17:14:14.984881 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:14:14.985888 master-0 kubenswrapper[3171]: I0216 17:14:14.985848 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:14.986062 master-0 kubenswrapper[3171]: E0216 17:14:14.986009 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:14:14.986531 master-0 kubenswrapper[3171]: I0216 17:14:14.986479 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:14.986735 master-0 kubenswrapper[3171]: E0216 17:14:14.986682 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:14.986934 master-0 kubenswrapper[3171]: I0216 17:14:14.986905 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:14.987128 master-0 kubenswrapper[3171]: E0216 17:14:14.987026 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:14.987725 master-0 kubenswrapper[3171]: I0216 17:14:14.987147 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:14.987783 master-0 kubenswrapper[3171]: E0216 17:14:14.987756 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:14.987783 master-0 kubenswrapper[3171]: I0216 17:14:14.987650 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 17:14:14.987973 master-0 kubenswrapper[3171]: I0216 17:14:14.987687 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:14:14.988276 master-0 kubenswrapper[3171]: I0216 17:14:14.988252 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 17:14:14.988500 master-0 kubenswrapper[3171]: I0216 17:14:14.988455 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:14.988633 master-0 kubenswrapper[3171]: I0216 17:14:14.988596 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:14.989005 master-0 kubenswrapper[3171]: E0216 17:14:14.988670 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:14:14.989005 master-0 kubenswrapper[3171]: I0216 17:14:14.988487 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:14.989005 master-0 kubenswrapper[3171]: I0216 17:14:14.988815 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:14.989005 master-0 kubenswrapper[3171]: E0216 17:14:14.988862 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:14:14.989005 master-0 kubenswrapper[3171]: E0216 17:14:14.988937 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:14.989206 master-0 kubenswrapper[3171]: I0216 17:14:14.989006 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:14.989206 master-0 kubenswrapper[3171]: E0216 17:14:14.989067 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:14.990391 master-0 kubenswrapper[3171]: E0216 17:14:14.989817 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:14.990391 master-0 kubenswrapper[3171]: I0216 17:14:14.989900 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:14.990391 master-0 kubenswrapper[3171]: I0216 17:14:14.990184 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:14.990391 master-0 kubenswrapper[3171]: I0216 17:14:14.990210 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:14.990391 master-0 kubenswrapper[3171]: I0216 17:14:14.990024 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:14.990391 master-0 kubenswrapper[3171]: E0216 17:14:14.990281 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:14.990391 master-0 kubenswrapper[3171]: I0216 17:14:14.990301 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:14.990391 master-0 kubenswrapper[3171]: I0216 17:14:14.990330 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:14.990391 master-0 kubenswrapper[3171]: E0216 17:14:14.990343 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:14:14.990817 master-0 kubenswrapper[3171]: E0216 17:14:14.990397 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:14.990817 master-0 kubenswrapper[3171]: E0216 17:14:14.990416 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:14.990817 master-0 kubenswrapper[3171]: E0216 17:14:14.990458 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:14.991718 master-0 kubenswrapper[3171]: I0216 17:14:14.991250 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:14.991718 master-0 kubenswrapper[3171]: I0216 17:14:14.991420 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6r7wj" Feb 16 17:14:14.992130 master-0 kubenswrapper[3171]: I0216 17:14:14.992094 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:14.994193 master-0 kubenswrapper[3171]: I0216 17:14:14.992191 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:14.994193 master-0 kubenswrapper[3171]: E0216 17:14:14.992275 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:14:14.994193 master-0 kubenswrapper[3171]: I0216 17:14:14.992300 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:14.994193 master-0 kubenswrapper[3171]: E0216 17:14:14.992384 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:14:14.994193 master-0 kubenswrapper[3171]: I0216 17:14:14.993054 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:14.994193 master-0 kubenswrapper[3171]: I0216 17:14:14.993072 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:14.994193 master-0 kubenswrapper[3171]: E0216 17:14:14.993083 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:14:14.994193 master-0 kubenswrapper[3171]: E0216 17:14:14.993113 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:14:14.994193 master-0 kubenswrapper[3171]: I0216 17:14:14.993178 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:14.995079 master-0 kubenswrapper[3171]: I0216 17:14:14.994370 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:14.995079 master-0 kubenswrapper[3171]: I0216 17:14:14.994589 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" Feb 16 17:14:14.995079 master-0 kubenswrapper[3171]: I0216 17:14:14.995061 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:14.995168 master-0 kubenswrapper[3171]: I0216 17:14:14.995093 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:14.995168 master-0 kubenswrapper[3171]: E0216 17:14:14.995110 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:14:14.995168 master-0 kubenswrapper[3171]: E0216 17:14:14.995131 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:14:14.995251 master-0 kubenswrapper[3171]: I0216 17:14:14.995184 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:14.995251 master-0 kubenswrapper[3171]: E0216 17:14:14.995215 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:14:14.995339 master-0 kubenswrapper[3171]: I0216 17:14:14.995291 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:14.995489 master-0 kubenswrapper[3171]: I0216 17:14:14.995470 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:14.996239 master-0 kubenswrapper[3171]: I0216 17:14:14.996202 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 17:14:14.996296 master-0 kubenswrapper[3171]: I0216 17:14:14.996237 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 17:14:14.996296 master-0 kubenswrapper[3171]: I0216 17:14:14.996235 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:14:14.996358 master-0 kubenswrapper[3171]: I0216 17:14:14.996206 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:14:14.997008 master-0 kubenswrapper[3171]: I0216 17:14:14.996869 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:14:14.997008 master-0 kubenswrapper[3171]: I0216 17:14:14.996877 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 17:14:14.997008 master-0 kubenswrapper[3171]: I0216 17:14:14.996895 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 17:14:14.997008 master-0 kubenswrapper[3171]: I0216 17:14:14.996910 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:14:14.997008 master-0 kubenswrapper[3171]: I0216 17:14:14.996932 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 17:14:14.997148 master-0 kubenswrapper[3171]: I0216 17:14:14.997028 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:14.997148 master-0 kubenswrapper[3171]: I0216 17:14:14.997115 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:14:14.997148 master-0 kubenswrapper[3171]: I0216 17:14:14.997134 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:14:14.997231 master-0 kubenswrapper[3171]: E0216 17:14:14.997144 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:14.997342 master-0 kubenswrapper[3171]: I0216 17:14:14.997312 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 17:14:14.997378 master-0 kubenswrapper[3171]: I0216 17:14:14.997349 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:14:14.997409 master-0 kubenswrapper[3171]: I0216 17:14:14.997367 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 17:14:14.997479 master-0 kubenswrapper[3171]: I0216 17:14:14.997447 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 17:14:14.997479 master-0 kubenswrapper[3171]: I0216 17:14:14.997460 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 17:14:14.998346 master-0 kubenswrapper[3171]: I0216 17:14:14.997527 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:14.998346 master-0 kubenswrapper[3171]: E0216 17:14:14.997628 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:14:14.998346 master-0 kubenswrapper[3171]: I0216 17:14:14.997948 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:14:14.998346 master-0 kubenswrapper[3171]: I0216 17:14:14.998235 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:14:14.999737 master-0 kubenswrapper[3171]: I0216 17:14:14.999013 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:14:14.999737 master-0 kubenswrapper[3171]: I0216 17:14:14.999365 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 17:14:15.000122 master-0 kubenswrapper[3171]: I0216 17:14:14.999767 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 17:14:15.000221 master-0 kubenswrapper[3171]: I0216 17:14:15.000173 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:14:15.000386 master-0 kubenswrapper[3171]: I0216 17:14:15.000356 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 17:14:15.002053 master-0 kubenswrapper[3171]: I0216 17:14:15.001795 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:15.002053 master-0 kubenswrapper[3171]: E0216 17:14:15.002006 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:14:15.002292 master-0 kubenswrapper[3171]: I0216 17:14:15.002228 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:15.002678 master-0 kubenswrapper[3171]: I0216 17:14:15.002623 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:15.002793 master-0 kubenswrapper[3171]: E0216 17:14:15.002689 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:14:15.003830 master-0 kubenswrapper[3171]: I0216 17:14:15.003795 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:15.004100 master-0 kubenswrapper[3171]: E0216 17:14:15.004063 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:14:15.004349 master-0 kubenswrapper[3171]: I0216 17:14:15.004293 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 17:14:15.004574 master-0 kubenswrapper[3171]: I0216 17:14:15.004487 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:15.004675 master-0 kubenswrapper[3171]: I0216 17:14:15.004597 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:15.004675 master-0 kubenswrapper[3171]: E0216 17:14:15.004632 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:15.004806 master-0 kubenswrapper[3171]: I0216 17:14:15.004697 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:14:15.004806 master-0 kubenswrapper[3171]: I0216 17:14:15.004773 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:15.004934 master-0 kubenswrapper[3171]: I0216 17:14:15.004847 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 17:14:15.004934 master-0 kubenswrapper[3171]: I0216 17:14:15.004858 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 17:14:15.005093 master-0 kubenswrapper[3171]: I0216 17:14:15.004939 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 17:14:15.005093 master-0 kubenswrapper[3171]: I0216 17:14:15.005035 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 17:14:15.005329 master-0 kubenswrapper[3171]: E0216 17:14:15.005268 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:14:15.005491 master-0 kubenswrapper[3171]: I0216 17:14:15.005460 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:15.005566 master-0 kubenswrapper[3171]: I0216 17:14:15.005519 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 17:14:15.005566 master-0 kubenswrapper[3171]: E0216 17:14:15.005550 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:14:15.005710 master-0 kubenswrapper[3171]: I0216 17:14:15.005633 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:15.005710 master-0 kubenswrapper[3171]: I0216 17:14:15.005672 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:14:15.005837 master-0 kubenswrapper[3171]: E0216 17:14:15.005675 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:14:15.006592 master-0 kubenswrapper[3171]: I0216 17:14:15.006084 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 17:14:15.006592 master-0 kubenswrapper[3171]: I0216 17:14:15.006378 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:15.008763 master-0 kubenswrapper[3171]: E0216 17:14:15.006440 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:14:15.012280 master-0 kubenswrapper[3171]: I0216 17:14:15.011760 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:15.012280 master-0 kubenswrapper[3171]: E0216 17:14:15.011848 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:14:15.012675 master-0 kubenswrapper[3171]: I0216 17:14:15.012627 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:15.012945 master-0 kubenswrapper[3171]: E0216 17:14:15.012907 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:14:15.014485 master-0 kubenswrapper[3171]: I0216 17:14:15.014432 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 17:14:15.014673 master-0 kubenswrapper[3171]: I0216 17:14:15.014588 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b2561b-933b-4c58-a63a-7a8c671d0ae9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05aafc42942edec512935971aa649b31b51a37bb17ad3e45d4a47e5503a28fae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:03:27Z\\\"}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"marketplace-trusted-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"marketplace-operator-metrics\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kx9vc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-6cc5b65c6b-s4gp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.014757 master-0 kubenswrapper[3171]: I0216 17:14:15.014714 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:15.014757 master-0 kubenswrapper[3171]: I0216 17:14:15.014639 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:14:15.014823 master-0 kubenswrapper[3171]: I0216 17:14:15.014802 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:14:15.014855 master-0 kubenswrapper[3171]: E0216 17:14:15.014797 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:14:15.014949 master-0 kubenswrapper[3171]: I0216 17:14:15.014895 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:15.015101 master-0 kubenswrapper[3171]: E0216 17:14:15.015052 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:14:15.015689 master-0 kubenswrapper[3171]: I0216 17:14:15.015532 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:15.015784 master-0 kubenswrapper[3171]: E0216 17:14:15.015746 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:14:15.015850 master-0 kubenswrapper[3171]: I0216 17:14:15.015832 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:15.015931 master-0 kubenswrapper[3171]: E0216 17:14:15.015900 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:14:15.016932 master-0 kubenswrapper[3171]: I0216 17:14:15.016850 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:15.017099 master-0 kubenswrapper[3171]: E0216 17:14:15.017049 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:15.017648 master-0 kubenswrapper[3171]: I0216 17:14:15.017613 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:15.017713 master-0 kubenswrapper[3171]: E0216 17:14:15.017648 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:14:15.018175 master-0 kubenswrapper[3171]: I0216 17:14:15.018147 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:15.018220 master-0 kubenswrapper[3171]: I0216 17:14:15.018205 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:14:15.018261 master-0 kubenswrapper[3171]: I0216 17:14:15.018226 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:15.018379 master-0 kubenswrapper[3171]: E0216 17:14:15.018354 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:15.018562 master-0 kubenswrapper[3171]: E0216 17:14:15.018543 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:14:15.018829 master-0 kubenswrapper[3171]: I0216 17:14:15.018770 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:15.018829 master-0 kubenswrapper[3171]: I0216 17:14:15.018802 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:15.019007 master-0 kubenswrapper[3171]: E0216 17:14:15.018829 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:14:15.019213 master-0 kubenswrapper[3171]: I0216 17:14:15.019198 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:15.019375 master-0 kubenswrapper[3171]: E0216 17:14:15.019354 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:14:15.019600 master-0 kubenswrapper[3171]: I0216 17:14:15.019426 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:15.019702 master-0 kubenswrapper[3171]: E0216 17:14:15.019687 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:14:15.020349 master-0 kubenswrapper[3171]: I0216 17:14:15.020321 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 17:14:15.020621 master-0 kubenswrapper[3171]: I0216 17:14:15.020598 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:14:15.020672 master-0 kubenswrapper[3171]: I0216 17:14:15.020639 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t" Feb 16 17:14:15.020949 master-0 kubenswrapper[3171]: I0216 17:14:15.020934 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:14:15.021171 master-0 kubenswrapper[3171]: I0216 17:14:15.021159 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:14:15.024089 master-0 kubenswrapper[3171]: I0216 17:14:15.024070 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:15.024149 master-0 kubenswrapper[3171]: I0216 17:14:15.024084 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:15.024149 master-0 kubenswrapper[3171]: E0216 17:14:15.024109 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:14:15.024233 master-0 kubenswrapper[3171]: I0216 17:14:15.024157 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:15.024441 master-0 kubenswrapper[3171]: I0216 17:14:15.024417 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:15.024489 master-0 kubenswrapper[3171]: E0216 17:14:15.024459 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:14:15.025015 master-0 kubenswrapper[3171]: I0216 17:14:15.025001 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:15.025129 master-0 kubenswrapper[3171]: E0216 17:14:15.025111 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:15.025193 master-0 kubenswrapper[3171]: I0216 17:14:15.025028 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:15.025270 master-0 kubenswrapper[3171]: E0216 17:14:15.025257 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:14:15.025409 master-0 kubenswrapper[3171]: I0216 17:14:15.025396 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:14:15.025584 master-0 kubenswrapper[3171]: I0216 17:14:15.025416 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7" Feb 16 17:14:15.025679 master-0 kubenswrapper[3171]: I0216 17:14:15.025656 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:14:15.025755 master-0 kubenswrapper[3171]: I0216 17:14:15.025743 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:14:15.026077 master-0 kubenswrapper[3171]: I0216 17:14:15.026052 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:14:15.026353 master-0 kubenswrapper[3171]: I0216 17:14:15.026319 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:15.026353 master-0 kubenswrapper[3171]: I0216 17:14:15.026345 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:14:15.026433 master-0 kubenswrapper[3171]: E0216 17:14:15.026392 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:14:15.026573 master-0 kubenswrapper[3171]: I0216 17:14:15.026549 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:14:15.026636 master-0 kubenswrapper[3171]: I0216 17:14:15.026624 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 17:14:15.026824 master-0 kubenswrapper[3171]: I0216 17:14:15.026761 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18e9a9d3-9b18-4c19-9558-f33c68101922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"package-server-manager-serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bbcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bbcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-5c696dbdcd-qrrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.026904 master-0 kubenswrapper[3171]: I0216 17:14:15.026801 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2" Feb 16 17:14:15.026982 master-0 kubenswrapper[3171]: I0216 17:14:15.026955 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:14:15.027179 master-0 kubenswrapper[3171]: I0216 17:14:15.027150 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 17:14:15.027747 master-0 kubenswrapper[3171]: I0216 17:14:15.027645 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 17:14:15.027893 master-0 kubenswrapper[3171]: I0216 17:14:15.027868 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:15.028021 master-0 kubenswrapper[3171]: E0216 17:14:15.027919 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:14:15.028705 master-0 kubenswrapper[3171]: I0216 17:14:15.028005 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:14:15.028705 master-0 kubenswrapper[3171]: I0216 17:14:15.028157 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:15.028705 master-0 kubenswrapper[3171]: I0216 17:14:15.028158 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:14:15.028705 master-0 kubenswrapper[3171]: E0216 17:14:15.028193 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:14:15.029102 master-0 kubenswrapper[3171]: I0216 17:14:15.029054 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:15.029408 master-0 kubenswrapper[3171]: I0216 17:14:15.029394 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:15.029518 master-0 kubenswrapper[3171]: E0216 17:14:15.029502 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:14:15.030073 master-0 kubenswrapper[3171]: I0216 17:14:15.029722 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:15.030073 master-0 kubenswrapper[3171]: E0216 17:14:15.029796 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:15.030073 master-0 kubenswrapper[3171]: I0216 17:14:15.029815 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:15.030351 master-0 kubenswrapper[3171]: I0216 17:14:15.030326 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 17:14:15.030621 master-0 kubenswrapper[3171]: I0216 17:14:15.030600 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 17:14:15.031937 master-0 kubenswrapper[3171]: I0216 17:14:15.030799 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:15.031937 master-0 kubenswrapper[3171]: E0216 17:14:15.030880 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" Feb 16 17:14:15.031937 master-0 kubenswrapper[3171]: I0216 17:14:15.031260 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:15.031937 master-0 kubenswrapper[3171]: E0216 17:14:15.031314 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:14:15.031937 master-0 kubenswrapper[3171]: I0216 17:14:15.031648 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 17:14:15.031937 master-0 kubenswrapper[3171]: I0216 17:14:15.031840 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: I0216 17:14:15.032211 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: I0216 17:14:15.032362 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: I0216 17:14:15.032432 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: I0216 17:14:15.032437 3171 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: E0216 17:14:15.032465 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: I0216 17:14:15.032528 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: I0216 17:14:15.032632 3171 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: I0216 17:14:15.032644 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: E0216 17:14:15.032703 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: E0216 17:14:15.032774 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: I0216 17:14:15.033061 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:15.034041 master-0 kubenswrapper[3171]: E0216 17:14:15.033176 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:14:15.041630 master-0 kubenswrapper[3171]: I0216 17:14:15.041574 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-czzz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q46jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-czzz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.054750 master-0 kubenswrapper[3171]: I0216 17:14:15.054701 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"869cd4c8-bf00-427c-84f0-5c39517f2d27\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29521020-mtpvf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.067819 master-0 kubenswrapper[3171]: I0216 17:14:15.067734 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642e5115-b7f2-4561-bc6b-1a74b6d891c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://221dc64441e450195317a3ad8eacbbb293523d0726dbd96812217d44d6f1da31\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:07:35Z\\\",\\\"message\\\":\\\"ble\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"55d7d371-79df-4a0c-a2ed-4863db212bc0\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nI0216 
17:04:13.107502 1 controller.go:184] \\\\\\\"Finished reconciling control plane machine set\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"55d7d371-79df-4a0c-a2ed-4863db212bc0\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nI0216 17:04:40.552339 1 controller.go:170] \\\\\\\"Reconciling control plane machine set\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"3f4bf694-235e-4169-8db4-85da162166c6\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nI0216 17:04:40.552391 1 controller.go:178] \\\\\\\"No control plane machine set found, setting operator status available\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"3f4bf694-235e-4169-8db4-85da162166c6\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nI0216 17:04:40.552421 1 controller.go:184] \\\\\\\"Finished reconciling control plane machine set\\\\\\\" controller=\\\\\\\"controlplanemachineset\\\\\\\" reconcileID=\\\\\\\"3f4bf694-235e-4169-8db4-85da162166c6\\\\\\\" namespace=\\\\\\\"openshift-machine-api\\\\\\\" name=\\\\\\\"cluster\\\\\\\"\\\\nE0216 17:05:48.462035 1 leaderelection.go:429] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\\\\nE0216 17:06:48.463745 1 leaderelection.go:436] error retrieving resource lock openshift-machine-api/control-plane-machine-set-leader: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io control-plane-machine-set-leader)\\\\nI0216 17:07:01.458339 1 leaderelection.go:297] failed to renew lease openshift-machine-api/control-plane-machine-set-leader: timed out waiting for the condition\\\\nE0216 17:07:35.464275 1 leaderelection.go:322] Failed to release lock: Timeout: request did not complete within requested timeout - context deadline exceeded\\\\nE0216 17:07:35.464357 1 main.go:233] \\\\\\\"problem running manager\\\\\\\" err=\\\\\\\"leader election lost\\\\\\\" logger=\\\\\\\"setup\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:01:20Z\\\"}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/k8s-webhook-server/serving-certs\\\",\\\"name\\\":\\\"control-plane-machine-set-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzpnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-d8bf84b88-m66tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.069326 master-0 kubenswrapper[3171]: I0216 17:14:15.069295 3171 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 17:14:15.083124 master-0 kubenswrapper[3171]: I0216 17:14:15.083075 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/installer-2-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd\"/\"installer-2-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.093919 master-0 kubenswrapper[3171]: I0216 17:14:15.093864 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d1524fc1-d157-435a-8bf8-7e877c45909d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-samples-operator cluster-samples-operator-watch]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"samples-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e446723bbab96c4e4662ff058d5eccba72d0c36d26c7b8b3f07183fa49d3ab9\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-samples-operator-watch\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-samples-operator\"/\"cluster-samples-operator-f8cbff74c-spxm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.107545 master-0 kubenswrapper[3171]: I0216 17:14:15.107490 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b2561b-933b-4c58-a63a-7a8c671d0ae9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05aafc42942edec512935971aa649b31b51a37bb17ad3e45d4a47e5503a28fae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:03:27Z\\\"}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"marketplace-trusted-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"marketplace-operator-metrics\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kx9vc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-6cc5b65c6b-s4gp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.118075 master-0 kubenswrapper[3171]: I0216 17:14:15.118001 3171 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaf7edff-0a89-4ac0-b9dd-511e098b5434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://385e9821f6c9a23f9fd968b241bfd034b46ca6792159d22bc7ee49611730173e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:03:17Z\\\",\\\"message\\\":\\\"I0216 17:02:47.086293 1 cmd.go:253] Using service-serving-cert provided certificates\\\\nI0216 17:02:47.086390 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0216 17:02:47.087274 1 observer_polling.go:159] Starting file observer\\\\nW0216 17:02:47.089018 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler-operator/pods/openshift-kube-scheduler-operator-7485d55966-sgmpf\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nI0216 17:02:47.089308 1 builder.go:304] openshift-cluster-kube-scheduler-operator version 4.18.0-202601180343.p2.g971ffbb.assembly.stream.el9-971ffbb-971ffbb8239a8c49f1254e5fdaab854eed224f31\\\\nF0216 17:03:17.458108 1 cmd.go:182] failed checking apiserver connectivity: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-scheduler-operator/leases/openshift-cluster-kube-scheduler-operator-lock\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:02:46Z\\\"}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-7485d55966-sgmpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.130717 master-0 kubenswrapper[3171]: I0216 17:14:15.130651 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [tuned]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [tuned]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"tuned\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/modprobe.d\\\",\\\"name\\\":\\\"etc-modprobe-d\\\"},{\\\"mountPath\\\":\\\"/etc/sysconfig\\\",\\\"name\\\":\\\"etc-sysconfig\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/sysctl.d\\\",\\\"name\\\":\\\"etc-sysctl-d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/sysctl.conf\\\",\\\"name\\\":\\\"etc-sysctl-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd\\\",\\\"name\\\":\\\"etc-systemd\\\"},{\\\"mountPath\\\":\\\"/etc/tuned\\\",\\\"name\\\":\\\"etc-tuned\\\"},{\\\"mountPath\\\":\\\"/run\\\",\\\"name\\\":\\\"run\\\"},{\\\"mountPath\\\":\\\"/sys\\\",\\\"name\\\":\\\"sys\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn82n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-cluster-node-tuning-operator\"/\"tuned-l5kbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.141015 master-0 kubenswrapper[3171]: I0216 17:14:15.140919 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab80e0fb-09dd-4c93-b235-1487024105d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c3069013a087b4b128510ad9f826bdcec64055b56ce1f6796106b46734c14be\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:07:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:41Z\\\"}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-bb7ffbb8d-lzgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.154691 master-0 kubenswrapper[3171]: I0216 17:14:15.154560 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9609a4f3-b947-47af-a685-baae26c50fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b2ea1bb15f78693382433f2c7f09878ee2e059e95bab8649c9ca7870ea580187\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:02:50Z\\\",\\\"message\\\":\\\"it\\\\tmanager/runnable_group.go:226\\\\tAll workers finished\\\\t{\\\\\\\"controller\\\\\\\": \\\\\\\"certificate_controller\\\\\\\"}\\\\n2026-02-16T17:02:50.627Z\\\\tINFO\\\\toperator.init\\\\tmanager/runnable_group.go:226\\\\tAll workers finished\\\\t{\\\\\\\"controller\\\\\\\": \\\\\\\"status_controller\\\\\\\"}\\\\n2026-02-16T17:02:50.627Z\\\\tINFO\\\\toperator.init\\\\tmanager/runnable_group.go:226\\\\tAll workers finished\\\\t{\\\\\\\"controller\\\\\\\": \\\\\\\"gatewayapi_controller\\\\\\\"}\\\\n2026-02-16T17:02:50.627Z\\\\tINFO\\\\toperator.init\\\\tmanager/runnable_group.go:226\\\\tAll workers finished\\\\t{\\\\\\\"controller\\\\\\\": \\\\\\\"dns_controller\\\\\\\"}\\\\n2026-02-16T17:02:50.628Z\\\\tINFO\\\\toperator.init\\\\tmanager/runnable_group.go:226\\\\tAll workers finished\\\\t{\\\\\\\"controller\\\\\\\": \\\\\\\"clientca_configmap_controller\\\\\\\"}\\\\n2026-02-16T17:02:50.628Z\\\\tERROR\\\\toperator.init\\\\tcontroller/controller.go:263\\\\tReconciler error\\\\t{\\\\\\\"controller\\\\\\\": \\\\\\\"canary_controller\\\\\\\", \\\\\\\"object\\\\\\\": {\\\\\\\"name\\\\\\\":\\\\\\\"default\\\\\\\",\\\\\\\"namespace\\\\\\\":\\\\\\\"openshift-ingress-operator\\\\\\\"}, \\\\\\\"namespace\\\\\\\": \\\\\\\"openshift-ingress-operator\\\\\\\", \\\\\\\"name\\\\\\\": \\\\\\\"default\\\\\\\", \\\\\\\"reconcileID\\\\\\\": \\\\\\\"d4df3bb7-a93b-44d6-907e-6ced975abdeb\\\\\\\", \\\\\\\"error\\\\\\\": \\\\\\\"failed to ensure canary namespace: Get \\\\\\\\\\\\\\\"https://172.30.0.1:443/api/v1/namespaces/openshift-ingress-canary\\\\\\\\\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\\\\"}\\\\n2026-02-16T17:02:50.628Z\\\\tINFO\\\\toperator.init\\\\tmanager/runnable_group.go:226\\\\tAll workers finished\\\\t{\\\\\\\"controller\\\\\\\": \\\\\\\"canary_controller\\\\\\\"}\\\\n2026-02-16T17:02:50.628Z\\\\tINFO\\\\toperator.init\\\\truntime/asm_amd64.s:1695\\\\tStopping and waiting for caches\\\\n2026-02-16T17:02:50.629Z\\\\tINFO\\\\toperator.init\\\\truntime/asm_amd64.s:1695\\\\tStopping and waiting for webhooks\\\\n2026-02-16T17:02:50.629Z\\\\tINFO\\\\toperator.init\\\\truntime/asm_amd64.s:1695\\\\tStopping and waiting for HTTP servers\\\\n2026-02-16T17:02:50.629Z\\\\tINFO\\\\toperator.init.controller-runtime.metrics\\\\truntime/asm_amd64.s:1695\\\\tShutting down metrics server with timeout of 1 minute\\\\n2026-02-16T17:02:50.629Z\\\\tINFO\\\\toperator.init\\\\truntime/asm_amd64.s:1695\\\\tWait completed, proceeding to shutdown the manager\\\\n2026-02-16T17:02:50.634Z\\\\tERROR\\\\toperator.main\\\\tcobra/command.go:989\\\\terror starting\\\\t{\\\\\\\"error\\\\\\\": \\\\\\\"failed to wait for route_metrics_controller caches to sync: timed out waiting for cache to be synced for Kind 
*v1.Route\\\\\\\"}\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:49Z\\\"}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"trusted-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t24jh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"metrics-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t24jh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-c588d8cb4-wjr7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.163746 master-0 kubenswrapper[3171]: I0216 17:14:15.163672 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48801344-a48a-493e-aea4-19d998d0b708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/signing-key\\\",\\\"name\\\":\\\"signing-key\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/signing-cabundle\\\",\\\"name\\\":\\\"signing-cabundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqfds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-676cd8b9b5-cp9rb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.175154 master-0 kubenswrapper[3171]: I0216 17:14:15.175047 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c107344ed7506b61b6ef1a5ca57eedb7069a294a5c75b6d6c41f82bdc6b94c0\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:03:17Z\\\",\\\"message\\\":\\\"I0216 17:02:47.134974 1 cmd.go:253] Using service-serving-cert provided certificates\\\\nI0216 17:02:47.135332 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0216 17:02:47.136082 1 observer_polling.go:159] Starting file observer\\\\nW0216 17:02:47.138418 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://172.30.0.1:443/api/v1/namespaces/openshift-controller-manager-operator/pods/openshift-controller-manager-operator-5f5f84757d-ktmm9\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nI0216 17:02:47.138524 1 builder.go:304] openshift-controller-manager-operator version 4.18.0-202601170315.p2.gf1711cf.assembly.stream.el9-f1711cf-f1711cf30f683ec0eaa187cd5168caae9e8c1254\\\\nF0216 17:03:17.510520 1 cmd.go:182] failed checking apiserver connectivity: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-controller-manager-operator/leases/openshift-controller-manager-operator-lock\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:02:47Z\\\"}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dptnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-5f5f84757d-ktmm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.188059 master-0 kubenswrapper[3171]: I0216 17:14:15.187939 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3e071c-1c62-489b-91c1-aef0d197f40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://925f178f46a1d5c4c22dbeed05e4d6e9975a60d252305dcd17064d2bc8dfab6e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:36Z\\\"}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-ca\\\",\\\"name\\\":\\\"etcd-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-service-ca\\\",\\\"name\\\":\\\"etcd-service-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjd5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-67bf55ccdd-cppj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.201819 master-0 kubenswrapper[3171]: I0216 17:14:15.201628 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r7wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43f65f23-4ddd-471a-9cb3-b0945382d83c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r28x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-multus\"/\"multus-6r7wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.214213 master-0 kubenswrapper[3171]: I0216 17:14:15.214055 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-baremetal-operator baremetal-kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-baremetal-operator 
baremetal-kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"baremetal-kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/baremetal-kube-rbac-proxy\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"cluster-baremetal-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hh2cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e33dd133299981fac9e32c9766093c6f93957b6afe0a293539ddeda20c06cf82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:07:23Z\\\",\\\"message\\\":\\\"Put \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/cluster-baremetal-operator\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused, falling back to slow path\\\\nE0216 17:02:52.811646 1 leaderelection.go:347] error retrieving resource lock openshift-machine-api/cluster-baremetal-operator: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/cluster-baremetal-operator\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nE0216 17:03:18.810354 1 leaderelection.go:340] Failed to update lock optimitically: Put \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/cluster-baremetal-operator\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused, falling back to slow path\\\\nE0216 17:03:18.811443 1 leaderelection.go:347] error retrieving resource lock openshift-machine-api/cluster-baremetal-operator: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-machine-api/leases/cluster-baremetal-operator\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nE0216 17:05:36.840873 1 leaderelection.go:340] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\\\\nE0216 17:06:36.843386 1 leaderelection.go:347] error retrieving resource lock openshift-machine-api/cluster-baremetal-operator: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io cluster-baremetal-operator)\\\\nI0216 17:06:49.835687 1 leaderelection.go:285] failed to renew lease openshift-machine-api/cluster-baremetal-operator: timed out waiting for the condition\\\\nE0216 17:07:23.840281 1 leaderelection.go:308] Failed to release lock: Timeout: request did not complete within requested timeout - context 
deadline exceeded\\\\nE0216 17:07:23.840392 1 main.go:182] \\\\\\\"problem running manager\\\\\\\" err=\\\\\\\"leader election lost\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:01:34Z\\\"}},\\\"name\\\":\\\"cluster-baremetal-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/cluster-baremetal-operator/tls\\\",\\\"name\\\":\\\"cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cluster-baremetal-operator/images\\\",\\\"name\\\":\\\"images\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hh2cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"cluster-baremetal-operator-7bc947fc7d-4j7pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.225609 master-0 kubenswrapper[3171]: I0216 17:14:15.225524 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ee84198d-6357-4429-a90c-455c3850a788\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy cluster-autoscaler-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy cluster-autoscaler-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bd8adea550cbbaf16cb9409b31ec8b997320d247f9f30c80608ac1fbf9c7a07e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-autoscaler-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/cluster-autoscaler-operator/tls\\\",\\\"name\\\":\\\"cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbq2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"auth-proxy-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbq2b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"cluster-autoscaler-operator-67fd9768b5-zcwwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.239743 master-0 kubenswrapper[3171]: I0216 17:14:15.239622 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e51bba5-0ebe-4e55-a588-38b71548c605\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0f7e8be40545fa33b748eaa6f879efc2d956e86b6534dcac117b6e66db8cbc2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:00:38Z\\\",\\\"message\\\":\\\"W0216 17:00:36.951964 1 cmd.go:254] Using insecure, self-signed certificates\\\\nI0216 17:00:37.398061 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0216 17:00:37.398528 1 observer_polling.go:159] Starting file observer\\\\nI0216 17:00:37.398757 1 observer_polling.go:120] Observed file \\\\\\\"/var/run/secrets/serving-cert/tls.crt\\\\\\\" has been modified (old=\\\\\\\"\\\\\\\", new=\\\\\\\"cea7236380fe3d1e26a675a05c62b90785edabb12c807afacce753fe34967571\\\\\\\")\\\\nW0216 17:00:37.398818 1 builder.go:154] Restart triggered because of file /var/run/secrets/serving-cert/tls.crt was modified\\\\nI0216 17:00:37.398940 1 observer_polling.go:120] Observed file \\\\\\\"/var/run/secrets/serving-cert/tls.key\\\\\\\" has been modified (old=\\\\\\\"\\\\\\\", new=\\\\\\\"c6713a3284d7bad2ff24ad6d13ca0abdd10b29add0b86eba63a5a781af2d492c\\\\\\\")\\\\nW0216 17:00:37.399001 1 builder.go:266] unable to get owner reference (falling back to namespace): client rate limiter Wait returned an error: context canceled\\\\nI0216 17:00:37.399166 1 builder.go:298] cluster-olm-operator version 4.18.0-202601181212.p2.g88088e4.assembly.stream.el9-88088e4-88088e4bfe9f55ea7ab2c4331cebee727c8c0c34\\\\nF0216 17:00:37.837825 1 cmd.go:179] failed checking apiserver connectivity: client rate limiter Wait returned an error: context canceled\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:36Z\\\"}},\\\"name\\\":\\\"cluster-olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"cluster-olm-operator-serving-cert\\\"},{\\\"mountPath\\\":\\\"/operand-assets\\\",\\\"name\\\":\\\"operand-assets\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-olm-operator\"/\"cluster-olm-operator-55b69c6c48-7chjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.249995 master-0 kubenswrapper[3171]: I0216 17:14:15.249867 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39387549-c636-4bd4-b463-f6a93810f277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf00a7735d0ab343338acb080927ee517385e8abb1b426c1e996a640ce7fcbfa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:05:52Z\\\",\\\"message\\\":\\\"114\\\\\\\"\\\\nI0216 17:02:47.000452 1 reflector.go:430] \\\\\\\"Caches populated\\\\\\\" logger=\\\\\\\"controller-runtime.cache\\\\\\\" type=\\\\\\\"*v1.CertificateSigningRequest\\\\\\\" reflector=\\\\\\\"sigs.k8s.io/controller-runtime/pkg/cache/internal/informers.go:114\\\\\\\"\\\\nI0216 17:02:47.099566 1 controller.go:186] \\\\\\\"Starting Controller\\\\\\\" controller=\\\\\\\"certificatesigningrequest\\\\\\\" controllerGroup=\\\\\\\"certificates.k8s.io\\\\\\\" controllerKind=\\\\\\\"CertificateSigningRequest\\\\\\\"\\\\nI0216 17:02:47.099702 1 controller.go:195] \\\\\\\"Starting workers\\\\\\\" controller=\\\\\\\"certificatesigningrequest\\\\\\\" controllerGroup=\\\\\\\"certificates.k8s.io\\\\\\\" controllerKind=\\\\\\\"CertificateSigningRequest\\\\\\\" worker count=1\\\\nI0216 17:02:47.099996 1 approver.go:230] Finished syncing CSR csr-k9tkc for unknown node in 233.836µs\\\\nI0216 17:02:47.100051 1 approver.go:230] Finished syncing CSR csr-nngwc for unknown node in 21.42µs\\\\nE0216 17:05:22.048167 1 leaderelection.go:429] Failed to update lock optimistically: Put \\\\\\\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity?timeout=15s\\\\\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers), falling back to slow path\\\\nE0216 17:05:37.048182 1 leaderelection.go:436] error retrieving resource lock openshift-network-node-identity/ovnkube-identity: Get \\\\\\\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity?timeout=15s\\\\\\\": context deadline exceeded\\\\nI0216 17:05:37.048402 1 leaderelection.go:297] failed to renew lease openshift-network-node-identity/ovnkube-identity: context deadline exceeded\\\\nE0216 17:05:52.049619 1 leaderelection.go:322] Failed to release lock: Put \\\\\\\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/openshift-network-node-identity/leases/ovnkube-identity?timeout=15s\\\\\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\\\nF0216 17:05:52.049770 1 
ovnkubeidentity.go:309] error running approver: leader election lost\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:02:46Z\\\"}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vk7xl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vk7xl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-hhcpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.261199 master-0 kubenswrapper[3171]: I0216 17:14:15.261130 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7390ccc6-dfbe-4f51-960c-7628f49bffb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/audit\\\",\\\"name\\\":\\\"audit-policies\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-serving-ca\\\",\\\"name\\\":\\\"etcd-serving-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/encryption-config\\\",\\\"name\\\":\\\"encryption-config\\\"},{\\\"mountPath\\\":\\\"/var/log/oauth-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-66788cb45c-dp9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.268037 master-0 kubenswrapper[3171]: I0216 17:14:15.267852 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:15.268274 master-0 kubenswrapper[3171]: I0216 17:14:15.268124 3171 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="82ee9d18-ca32-47ad-aa23-0b0156fae5ae" Feb 16 17:14:15.268274 master-0 kubenswrapper[3171]: I0216 17:14:15.268152 3171 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="82ee9d18-ca32-47ad-aa23-0b0156fae5ae" Feb 16 17:14:15.268522 master-0 kubenswrapper[3171]: I0216 17:14:15.268471 3171 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.32.10:17697/healthz\": dial tcp 192.168.32.10:17697: connect: connection refused" start-of-body= Feb 16 17:14:15.268596 master-0 kubenswrapper[3171]: I0216 17:14:15.268527 3171 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.32.10:17697/healthz\": dial tcp 192.168.32.10:17697: connect: connection refused" Feb 16 17:14:15.268869 master-0 kubenswrapper[3171]: I0216 17:14:15.268827 3171 scope.go:117] "RemoveContainer" containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" Feb 16 17:14:15.271857 master-0 kubenswrapper[3171]: I0216 17:14:15.271751 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/installer-1-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c571b6-0f65-41f0-b1be-f63d7a974782\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod 
\"openshift-kube-apiserver\"/\"installer-1-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.275388 master-0 kubenswrapper[3171]: I0216 17:14:15.274698 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:15.279510 master-0 kubenswrapper[3171]: I0216 17:14:15.279396 3171 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:14:15.283316 master-0 kubenswrapper[3171]: I0216 17:14:15.283162 3171 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:15.284163 master-0 kubenswrapper[3171]: I0216 17:14:15.284106 3171 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:14:15.289613 master-0 kubenswrapper[3171]: I0216 17:14:15.289523 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4549ea98-7379-49e1-8452-5efb643137ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01bf42c6c3bf4f293fd2294a37aff703b4c469002ae6a87f7c50eefa7c6ae11b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:01:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T16:59:00Z\\\"}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zt8mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-6fcf4c966-6bmf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.299426 master-0 kubenswrapper[3171]: I0216 17:14:15.299377 3171 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:14:15.306891 master-0 kubenswrapper[3171]: I0216 17:14:15.306789 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e623376-9e14-4341-9dcf-7a7c218b6f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d742ee8f3ff4d437ae51d12ae2509ff6a091914c30d3aa55203939de62735fd\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:03:16Z\\\",\\\"message\\\":\\\"I0216 17:02:46.687744 1 cmd.go:253] Using service-serving-cert provided certificates\\\\nI0216 17:02:46.687998 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0216 17:02:46.688555 1 observer_polling.go:159] Starting file observer\\\\nW0216 17:02:46.691085 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/pods\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nI0216 17:02:46.691299 1 builder.go:304] openshift-kube-storage-version-migrator-operator version 4.18.0-202601170513.p2.g59ba356.assembly.stream.el9-59ba356-59ba356f50ea3128905ffdb7137f868aa0588bab\\\\nF0216 17:03:16.926685 1 cmd.go:182] failed checking apiserver connectivity: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-storage-version-migrator-operator/leases/openshift-kube-storage-version-migrator-operator-lock\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:02:46Z\\\"}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-cd5474998-829l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.320433 master-0 kubenswrapper[3171]: I0216 17:14:15.320333 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"442600dc-09b2-4fee-9f89-777296b2ee40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eecec016977d0933f995cec094efa7991dea3fd076989159458e21d05f18d3bb\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:03:21Z\\\",\\\"message\\\":\\\"I0216 17:02:51.244593 1 cmd.go:253] Using service-serving-cert provided certificates\\\\nI0216 17:02:51.244709 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0216 17:02:51.245094 1 observer_polling.go:159] Starting file observer\\\\nW0216 17:02:51.246789 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/pods/kube-controller-manager-operator-78ff47c7c5-txr5k\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nI0216 17:02:51.246995 1 builder.go:304] kube-controller-manager-operator version 4.18.0-202601161039.p2.g9a9a437.assembly.stream.el9-9a9a437-9a9a437f2342dc9bba7844298b955d5f9bbc76bb\\\\nF0216 17:03:21.631034 1 cmd.go:182] failed checking apiserver connectivity: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager-operator/leases/kube-controller-manager-operator-lock\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:02:51Z\\\"}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-78ff47c7c5-txr5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.331891 master-0 kubenswrapper[3171]: I0216 17:14:15.331790 3171 status_manager.go:875] "Failed to update status for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67aa7027-5cfd-41e1-9f0a-cb3a00bd09ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:51Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:51Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:13:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"ssl-certs-host\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/secrets\\\",\\\"name\\\":\\\"secrets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/cloud\\\",\\\"name\\\":\\\"etc-kubernetes-cloud\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/config\\\",\\\"name\\\":\\\"config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/log/bootstrap-control-plane\\\",\\\"name\\\":\\\"logs\\\"}]},{\\\"containerID\\\":\\\"cri-o://d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:14:04Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:13:54Z\\\"}},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"ssl-certs-host\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/secrets\\\",\\\"name\\\":\\\"secrets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/cloud\\\",\\\"name\\\":\\\"etc-kubernetes-cloud\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/config\\\",\\\"name\\\":\\\"config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/log/bootstrap-control-plane\\\",\\\"name\\\":\\\"logs\\\"}]}],\\\"startTime\\\":\\\"2026-02-16T17:13:51Z\\\"}}\" for pod \"kube-system\"/\"bootstrap-kube-controller-manager-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.341006 master-0 kubenswrapper[3171]: I0216 17:14:15.340931 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295dd2cc-4b35-40bc-959c-aa8ad90fc453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:53Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:54Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:54Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:13:51Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9cac97f2a7ed5b660f1fe9defbb77e1823cc7917bedfcd9a1ee2cf3d27a5413c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":5,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:13:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a2626c4f0d02f1b08ed65857f9aece164559757f234922a3555b40b68623d959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1
d61f8c89a95a111f0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2626c4f0d02f1b08ed65857f9aece164559757f234922a3555b40b68623d959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:13:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:13:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"startTime\\\":\\\"2026-02-16T17:13:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.352136 master-0 kubenswrapper[3171]: I0216 17:14:15.352054 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8fa44b1ac9949e31fd12e8a885f114d1074a93f335ef9c428586ae9835e14643\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:01:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/service-ca-bundle\\\",\\\"name\\\":\\\"service-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f42cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod 
\"openshift-authentication-operator\"/\"authentication-operator-755d954778-lf4cb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.372648 master-0 kubenswrapper[3171]: I0216 17:14:15.372579 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62fc29f4-557f-4a75-8b78-6ca425c81b81\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator graceful-termination]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [migrator graceful-termination]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"graceful-termination\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs597\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:240701090a5f8e40d4b88fa200cf63dffb11a8e2eae713cf3c629b016c2823b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"migrator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bs597\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator\"/\"migrator-5bd989df77-gcfg6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.414892 master-0 kubenswrapper[3171]: I0216 17:14:15.414810 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-node-tuning-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"node-tuning-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/apiserver.local.config/certificates\\\",\\\"name\\\":\\\"apiservice-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gq8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-node-tuning-operator\"/\"cluster-node-tuning-operator-ff6c9b66-6j4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.454461 master-0 kubenswrapper[3171]: I0216 17:14:15.454255 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e69d8c51-e2a6-4f61-9c26-072784f6cf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a650093628feaa4193c1b7c57ea685e55d5af706446f54a283f32836e6d703a9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:00:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:24Z\\\"}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/available-featuregates\\\",\\\"name\\\":\\\"available-featuregates\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr8t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-7c6bdb986f-v8dr8\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.495392 master-0 kubenswrapper[3171]: I0216 17:14:15.495278 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-279g6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad805251-19d0-4d2f-b741-7d11158f1f03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnnc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnnc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-279g6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.534242 master-0 kubenswrapper[3171]: I0216 17:14:15.534118 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/dns-default-qcgxx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d96ccdc-0b09-437d-bfca-1958af5d9953\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ea13b0cbfe9be0d3d7ea80d50e512af6a453921a553c7c79b566530142b611b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/coredns\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"metrics-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl5w2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns\"/\"dns-default-qcgxx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.574974 master-0 kubenswrapper[3171]: I0216 17:14:15.574793 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d020c902-2adb-4919-8dd9-0c2109830580\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d371c36e93606a6be62a05a6e38d4e0131418dc0eaea65b286323f5ff81944ef\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:03:16Z\\\",\\\"message\\\":\\\"I0216 17:02:46.729243 1 cmd.go:253] Using service-serving-cert provided certificates\\\\nI0216 17:02:46.729359 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0216 17:02:46.730775 1 observer_polling.go:159] Starting file observer\\\\nW0216 17:02:46.731387 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver-operator/pods/kube-apiserver-operator-54984b6678-gp8gv\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nI0216 17:02:46.731692 1 builder.go:304] kube-apiserver-operator version 4.18.0-202601171144.p2.g416eeae.assembly.stream.el9-416eeae-416eeae4e60970f5ab52a833774c7bb60644e6af\\\\nF0216 17:03:16.968622 1 cmd.go:182] failed checking apiserver connectivity: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-apiserver-operator/leases/kube-apiserver-operator-lock\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:02:46Z\\\"}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-54984b6678-gp8gv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.613056 master-0 kubenswrapper[3171]: I0216 17:14:15.612876 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"etc-ssl-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cvo/updatepayloads\\\",\\\"name\\\":\\\"etc-cvo-updatepayloads\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/service-ca\\\",\\\"name\\\":\\\"service-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-649c4f5445-vt6wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.654574 master-0 kubenswrapper[3171]: I0216 17:14:15.654448 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"737fcc7d-d850-4352-9f17-383c85d5bc28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1110ab99d776a3d68ff736e046cffc3c590f742752e3080d6ce45308e9fb665f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:03:17Z\\\",\\\"message\\\":\\\"I0216 17:02:47.107066 1 cmd.go:253] Using service-serving-cert provided certificates\\\\nI0216 17:02:47.107189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0216 17:02:47.107802 1 observer_polling.go:159] Starting file observer\\\\nW0216 17:02:47.110551 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://172.30.0.1:443/api/v1/namespaces/openshift-apiserver-operator/pods\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nI0216 17:02:47.110685 1 builder.go:304] openshift-apiserver-operator version -\\\\nF0216 17:03:17.452255 1 cmd.go:182] failed checking apiserver connectivity: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-apiserver-operator/leases/openshift-apiserver-operator-lock\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:02:47Z\\\"}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dpp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-6d4655d9cf-qhn9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.699253 master-0 kubenswrapper[3171]: I0216 17:14:15.699130 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5760f1-b2e0-4138-9383-e4827154ac50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5qxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjdlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.733577 master-0 kubenswrapper[3171]: I0216 17:14:15.733303 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80d3b238-70c3-4e71-96a1-99405352033f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [snapshot-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [snapshot-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a26b20d3ef7b75aeb05acf9be2702f9d478822c43f679ff578811843692b960c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7573bf948e4a5ccb81f3214838cf4ecabd14ac4f2c4a11558ad134016b1c1851\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:07:10Z\\\",\\\"message\\\":\\\" dial tcp 172.30.0.1:443: connect: connection refused, falling back to slow path\\\\nE0216 17:03:13.820845 1 leaderelection.go:436] error retrieving resource lock openshift-cluster-storage-operator/snapshot-controller-leader: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-cluster-storage-operator/leases/snapshot-controller-leader\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nI0216 17:04:04.208880 1 reflector.go:368] Caches populated for *v1.VolumeSnapshotContent from github.com/kubernetes-csi/external-snapshotter/client/v8/informers/externalversions/factory.go:142\\\\nI0216 17:04:30.569672 1 reflector.go:368] Caches populated for *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:160\\\\nI0216 17:04:49.042541 1 reflector.go:368] Caches populated for *v1.VolumeSnapshot from github.com/kubernetes-csi/external-snapshotter/client/v8/informers/externalversions/factory.go:142\\\\nI0216 17:04:49.307195 1 reflector.go:368] Caches populated for *v1.VolumeSnapshotClass from 
github.com/kubernetes-csi/external-snapshotter/client/v8/informers/externalversions/factory.go:142\\\\nI0216 17:05:12.636246 1 reflector.go:368] Caches populated for *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:160\\\\nE0216 17:05:57.854009 1 leaderelection.go:429] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\\\\nE0216 17:06:57.855489 1 leaderelection.go:436] error retrieving resource lock openshift-cluster-storage-operator/snapshot-controller-leader: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io snapshot-controller-leader)\\\\nI0216 17:07:10.851633 1 leaderelection.go:297] failed to renew lease openshift-cluster-storage-operator/snapshot-controller-leader: timed out waiting for the condition\\\\nE0216 17:07:10.851722 1 leader_election.go:188] \\\\\\\"Stopped leading\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:36Z\\\"}},\\\"name\\\":\\\"snapshot-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxbdv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"csi-snapshot-controller-74b6595c6d-pfzq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.771522 master-0 kubenswrapper[3171]: I0216 17:14:15.771428 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62220aa5-4065-472c-8a17-c0a58942ab8a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/srv-cert\\\",\\\"name\\\":\\\"srv-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/profile-collector-cert\\\",\\\"name\\\":\\\"profile-collector-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xtk9h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"olm-operator-6b56bd877c-p7k2k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.811546 master-0 kubenswrapper[3171]: I0216 17:14:15.811429 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff68421-1741-41c1-93d5-5c722dfd295e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6rwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-7d8f4c8c66-qjq9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.855674 master-0 kubenswrapper[3171]: I0216 17:14:15.855531 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c303189e-adae-4fe2-8dd7-cc9b80f73e66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2s8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-vwvwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.894573 master-0 kubenswrapper[3171]: I0216 17:14:15.894456 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [router]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6b318889972c37662382a2905888bb3f1cfd71a433b6afa3504cc12f3c6fa6eb\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"router\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/private\\\",\\\"name\\\":\\\"default-certificate\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/service-ca\\\",\\\"name\\\":\\\"service-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/haproxy/conf/metrics-auth\\\",\\\"name\\\":\\\"stats-auth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-certs\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94kdz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-ingress\"/\"router-default-864ddd5f56-pm4rt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.930092 master-0 kubenswrapper[3171]: I0216 17:14:15.930016 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544c6815-81d7-422a-9e4a-5fcbfabe8da8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator-admission-webhook]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator-admission-webhook]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prometheus-operator-admission-webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"tls-certificates\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"prometheus-operator-admission-webhook-695b766898-h94zg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:15.944156 master-0 kubenswrapper[3171]: I0216 17:14:15.944083 3171 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:15.998728 master-0 kubenswrapper[3171]: I0216 17:14:15.998490 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3bb3c46533b24f1a6a6669117dc888ed8f0c7ae56b34068a4ff2052335e34c4e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://11df536ab46de7aea5d67794ede57f343d242c66232b1933e38f8621505f15f7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:07:07Z\\\",\\\"message\\\":\\\"Catalog\\\\\\\" ClusterCatalog=\\\\\\\"openshift-redhat-marketplace\\\\\\\" namespace=\\\\\\\"\\\\\\\" name=\\\\\\\"openshift-redhat-marketplace\\\\\\\" reconcileID=\\\\\\\"da5bb8b4-e59e-4f88-9c6a-a81d36569463\\\\\\\"\\\\nI0216 17:04:19.356513 1 clustercatalog_controller.go:86] \\\\\\\"reconcile starting\\\\\\\" logger=\\\\\\\"catalogd-controller\\\\\\\" controller=\\\\\\\"clustercatalog\\\\\\\" controllerGroup=\\\\\\\"olm.operatorframework.io\\\\\\\" controllerKind=\\\\\\\"ClusterCatalog\\\\\\\" ClusterCatalog=\\\\\\\"openshift-redhat-operators\\\\\\\" namespace=\\\\\\\"\\\\\\\" name=\\\\\\\"openshift-redhat-operators\\\\\\\" reconcileID=\\\\\\\"4da0044e-9972-4ca5-9437-76cc972a36cb\\\\\\\"\\\\nI0216 17:04:19.356663 1 clustercatalog_controller.go:134] \\\\\\\"reconcile ending\\\\\\\" logger=\\\\\\\"catalogd-controller\\\\\\\" controller=\\\\\\\"clustercatalog\\\\\\\" controllerGroup=\\\\\\\"olm.operatorframework.io\\\\\\\" controllerKind=\\\\\\\"ClusterCatalog\\\\\\\" ClusterCatalog=\\\\\\\"openshift-redhat-operators\\\\\\\" namespace=\\\\\\\"\\\\\\\" name=\\\\\\\"openshift-redhat-operators\\\\\\\" reconcileID=\\\\\\\"4da0044e-9972-4ca5-9437-76cc972a36cb\\\\\\\"\\\\nI0216 17:04:52.099501 1 reflector.go:368] Caches populated for *v1.Secret from pkg/cache/internal/informers.go:106\\\\nI0216 17:04:52.099995 1 pull_secret_controller.go:94] \\\\\\\"saved global pull secret data locally\\\\\\\" controller=\\\\\\\"secret\\\\\\\" controllerGroup=\\\\\\\"\\\\\\\" controllerKind=\\\\\\\"Secret\\\\\\\" Secret=\\\\\\\"openshift-config/pull-secret\\\\\\\" namespace=\\\\\\\"openshift-config\\\\\\\" name=\\\\\\\"pull-secret\\\\\\\" reconcileID=\\\\\\\"ad91aac7-7b51-4778-82ae-402c2e39a703\\\\\\\"\\\\nE0216 17:05:54.941249 1 leaderelection.go:429] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\\\\nE0216 17:06:54.942332 1 leaderelection.go:436] error retrieving resource lock openshift-catalogd/catalogd-operator-lock: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io catalogd-operator-lock)\\\\nI0216 17:07:07.938126 1 leaderelection.go:297] failed to renew lease openshift-catalogd/catalogd-operator-lock: timed out waiting for the condition\\\\nE0216 17:07:07.938235 1 main.go:351] \\\\\\\"problem running manager\\\\\\\" err=\\\\\\\"leader election lost\\\\\\\" 
logger=\\\\\\\"setup\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:59Z\\\"}},\\\"name\\\":\\\"manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cache/\\\",\\\"name\\\":\\\"cache\\\"},{\\\"mountPath\\\":\\\"/var/certs\\\",\\\"name\\\":\\\"catalogserver-certs\\\"},{\\\"mountPath\\\":\\\"/var/ca-certs\\\",\\\"name\\\":\\\"ca-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/containers\\\",\\\"name\\\":\\\"etc-containers\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/docker\\\",\\\"name\\\":\\\"etc-docker\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7p9ld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-catalogd\"/\"catalogd-controller-manager-67bc7c997f-mn6cr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.034416 master-0 kubenswrapper[3171]: I0216 17:14:16.034285 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-monitoring-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-monitoring-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-monitoring-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"cluster-monitoring-operator-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cluster-monitoring-operator/telemetry\\\",\\\"name\\\":\\\"telemetry-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7w67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"cluster-monitoring-operator-756d64c8c4-ln4wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.070932 master-0 kubenswrapper[3171]: I0216 17:14:16.070844 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"970d4376-f299-412c-a8ee-90aa980c689e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-snapshot-controller-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqstc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"csi-snapshot-controller-operator-7b87b97578-q55rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.108625 master-0 kubenswrapper[3171]: I0216 17:14:16.108542 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vfxj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m29g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-dns\"/\"node-resolver-vfxj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.132488 master-0 kubenswrapper[3171]: I0216 17:14:16.132434 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:16.132679 master-0 kubenswrapper[3171]: E0216 17:14:16.132635 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:16.132778 master-0 kubenswrapper[3171]: I0216 17:14:16.132753 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.132922 master-0 kubenswrapper[3171]: E0216 17:14:16.132886 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:16.133070 master-0 kubenswrapper[3171]: I0216 17:14:16.133031 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.133293 master-0 kubenswrapper[3171]: E0216 17:14:16.133258 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:14:16.133403 master-0 kubenswrapper[3171]: I0216 17:14:16.133372 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:16.133573 master-0 kubenswrapper[3171]: E0216 17:14:16.133532 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:16.133686 master-0 kubenswrapper[3171]: I0216 17:14:16.133661 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:16.133865 master-0 kubenswrapper[3171]: E0216 17:14:16.133831 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:16.133975 master-0 kubenswrapper[3171]: I0216 17:14:16.133940 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:16.134151 master-0 kubenswrapper[3171]: E0216 17:14:16.134122 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:16.134238 master-0 kubenswrapper[3171]: I0216 17:14:16.134215 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:16.134381 master-0 kubenswrapper[3171]: E0216 17:14:16.134352 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:16.134469 master-0 kubenswrapper[3171]: I0216 17:14:16.134449 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:16.134615 master-0 kubenswrapper[3171]: E0216 17:14:16.134589 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:16.134700 master-0 kubenswrapper[3171]: I0216 17:14:16.134681 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:16.134849 master-0 kubenswrapper[3171]: E0216 17:14:16.134823 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:16.134898 master-0 kubenswrapper[3171]: I0216 17:14:16.134863 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.134942 master-0 kubenswrapper[3171]: I0216 17:14:16.134915 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:16.135064 master-0 kubenswrapper[3171]: E0216 17:14:16.135035 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:16.135064 master-0 kubenswrapper[3171]: I0216 17:14:16.135056 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:16.135130 master-0 kubenswrapper[3171]: I0216 17:14:16.135110 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:16.135130 master-0 kubenswrapper[3171]: I0216 17:14:16.135124 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:16.135263 master-0 kubenswrapper[3171]: E0216 17:14:16.135232 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:16.135319 master-0 kubenswrapper[3171]: I0216 17:14:16.135304 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:16.135379 master-0 kubenswrapper[3171]: I0216 17:14:16.135360 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.135428 master-0 kubenswrapper[3171]: I0216 17:14:16.135412 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:16.139299 master-0 kubenswrapper[3171]: I0216 17:14:16.139239 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:16.139299 master-0 kubenswrapper[3171]: I0216 17:14:16.139281 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:16.139558 master-0 kubenswrapper[3171]: E0216 17:14:16.135522 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:16.139558 master-0 kubenswrapper[3171]: E0216 17:14:16.139519 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:16.139668 master-0 kubenswrapper[3171]: E0216 17:14:16.139585 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:14:16.139668 master-0 kubenswrapper[3171]: E0216 17:14:16.139627 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:16.139668 master-0 kubenswrapper[3171]: E0216 17:14:16.139651 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:16.139771 master-0 kubenswrapper[3171]: E0216 17:14:16.139669 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:16.139771 master-0 kubenswrapper[3171]: E0216 17:14:16.139710 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:16.139771 master-0 kubenswrapper[3171]: E0216 17:14:16.139736 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:16.193654 master-0 kubenswrapper[3171]: I0216 17:14:16.193397 3171 trace.go:236] Trace[362177176]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:13:58.193) (total time: 18000ms): Feb 16 17:14:16.193654 master-0 kubenswrapper[3171]: Trace[362177176]: ---"Objects listed" error: 17999ms (17:14:16.193) Feb 16 17:14:16.193654 master-0 kubenswrapper[3171]: Trace[362177176]: [18.000262388s] [18.000262388s] END Feb 16 17:14:16.193654 master-0 kubenswrapper[3171]: I0216 17:14:16.193441 3171 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:16.204925 master-0 kubenswrapper[3171]: I0216 17:14:16.204850 3171 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 17:14:16.232339 master-0 kubenswrapper[3171]: I0216 17:14:16.232046 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54f29618-42c2-4270-9af7-7d82852d7cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [manager kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [manager kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4wht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af4f06c8656e24dc76c11a21937d73b5e139ad31b06bedcdf3957bacba32069a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:07:07Z\\\",\\\"message\\\":\\\"ce\\\\\\\" reconcileID=\\\\\\\"506ce77f-b541-475d-998b-93273b0808c5\\\\\\\"\\\\nI0216 17:04:24.839150 1 clustercatalog_controller.go:83] \\\\\\\"reconcile ending\\\\\\\" logger=\\\\\\\"cluster-catalog\\\\\\\" controller=\\\\\\\"clustercatalog\\\\\\\" controllerGroup=\\\\\\\"olm.operatorframework.io\\\\\\\" controllerKind=\\\\\\\"ClusterCatalog\\\\\\\" ClusterCatalog=\\\\\\\"openshift-redhat-marketplace\\\\\\\" namespace=\\\\\\\"\\\\\\\" name=\\\\\\\"openshift-redhat-marketplace\\\\\\\" reconcileID=\\\\\\\"506ce77f-b541-475d-998b-93273b0808c5\\\\\\\"\\\\nI0216 17:04:24.839163 1 clustercatalog_controller.go:54] \\\\\\\"reconcile starting\\\\\\\" logger=\\\\\\\"cluster-catalog\\\\\\\" controller=\\\\\\\"clustercatalog\\\\\\\" controllerGroup=\\\\\\\"olm.operatorframework.io\\\\\\\" controllerKind=\\\\\\\"ClusterCatalog\\\\\\\" ClusterCatalog=\\\\\\\"openshift-redhat-operators\\\\\\\" namespace=\\\\\\\"\\\\\\\" name=\\\\\\\"openshift-redhat-operators\\\\\\\" reconcileID=\\\\\\\"383448df-4543-44fe-a77c-648863f0d9fa\\\\\\\"\\\\nI0216 17:04:24.839175 1 clustercatalog_controller.go:83] \\\\\\\"reconcile ending\\\\\\\" logger=\\\\\\\"cluster-catalog\\\\\\\" controller=\\\\\\\"clustercatalog\\\\\\\" controllerGroup=\\\\\\\"olm.operatorframework.io\\\\\\\" controllerKind=\\\\\\\"ClusterCatalog\\\\\\\" ClusterCatalog=\\\\\\\"openshift-redhat-operators\\\\\\\" namespace=\\\\\\\"\\\\\\\" name=\\\\\\\"openshift-redhat-operators\\\\\\\" reconcileID=\\\\\\\"383448df-4543-44fe-a77c-648863f0d9fa\\\\\\\"\\\\nI0216 17:04:44.957984 1 reflector.go:368] Caches populated for *v1.ClusterExtension from pkg/cache/internal/informers.go:106\\\\nE0216 17:05:54.852661 1 leaderelection.go:429] Failed to update lock optimitically: Timeout: request did not complete within requested timeout - context deadline exceeded, falling back to slow path\\\\nE0216 17:06:54.854974 1 leaderelection.go:436] error retrieving resource lock openshift-operator-controller/9c4404e7.operatorframework.io: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io 9c4404e7.operatorframework.io)\\\\nI0216 17:07:07.848155 1 leaderelection.go:297] failed to renew lease openshift-operator-controller/9c4404e7.operatorframework.io: timed out waiting for the condition\\\\nE0216 17:07:07.848284 1 main.go:362] \\\\\\\"problem running manager\\\\\\\" err=\\\\\\\"leader election lost\\\\\\\" 
logger=\\\\\\\"setup\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:00:59Z\\\"}},\\\"name\\\":\\\"manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cache\\\",\\\"name\\\":\\\"cache\\\"},{\\\"mountPath\\\":\\\"/var/ca-certs\\\",\\\"name\\\":\\\"ca-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/containers\\\",\\\"name\\\":\\\"etc-containers\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/docker\\\",\\\"name\\\":\\\"etc-docker\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4wht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-controller\"/\"operator-controller-controller-manager-85c9b89969-lj58b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.264894 master-0 kubenswrapper[3171]: I0216 17:14:16.264765 3171 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:16.273022 master-0 kubenswrapper[3171]: I0216 17:14:16.272980 3171 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="82ee9d18-ca32-47ad-aa23-0b0156fae5ae" Feb 16 17:14:16.273022 master-0 kubenswrapper[3171]: I0216 17:14:16.273011 3171 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="82ee9d18-ca32-47ad-aa23-0b0156fae5ae" Feb 16 17:14:16.273288 master-0 kubenswrapper[3171]: I0216 17:14:16.273258 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123"} Feb 16 17:14:16.273821 master-0 kubenswrapper[3171]: I0216 17:14:16.273789 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:16.292370 master-0 kubenswrapper[3171]: I0216 17:14:16.292290 3171 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 16 17:14:16.292630 master-0 kubenswrapper[3171]: I0216 17:14:16.292594 3171 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 17:14:16.294206 master-0 kubenswrapper[3171]: I0216 17:14:16.294160 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:16.294311 master-0 kubenswrapper[3171]: I0216 17:14:16.294199 3171 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:16Z","lastTransitionTime":"2026-02-16T17:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:14:16.304915 master-0 kubenswrapper[3171]: E0216 17:14:16.304844 3171 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.305693 master-0 kubenswrapper[3171]: I0216 17:14:16.305501 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.305693 master-0 kubenswrapper[3171]: I0216 17:14:16.305537 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:16.305693 master-0 kubenswrapper[3171]: I0216 17:14:16.305557 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:16.305693 master-0 kubenswrapper[3171]: I0216 17:14:16.305581 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.306075 master-0 kubenswrapper[3171]: I0216 17:14:16.305739 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:16.306075 master-0 kubenswrapper[3171]: I0216 17:14:16.305780 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.306075 master-0 kubenswrapper[3171]: I0216 17:14:16.305810 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:16.306075 master-0 kubenswrapper[3171]: I0216 17:14:16.305837 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:16.306075 master-0 kubenswrapper[3171]: I0216 17:14:16.305867 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306150 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306184 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306207 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306227 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306248 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306270 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306289 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306306 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306353 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: E0216 17:14:16.306362 3171 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306371 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: E0216 17:14:16.306368 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" 
not registered Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: I0216 17:14:16.306392 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: E0216 17:14:16.306366 3171 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: E0216 17:14:16.306423 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806406623 +0000 UTC m=+26.475261879 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: E0216 17:14:16.306440 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806434144 +0000 UTC m=+26.475289400 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:16.306409 master-0 kubenswrapper[3171]: E0216 17:14:16.306381 3171 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306486 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306490 3171 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306496 3171 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306544 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806537577 +0000 UTC m=+26.475392833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306586 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806558108 +0000 UTC m=+26.475413424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306608 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806598439 +0000 UTC m=+26.475453765 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306508 3171 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306724 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806705772 +0000 UTC m=+26.475561088 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306735 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306745 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306746 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806736922 +0000 UTC m=+26.475592258 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306742 3171 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: I0216 17:14:16.306794 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306810 3171 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: I0216 17:14:16.306820 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306843 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806834015 +0000 UTC m=+26.475689271 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306777 3171 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306875 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806863506 +0000 UTC m=+26.475718762 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: I0216 17:14:16.306871 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306862 3171 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306918 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806887406 +0000 UTC m=+26.475742702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.306949 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806933728 +0000 UTC m=+26.475789014 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.307009 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.806998239 +0000 UTC m=+26.475853535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.307031 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:16.80702077 +0000 UTC m=+26.475876056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: I0216 17:14:16.307096 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: I0216 17:14:16.307159 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: I0216 17:14:16.307218 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.307291 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.807264627 +0000 UTC m=+26.476119883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: I0216 17:14:16.307319 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.307342 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: I0216 17:14:16.307349 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: E0216 17:14:16.307399 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.80738802 +0000 UTC m=+26.476243276 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:16.312616 master-0 kubenswrapper[3171]: I0216 17:14:16.307421 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.307442 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.307467 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.307475 3171 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 
16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.307491 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.307512 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.307590 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.807514903 +0000 UTC m=+26.476370219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.307625 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.307791 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.307876 3171 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.307905 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.308044 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.308136 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.308247 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.308266 3171 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.308553 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.808524691 +0000 UTC m=+26.477380007 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.308716 3171 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.308763 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.808749667 +0000 UTC m=+26.477604933 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.308880 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.308913 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.309006 3171 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.309016 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.309040 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.809030934 +0000 UTC m=+26.477886190 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.309059 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.309086 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.309113 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.309093 3171 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.309127 3171 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.309152 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.809144918 +0000 UTC m=+26.478000184 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.309165 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.809158638 +0000 UTC m=+26.478013904 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.309185 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: E0216 17:14:16.309218 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.809210049 +0000 UTC m=+26.478065295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.309235 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.309255 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.309273 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.309291 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.320323 master-0 kubenswrapper[3171]: I0216 17:14:16.309319 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.326562 
master-0 kubenswrapper[3171]: I0216 17:14:16.309593 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309652 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309670 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309689 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309697 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309471 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309459 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309737 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309734 3171 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:16Z","lastTransitionTime":"2026-02-16T17:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309760 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309387 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309804 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.809797065 +0000 UTC m=+26.478652321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309519 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309823 3171 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309842 3171 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309852 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.809843386 +0000 UTC m=+26.478698632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309763 3171 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309906 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.809891358 +0000 UTC m=+26.478746694 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309919 3171 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309935 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309974 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.309977 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.809952479 +0000 UTC m=+26.478807825 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.309998 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.310020 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.310024 3171 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.311821 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " 
pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.313365 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.315662 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.315852 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.315983 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.815944612 +0000 UTC m=+26.484799868 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.316049 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.816043184 +0000 UTC m=+26.484898440 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.316096 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.316163 3171 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.316265 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.316269 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.81625788 +0000 UTC m=+26.485113236 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: E0216 17:14:16.316308 3171 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:16.326562 master-0 kubenswrapper[3171]: I0216 17:14:16.316315 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316350 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.816342782 +0000 UTC m=+26.485198038 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316358 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316379 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316409 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.816404184 +0000 UTC m=+26.485259440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316431 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316487 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316511 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316531 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 
17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316549 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316563 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316600 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.816587539 +0000 UTC m=+26.485442865 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316566 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316650 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316711 3171 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316735 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.816727433 +0000 UTC m=+26.485582699 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316744 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316758 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316791 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316804 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.816785154 +0000 UTC m=+26.485640510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316830 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316855 3171 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316865 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316921 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:16.816910518 +0000 UTC m=+26.485765774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316872 3171 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316942 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.316973 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.816947209 +0000 UTC m=+26.485802585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.316995 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 17:14:16.317026 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.317004 3171 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.317055 3171 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.317034 3171 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: I0216 
17:14:16.317054 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.317098 3171 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.317108 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.817057332 +0000 UTC m=+26.485912588 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:16.333999 master-0 kubenswrapper[3171]: E0216 17:14:16.317122 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.817115323 +0000 UTC m=+26.485970579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.317134 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.817129314 +0000 UTC m=+26.485984560 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.317144 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.817139664 +0000 UTC m=+26.485994920 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317186 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317209 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317229 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317246 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317404 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317473 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317515 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317523 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.317509 3171 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317594 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317663 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.317744 3171 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317756 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.317781 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config 
podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.817771531 +0000 UTC m=+26.486626867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317807 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317885 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317945 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.317987 3171 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.317997 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.318043 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.318120 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.318136 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod 
\"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.318196 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.818183102 +0000 UTC m=+26.487038368 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.318198 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.318263 3171 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.318276 3171 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.318289 3171 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.318322 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.818312996 +0000 UTC m=+26.487168272 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.318238 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.318361 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: I0216 17:14:16.318388 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.344277 master-0 kubenswrapper[3171]: E0216 17:14:16.318405 3171 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.318518 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.818496401 +0000 UTC m=+26.487351697 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.318537 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.318470 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.318414 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.318837 3171 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.318877 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.318896 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.818877311 +0000 UTC m=+26.487732597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.318947 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319006 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.818946713 +0000 UTC m=+26.487802009 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319039 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319041 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319134 3171 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319184 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.819170719 +0000 UTC m=+26.488025985 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319124 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319225 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319246 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319265 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319283 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319302 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319319 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319335 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319400 3171 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319432 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319447 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.819435256 +0000 UTC m=+26.488290522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319466 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319497 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319536 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319536 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319550 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: 
I0216 17:14:16.319605 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319617 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.819605431 +0000 UTC m=+26.488460787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319644 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.819635411 +0000 UTC m=+26.488490667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319646 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: I0216 17:14:16.319669 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.346325 master-0 kubenswrapper[3171]: E0216 17:14:16.319698 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.819682803 +0000 UTC m=+26.488538069 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.319737 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.319785 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.319807 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.819784415 +0000 UTC m=+26.488639711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.319844 3171 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.319850 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.319871 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.819863228 +0000 UTC m=+26.488718484 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.319912 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320001 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320023 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320044 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320062 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.820052683 +0000 UTC m=+26.488908039 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320081 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320120 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320129 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320195 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.820179936 +0000 UTC m=+26.489035222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320141 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320234 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320244 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.820233888 +0000 UTC m=+26.489089184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320234 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320271 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.820261198 +0000 UTC m=+26.489116574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320295 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320341 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320379 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320453 3171 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320493 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.820482084 +0000 UTC m=+26.489337340 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320515 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320536 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320558 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.320635 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320786 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: E0216 17:14:16.320819 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.820809143 +0000 UTC m=+26.489664399 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.321490 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.321524 3171 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:16Z","lastTransitionTime":"2026-02-16T17:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.322659 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.322731 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.322755 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.349553 master-0 kubenswrapper[3171]: I0216 17:14:16.322777 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.322796 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.322837 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.322848 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.322863 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgm4p\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.322886 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.822875469 +0000 UTC m=+26.491730815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.322936 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323004 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323073 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323096 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323114 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 
16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323129 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323154 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.823143526 +0000 UTC m=+26.491998862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323177 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323142 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323269 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323207 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323312 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.82328549 +0000 UTC m=+26.492140786 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323595 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.823582508 +0000 UTC m=+26.492437764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323621 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323647 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323671 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323748 3171 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323790 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.823777894 +0000 UTC m=+26.492633220 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323793 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323829 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323860 3171 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323873 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323895 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.823884816 +0000 UTC m=+26.492740072 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323875 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323911 3171 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323917 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: E0216 17:14:16.323945 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.823935498 +0000 UTC m=+26.492790804 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.323991 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.324028 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.351679 master-0 kubenswrapper[3171]: I0216 17:14:16.324064 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324095 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324119 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324146 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324173 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324199 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: 
\"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324222 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324247 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324274 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324289 3171 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324297 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324309 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324323 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324348 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324465 3171 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.824451122 +0000 UTC m=+26.493306458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324508 3171 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324514 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324549 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.824534744 +0000 UTC m=+26.493390010 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324576 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324650 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324690 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.824677918 +0000 UTC m=+26.493533204 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324692 3171 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324731 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.824721539 +0000 UTC m=+26.493576795 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324645 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324768 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324797 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324821 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324851 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: 
\"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324874 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324888 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324914 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324932 3171 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324952 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.824942305 +0000 UTC m=+26.493797571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: I0216 17:14:16.324892 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.324998 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.824983936 +0000 UTC m=+26.493839302 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.325030 3171 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:14:16.353744 master-0 kubenswrapper[3171]: E0216 17:14:16.325067 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.825056908 +0000 UTC m=+26.493912224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325030 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325103 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325109 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325137 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.82512787 +0000 UTC m=+26.493983136 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325158 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325174 3171 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325186 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325206 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.825196962 +0000 UTC m=+26.494052218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325229 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325259 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325288 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325314 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325369 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325399 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325429 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325458 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325469 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325487 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325531 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325544 3171 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325578 3171 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325594 3171 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325602 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325617 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325629 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.825617353 +0000 UTC m=+26.494472619 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325580 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325649 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.825639484 +0000 UTC m=+26.494494740 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325731 3171 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325761 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325789 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.825773618 +0000 UTC m=+26.494628964 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325831 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325870 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325898 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.825884581 +0000 UTC m=+26.494739887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: E0216 17:14:16.325924 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:16.825916371 +0000 UTC m=+26.494771727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:16.357774 master-0 kubenswrapper[3171]: I0216 17:14:16.325895 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326019 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326052 3171 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326101 3171 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326112 3171 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326133 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.826124157 +0000 UTC m=+26.494979413 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326131 3171 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326070 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326159 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326137 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326192 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326183 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326198 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.826189139 +0000 UTC m=+26.495044395 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326164 3171 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326245 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.82623097 +0000 UTC m=+26.495086276 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326280 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326291 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.826273451 +0000 UTC m=+26.495128727 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326326 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326367 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326389 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326402 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326440 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326464 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326477 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326497 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.826487387 +0000 UTC m=+26.495342733 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326615 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326650 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326661 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.326696 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326781 3171 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326833 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.826808296 +0000 UTC m=+26.495663552 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326831 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326780 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326880 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.826867517 +0000 UTC m=+26.495722853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: E0216 17:14:16.326906 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.826891618 +0000 UTC m=+26.495746954 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.327086 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.360297 master-0 kubenswrapper[3171]: I0216 17:14:16.327179 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327221 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327258 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327298 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327340 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327377 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327414 3171 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327449 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpvn\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327488 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327520 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327557 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327582 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327597 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.327669 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.327717 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.82770202 +0000 UTC m=+26.496557396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327911 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327975 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.327999 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.328020 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.328043 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.328061 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.328125 3171 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.328155 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.828147062 +0000 UTC m=+26.497002318 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.328196 3171 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.328216 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.828209703 +0000 UTC m=+26.497064959 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.328830 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.328930 3171 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.329056 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.829045776 +0000 UTC m=+26.497901032 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.329085 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.329150 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.329218 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:16.829167559 +0000 UTC m=+26.498022815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.329249 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: E0216 17:14:16.329300 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.829262752 +0000 UTC m=+26.498118008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.328081 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.329421 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.329451 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:16.362994 master-0 kubenswrapper[3171]: I0216 17:14:16.329470 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329490 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " 
pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329512 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329587 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329639 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329679 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329718 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329756 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329793 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329831 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:16.366814 master-0 
kubenswrapper[3171]: I0216 17:14:16.329870 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329905 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329927 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.329945 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330168 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330297 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330353 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: E0216 17:14:16.330397 3171 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330441 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330469 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330482 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330496 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330520 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: E0216 17:14:16.330598 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: E0216 17:14:16.330686 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.83067294 +0000 UTC m=+26.499528216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: E0216 17:14:16.330782 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.830772313 +0000 UTC m=+26.499627589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330838 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: E0216 17:14:16.330879 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330876 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: E0216 17:14:16.330920 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.830908436 +0000 UTC m=+26.499763752 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330945 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.330991 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.331064 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.331072 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.331186 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.331216 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: I0216 17:14:16.331247 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:14:16.366814 master-0 kubenswrapper[3171]: E0216 17:14:16.331271 3171 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.331323 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.831307587 +0000 UTC m=+26.500162873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331326 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.331325 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331276 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.331386 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.831374099 +0000 UTC m=+26.500229355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331405 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331431 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331455 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331483 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.331680 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331714 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.331773 3171 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.331806 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.83178874 +0000 UTC m=+26.500644016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.331832 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.831823251 +0000 UTC m=+26.500678517 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331856 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331891 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331900 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.331911 3171 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331919 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.331949 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.331982 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.831948645 +0000 UTC m=+26.500803921 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.332009 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.332037 3171 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.332071 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.832061038 +0000 UTC m=+26.500916364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.332072 3171 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.332039 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.332105 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.832097569 +0000 UTC m=+26.500952835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.332114 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.332127 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.332128 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.332175 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.332194 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.332213 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: I0216 17:14:16.332232 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr"
Feb 16 17:14:16.368754 master-0 kubenswrapper[3171]: E0216 17:14:16.332240 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: E0216 17:14:16.332275 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.832266033 +0000 UTC m=+26.501121299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: E0216 17:14:16.332344 3171 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332340 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: E0216 17:14:16.332381 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.832371116 +0000 UTC m=+26.501226392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332342 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: E0216 17:14:16.332469 3171 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: E0216 17:14:16.332506 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.832495969 +0000 UTC m=+26.501351235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332534 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332562 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332596 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332631 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332667 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332709 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332750 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332792 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332818 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332844 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332870 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332897 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332925 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332953 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.332999 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.333026 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.333052 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.333077 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.333101 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.333119 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.333127 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.333154 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: E0216 17:14:16.333167 3171 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: E0216 17:14:16.333196 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.833187228 +0000 UTC m=+26.502042484 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.333213 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: I0216 17:14:16.333235 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: E0216 17:14:16.333280 3171 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Feb 16 17:14:16.370335 master-0 kubenswrapper[3171]: E0216 17:14:16.333333 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.833318412 +0000 UTC m=+26.502173718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.332870 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.333478 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.333632 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.333716 3171 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336129 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.835975904 +0000 UTC m=+26.504831190 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336148 3171 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336176 3171 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336237 3171 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336290 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.836262351 +0000 UTC m=+26.505117617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336318 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.836306183 +0000 UTC m=+26.505161449 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.336375 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336514 3171 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336562 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.836549779 +0000 UTC m=+26.505405055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.336598 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336613 3171 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.336758 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.836738094 +0000 UTC m=+26.505593360 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.336824 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.336894 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.336900 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.336901 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.336955 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.337023 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.337131 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.337226 3171 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.337275 3171 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.337285 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.837260428 +0000 UTC m=+26.506115704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.337376 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.337409 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.337431 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.337462 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.337491 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.837467074 +0000 UTC m=+26.506322360 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.337517 3171 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: I0216 17:14:16.337537 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r"
Feb 16 17:14:16.371553 master-0 kubenswrapper[3171]: E0216 17:14:16.337137 3171 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.337586 3171 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.337771 3171 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.337871 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.337993 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.341401 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.837539096 +0000 UTC m=+26.506394352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.341461 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.341502 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.841474262 +0000 UTC m=+26.510329558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.341541 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.841526784 +0000 UTC m=+26.510382080 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.341700 3171 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.341749 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.841737199 +0000 UTC m=+26.510592455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.341768 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.342300 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.342346 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.342406 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.842387537 +0000 UTC m=+26.511242813 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.342486 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.342548 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.342639 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.342694 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.342742 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.343727 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.344248 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.344286 3171 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:16Z","lastTransitionTime":"2026-02-16T17:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.344359 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.344388 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.344421 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.844404142 +0000 UTC m=+26.513259408 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.344469 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.344511 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.344540 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.344582 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.344618 3171 secret.go:189] Couldn't
get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.344668 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.844655028 +0000 UTC m=+26.513510284 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.345622 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.347833 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.347882 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: I0216 17:14:16.347917 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.376069 master-0 kubenswrapper[3171]: E0216 17:14:16.348309 3171 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.348354 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.848335938 +0000 UTC m=+26.517191204 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.347952 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348406 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348439 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348469 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348502 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348536 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348566 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348600 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod 
\"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348632 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348665 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348694 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348728 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348758 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.348809 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.348883 3171 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.348925 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.848908494 +0000 UTC m=+26.517763750 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.349005 3171 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.349069 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.849051837 +0000 UTC m=+26.517907103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349309 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349349 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.349394 3171 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349434 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349491 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.349533 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.84950743 +0000 UTC m=+26.518362686 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349569 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349576 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349611 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349615 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.349657 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.349690 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0 podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.849677404 +0000 UTC m=+26.518532660 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.349705 3171 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349719 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: E0216 17:14:16.349745 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.849735276 +0000 UTC m=+26.518590522 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349769 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:16.377265 master-0 kubenswrapper[3171]: I0216 17:14:16.349801 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.349828 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.349839 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.349855 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.349872 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.849864739 +0000 UTC m=+26.518719995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.349893 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.349925 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.349991 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.350007 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350022 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350087 3171 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350112 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350202 3171 configmap.go:193] Couldn't get configMap 
openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.350270 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.350279 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350371 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.850357443 +0000 UTC m=+26.519212699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350391 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.850384553 +0000 UTC m=+26.519239809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350411 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.850399564 +0000 UTC m=+26.519254820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350458 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.850417534 +0000 UTC m=+26.519272790 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.350655 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.350702 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350905 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.350946 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.850929068 +0000 UTC m=+26.519784324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.350991 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.351063 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.351084 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.351111 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.351123 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.851114253 +0000 UTC m=+26.519969519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: E0216 17:14:16.352445 3171 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:16.378247 master-0 kubenswrapper[3171]: I0216 17:14:16.352518 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.352652 3171 status_manager.go:875] "Failed to update status for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy cloud-credential-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy cloud-credential-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1faa2081a881db884a86bdfe33fcb6a6af1d14c3e9ee5c44dfe4b09045684e13\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cloud-credential-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"cco-trusted-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"cloud-credential-operator-serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zdxgd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cloud-credential-operator\"/\"cloud-credential-operator-595c8f9ff-b9nvq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: E0216 17:14:16.352667 3171 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: E0216 17:14:16.352697 3171 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: E0216 17:14:16.353554 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.852494941 +0000 UTC m=+26.521350217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353592 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353637 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353668 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353700 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353731 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353759 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353785 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353822 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353849 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: E0216 17:14:16.353852 3171 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: E0216 17:14:16.353870 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.853861708 +0000 UTC m=+26.522716964 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353892 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: E0216 17:14:16.353906 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.853894618 +0000 UTC m=+26.522749884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353930 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.353977 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.354009 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.354040 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.354063 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: I0216 17:14:16.354122 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:16.379750 master-0 kubenswrapper[3171]: E0216 17:14:16.353934 3171 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354223 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.854213747 +0000 UTC m=+26.523069013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354015 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354316 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.854296919 +0000 UTC m=+26.523152185 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354168 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354330 3171 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354364 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.854354011 +0000 UTC m=+26.523209287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354393 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354412 3171 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354422 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.854399862 +0000 UTC m=+26.523255138 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: I0216 17:14:16.354339 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354442 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.854434023 +0000 UTC m=+26.523289299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354458 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.854451584 +0000 UTC m=+26.523306850 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: I0216 17:14:16.354489 3171 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354496 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.354677 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.854666309 +0000 UTC m=+26.523521585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: I0216 17:14:16.356077 3171 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: I0216 17:14:16.356198 3171 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:16Z","lastTransitionTime":"2026-02-16T17:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: I0216 17:14:16.356869 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: I0216 17:14:16.362803 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.365761 3171 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.365841 3171 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.376262 3171 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.376289 3171 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.376300 3171 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: E0216 17:14:16.376345 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.876329726 +0000 UTC m=+26.545184982 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.380482 master-0 kubenswrapper[3171]: I0216 17:14:16.378891 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:16.413370 master-0 kubenswrapper[3171]: E0216 17:14:16.413292 3171 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.413370 master-0 kubenswrapper[3171]: E0216 17:14:16.413329 3171 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.413370 master-0 kubenswrapper[3171]: E0216 17:14:16.413378 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.913361938 +0000 UTC m=+26.582217184 (durationBeforeRetry 500ms). 
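The node-status failure above is layered on the CNI problem, not a separate kubelet bug: the kubelet's patch to `master-0` is rejected because the API server must first consult the OpenShift `network-node-identity` validating webhook on 127.0.0.1:9743, which is not listening yet, and the kubelet gives up after its fixed retry budget (five attempts per sync in the upstream kubelet), hence "update node status exceeds retry count". A hedged client-go sketch of the same kind of patch against the Node's `status` subresource; the kubeconfig path is assumed:

```go
// Sketch of the kind of status patch the kubelet is attempting: a
// strategic-merge patch on the Node "status" subresource. When an admission
// webhook covering Node objects is unreachable, this call fails exactly as
// logged ("failed calling webhook ... connection refused").
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "master-0",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err) // surfaces the webhook error verbatim
	}
}
```

The failed attempt is harmless here: the kubelet retries status updates on its next sync period, and the webhook comes up once the network stack it gates is running.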
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.426640 master-0 kubenswrapper[3171]: E0216 17:14:16.426580 3171 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.426640 master-0 kubenswrapper[3171]: E0216 17:14:16.426616 3171 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.426640 master-0 kubenswrapper[3171]: E0216 17:14:16.426629 3171 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.426862 master-0 kubenswrapper[3171]: E0216 17:14:16.426689 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.926668928 +0000 UTC m=+26.595524264 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.434214 master-0 kubenswrapper[3171]: E0216 17:14:16.434168 3171 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:16.434214 master-0 kubenswrapper[3171]: E0216 17:14:16.434211 3171 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.434347 master-0 kubenswrapper[3171]: E0216 17:14:16.434228 3171 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.434347 master-0 kubenswrapper[3171]: E0216 17:14:16.434292 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:16.934272023 +0000 UTC m=+26.603127299 (durationBeforeRetry 500ms). 
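The `kube-api-access-*` errors above all trace back to one mechanism: these are projected volumes that combine the pod's service-account token with the per-namespace `kube-root-ca.crt` configmap (and, on OpenShift, `openshift-service-ca.crt`), so the mount cannot proceed until the kubelet has observed those configmaps. A sketch of the equivalent projection using the corev1 Go types; the token expiry and item paths follow common defaults and are assumptions here:

```go
// Sketch of the projected volume behind a kube-api-access-* mount, built
// with corev1 types. The two ConfigMap sources are the objects the log
// reports as "not registered".
package main

import (
	corev1 "k8s.io/api/core/v1"
)

func kubeAPIAccessVolume() corev1.Volume {
	expiry := int64(3607) // conventional default token lifetime, assumed
	return corev1.Volume{
		Name: "kube-api-access-nrzjr", // name reused from the log above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}

func main() { _ = kubeAPIAccessVolume() }
```

Because every pod in a namespace shares these projections, a single unregistered configmap fans out into the long list of per-pod `kube-api-access` failures seen here; by contrast, the host-path volumes in the surrounding lines mount immediately since they reference no API objects.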
Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.457082 master-0 kubenswrapper[3171]: I0216 17:14:16.457023 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.457272 master-0 kubenswrapper[3171]: I0216 17:14:16.457097 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.457272 master-0 kubenswrapper[3171]: I0216 17:14:16.457196 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.457352 master-0 kubenswrapper[3171]: I0216 17:14:16.457292 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.457617 master-0 kubenswrapper[3171]: I0216 17:14:16.457583 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.457672 master-0 kubenswrapper[3171]: I0216 17:14:16.457640 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:16.457672 master-0 kubenswrapper[3171]: I0216 17:14:16.457635 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.457745 master-0 kubenswrapper[3171]: I0216 17:14:16.457711 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:16.457836 master-0 kubenswrapper[3171]: I0216 17:14:16.457806 3171 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.457880 master-0 kubenswrapper[3171]: I0216 17:14:16.457849 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.458044 master-0 kubenswrapper[3171]: I0216 17:14:16.458013 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.458119 master-0 kubenswrapper[3171]: I0216 17:14:16.458096 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.458181 master-0 kubenswrapper[3171]: I0216 17:14:16.458120 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.458226 master-0 kubenswrapper[3171]: I0216 17:14:16.458180 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.458287 master-0 kubenswrapper[3171]: I0216 17:14:16.458260 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.458334 master-0 kubenswrapper[3171]: I0216 17:14:16.458310 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.458334 master-0 kubenswrapper[3171]: I0216 17:14:16.458327 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.458413 master-0 kubenswrapper[3171]: I0216 17:14:16.458380 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.458511 master-0 kubenswrapper[3171]: I0216 17:14:16.458476 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.458511 master-0 kubenswrapper[3171]: I0216 17:14:16.458509 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.458607 master-0 kubenswrapper[3171]: I0216 17:14:16.458547 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.458607 master-0 kubenswrapper[3171]: I0216 17:14:16.458571 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.458695 master-0 kubenswrapper[3171]: I0216 17:14:16.458598 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.458737 master-0 kubenswrapper[3171]: I0216 17:14:16.458687 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.458737 master-0 kubenswrapper[3171]: I0216 17:14:16.458710 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.458805 master-0 kubenswrapper[3171]: I0216 17:14:16.458609 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.458805 master-0 kubenswrapper[3171]: I0216 17:14:16.458752 3171 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.458805 master-0 kubenswrapper[3171]: I0216 17:14:16.458775 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.458927 master-0 kubenswrapper[3171]: I0216 17:14:16.458807 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.458927 master-0 kubenswrapper[3171]: I0216 17:14:16.458862 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.458927 master-0 kubenswrapper[3171]: I0216 17:14:16.458852 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.459058 master-0 kubenswrapper[3171]: I0216 17:14:16.458941 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.459058 master-0 kubenswrapper[3171]: I0216 17:14:16.458989 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:16.459058 master-0 kubenswrapper[3171]: I0216 17:14:16.459018 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:16.459182 master-0 kubenswrapper[3171]: I0216 17:14:16.459129 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.459227 master-0 kubenswrapper[3171]: I0216 17:14:16.459173 3171 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.459268 master-0 kubenswrapper[3171]: I0216 17:14:16.459241 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.459318 master-0 kubenswrapper[3171]: I0216 17:14:16.459304 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.459360 master-0 kubenswrapper[3171]: I0216 17:14:16.459302 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.459435 master-0 kubenswrapper[3171]: I0216 17:14:16.459413 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.459489 master-0 kubenswrapper[3171]: I0216 17:14:16.459448 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.459536 master-0 kubenswrapper[3171]: I0216 17:14:16.459496 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.459594 master-0 kubenswrapper[3171]: I0216 17:14:16.459576 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.459639 master-0 kubenswrapper[3171]: I0216 17:14:16.459616 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " 
pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.459688 master-0 kubenswrapper[3171]: I0216 17:14:16.459671 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.459794 master-0 kubenswrapper[3171]: I0216 17:14:16.459769 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.459794 master-0 kubenswrapper[3171]: I0216 17:14:16.459782 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.459887 master-0 kubenswrapper[3171]: I0216 17:14:16.459799 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.459887 master-0 kubenswrapper[3171]: I0216 17:14:16.459831 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.459887 master-0 kubenswrapper[3171]: I0216 17:14:16.459879 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.460022 master-0 kubenswrapper[3171]: I0216 17:14:16.459900 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:16.460022 master-0 kubenswrapper[3171]: I0216 17:14:16.459920 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.460022 master-0 kubenswrapper[3171]: I0216 17:14:16.459933 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 
17:14:16.460022 master-0 kubenswrapper[3171]: I0216 17:14:16.459979 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.460022 master-0 kubenswrapper[3171]: I0216 17:14:16.459999 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.460022 master-0 kubenswrapper[3171]: I0216 17:14:16.460015 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.460229 master-0 kubenswrapper[3171]: I0216 17:14:16.460032 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:16.460229 master-0 kubenswrapper[3171]: I0216 17:14:16.460070 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.460310 master-0 kubenswrapper[3171]: I0216 17:14:16.460239 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:16.460351 master-0 kubenswrapper[3171]: I0216 17:14:16.460315 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:16.460409 master-0 kubenswrapper[3171]: I0216 17:14:16.460388 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.460466 master-0 kubenswrapper[3171]: I0216 17:14:16.460448 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod 
\"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.460508 master-0 kubenswrapper[3171]: I0216 17:14:16.460480 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.460570 master-0 kubenswrapper[3171]: I0216 17:14:16.460551 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:16.460613 master-0 kubenswrapper[3171]: I0216 17:14:16.460589 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.460669 master-0 kubenswrapper[3171]: I0216 17:14:16.460653 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.460738 master-0 kubenswrapper[3171]: I0216 17:14:16.460721 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.460738 master-0 kubenswrapper[3171]: I0216 17:14:16.460731 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.460812 master-0 kubenswrapper[3171]: I0216 17:14:16.460778 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.460812 master-0 kubenswrapper[3171]: I0216 17:14:16.460798 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.460882 master-0 kubenswrapper[3171]: I0216 17:14:16.460841 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.460987 master-0 kubenswrapper[3171]: I0216 17:14:16.460970 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.461023 master-0 kubenswrapper[3171]: I0216 17:14:16.460995 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.461023 master-0 kubenswrapper[3171]: I0216 17:14:16.461003 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.461085 master-0 kubenswrapper[3171]: I0216 17:14:16.461023 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.461085 master-0 kubenswrapper[3171]: I0216 17:14:16.461071 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.461140 master-0 kubenswrapper[3171]: I0216 17:14:16.461096 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.461170 master-0 kubenswrapper[3171]: I0216 17:14:16.461137 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.461206 master-0 kubenswrapper[3171]: I0216 17:14:16.461169 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.461206 master-0 kubenswrapper[3171]: I0216 17:14:16.461191 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.461260 master-0 kubenswrapper[3171]: I0216 17:14:16.461228 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.461260 master-0 kubenswrapper[3171]: I0216 17:14:16.461245 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.461318 master-0 kubenswrapper[3171]: I0216 17:14:16.461269 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.461318 master-0 kubenswrapper[3171]: I0216 17:14:16.461300 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.461394 master-0 kubenswrapper[3171]: I0216 17:14:16.461377 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.461463 master-0 kubenswrapper[3171]: I0216 17:14:16.461380 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.461503 master-0 kubenswrapper[3171]: I0216 17:14:16.461490 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.461532 master-0 kubenswrapper[3171]: I0216 17:14:16.461517 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.461687 master-0 kubenswrapper[3171]: I0216 17:14:16.461637 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.461775 master-0 kubenswrapper[3171]: I0216 17:14:16.461759 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.461861 master-0 kubenswrapper[3171]: I0216 17:14:16.461845 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:16.461914 master-0 kubenswrapper[3171]: I0216 17:14:16.461898 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.462047 master-0 kubenswrapper[3171]: I0216 17:14:16.462022 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.462151 master-0 kubenswrapper[3171]: I0216 17:14:16.462135 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.462211 master-0 kubenswrapper[3171]: I0216 17:14:16.462186 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.462283 master-0 kubenswrapper[3171]: I0216 17:14:16.462264 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:16.462356 master-0 kubenswrapper[3171]: I0216 17:14:16.462332 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.462481 master-0 kubenswrapper[3171]: I0216 17:14:16.462452 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.462547 master-0 kubenswrapper[3171]: I0216 17:14:16.462502 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.462547 master-0 kubenswrapper[3171]: I0216 17:14:16.462457 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.462547 master-0 kubenswrapper[3171]: I0216 17:14:16.462530 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.462663 master-0 kubenswrapper[3171]: I0216 17:14:16.462549 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:16.462663 master-0 kubenswrapper[3171]: I0216 17:14:16.462575 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.462663 master-0 kubenswrapper[3171]: I0216 17:14:16.462593 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.462663 master-0 kubenswrapper[3171]: I0216 17:14:16.462635 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.462816 master-0 kubenswrapper[3171]: I0216 17:14:16.462685 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:16.462816 master-0 kubenswrapper[3171]: I0216 17:14:16.462715 3171 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.462816 master-0 kubenswrapper[3171]: I0216 17:14:16.462745 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:16.462981 master-0 kubenswrapper[3171]: I0216 17:14:16.462918 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:16.463100 master-0 kubenswrapper[3171]: I0216 17:14:16.463078 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.463162 master-0 kubenswrapper[3171]: I0216 17:14:16.463142 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.463203 master-0 kubenswrapper[3171]: I0216 17:14:16.463157 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.463203 master-0 kubenswrapper[3171]: I0216 17:14:16.463187 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:16.463292 master-0 kubenswrapper[3171]: I0216 17:14:16.463268 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.463376 master-0 kubenswrapper[3171]: I0216 17:14:16.463358 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.463498 master-0 kubenswrapper[3171]: I0216 17:14:16.463437 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.463542 master-0 kubenswrapper[3171]: I0216 17:14:16.463526 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:16.463601 master-0 kubenswrapper[3171]: I0216 17:14:16.463537 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.463650 master-0 kubenswrapper[3171]: I0216 17:14:16.463614 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.463650 master-0 kubenswrapper[3171]: I0216 17:14:16.463629 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.463737 master-0 kubenswrapper[3171]: I0216 17:14:16.463668 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.463737 master-0 kubenswrapper[3171]: I0216 17:14:16.463711 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.463819 master-0 kubenswrapper[3171]: I0216 17:14:16.463771 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.463819 master-0 kubenswrapper[3171]: I0216 17:14:16.463800 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.463899 master-0 kubenswrapper[3171]: I0216 17:14:16.463868 3171 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:16.464541 master-0 kubenswrapper[3171]: I0216 17:14:16.464504 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:16.464802 master-0 kubenswrapper[3171]: I0216 17:14:16.464532 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") pod \"installer-4-master-0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:16.468492 master-0 kubenswrapper[3171]: I0216 17:14:16.468442 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:16.482223 master-0 kubenswrapper[3171]: I0216 17:14:16.482173 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.495916 master-0 kubenswrapper[3171]: I0216 17:14:16.495843 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:16.499538 master-0 kubenswrapper[3171]: I0216 17:14:16.499490 3171 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:16.506243 master-0 kubenswrapper[3171]: I0216 17:14:16.506197 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:16.517635 master-0 kubenswrapper[3171]: E0216 17:14:16.517523 3171 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.517635 master-0 kubenswrapper[3171]: E0216 17:14:16.517556 3171 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.517635 master-0 kubenswrapper[3171]: E0216 17:14:16.517570 3171 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object 
"openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.517635 master-0 kubenswrapper[3171]: E0216 17:14:16.517627 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.017609358 +0000 UTC m=+26.686464614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.541492 master-0 kubenswrapper[3171]: I0216 17:14:16.541421 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.554700 master-0 kubenswrapper[3171]: E0216 17:14:16.554649 3171 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:14:16.554700 master-0 kubenswrapper[3171]: E0216 17:14:16.554692 3171 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.554700 master-0 kubenswrapper[3171]: E0216 17:14:16.554709 3171 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.554985 master-0 kubenswrapper[3171]: E0216 17:14:16.554779 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.054753333 +0000 UTC m=+26.723608619 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.566173 master-0 kubenswrapper[3171]: I0216 17:14:16.566134 3171 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") pod \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " Feb 16 17:14:16.566339 master-0 kubenswrapper[3171]: I0216 17:14:16.566207 3171 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") pod \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\" (UID: \"1ea5bf67-1fd1-488a-a440-00bb9a8533d0\") " Feb 16 17:14:16.566339 master-0 kubenswrapper[3171]: I0216 17:14:16.566273 3171 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1ea5bf67-1fd1-488a-a440-00bb9a8533d0" (UID: "1ea5bf67-1fd1-488a-a440-00bb9a8533d0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:14:16.566339 master-0 kubenswrapper[3171]: I0216 17:14:16.566279 3171 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock" (OuterVolumeSpecName: "var-lock") pod "1ea5bf67-1fd1-488a-a440-00bb9a8533d0" (UID: "1ea5bf67-1fd1-488a-a440-00bb9a8533d0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:14:16.569836 master-0 kubenswrapper[3171]: I0216 17:14:16.569800 3171 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:16.569836 master-0 kubenswrapper[3171]: I0216 17:14:16.569826 3171 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ea5bf67-1fd1-488a-a440-00bb9a8533d0-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:16.578941 master-0 kubenswrapper[3171]: I0216 17:14:16.578886 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.594997 master-0 kubenswrapper[3171]: E0216 17:14:16.594918 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:16.594997 master-0 kubenswrapper[3171]: E0216 17:14:16.594994 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.594997 master-0 kubenswrapper[3171]: E0216 17:14:16.595011 3171 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.595256 master-0 kubenswrapper[3171]: E0216 17:14:16.595075 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.095056434 +0000 UTC m=+26.763911690 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.614268 master-0 kubenswrapper[3171]: E0216 17:14:16.614194 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:16.614268 master-0 kubenswrapper[3171]: E0216 17:14:16.614254 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.614268 master-0 kubenswrapper[3171]: E0216 17:14:16.614272 3171 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.614536 master-0 kubenswrapper[3171]: E0216 17:14:16.614387 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.114358706 +0000 UTC m=+26.783213962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.641842 master-0 kubenswrapper[3171]: E0216 17:14:16.641782 3171 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:16.641842 master-0 kubenswrapper[3171]: E0216 17:14:16.641849 3171 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.642089 master-0 kubenswrapper[3171]: E0216 17:14:16.641866 3171 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.642089 master-0 kubenswrapper[3171]: E0216 17:14:16.642005 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.141933952 +0000 UTC m=+26.810789218 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.655342 master-0 kubenswrapper[3171]: E0216 17:14:16.655285 3171 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.655342 master-0 kubenswrapper[3171]: E0216 17:14:16.655325 3171 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.655342 master-0 kubenswrapper[3171]: E0216 17:14:16.655340 3171 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.655622 master-0 kubenswrapper[3171]: E0216 17:14:16.655399 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.155381206 +0000 UTC m=+26.824236482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.679006 master-0 kubenswrapper[3171]: I0216 17:14:16.678932 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:16.694177 master-0 kubenswrapper[3171]: E0216 17:14:16.694067 3171 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.694177 master-0 kubenswrapper[3171]: E0216 17:14:16.694130 3171 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.694177 master-0 kubenswrapper[3171]: E0216 17:14:16.694149 3171 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.694592 master-0 kubenswrapper[3171]: E0216 
17:14:16.694242 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.194209157 +0000 UTC m=+26.863064413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.720046 master-0 kubenswrapper[3171]: E0216 17:14:16.719985 3171 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.720046 master-0 kubenswrapper[3171]: E0216 17:14:16.720031 3171 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.720046 master-0 kubenswrapper[3171]: E0216 17:14:16.720047 3171 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.720301 master-0 kubenswrapper[3171]: E0216 17:14:16.720113 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.220093887 +0000 UTC m=+26.888949163 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.738538 master-0 kubenswrapper[3171]: E0216 17:14:16.738481 3171 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.738538 master-0 kubenswrapper[3171]: E0216 17:14:16.738526 3171 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.738662 master-0 kubenswrapper[3171]: E0216 17:14:16.738604 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.238581188 +0000 UTC m=+26.907436444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.760336 master-0 kubenswrapper[3171]: I0216 17:14:16.760276 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:16.779118 master-0 kubenswrapper[3171]: I0216 17:14:16.779001 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:16.793694 master-0 kubenswrapper[3171]: E0216 17:14:16.793597 3171 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:16.793694 master-0 kubenswrapper[3171]: E0216 17:14:16.793679 3171 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.793882 master-0 kubenswrapper[3171]: E0216 17:14:16.793709 3171 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.793882 master-0 kubenswrapper[3171]: E0216 17:14:16.793792 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.293755121 +0000 UTC m=+26.962610377 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.815380 master-0 kubenswrapper[3171]: E0216 17:14:16.815297 3171 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.815380 master-0 kubenswrapper[3171]: E0216 17:14:16.815375 3171 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.815380 master-0 kubenswrapper[3171]: E0216 17:14:16.815392 3171 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.815778 master-0 kubenswrapper[3171]: E0216 17:14:16.815489 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.315464158 +0000 UTC m=+26.984319424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.833329 master-0 kubenswrapper[3171]: I0216 17:14:16.833268 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6r7wj" Feb 16 17:14:16.833697 master-0 kubenswrapper[3171]: E0216 17:14:16.833642 3171 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:16.833697 master-0 kubenswrapper[3171]: E0216 17:14:16.833676 3171 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.833697 master-0 kubenswrapper[3171]: E0216 17:14:16.833690 3171 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.833921 master-0 kubenswrapper[3171]: E0216 17:14:16.833739 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.333724031 +0000 UTC m=+27.002579287 (durationBeforeRetry 500ms). 
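
Most of the failing volumes are kube-api-access-* projections, which bundle a bound service-account token, the kube-root-ca.crt ConfigMap, the OpenShift-injected openshift-service-ca.crt ConfigMap, and the pod's namespace; that composition is exactly why two unregistered ConfigMaps block the whole mount. (The nearby "No sandbox for pod" and cadvisor 404 watch-event lines are typically just the churn of first-time sandbox creation.) Written out with the client types, such a volume looks roughly like this; the random name suffix, key names, and the 3607s token lifetime are the conventional defaults, shown for illustration:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
    )

    func kubeAPIAccessVolume() corev1.Volume {
        expiry := int64(3607) // conventional default lifetime for the bound token
        return corev1.Volume{
            Name: "kube-api-access-xxxxx", // real pods get a random suffix, e.g. -kx9vc
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            Path:              "token",
                            ExpirationSeconds: &expiry,
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
                        }},
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
                            Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "namespace",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
                            }},
                        }},
                    },
                },
            },
        }
    }

    func main() { _ = kubeAPIAccessVolume() }
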
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.847792 master-0 kubenswrapper[3171]: W0216 17:14:16.847263 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43f65f23_4ddd_471a_9cb3_b0945382d83c.slice/crio-49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b WatchSource:0}: Error finding container 49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b: Status 404 returned error can't find the container with id 49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b Feb 16 17:14:16.871096 master-0 kubenswrapper[3171]: I0216 17:14:16.871046 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgm4p\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.879154 master-0 kubenswrapper[3171]: I0216 17:14:16.879109 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:16.879315 master-0 kubenswrapper[3171]: I0216 17:14:16.879239 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:16.879315 master-0 kubenswrapper[3171]: I0216 17:14:16.879270 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.879315 master-0 kubenswrapper[3171]: I0216 17:14:16.879301 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:16.879315 master-0 kubenswrapper[3171]: E0216 17:14:16.879280 3171 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: E0216 17:14:16.879359 3171 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: E0216 
17:14:16.879400 3171 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: E0216 17:14:16.879414 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.879391247 +0000 UTC m=+27.548246513 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: E0216 17:14:16.879428 3171 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: E0216 17:14:16.879436 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.879428748 +0000 UTC m=+27.548284014 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: E0216 17:14:16.879454 3171 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: I0216 17:14:16.879320 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: E0216 17:14:16.879476 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.879447458 +0000 UTC m=+27.548302724 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: E0216 17:14:16.879500 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.879489509 +0000 UTC m=+27.548344775 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:16.879636 master-0 kubenswrapper[3171]: E0216 17:14:16.879522 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.87951559 +0000 UTC m=+27.548370866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: I0216 17:14:16.879668 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: I0216 17:14:16.879723 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.879744 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: I0216 17:14:16.879753 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.879773 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0 podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.879763247 +0000 UTC m=+27.548618503 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: I0216 17:14:16.879791 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.879838 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.879858 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.879851439 +0000 UTC m=+27.548706695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.879838 3171 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.879880 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.87987608 +0000 UTC m=+27.548731336 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.879885 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: I0216 17:14:16.879899 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.879919 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.879908801 +0000 UTC m=+27.548764067 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.879936 3171 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: I0216 17:14:16.880011 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.880099 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.880122 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.880116476 +0000 UTC m=+27.548971732 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.880138 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.880132507 +0000 UTC m=+27.548987763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.880243 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:16.880285 master-0 kubenswrapper[3171]: E0216 17:14:16.880288 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.880282141 +0000 UTC m=+27.549137387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.880491 3171 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.880511 3171 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.880546 3171 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.880594 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.380581839 +0000 UTC m=+27.049437105 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.880085 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.880677 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.880732 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.880861 3171 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.880922 3171 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.880949 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.880921398 +0000 UTC m=+27.549776684 (durationBeforeRetry 1s). 
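
If these errors persisted instead of clearing as pods register, a first out-of-band check would be whether the two ConfigMaps actually exist in an affected namespace; on a healthy cluster both are auto-published (kube-root-ca.crt by the kube-controller-manager's root-CA publisher, the service CA bundle by OpenShift's service-ca operator). A small client-go sketch of that check; the kubeconfig path is a placeholder and the namespace is one taken from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ns := "openshift-marketplace"
        for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            _, err := client.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
            fmt.Printf("%s/%s: err=%v\n", ns, name, err) // err == nil means the ConfigMap exists
        }
    }
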
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.880923 3171 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881022 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88100245 +0000 UTC m=+27.549857786 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881095 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.881067612 +0000 UTC m=+27.549922908 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.880786 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.881258 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.881324 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.881377 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881388 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.881416 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881440 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.881457 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881465 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.881458563 +0000 UTC m=+27.550313819 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881490 3171 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881494 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881510 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.881503934 +0000 UTC m=+27.550359190 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: I0216 17:14:16.881508 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881523 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.881518554 +0000 UTC m=+27.550373810 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:16.881500 master-0 kubenswrapper[3171]: E0216 17:14:16.881543 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.881529075 +0000 UTC m=+27.550384341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881598 3171 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881645 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.881630127 +0000 UTC m=+27.550485423 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.881711 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881721 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881747 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88174019 +0000 UTC m=+27.550595446 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.881743 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881788 3171 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.881797 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881820 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.881809122 +0000 UTC m=+27.550664388 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881829 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.881841 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881849 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.881843363 +0000 UTC m=+27.550698619 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.881871 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881885 3171 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881903 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.881897865 +0000 UTC m=+27.550753201 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.881902 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881981 3171 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881992 3171 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.882014 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882023 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882013628 +0000 UTC m=+27.550868904 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.881995 3171 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.882038 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.882058 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.882077 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.882098 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.882117 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882042 3171 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882175 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object 
"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882072 3171 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: I0216 17:14:16.882194 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882249 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882175 3171 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882101 3171 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882134 3171 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882342 3171 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882197 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882191342 +0000 UTC m=+27.551046598 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:16.883350 master-0 kubenswrapper[3171]: E0216 17:14:16.882305 3171 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882477 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882399608 +0000 UTC m=+27.551254894 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882501 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882488801 +0000 UTC m=+27.551344127 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.882541 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882580 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.882595 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882622 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882612014 +0000 UTC m=+27.551467270 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882636 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882630854 +0000 UTC m=+27.551486110 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882648 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882642415 +0000 UTC m=+27.551497671 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882659 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882655355 +0000 UTC m=+27.551510611 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882668 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882664735 +0000 UTC m=+27.551519991 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882677 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882672645 +0000 UTC m=+27.551527901 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882679 3171 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882686 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882681996 +0000 UTC m=+27.551537252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.882706 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882722 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882708676 +0000 UTC m=+27.551563942 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882756 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.882753 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882775 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882770568 +0000 UTC m=+27.551625824 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.882790 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.882810 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.882827 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882898 3171 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.882946 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.882932853 +0000 UTC m=+27.551788119 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.883041 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.883088 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.883127 3171 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.883138 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.883173 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.883159009 +0000 UTC m=+27.552014285 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: E0216 17:14:16.883202 3171 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:16.885851 master-0 kubenswrapper[3171]: I0216 17:14:16.883209 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883244 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.883230331 +0000 UTC m=+27.552085597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: I0216 17:14:16.883279 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883291 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883333 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.883319563 +0000 UTC m=+27.552174869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.882897 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: I0216 17:14:16.883367 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883376 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.883364514 +0000 UTC m=+27.552219790 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883410 3171 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: I0216 17:14:16.883426 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883446 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.883434806 +0000 UTC m=+27.552290082 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: I0216 17:14:16.883473 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883597 3171 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: I0216 17:14:16.883659 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: I0216 17:14:16.883700 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883709 3171 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: I0216 17:14:16.883801 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883834 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.883809996 +0000 UTC m=+27.552665292 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883882 3171 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883894 3171 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883884 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.883870058 +0000 UTC m=+27.552725364 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883936 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.883928109 +0000 UTC m=+27.552783445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883946 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88394108 +0000 UTC m=+27.552796336 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.883993 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884038 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:17.884024802 +0000 UTC m=+27.552880148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884138 3171 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884147 3171 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884173 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.884164526 +0000 UTC m=+27.553019882 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884252 3171 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884323 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88430033 +0000 UTC m=+27.553155616 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884381 3171 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884382 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884452 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.884442023 +0000 UTC m=+27.553297289 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: I0216 17:14:16.884459 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884490 3171 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:16.887822 master-0 kubenswrapper[3171]: E0216 17:14:16.884514 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.884508645 +0000 UTC m=+27.553363901 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.884550 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.884612 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.884605328 +0000 UTC m=+27.553460574 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.884634 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.884678 3171 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.884689 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.884715 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88470405 +0000 UTC m=+27.553559316 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.884775 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.884787 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.884820 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.884810823 +0000 UTC m=+27.553666149 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.884847 3171 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.884873 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.884902 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.884882985 +0000 UTC m=+27.553738231 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.884947 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.884951 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.885120 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885015 3171 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885159 3171 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.885159 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885045 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885186 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.885174463 +0000 UTC m=+27.554029729 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885435 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.8854178 +0000 UTC m=+27.554273076 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885222 3171 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885488 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.885475291 +0000 UTC m=+27.554330567 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.885516 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.885560 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.885603 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885643 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.885628435 +0000 UTC m=+27.554483701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885666 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.885657256 +0000 UTC m=+27.554512522 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.885726 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885751 3171 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885763 3171 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.885762 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885801 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88578682 +0000 UTC m=+27.554642146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: E0216 17:14:16.885812 3171 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:16.890418 master-0 kubenswrapper[3171]: I0216 17:14:16.885841 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.885844 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. 
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.885864 3171 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: I0216 17:14:16.885878 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.885889 3171 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.885893 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.885885362 +0000 UTC m=+27.554740638 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.885925 3171 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.885937 3171 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: I0216 17:14:16.885939 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.885947 3171 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.885951 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.885943544 +0000 UTC m=+27.554798810 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.885973 3171 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: I0216 17:14:16.885992 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886003 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.885994965 +0000 UTC m=+27.554850291 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886009 3171 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: I0216 17:14:16.886026 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886068 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886024496 +0000 UTC m=+27.554879772 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886103 3171 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886107 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886098818 +0000 UTC m=+27.554954084 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886130 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886122839 +0000 UTC m=+27.554978105 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886155 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886139079 +0000 UTC m=+27.554994425 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886194 3171 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886231 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886218841 +0000 UTC m=+27.555074167 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: I0216 17:14:16.886190 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886250 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886295 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886279973 +0000 UTC m=+27.555135299 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: I0216 17:14:16.886327 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: I0216 17:14:16.886370 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: I0216 17:14:16.886409 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886461 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: I0216 17:14:16.886481 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
\"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886512 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886498219 +0000 UTC m=+27.555353495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886523 3171 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:16.892935 master-0 kubenswrapper[3171]: E0216 17:14:16.886546 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88654029 +0000 UTC m=+27.555395536 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.886545 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.886593 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886635 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.886635 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886641 3171 
configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886674 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886653 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886648003 +0000 UTC m=+27.555503259 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886809 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886821 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886834 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886828308 +0000 UTC m=+27.555683564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886869 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886854699 +0000 UTC m=+27.555710015 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886819 3171 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886897 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:17.886879349 +0000 UTC m=+27.555734625 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886925 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88691237 +0000 UTC m=+27.555767686 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.886946 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.886936811 +0000 UTC m=+27.555792157 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.887471 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.887631 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.887678 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.887729 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 
17:14:16.887768 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.887809 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.887842 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.887855 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.887873 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.887860826 +0000 UTC m=+27.556716082 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.887912 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.887914 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.887918 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.887900447 +0000 UTC m=+27.556755763 (durationBeforeRetry 1s). 
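Note: the "m=+27.55…" suffix on every retry deadline is Go's monotonic clock reading, appended by time.Time's default String format; it places these deadlines about 27.5 seconds after the kubenswrapper process started, so the whole flood is happening well under a minute into the restart. The snippet below only demonstrates where that notation comes from (standard library behavior, nothing kubelet-specific):

package main

import (
	"fmt"
	"time"
)

// time.Time values from time.Now carry a monotonic clock reading; the
// default String output appends it as "m=+<seconds since process start>",
// exactly the suffix seen on the kubelet's retry deadlines.
func main() {
	start := time.Now()
	time.Sleep(50 * time.Millisecond)
	deadline := time.Now().Add(time.Second) // like lastErrorTime + durationBeforeRetry
	fmt.Println(deadline)                   // "... m=+1.05..." (process-relative seconds)
	fmt.Println(deadline.Sub(start))        // duration arithmetic uses the monotonic part
}

The wall-clock portion of those deadlines reads 2026-02-16 because the journald prefix omits the year; the two clocks describe the same instant.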
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.887913 3171 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.887994 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.887987739 +0000 UTC m=+27.556842995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.888006 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88800048 +0000 UTC m=+27.556855726 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.888029 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88801167 +0000 UTC m=+27.556866956 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.888082 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.888127 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.888183 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: E0216 17:14:16.888240 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:16.895542 master-0 kubenswrapper[3171]: I0216 17:14:16.888267 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888280 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888289 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.888320 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888326 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap 
podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.888314048 +0000 UTC m=+27.557169324 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888365 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.888353139 +0000 UTC m=+27.557208425 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888366 3171 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888387 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88837628 +0000 UTC m=+27.557231576 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888328 3171 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.888497 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888557 3171 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.888566 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888591 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.888581425 +0000 UTC m=+27.557436701 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888610 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.888602396 +0000 UTC m=+27.557457652 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.888639 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888657 3171 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888691 3171 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888699 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.888685898 +0000 UTC m=+27.557541184 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888721 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.888661 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888721 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.888709729 +0000 UTC m=+27.557565015 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888772 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88876202 +0000 UTC m=+27.557617306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888794 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.888783941 +0000 UTC m=+27.557639227 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.888840 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.888887 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888971 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.888993 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.888985706 +0000 UTC m=+27.557840962 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.889015 3171 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.888947 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.889031 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.889026697 +0000 UTC m=+27.557881953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.889123 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.889167 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.889221 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: E0216 17:14:16.889344 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:16.898096 master-0 kubenswrapper[3171]: I0216 17:14:16.889351 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") 
pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889389 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.889379677 +0000 UTC m=+27.558234943 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: I0216 17:14:16.889413 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889432 3171 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889437 3171 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889480 3171 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889476 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: I0216 17:14:16.889442 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889512 3171 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889452 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.889446659 +0000 UTC m=+27.558301915 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889557 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.889542621 +0000 UTC m=+27.558397917 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889578 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.889567762 +0000 UTC m=+27.558423058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889598 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.889588683 +0000 UTC m=+27.558443969 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889488 3171 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889631 3171 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889648 3171 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: I0216 17:14:16.889629 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889649 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.889639324 +0000 UTC m=+27.558494620 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889732 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889755 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.889740467 +0000 UTC m=+27.558595743 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: I0216 17:14:16.889779 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889822 3171 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: I0216 17:14:16.889812 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889873 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.88985921 +0000 UTC m=+27.558714506 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889902 3171 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.889918 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.889901031 +0000 UTC m=+27.558756297 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: I0216 17:14:16.889953 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: I0216 17:14:16.890011 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.890016 3171 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: I0216 17:14:16.890043 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.890058 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890045005 +0000 UTC m=+27.558900291 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.890088 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890067846 +0000 UTC m=+27.558923112 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.890096 3171 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.890117 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:16.900279 master-0 kubenswrapper[3171]: E0216 17:14:16.890130 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890123457 +0000 UTC m=+27.558978703 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890148 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890155 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890145198 +0000 UTC m=+27.559000554 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890187 3171 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890205 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890200529 +0000 UTC m=+27.559055785 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890183 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890234 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890250 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890275 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890263241 +0000 UTC m=+27.559118577 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890308 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890340 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890332413 +0000 UTC m=+27.559187679 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890339 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890370 3171 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890384 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890395 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890388544 +0000 UTC m=+27.559243810 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890418 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890468 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890485 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890500 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " 
pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890508 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890501457 +0000 UTC m=+27.559356713 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890521 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890535 3171 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890544 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890554 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890546299 +0000 UTC m=+27.559401565 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890570 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890563129 +0000 UTC m=+27.559418405 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890571 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890594 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890615 3171 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890642 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890627081 +0000 UTC m=+27.559482377 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890650 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890675 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890667232 +0000 UTC m=+27.559522508 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: I0216 17:14:16.890734 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890787 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890769425 +0000 UTC m=+27.559624721 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890793 3171 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:16.902313 master-0 kubenswrapper[3171]: E0216 17:14:16.890824 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.890816916 +0000 UTC m=+27.559672192 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.890876 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.890920 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891002 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891035 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891025362 +0000 UTC m=+27.559880638 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891031 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891075 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891092 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891120 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891126 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891136 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891124104 +0000 UTC m=+27.559979390 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891190 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891194 3171 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891228 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891272 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891300 3171 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891328 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891335 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.89132563 +0000 UTC m=+27.560180906 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891373 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891365161 +0000 UTC m=+27.560220427 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891392 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891383751 +0000 UTC m=+27.560239017 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891395 3171 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891408 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891400852 +0000 UTC m=+27.560256118 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891434 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891423092 +0000 UTC m=+27.560278378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891450 3171 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891491 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891483794 +0000 UTC m=+27.560339070 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891490 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891538 3171 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891564 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891557296 +0000 UTC m=+27.560412562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891608 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891693 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891703 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891735 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.8917267 +0000 UTC m=+27.560581966 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891760 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: E0216 17:14:16.891768 3171 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:16.908661 master-0 kubenswrapper[3171]: I0216 17:14:16.891810 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.891820 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891807123 +0000 UTC m=+27.560662419 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.891851 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.891861 3171 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.891892 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.891906 3171 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.891932 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.891924736 +0000 UTC m=+27.560780002 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.891976 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892021 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892026 3171 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892086 3171 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892095 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.89208072 +0000 UTC m=+27.560936016 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892119 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.892108551 +0000 UTC m=+27.560963847 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892139 3171 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892164 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.892156332 +0000 UTC m=+27.561011598 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892182 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.892174973 +0000 UTC m=+27.561030249 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892048 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892186 3171 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892270 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.892261145 +0000 UTC m=+27.561116511 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892230 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892285 3171 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892218 3171 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892321 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892354 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.892342567 +0000 UTC m=+27.561197853 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892377 3171 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892441 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892468 3171 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892480 3171 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892506 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.892497331 +0000 UTC m=+27.561352597 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892532 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892560 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892589 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892633 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892659 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: I0216 17:14:16.892688 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892804 3171 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Feb 16 17:14:16.910303 master-0 kubenswrapper[3171]: E0216 17:14:16.892832 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.89282458 +0000 UTC m=+27.561679846 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.892874 3171 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.892901 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.892894172 +0000 UTC m=+27.561749438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.892919 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.892910383 +0000 UTC m=+27.561765649 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.892934 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.892927063 +0000 UTC m=+27.561782339 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.892983 3171 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.893014 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.893007155 +0000 UTC m=+27.561862421 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.893068 3171 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.893094 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.893086817 +0000 UTC m=+27.561942083 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.893140 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.893167 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.893157769 +0000 UTC m=+27.562013035 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.893227 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: E0216 17:14:16.893263 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.893254402 +0000 UTC m=+27.562109668 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: I0216 17:14:16.904000 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:14:16.912686 master-0 kubenswrapper[3171]: I0216 17:14:16.909862 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:14:16.918942 master-0 kubenswrapper[3171]: I0216 17:14:16.918904 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:14:16.923617 master-0 kubenswrapper[3171]: W0216 17:14:16.923510 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0b1ebd3_1068_4624_9b6d_3e9f45ded76a.slice/crio-e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95 WatchSource:0}: Error finding container e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95: Status 404 returned error can't find the container with id e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95
Feb 16 17:14:16.931123 master-0 kubenswrapper[3171]: I0216 17:14:16.930923 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:14:16.946927 master-0 kubenswrapper[3171]: W0216 17:14:16.945542 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a939dd0_fc27_4d47_b81b_96e13e4bbca9.slice/crio-f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa WatchSource:0}: Error finding container f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa: Status 404 returned error can't find the container with id f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa
Feb 16 17:14:16.957939 master-0 kubenswrapper[3171]: I0216 17:14:16.957899 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:14:16.959751 master-0 kubenswrapper[3171]: I0216 17:14:16.959715 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:14:16.978861 master-0 kubenswrapper[3171]: I0216 17:14:16.978776 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:14:16.995211 master-0 kubenswrapper[3171]: I0216 17:14:16.995145 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:14:16.995395 master-0 kubenswrapper[3171]: E0216 17:14:16.995342 3171 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:16.995395 master-0 kubenswrapper[3171]: I0216 17:14:16.995364 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:16.995514 master-0 kubenswrapper[3171]: I0216 17:14:16.995401 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:14:16.995514 master-0 kubenswrapper[3171]: E0216 17:14:16.995370 3171 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:16.995514 master-0 kubenswrapper[3171]: E0216 17:14:16.995429 3171 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Feb 16 17:14:16.995622 master-0 kubenswrapper[3171]: E0216 17:14:16.995538 3171 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:14:16.995622 master-0 kubenswrapper[3171]: E0216 17:14:16.995544 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.995523979 +0000 UTC m=+27.664379235 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:16.995622 master-0 kubenswrapper[3171]: E0216 17:14:16.995556 3171 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:16.995622 master-0 kubenswrapper[3171]: E0216 17:14:16.995559 3171 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:16.995622 master-0 kubenswrapper[3171]: E0216 17:14:16.995577 3171 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:16.995622 master-0 kubenswrapper[3171]: E0216 17:14:16.995587 3171 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:16.995622 master-0 kubenswrapper[3171]: E0216 17:14:16.995593 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.99557512 +0000 UTC m=+27.664430376 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:16.995622 master-0 kubenswrapper[3171]: E0216 17:14:16.995630 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.995617092 +0000 UTC m=+27.664472408 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.002278 master-0 kubenswrapper[3171]: E0216 17:14:17.002246 3171 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.002278 master-0 kubenswrapper[3171]: E0216 17:14:17.002268 3171 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.002278 master-0 kubenswrapper[3171]: E0216 17:14:17.002276 3171 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.002447 master-0 kubenswrapper[3171]: E0216 17:14:17.002316 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.502304153 +0000 UTC m=+27.171159409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.016219 master-0 kubenswrapper[3171]: I0216 17:14:17.016171 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:14:17.036828 master-0 kubenswrapper[3171]: E0216 17:14:17.036673 3171 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.036828 master-0 kubenswrapper[3171]: E0216 17:14:17.036700 3171 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.036828 master-0 kubenswrapper[3171]: E0216 17:14:17.036712 3171 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.036828 master-0 kubenswrapper[3171]: E0216 17:14:17.036764 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.536749455 +0000 UTC m=+27.205604701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.063857 master-0 kubenswrapper[3171]: E0216 17:14:17.063816 3171 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.063857 master-0 kubenswrapper[3171]: E0216 17:14:17.063856 3171 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.063997 master-0 kubenswrapper[3171]: E0216 17:14:17.063872 3171 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.063997 master-0 kubenswrapper[3171]: E0216 17:14:17.063946 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.56392574 +0000 UTC m=+27.232781016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.079016 master-0 kubenswrapper[3171]: E0216 17:14:17.078973 3171 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.079016 master-0 kubenswrapper[3171]: E0216 17:14:17.079008 3171 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.079179 master-0 kubenswrapper[3171]: E0216 17:14:17.079069 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.579052869 +0000 UTC m=+27.247908125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.099562 master-0 kubenswrapper[3171]: I0216 17:14:17.099494 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:14:17.099701 master-0 kubenswrapper[3171]: E0216 17:14:17.099667 3171 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.099701 master-0 kubenswrapper[3171]: E0216 17:14:17.099696 3171 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.099818 master-0 kubenswrapper[3171]: E0216 17:14:17.099707 3171 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.099818 master-0 kubenswrapper[3171]: E0216 17:14:17.099755 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.099740829 +0000 UTC m=+27.768596085 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.099907 master-0 kubenswrapper[3171]: I0216 17:14:17.099828 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:14:17.099907 master-0 kubenswrapper[3171]: I0216 17:14:17.099880 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:14:17.100020 master-0 kubenswrapper[3171]: E0216 17:14:17.099952 3171 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.100020 master-0 kubenswrapper[3171]: E0216 17:14:17.099987 3171 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.100020 master-0 kubenswrapper[3171]: E0216 17:14:17.099998 3171 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.101043 master-0 kubenswrapper[3171]: E0216 17:14:17.100033 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.100022207 +0000 UTC m=+27.768877473 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.101043 master-0 kubenswrapper[3171]: E0216 17:14:17.100063 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.101043 master-0 kubenswrapper[3171]: E0216 17:14:17.100075 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.101043 master-0 kubenswrapper[3171]: E0216 17:14:17.100082 3171 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.101043 master-0 kubenswrapper[3171]: E0216 17:14:17.100186 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.100178141 +0000 UTC m=+27.769033397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.103349 master-0 kubenswrapper[3171]: I0216 17:14:17.103299 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:14:17.106339 master-0 kubenswrapper[3171]: I0216 17:14:17.106284 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:14:17.128662 master-0 kubenswrapper[3171]: I0216 17:14:17.128619 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:17.132856 master-0 kubenswrapper[3171]: I0216 17:14:17.132817 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:14:17.132925 master-0 kubenswrapper[3171]: I0216 17:14:17.132859 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:14:17.132983 master-0 kubenswrapper[3171]: I0216 17:14:17.132909 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:14:17.133029 master-0 kubenswrapper[3171]: I0216 17:14:17.132982 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:14:17.133029 master-0 kubenswrapper[3171]: I0216 17:14:17.132998 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:17.133029 master-0 kubenswrapper[3171]: I0216 17:14:17.133006 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:14:17.133029 master-0 kubenswrapper[3171]: I0216 17:14:17.132987 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:14:17.133029 master-0 kubenswrapper[3171]: E0216 17:14:17.132977 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: I0216 17:14:17.133043 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: I0216 17:14:17.133074 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: I0216 17:14:17.133098 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: E0216 17:14:17.133095 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: I0216 17:14:17.133047 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: I0216 17:14:17.133123 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: I0216 17:14:17.133141 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: I0216 17:14:17.133144 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: E0216 17:14:17.133189 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36"
Feb 16 17:14:17.133225 master-0 kubenswrapper[3171]: I0216 17:14:17.133233 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133241 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133250 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133268 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133273 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133275 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133347 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: E0216 17:14:17.133343 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133365 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133374 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133354 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133400 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133412 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133402 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133451 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: E0216 17:14:17.133452 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133468 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133475 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133476 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133499 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133505 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:14:17.133552 master-0 kubenswrapper[3171]: I0216 17:14:17.133496 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133611 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133625 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: E0216 17:14:17.133619 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133647 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133664 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133681 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133684 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133698 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133695 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: E0216 17:14:17.133794 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133868 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: I0216 17:14:17.133874 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: E0216 17:14:17.133976 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: E0216 17:14:17.134045 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: E0216 17:14:17.134184 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: E0216 17:14:17.134260 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15"
Feb 16 17:14:17.134304 master-0 kubenswrapper[3171]: E0216 17:14:17.134303 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9"
Feb 16 17:14:17.134878 master-0 kubenswrapper[3171]: E0216 17:14:17.134356 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9"
Feb 16 17:14:17.134878 master-0 kubenswrapper[3171]: E0216 17:14:17.134429 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3"
Feb 16 17:14:17.134878 master-0 kubenswrapper[3171]: E0216 17:14:17.134494 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765"
Feb 16 17:14:17.134878 master-0 kubenswrapper[3171]: E0216 17:14:17.134619 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1"
Feb 16 17:14:17.134878 master-0 kubenswrapper[3171]: E0216 17:14:17.134688 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4"
Feb 16 17:14:17.135085 master-0 kubenswrapper[3171]: E0216 17:14:17.134889 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4"
Feb 16 17:14:17.135085 master-0 kubenswrapper[3171]: E0216 17:14:17.134988 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f"
Feb 16 17:14:17.135085 master-0 kubenswrapper[3171]: E0216 17:14:17.135070 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9"
Feb 16 17:14:17.135206 master-0 kubenswrapper[3171]: E0216 17:14:17.135139 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580"
Feb 16 17:14:17.135252 master-0 kubenswrapper[3171]: E0216 17:14:17.135229 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8"
Feb 16 17:14:17.135336 master-0 kubenswrapper[3171]: E0216 17:14:17.135304 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c"
Feb 16 17:14:17.135383 master-0 kubenswrapper[3171]: E0216 17:14:17.135353 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72"
Feb 16 17:14:17.135444 master-0 kubenswrapper[3171]: E0216 17:14:17.135428 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340"
Feb 16 17:14:17.135564 master-0 kubenswrapper[3171]: E0216 17:14:17.135530 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46"
Feb 16 17:14:17.135696 master-0 kubenswrapper[3171]: E0216 17:14:17.135660 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1"
Feb 16 17:14:17.135755 master-0 kubenswrapper[3171]: E0216 17:14:17.135729 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022"
Feb 16 17:14:17.135833 master-0 kubenswrapper[3171]: E0216 17:14:17.135808 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40"
Feb 16 17:14:17.135903 master-0 kubenswrapper[3171]: E0216 17:14:17.135880 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df"
Feb 16 17:14:17.136009 master-0 kubenswrapper[3171]: E0216 17:14:17.135987 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7"
Feb 16 17:14:17.136075 master-0 kubenswrapper[3171]: E0216 17:14:17.136051 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52"
Feb 16 17:14:17.136126 master-0 kubenswrapper[3171]: E0216 17:14:17.136107 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174"
Feb 16 17:14:17.136201 master-0 kubenswrapper[3171]: E0216 17:14:17.136177 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5"
Feb 16 17:14:17.136251 master-0 kubenswrapper[3171]: E0216 17:14:17.136227 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66"
Feb 16 17:14:17.136312 master-0 kubenswrapper[3171]: E0216 17:14:17.136290 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f"
Feb 16 17:14:17.136382 master-0 kubenswrapper[3171]: E0216 17:14:17.136362 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4"
Feb 16 17:14:17.136501 master-0 kubenswrapper[3171]: E0216 17:14:17.136467 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302"
Feb 16 17:14:17.136546 master-0 kubenswrapper[3171]: E0216 17:14:17.136523 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a"
Feb 16 17:14:17.136619 master-0 kubenswrapper[3171]: E0216 17:14:17.136596 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e"
Feb 16 17:14:17.136703 master-0 kubenswrapper[3171]: E0216 17:14:17.136681 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845"
Feb 16 17:14:17.136831 master-0 kubenswrapper[3171]: E0216 17:14:17.136800 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81"
Feb 16 17:14:17.136880 master-0 kubenswrapper[3171]: E0216 17:14:17.136866 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d"
Feb 16 17:14:17.137114 master-0 kubenswrapper[3171]: E0216 17:14:17.137084 3171 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d"
Feb 16 17:14:17.144350 master-0 kubenswrapper[3171]: E0216 17:14:17.144314 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.144350 master-0 kubenswrapper[3171]: E0216 17:14:17.144346 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.144475 master-0 kubenswrapper[3171]: E0216 17:14:17.144359 3171 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.144475 master-0 kubenswrapper[3171]: E0216 17:14:17.144422 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.644404658 +0000 UTC m=+27.313259934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.155292 master-0 kubenswrapper[3171]: I0216 17:14:17.155249 3171 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:17.161315 master-0 kubenswrapper[3171]: E0216 17:14:17.161280 3171 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.161315 master-0 kubenswrapper[3171]: E0216 17:14:17.161310 3171 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.161418 master-0 kubenswrapper[3171]: E0216 17:14:17.161324 3171 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.161418 master-0 kubenswrapper[3171]: E0216 17:14:17.161385 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.661362596 +0000 UTC m=+27.330217862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.176717 master-0 kubenswrapper[3171]: E0216 17:14:17.176692 3171 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.176717 master-0 kubenswrapper[3171]: E0216 17:14:17.176718 3171 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.176832 master-0 kubenswrapper[3171]: E0216 17:14:17.176731 3171 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.176832 master-0 kubenswrapper[3171]: E0216 17:14:17.176779 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.676763053 +0000 UTC m=+27.345618319 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.179284 master-0 kubenswrapper[3171]: I0216 17:14:17.177844 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:17.200597 master-0 kubenswrapper[3171]: I0216 17:14:17.200560 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjpvn\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:17.203707 master-0 kubenswrapper[3171]: I0216 17:14:17.203660 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.203892 3171 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.203931 3171 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.203948 3171 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204025 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.20400632 +0000 UTC m=+27.872861596 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: I0216 17:14:17.204014 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204120 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204146 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: I0216 17:14:17.204144 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204162 3171 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204212 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.204193385 +0000 UTC m=+27.873048651 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204260 3171 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204278 3171 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204291 3171 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: I0216 17:14:17.204297 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204328 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.204315389 +0000 UTC m=+27.873170665 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204377 3171 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204394 3171 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204404 3171 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.211520 master-0 kubenswrapper[3171]: E0216 17:14:17.204549 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.204533605 +0000 UTC m=+27.873388871 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.220494 master-0 kubenswrapper[3171]: E0216 17:14:17.220440 3171 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:14:17.220494 master-0 kubenswrapper[3171]: E0216 17:14:17.220475 3171 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.220494 master-0 kubenswrapper[3171]: E0216 17:14:17.220488 3171 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.220762 master-0 kubenswrapper[3171]: E0216 17:14:17.220561 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.720542768 +0000 UTC m=+27.389398014 (durationBeforeRetry 500ms). 
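The "object ... not registered" failures are a different startup race. Every pod's kube-api-access-* volume is a projected volume that combines a ServiceAccount token with the kube-root-ca.crt and openshift-service-ca.crt ConfigMaps, and the kubelet's watch-based ConfigMap manager only registers those objects for a pod once the pod has been fully (re)admitted after startup; until then projected.go cannot resolve them and the operation executor schedules a retry. That reading of the mechanism is an interpretation of the messages, not something the excerpt itself states. A companion sketch, same stdlib assumptions as above, tallying which namespace/ConfigMap pairs are still unregistered:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches kubelet projected.go errors such as:
//   Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object ... not registered
var notRegistered = regexp.MustCompile(
	`Couldn't get configMap ([^/ ]+)/([^: ]+): object`)

func main() {
	counts := map[string]int{} // "namespace/configmap" -> occurrences
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		if m := notRegistered.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]+"/"+m[2]]++
		}
	}
	for key, n := range counts {
		fmt.Printf("%6d  %s\n", n, key)
	}
}
```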
Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.237325 master-0 kubenswrapper[3171]: E0216 17:14:17.237277 3171 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Feb 16 17:14:17.237325 master-0 kubenswrapper[3171]: E0216 17:14:17.237309 3171 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.237325 master-0 kubenswrapper[3171]: E0216 17:14:17.237322 3171 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.237628 master-0 kubenswrapper[3171]: E0216 17:14:17.237382 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.737365413 +0000 UTC m=+27.406220669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.239579 master-0 kubenswrapper[3171]: I0216 17:14:17.239541 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:17.252782 master-0 kubenswrapper[3171]: I0216 17:14:17.252731 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:17.255152 master-0 kubenswrapper[3171]: E0216 17:14:17.255112 3171 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.255152 master-0 kubenswrapper[3171]: E0216 17:14:17.255139 3171 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.255152 master-0 kubenswrapper[3171]: E0216 17:14:17.255151 3171 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.255314 master-0 kubenswrapper[3171]: E0216 17:14:17.255205 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.755188775 +0000 UTC m=+27.424044021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.276354 master-0 kubenswrapper[3171]: I0216 17:14:17.276299 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerStarted","Data":"3ba5b55cdc513202565393d69d57718508e29795dfd1cdb87d49dc9c14489665"} Feb 16 17:14:17.278364 master-0 kubenswrapper[3171]: I0216 17:14:17.278327 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" event={"ID":"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a","Type":"ContainerStarted","Data":"e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95"} Feb 16 17:14:17.279292 master-0 kubenswrapper[3171]: I0216 17:14:17.279258 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r7wj" event={"ID":"43f65f23-4ddd-471a-9cb3-b0945382d83c","Type":"ContainerStarted","Data":"49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b"} Feb 16 17:14:17.280234 master-0 kubenswrapper[3171]: I0216 17:14:17.280209 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa"} Feb 16 17:14:17.282162 master-0 kubenswrapper[3171]: I0216 17:14:17.281916 3171 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"24414ee08d96c92b5e5a2987ea123de0d7bf6e29180b3cd5f05b52963a1027d2"} Feb 16 17:14:17.282447 master-0 kubenswrapper[3171]: I0216 
17:14:17.282418 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:17.282615 master-0 kubenswrapper[3171]: I0216 17:14:17.282583 3171 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:17.283100 master-0 kubenswrapper[3171]: W0216 17:14:17.283031 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod702322ac_7610_4568_9a68_b6acbd1f0c12.slice/crio-63964f47111b36b48ad0828624ccf523359d31129423bb7917fb1cd01aad8c04 WatchSource:0}: Error finding container 63964f47111b36b48ad0828624ccf523359d31129423bb7917fb1cd01aad8c04: Status 404 returned error can't find the container with id 63964f47111b36b48ad0828624ccf523359d31129423bb7917fb1cd01aad8c04 Feb 16 17:14:17.284758 master-0 kubenswrapper[3171]: W0216 17:14:17.284720 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda94f9b8e_b020_4aab_8373_6c056ec07464.slice/crio-f119761284235b155a5550b379cddae2d59c4785ac83d6f2e1cabbb819959d1e WatchSource:0}: Error finding container f119761284235b155a5550b379cddae2d59c4785ac83d6f2e1cabbb819959d1e: Status 404 returned error can't find the container with id f119761284235b155a5550b379cddae2d59c4785ac83d6f2e1cabbb819959d1e Feb 16 17:14:17.285411 master-0 kubenswrapper[3171]: W0216 17:14:17.285320 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9bf4ab_5415_4616_aa36_ea387c699ea9.slice/crio-e8d809425731cc2967cdb379e53f1be7eba9e51662dfb79330c03d92562a8e44 WatchSource:0}: Error finding container e8d809425731cc2967cdb379e53f1be7eba9e51662dfb79330c03d92562a8e44: Status 404 returned error can't find the container with id e8d809425731cc2967cdb379e53f1be7eba9e51662dfb79330c03d92562a8e44 Feb 16 17:14:17.300982 master-0 kubenswrapper[3171]: E0216 17:14:17.300940 3171 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:17.300982 master-0 kubenswrapper[3171]: E0216 17:14:17.300980 3171 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.301101 master-0 kubenswrapper[3171]: E0216 17:14:17.300991 3171 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.301101 master-0 kubenswrapper[3171]: E0216 17:14:17.301050 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.801032306 +0000 UTC m=+27.469887562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.311348 master-0 kubenswrapper[3171]: I0216 17:14:17.311298 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:17.312077 master-0 kubenswrapper[3171]: E0216 17:14:17.311477 3171 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.312077 master-0 kubenswrapper[3171]: E0216 17:14:17.311515 3171 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.312077 master-0 kubenswrapper[3171]: E0216 17:14:17.311530 3171 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.312077 master-0 kubenswrapper[3171]: E0216 17:14:17.311589 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.311567211 +0000 UTC m=+27.980422487 (durationBeforeRetry 1s). 
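Note the durationBeforeRetry values in these entries: 500ms on a volume's first failure, 1s on the next attempt for the same volume. nestedpendingoperations applies per-volume exponential backoff, so each repeated MountVolume.SetUp failure roughly doubles the wait. A sketch of that schedule; the 500ms start and the doubling are visible in the log itself, while the 2m2s ceiling is an assumed kubelet default that this excerpt does not confirm:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// First retry delay as seen in the log.
	d := 500 * time.Millisecond
	// Assumed cap; not shown in this excerpt.
	const maxDelay = 2*time.Minute + 2*time.Second
	for i := 1; i <= 10; i++ {
		fmt.Printf("failure %2d -> retry in %v\n", i, d)
		if d = d * 2; d > maxDelay {
			d = maxDelay
		}
	}
}
```

Under these assumptions the same volume would be retried at 500ms, 1s, 2s, 4s and so on, which matches the 500ms-then-1s progression visible across these lines.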
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.312077 master-0 kubenswrapper[3171]: I0216 17:14:17.311638 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:17.312077 master-0 kubenswrapper[3171]: I0216 17:14:17.311792 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:17.312260 master-0 kubenswrapper[3171]: E0216 17:14:17.312127 3171 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:17.312260 master-0 kubenswrapper[3171]: E0216 17:14:17.312144 3171 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.312260 master-0 kubenswrapper[3171]: E0216 17:14:17.312156 3171 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.312260 master-0 kubenswrapper[3171]: E0216 17:14:17.312202 3171 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.312260 master-0 kubenswrapper[3171]: E0216 17:14:17.312237 3171 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.312412 master-0 kubenswrapper[3171]: E0216 17:14:17.312206 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.312193968 +0000 UTC m=+27.981049234 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.312412 master-0 kubenswrapper[3171]: E0216 17:14:17.312296 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.31228043 +0000 UTC m=+27.981135686 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.324446 master-0 kubenswrapper[3171]: I0216 17:14:17.324375 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:17.334840 master-0 kubenswrapper[3171]: E0216 17:14:17.334806 3171 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:17.334840 master-0 kubenswrapper[3171]: E0216 17:14:17.334837 3171 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.335073 master-0 kubenswrapper[3171]: E0216 17:14:17.334849 3171 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.335073 master-0 kubenswrapper[3171]: E0216 17:14:17.334905 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.834890182 +0000 UTC m=+27.503745448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.342935 master-0 kubenswrapper[3171]: I0216 17:14:17.342870 3171 request.go:700] Waited for 1.012553807s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-node-identity/serviceaccounts/network-node-identity/token Feb 16 17:14:17.361749 master-0 kubenswrapper[3171]: I0216 17:14:17.361658 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:17.378616 master-0 kubenswrapper[3171]: E0216 17:14:17.378566 3171 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:17.378616 master-0 kubenswrapper[3171]: E0216 17:14:17.378612 3171 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.378771 master-0 kubenswrapper[3171]: E0216 17:14:17.378627 3171 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.378771 master-0 kubenswrapper[3171]: E0216 17:14:17.378699 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.878676237 +0000 UTC m=+27.547531553 (durationBeforeRetry 500ms). 
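The request.go:700 line above ("Waited for 1.012553807s due to client-side throttling, not priority and fairness") is client-go's own token-bucket rate limiter, not the API server pushing back: at startup the kubelet fires a TokenRequest per kube-api-access volume, exhausts its burst allowance, and queues the rest. A sketch of that behaviour using golang.org/x/time/rate; the 5 QPS / burst 10 figures are assumed classic client defaults for illustration, not values shown anywhere in this log:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Assumed for illustration: 5 requests/s with a burst of 10.
	lim := rate.NewLimiter(rate.Limit(5), 10)
	start := time.Now()
	for i := 0; i < 15; i++ {
		_ = lim.Wait(context.Background()) // blocks once the burst is spent
		fmt.Printf("request %2d sent at +%v\n",
			i, time.Since(start).Round(10*time.Millisecond))
	}
}
```

The first ten calls go out immediately; the rest are delayed by the limiter, producing exactly the kind of sub-second to one-second waits the kubelet reports here.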
Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.399565 master-0 kubenswrapper[3171]: E0216 17:14:17.399527 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:17.399565 master-0 kubenswrapper[3171]: E0216 17:14:17.399565 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.399761 master-0 kubenswrapper[3171]: E0216 17:14:17.399579 3171 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.399761 master-0 kubenswrapper[3171]: E0216 17:14:17.399662 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.899636724 +0000 UTC m=+27.568491980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.413536 master-0 kubenswrapper[3171]: I0216 17:14:17.413475 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:17.413701 master-0 kubenswrapper[3171]: E0216 17:14:17.413658 3171 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:17.413701 master-0 kubenswrapper[3171]: E0216 17:14:17.413700 3171 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.413783 master-0 kubenswrapper[3171]: E0216 17:14:17.413715 3171 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.413783 master-0 kubenswrapper[3171]: E0216 17:14:17.413772 
3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.413755396 +0000 UTC m=+28.082610672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.413871 master-0 kubenswrapper[3171]: I0216 17:14:17.413848 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:17.414311 master-0 kubenswrapper[3171]: I0216 17:14:17.413931 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:17.414311 master-0 kubenswrapper[3171]: E0216 17:14:17.414065 3171 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.414311 master-0 kubenswrapper[3171]: E0216 17:14:17.414079 3171 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.414311 master-0 kubenswrapper[3171]: E0216 17:14:17.414089 3171 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.414311 master-0 kubenswrapper[3171]: E0216 17:14:17.414104 3171 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.414311 master-0 kubenswrapper[3171]: E0216 17:14:17.414125 3171 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.414311 master-0 kubenswrapper[3171]: E0216 17:14:17.414135 3171 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.414311 master-0 kubenswrapper[3171]: E0216 
17:14:17.414173 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.414162147 +0000 UTC m=+28.083017403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.414311 master-0 kubenswrapper[3171]: E0216 17:14:17.414312 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.414296561 +0000 UTC m=+28.083151837 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.420740 master-0 kubenswrapper[3171]: I0216 17:14:17.420704 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:17.434805 master-0 kubenswrapper[3171]: E0216 17:14:17.434760 3171 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:14:17.434805 master-0 kubenswrapper[3171]: E0216 17:14:17.434791 3171 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.434805 master-0 kubenswrapper[3171]: E0216 17:14:17.434804 3171 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.434941 master-0 kubenswrapper[3171]: E0216 17:14:17.434859 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.934841206 +0000 UTC m=+27.603696472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.455414 master-0 kubenswrapper[3171]: E0216 17:14:17.455367 3171 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:17.455414 master-0 kubenswrapper[3171]: E0216 17:14:17.455413 3171 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.455542 master-0 kubenswrapper[3171]: E0216 17:14:17.455432 3171 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.455616 master-0 kubenswrapper[3171]: E0216 17:14:17.455513 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.955488905 +0000 UTC m=+27.624344191 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.468153 master-0 kubenswrapper[3171]: I0216 17:14:17.468109 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:17.487639 master-0 kubenswrapper[3171]: E0216 17:14:17.487594 3171 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:17.487639 master-0 kubenswrapper[3171]: E0216 17:14:17.487640 3171 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.487786 master-0 kubenswrapper[3171]: E0216 17:14:17.487661 3171 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.487786 master-0 kubenswrapper[3171]: E0216 17:14:17.487744 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:17.987718817 +0000 UTC m=+27.656574113 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.488723 master-0 kubenswrapper[3171]: I0216 17:14:17.488679 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:17.497678 master-0 kubenswrapper[3171]: I0216 17:14:17.497638 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:17.503295 master-0 kubenswrapper[3171]: I0216 17:14:17.503258 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:17.516454 master-0 kubenswrapper[3171]: E0216 17:14:17.516428 3171 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.516454 master-0 kubenswrapper[3171]: E0216 17:14:17.516455 3171 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.516564 master-0 kubenswrapper[3171]: E0216 17:14:17.516468 3171 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.516564 master-0 kubenswrapper[3171]: E0216 17:14:17.516524 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.016506926 +0000 UTC m=+27.685362182 (durationBeforeRetry 500ms). 
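The repeated "No sandbox for pod can be found. Need to start a new one" lines and the m=+27.x offsets on every retry deadline say the same thing: this kubelet process is only about 27 seconds old, so no pod sandboxes exist in its view and every pod is being rebuilt from scratch. Subtracting the monotonic offset from the wall-clock part of any deadline recovers the approximate process start time; a small sketch using the timestamp format from the lines above:

```go
package main

import (
	"fmt"
	"time"
)

// Kubelet retry deadlines carry Go's monotonic clock reading, e.g.
//   2026-02-16 17:14:17.644404658 +0000 UTC m=+27.313259934
// Wall clock minus the m= offset approximates when the process started.
func main() {
	wall, err := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST",
		"2026-02-16 17:14:17.644404658 +0000 UTC")
	if err != nil {
		panic(err)
	}
	offset := 27313259934 * time.Nanosecond // m=+27.313259934
	fmt.Println("approximate kubelet start:", wall.Add(-offset))
}
```

For this excerpt that works out to roughly 17:13:50, consistent with all of the startup races above still being in flight.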
Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.518406 master-0 kubenswrapper[3171]: I0216 17:14:17.518386 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:17.518589 master-0 kubenswrapper[3171]: E0216 17:14:17.518555 3171 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.518630 master-0 kubenswrapper[3171]: E0216 17:14:17.518596 3171 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.518630 master-0 kubenswrapper[3171]: E0216 17:14:17.518616 3171 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.518819 master-0 kubenswrapper[3171]: E0216 17:14:17.518802 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.518790988 +0000 UTC m=+28.187646244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.524849 master-0 kubenswrapper[3171]: I0216 17:14:17.524826 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:17.549142 master-0 kubenswrapper[3171]: E0216 17:14:17.549060 3171 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:14:17.549142 master-0 kubenswrapper[3171]: E0216 17:14:17.549086 3171 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.549142 master-0 kubenswrapper[3171]: E0216 17:14:17.549096 3171 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.549297 master-0 kubenswrapper[3171]: E0216 17:14:17.549160 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.049143269 +0000 UTC m=+27.717998525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.555681 master-0 kubenswrapper[3171]: W0216 17:14:17.555595 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39387549_c636_4bd4_b463_f6a93810f277.slice/crio-6b896923757b268325f5828b39f28a688ac9f66638ad3480c6f941e1ecce93ee WatchSource:0}: Error finding container 6b896923757b268325f5828b39f28a688ac9f66638ad3480c6f941e1ecce93ee: Status 404 returned error can't find the container with id 6b896923757b268325f5828b39f28a688ac9f66638ad3480c6f941e1ecce93ee Feb 16 17:14:17.571271 master-0 kubenswrapper[3171]: I0216 17:14:17.571211 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:17.580459 master-0 kubenswrapper[3171]: I0216 17:14:17.580413 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:17.600742 master-0 kubenswrapper[3171]: I0216 17:14:17.600693 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " 
pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:17.623205 master-0 kubenswrapper[3171]: I0216 17:14:17.623141 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:17.623522 master-0 kubenswrapper[3171]: E0216 17:14:17.623478 3171 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.623522 master-0 kubenswrapper[3171]: E0216 17:14:17.623518 3171 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.623697 master-0 kubenswrapper[3171]: E0216 17:14:17.623541 3171 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.623817 master-0 kubenswrapper[3171]: I0216 17:14:17.623706 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:17.623992 master-0 kubenswrapper[3171]: E0216 17:14:17.623919 3171 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.623992 master-0 kubenswrapper[3171]: E0216 17:14:17.623941 3171 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.623992 master-0 kubenswrapper[3171]: E0216 17:14:17.623981 3171 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.624218 master-0 kubenswrapper[3171]: E0216 17:14:17.624043 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.624002055 +0000 UTC m=+28.292857381 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.624218 master-0 kubenswrapper[3171]: E0216 17:14:17.624100 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.624081117 +0000 UTC m=+28.292936423 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.625008 master-0 kubenswrapper[3171]: I0216 17:14:17.624929 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:17.625105 master-0 kubenswrapper[3171]: E0216 17:14:17.625089 3171 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.625155 master-0 kubenswrapper[3171]: E0216 17:14:17.625116 3171 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.625197 master-0 kubenswrapper[3171]: E0216 17:14:17.625177 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.625159926 +0000 UTC m=+28.294015212 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.634263 master-0 kubenswrapper[3171]: I0216 17:14:17.634203 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:17.638846 master-0 kubenswrapper[3171]: E0216 17:14:17.638798 3171 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:17.638846 master-0 kubenswrapper[3171]: E0216 17:14:17.638843 3171 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.638987 master-0 kubenswrapper[3171]: E0216 17:14:17.638864 3171 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.639070 master-0 kubenswrapper[3171]: E0216 17:14:17.638992 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.138931569 +0000 UTC m=+27.807786875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.666733 master-0 kubenswrapper[3171]: W0216 17:14:17.666673 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc45ce0e5_c50b_4210_b7bb_82db2b2bc1db.slice/crio-db9aa77bd3256ca9f2ff9e15705e2dcbd011130de1a8888fd119afc754a8a3bc WatchSource:0}: Error finding container db9aa77bd3256ca9f2ff9e15705e2dcbd011130de1a8888fd119afc754a8a3bc: Status 404 returned error can't find the container with id db9aa77bd3256ca9f2ff9e15705e2dcbd011130de1a8888fd119afc754a8a3bc Feb 16 17:14:17.669297 master-0 kubenswrapper[3171]: E0216 17:14:17.668374 3171 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:17.669297 master-0 kubenswrapper[3171]: E0216 17:14:17.668668 3171 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.669297 master-0 kubenswrapper[3171]: E0216 17:14:17.668684 3171 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.669297 master-0 kubenswrapper[3171]: W0216 17:14:17.668744 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3fa6ac1_781f_446c_b6b4_18bdb7723c23.slice/crio-e5383f33102a9898e2ed29273a78d4b119c0cc0618f01a3ae943b24be1f2db07 WatchSource:0}: Error finding container e5383f33102a9898e2ed29273a78d4b119c0cc0618f01a3ae943b24be1f2db07: Status 404 returned error can't find the container with id e5383f33102a9898e2ed29273a78d4b119c0cc0618f01a3ae943b24be1f2db07 Feb 16 17:14:17.669297 master-0 kubenswrapper[3171]: E0216 17:14:17.668796 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.168733235 +0000 UTC m=+27.837588501 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.686773 master-0 kubenswrapper[3171]: I0216 17:14:17.686322 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:17.701503 master-0 kubenswrapper[3171]: E0216 17:14:17.701457 3171 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:14:17.701733 master-0 kubenswrapper[3171]: E0216 17:14:17.701709 3171 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.701867 master-0 kubenswrapper[3171]: E0216 17:14:17.701847 3171 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.702126 master-0 kubenswrapper[3171]: E0216 17:14:17.702101 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.202070627 +0000 UTC m=+27.870925923 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.716494 master-0 kubenswrapper[3171]: E0216 17:14:17.716418 3171 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.716494 master-0 kubenswrapper[3171]: E0216 17:14:17.716473 3171 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.716767 master-0 kubenswrapper[3171]: E0216 17:14:17.716490 3171 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.716767 master-0 kubenswrapper[3171]: E0216 17:14:17.716647 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.216619951 +0000 UTC m=+27.885475217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.730272 master-0 kubenswrapper[3171]: I0216 17:14:17.730176 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:17.730272 master-0 kubenswrapper[3171]: I0216 17:14:17.730259 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:17.730560 master-0 kubenswrapper[3171]: I0216 17:14:17.730481 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:17.730841 master-0 kubenswrapper[3171]: E0216 
Feb 16 17:14:17.730841 master-0 kubenswrapper[3171]: E0216 17:14:17.730742 3171 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.730841 master-0 kubenswrapper[3171]: E0216 17:14:17.730784 3171 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.730841 master-0 kubenswrapper[3171]: E0216 17:14:17.730823 3171 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.730841 master-0 kubenswrapper[3171]: E0216 17:14:17.730836 3171 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.730841 master-0 kubenswrapper[3171]: E0216 17:14:17.730846 3171 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.730868 3171 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.730873 3171 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.730882 3171 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: I0216 17:14:17.730697 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.730665 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.730976 3171 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.730986 3171 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.731063 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.730894037 +0000 UTC m=+28.399749373 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.731139 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.731125593 +0000 UTC m=+28.399981059 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.731187 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.731176455 +0000 UTC m=+28.400031881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.731477 master-0 kubenswrapper[3171]: E0216 17:14:17.731395 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.7313677 +0000 UTC m=+28.400223146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.738269 master-0 kubenswrapper[3171]: I0216 17:14:17.738217 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4"
Feb 16 17:14:17.757883 master-0 kubenswrapper[3171]: E0216 17:14:17.757809 3171 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.757883 master-0 kubenswrapper[3171]: E0216 17:14:17.757837 3171 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.757883 master-0 kubenswrapper[3171]: E0216 17:14:17.757849 3171 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.758251 master-0 kubenswrapper[3171]: E0216 17:14:17.757912 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.257893928 +0000 UTC m=+27.926749194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.777116 master-0 kubenswrapper[3171]: E0216 17:14:17.777061 3171 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:17.777116 master-0 kubenswrapper[3171]: E0216 17:14:17.777095 3171 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:17.777116 master-0 kubenswrapper[3171]: E0216 17:14:17.777113 3171 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.777384 master-0 kubenswrapper[3171]: E0216 17:14:17.777173 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.277159679 +0000 UTC m=+27.946014945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:17.795334 master-0 kubenswrapper[3171]: I0216 17:14:17.795233 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vfxj4"
Feb 16 17:14:17.805086 master-0 kubenswrapper[3171]: I0216 17:14:17.804940 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:14:17.814417 master-0 kubenswrapper[3171]: W0216 17:14:17.814340 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6fe41b0_1a42_4f07_8220_d9aaa50788ad.slice/crio-350fa8046d93176987309e508424a1570fc1eed5f18ae89cb9ac0b90ba3cb70f WatchSource:0}: Error finding container 350fa8046d93176987309e508424a1570fc1eed5f18ae89cb9ac0b90ba3cb70f: Status 404 returned error can't find the container with id 350fa8046d93176987309e508424a1570fc1eed5f18ae89cb9ac0b90ba3cb70f
Feb 16 17:14:17.817192 master-0 kubenswrapper[3171]: I0216 17:14:17.817108 3171 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:17.835369 master-0 kubenswrapper[3171]: I0216 17:14:17.835325 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:17.835711 master-0 kubenswrapper[3171]: I0216 17:14:17.835675 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:17.835875 master-0 kubenswrapper[3171]: E0216 17:14:17.835819 3171 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:17.835875 master-0 kubenswrapper[3171]: E0216 17:14:17.835836 3171 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.835875 master-0 kubenswrapper[3171]: E0216 17:14:17.835848 3171 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.836118 master-0 kubenswrapper[3171]: E0216 17:14:17.835896 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.835882418 +0000 UTC m=+28.504737674 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.836118 master-0 kubenswrapper[3171]: I0216 17:14:17.836013 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:17.836118 master-0 kubenswrapper[3171]: I0216 17:14:17.836069 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:17.836336 master-0 kubenswrapper[3171]: I0216 17:14:17.836126 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:17.836336 master-0 kubenswrapper[3171]: E0216 17:14:17.836162 3171 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Feb 16 17:14:17.836336 master-0 kubenswrapper[3171]: E0216 17:14:17.836215 3171 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:17.836336 master-0 kubenswrapper[3171]: E0216 17:14:17.836227 3171 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.836336 master-0 kubenswrapper[3171]: E0216 17:14:17.836235 3171 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.836336 master-0 kubenswrapper[3171]: E0216 17:14:17.836243 3171 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.836336 master-0 kubenswrapper[3171]: E0216 17:14:17.836260 3171 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not 
registered] Feb 16 17:14:17.836336 master-0 kubenswrapper[3171]: E0216 17:14:17.836298 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.836286399 +0000 UTC m=+28.505141655 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.836865 master-0 kubenswrapper[3171]: E0216 17:14:17.836360 3171 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:17.836865 master-0 kubenswrapper[3171]: E0216 17:14:17.836398 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.836388572 +0000 UTC m=+28.505243828 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.836865 master-0 kubenswrapper[3171]: E0216 17:14:17.836841 3171 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.836865 master-0 kubenswrapper[3171]: E0216 17:14:17.836851 3171 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.837235 master-0 kubenswrapper[3171]: E0216 17:14:17.836879 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.836871285 +0000 UTC m=+28.505726541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.846359 master-0 kubenswrapper[3171]: I0216 17:14:17.846129 3171 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:17.846574 master-0 kubenswrapper[3171]: I0216 17:14:17.846527 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:17.852046 master-0 kubenswrapper[3171]: W0216 17:14:17.851998 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6ad958f_25e4_40cb_89ec_5da9cb6395c7.slice/crio-343b05919e1f64786e2254ca5f9bc68bb46285c032cc50fdd759ba021022cc78 WatchSource:0}: Error finding container 343b05919e1f64786e2254ca5f9bc68bb46285c032cc50fdd759ba021022cc78: Status 404 returned error can't find the container with id 343b05919e1f64786e2254ca5f9bc68bb46285c032cc50fdd759ba021022cc78 Feb 16 17:14:17.859921 master-0 kubenswrapper[3171]: I0216 17:14:17.859860 3171 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:17.863479 master-0 kubenswrapper[3171]: W0216 17:14:17.863419 3171 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c48005e_c4df_4332_87fc_ec028f2c6921.slice/crio-c7d736050372998764922f31dd6cf88581d3803877d64338fea850cc359f7da3 WatchSource:0}: Error finding container c7d736050372998764922f31dd6cf88581d3803877d64338fea850cc359f7da3: Status 404 returned error can't find the container with id c7d736050372998764922f31dd6cf88581d3803877d64338fea850cc359f7da3 Feb 16 17:14:17.911496 master-0 kubenswrapper[3171]: E0216 17:14:17.911404 3171 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:14:17.911496 master-0 kubenswrapper[3171]: E0216 17:14:17.911463 3171 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.911496 master-0 kubenswrapper[3171]: E0216 17:14:17.911484 3171 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.911790 master-0 kubenswrapper[3171]: E0216 17:14:17.911585 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.411552386 +0000 UTC m=+28.080407642 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.916300 master-0 kubenswrapper[3171]: E0216 17:14:17.916238 3171 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:17.916300 master-0 kubenswrapper[3171]: E0216 17:14:17.916292 3171 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.916440 master-0 kubenswrapper[3171]: E0216 17:14:17.916315 3171 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.916440 master-0 kubenswrapper[3171]: E0216 17:14:17.916423 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.416394597 +0000 UTC m=+28.085249893 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.936725 master-0 kubenswrapper[3171]: E0216 17:14:17.936684 3171 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:17.936820 master-0 kubenswrapper[3171]: E0216 17:14:17.936743 3171 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.936820 master-0 kubenswrapper[3171]: E0216 17:14:17.936755 3171 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.936820 master-0 kubenswrapper[3171]: E0216 17:14:17.936812 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:18.436794239 +0000 UTC m=+28.105649495 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.938650 master-0 kubenswrapper[3171]: I0216 17:14:17.938601 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:17.938711 master-0 kubenswrapper[3171]: I0216 17:14:17.938668 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:17.938817 master-0 kubenswrapper[3171]: E0216 17:14:17.938779 3171 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:17.938873 master-0 kubenswrapper[3171]: E0216 17:14:17.938855 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.938837274 +0000 UTC m=+29.607692530 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:17.938909 master-0 kubenswrapper[3171]: E0216 17:14:17.938867 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:17.939011 master-0 kubenswrapper[3171]: E0216 17:14:17.938988 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.938975608 +0000 UTC m=+29.607830904 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:17.939067 master-0 kubenswrapper[3171]: I0216 17:14:17.939034 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:17.939110 master-0 kubenswrapper[3171]: I0216 17:14:17.939086 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:17.939144 master-0 kubenswrapper[3171]: E0216 17:14:17.939137 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:17.939175 master-0 kubenswrapper[3171]: E0216 17:14:17.939166 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939159463 +0000 UTC m=+29.608014719 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:17.939175 master-0 kubenswrapper[3171]: E0216 17:14:17.939164 3171 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:17.939230 master-0 kubenswrapper[3171]: E0216 17:14:17.939202 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939195434 +0000 UTC m=+29.608050690 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:17.939283 master-0 kubenswrapper[3171]: I0216 17:14:17.939267 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:17.939319 master-0 kubenswrapper[3171]: I0216 17:14:17.939294 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:17.939319 master-0 kubenswrapper[3171]: I0216 17:14:17.939314 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:17.939370 master-0 kubenswrapper[3171]: I0216 17:14:17.939348 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:17.939397 master-0 kubenswrapper[3171]: I0216 17:14:17.939377 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:17.939397 master-0 kubenswrapper[3171]: E0216 17:14:17.939380 3171 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:17.939465 master-0 kubenswrapper[3171]: I0216 17:14:17.939399 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:17.939465 master-0 kubenswrapper[3171]: E0216 17:14:17.939414 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939404719 +0000 UTC m=+29.608259985 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:17.939539 master-0 kubenswrapper[3171]: E0216 17:14:17.939482 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:17.939569 master-0 kubenswrapper[3171]: E0216 17:14:17.939547 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0 podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939530723 +0000 UTC m=+29.608385999 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:17.939608 master-0 kubenswrapper[3171]: E0216 17:14:17.939565 3171 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:17.939608 master-0 kubenswrapper[3171]: E0216 17:14:17.939598 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939590314 +0000 UTC m=+29.608445570 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:17.939664 master-0 kubenswrapper[3171]: I0216 17:14:17.939626 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:17.939664 master-0 kubenswrapper[3171]: E0216 17:14:17.939646 3171 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:17.939720 master-0 kubenswrapper[3171]: E0216 17:14:17.939673 3171 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:17.939720 master-0 kubenswrapper[3171]: E0216 17:14:17.939681 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939671656 +0000 UTC m=+29.608526922 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:17.939720 master-0 kubenswrapper[3171]: E0216 17:14:17.939696 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939689397 +0000 UTC m=+29.608544663 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:17.939720 master-0 kubenswrapper[3171]: E0216 17:14:17.939713 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: E0216 17:14:17.939734 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939727928 +0000 UTC m=+29.608583174 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: I0216 17:14:17.939650 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: E0216 17:14:17.939743 3171 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: I0216 17:14:17.939757 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: E0216 17:14:17.939771 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939763549 +0000 UTC m=+29.608618815 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: E0216 17:14:17.939785 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: I0216 17:14:17.939795 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: E0216 17:14:17.939802 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.93979688 +0000 UTC m=+29.608652126 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: E0216 17:14:17.939815 3171 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: E0216 17:14:17.939836 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939831921 +0000 UTC m=+29.608687177 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: I0216 17:14:17.939834 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:17.939861 master-0 kubenswrapper[3171]: I0216 17:14:17.939859 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.939881 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.939889 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.939907 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939899003 +0000 UTC m=+29.608754269 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.939925 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.939928 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.939947 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.939940574 +0000 UTC m=+29.608795940 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.939988 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940000 3171 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.940017 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940035 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940024906 +0000 UTC m=+29.608880172 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940064 3171 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.940080 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940087 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940081138 +0000 UTC m=+29.608936394 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940127 3171 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940135 3171 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940152 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940145029 +0000 UTC m=+29.609000295 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940167 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.94015985 +0000 UTC m=+29.609015116 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940168 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940189 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.94018417 +0000 UTC m=+29.609039426 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.940188 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.940216 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.940244 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.940269 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940220 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: E0216 17:14:17.940297 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940291153 +0000 UTC m=+29.609146409 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:17.940295 master-0 kubenswrapper[3171]: I0216 17:14:17.940313 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940335 3171 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940348 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940363 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940353715 +0000 UTC m=+29.609208981 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940377 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940384 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940394 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940389556 +0000 UTC m=+29.609244812 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940250 3171 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940416 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940411446 +0000 UTC m=+29.609266702 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940412 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940438 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940451 3171 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940462 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940476 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940467018 +0000 UTC m=+29.609322284 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940495 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940536 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940564 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940590 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940616 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940644 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940671 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940724 3171 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940753 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940779 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940804 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940500 3171 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940830 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940843 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940837408 +0000 UTC m=+29.609692744 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: I0216 17:14:17.940859 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940881 3171 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940902 3171 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940908 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.94089883 +0000 UTC m=+29.609754096 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940922 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.9409159 +0000 UTC m=+29.609771176 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940934 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940949 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.940944261 +0000 UTC m=+29.609799517 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:17.941329 master-0 kubenswrapper[3171]: E0216 17:14:17.940982 3171 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941003 3171 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941011 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941002562 +0000 UTC m=+29.609857828 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.940882 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941024 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941016283 +0000 UTC m=+29.609871549 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.940529 3171 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941049 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941043314 +0000 UTC m=+29.609898660 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.940987 3171 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941068 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941075 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941068874 +0000 UTC m=+29.609924140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.940554 3171 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941095 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941102 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941096165 +0000 UTC m=+29.609951441 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941132 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941158 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941162 3171 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941186 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941180687 +0000 UTC m=+29.610035943 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941199 3171 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941220 3171 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941224 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941216758 +0000 UTC m=+29.610072024 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941238 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941233409 +0000 UTC m=+29.610088665 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941252 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941273 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941294 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941320 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941275 3171 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941341 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 
17:14:17.941351 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941342652 +0000 UTC m=+29.610197928 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941372 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941383 3171 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941392 3171 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941402 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941410 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941405133 +0000 UTC m=+29.610260389 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: I0216 17:14:17.941427 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941438 3171 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:17.942639 master-0 kubenswrapper[3171]: E0216 17:14:17.941454 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941449815 +0000 UTC m=+29.610305071 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940583 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941490 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941485546 +0000 UTC m=+29.610340802 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941493 3171 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940611 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941515 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:19.941510656 +0000 UTC m=+29.610365912 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941524 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941520216 +0000 UTC m=+29.610375472 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940653 3171 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941538 3171 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941546 3171 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941550 3171 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941561 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941557117 +0000 UTC m=+29.610412373 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941572 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941568448 +0000 UTC m=+29.610423694 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940674 3171 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941583 3171 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941590 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941586608 +0000 UTC m=+29.610441864 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940700 3171 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940721 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940738 3171 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940769 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940791 3171 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940819 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.940273 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941319 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941662 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:17.943644 
master-0 kubenswrapper[3171]: E0216 17:14:17.941607 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941597229 +0000 UTC m=+29.610452495 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941679 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941673651 +0000 UTC m=+29.610528907 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941642 3171 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: I0216 17:14:17.941698 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941718 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941711722 +0000 UTC m=+29.610566988 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: I0216 17:14:17.941738 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941749 3171 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: I0216 17:14:17.941763 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941772 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941764303 +0000 UTC m=+29.610619569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: I0216 17:14:17.941792 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941800 3171 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941819 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941813324 +0000 UTC m=+29.610668580 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: I0216 17:14:17.941818 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:17.943644 master-0 kubenswrapper[3171]: E0216 17:14:17.941839 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941832685 +0000 UTC m=+29.610687951 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941855 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941846535 +0000 UTC m=+29.610701811 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941859 3171 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941868 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941862046 +0000 UTC m=+29.610717312 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941883 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:19.941877046 +0000 UTC m=+29.610732312 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941895 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941889296 +0000 UTC m=+29.610744572 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941909 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941902427 +0000 UTC m=+29.610757703 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941918 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941921 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941915267 +0000 UTC m=+29.610770533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941934 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941928968 +0000 UTC m=+29.610784234 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941947 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941941858 +0000 UTC m=+29.610797124 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.941979 3171 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: I0216 17:14:17.941993 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942001 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.941993489 +0000 UTC m=+29.610848745 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942024 3171 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: I0216 17:14:17.942035 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942046 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942040141 +0000 UTC m=+29.610895397 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: I0216 17:14:17.942062 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942086 3171 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942102 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942084472 +0000 UTC m=+29.610939728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942117 3171 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942124 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942116983 +0000 UTC m=+29.610972239 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942127 3171 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942143 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942135183 +0000 UTC m=+29.610990539 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: I0216 17:14:17.942086 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942165 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942153424 +0000 UTC m=+29.611008680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: I0216 17:14:17.942238 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: I0216 17:14:17.942265 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: I0216 17:14:17.942299 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: I0216 17:14:17.942366 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:17.944710 master-0 kubenswrapper[3171]: E0216 17:14:17.942378 3171 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 
17:14:17.944710 master-0 kubenswrapper[3171]: I0216 17:14:17.942389 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942408 3171 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942414 3171 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942411 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.94240252 +0000 UTC m=+29.611257776 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942428 3171 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942436 3171 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.942443 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942446 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942439231 +0000 UTC m=+29.611294487 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942482 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942459392 +0000 UTC m=+29.611314648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942486 3171 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942494 3171 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942501 3171 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942510 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942504503 +0000 UTC m=+29.611359759 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.942500 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942520 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942516183 +0000 UTC m=+29.611371439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942529 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942525084 +0000 UTC m=+29.611380340 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942552 3171 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.942579 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.942573605 +0000 UTC m=+29.611428861 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.943835 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.943859 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.944421 3171 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.944509 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.944454066 +0000 UTC m=+29.613309332 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.944685 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.944729 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.944719193 +0000 UTC m=+29.613574469 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.944571 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.944778 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.944805 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.944837 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.944863 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.944891 3171 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.944921 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.944945 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.945005 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: I0216 17:14:17.945033 3171 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:17.945572 master-0 kubenswrapper[3171]: E0216 17:14:17.945112 3171 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945200 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.945191126 +0000 UTC m=+29.614046382 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945234 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945253 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.945247597 +0000 UTC m=+29.614102853 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945278 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945296 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.945291489 +0000 UTC m=+29.614146745 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945328 3171 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945346 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.94534102 +0000 UTC m=+29.614196276 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945367 3171 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945384 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.945378921 +0000 UTC m=+29.614234177 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945412 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945430 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.945425242 +0000 UTC m=+29.614280498 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945460 3171 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945478 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.945471663 +0000 UTC m=+29.614326919 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945510 3171 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945527 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.945522385 +0000 UTC m=+29.614377631 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945556 3171 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945574 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.945569006 +0000 UTC m=+29.614424262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945657 3171 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:17.946721 master-0 kubenswrapper[3171]: E0216 17:14:17.945695 3171 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:19.945687249 +0000 UTC m=+29.614542505 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:17.949127 master-0 kubenswrapper[3171]: I0216 17:14:17.948942 3171 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:14:17.949247 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 17:14:17.971337 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 17:14:17.971899 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 17:14:17.975160 master-0 systemd[1]: kubelet.service: Consumed 3.371s CPU time. Feb 16 17:14:17.997893 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 17:14:18.084882 master-0 kubenswrapper[4083]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:14:18.084882 master-0 kubenswrapper[4083]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 17:14:18.084882 master-0 kubenswrapper[4083]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 17:14:18.084882 master-0 kubenswrapper[4083]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:14:18.084882 master-0 kubenswrapper[4083]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 17:14:18.085623 master-0 kubenswrapper[4083]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:14:18.085623 master-0 kubenswrapper[4083]: I0216 17:14:18.084993 4083 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 17:14:18.087255 master-0 kubenswrapper[4083]: W0216 17:14:18.087229 4083 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:14:18.087255 master-0 kubenswrapper[4083]: W0216 17:14:18.087244 4083 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:14:18.087255 master-0 kubenswrapper[4083]: W0216 17:14:18.087248 4083 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:14:18.087255 master-0 kubenswrapper[4083]: W0216 17:14:18.087253 4083 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:14:18.087255 master-0 kubenswrapper[4083]: W0216 17:14:18.087257 4083 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:14:18.087255 master-0 kubenswrapper[4083]: W0216 17:14:18.087262 4083 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087266 4083 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087270 4083 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087273 4083 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087276 4083 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087280 4083 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087284 4083 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087288 4083 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087292 4083 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087295 4083 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087299 4083 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087303 4083 feature_gate.go:330] unrecognized feature gate: 
VSphereDriverConfiguration Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087307 4083 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087310 4083 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087314 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087317 4083 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087322 4083 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087326 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087331 4083 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:14:18.087501 master-0 kubenswrapper[4083]: W0216 17:14:18.087335 4083 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087339 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087343 4083 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087347 4083 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087351 4083 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087355 4083 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087359 4083 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087362 4083 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087365 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087369 4083 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087374 4083 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087378 4083 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087384 4083 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087389 4083 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087394 4083 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087400 4083 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087406 4083 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087411 4083 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087416 4083 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:14:18.088516 master-0 kubenswrapper[4083]: W0216 17:14:18.087420 4083 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087425 4083 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087431 4083 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087435 4083 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087440 4083 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087444 4083 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087449 4083 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087453 4083 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087458 4083 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087462 4083 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087466 4083 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087471 4083 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087474 4083 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087478 4083 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087481 4083 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087484 4083 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087488 4083 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087492 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087496 4083 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087499 4083 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:14:18.089201 master-0 kubenswrapper[4083]: W0216 17:14:18.087504 4083 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: W0216 17:14:18.087508 4083 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: W0216 17:14:18.087512 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: W0216 17:14:18.087515 4083 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: W0216 17:14:18.087519 4083 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: W0216 17:14:18.087523 4083 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: W0216 17:14:18.087526 4083 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: W0216 17:14:18.087530 4083 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: W0216 17:14:18.087534 4083 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087612 4083 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087620 4083 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087626 4083 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087632 4083 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087637 4083 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087641 4083 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087646 4083 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087651 4083 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087655 4083 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087659 4083 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087664 4083 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087668 4083 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087672 4083 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 17:14:18.090277 master-0 kubenswrapper[4083]: I0216 17:14:18.087676 4083 flags.go:64] FLAG: --cgroup-root=""
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087680 4083 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087685 4083 flags.go:64] FLAG: --client-ca-file=""
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087689 4083 flags.go:64] FLAG: --cloud-config=""
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087693 4083 flags.go:64] FLAG: --cloud-provider=""
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087696 4083 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087702 4083 flags.go:64] FLAG: --cluster-domain=""
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087706 4083 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087711 4083 flags.go:64] FLAG: --config-dir=""
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087715 4083 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087719 4083 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087724 4083 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087728 4083 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087733 4083 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087737 4083 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087741 4083 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087746 4083 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087750 4083 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087754 4083 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087758 4083 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087763 4083 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087768 4083 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087772 4083 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087777 4083 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087781 4083 flags.go:64] FLAG: --enable-server="true"
Feb 16 17:14:18.091747 master-0 kubenswrapper[4083]: I0216 17:14:18.087785 4083 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087790 4083 flags.go:64] FLAG: --event-burst="100"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087795 4083 flags.go:64] FLAG: --event-qps="50"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087799 4083 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087803 4083 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087807 4083 flags.go:64] FLAG: --eviction-hard=""
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087813 4083 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087817 4083 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087821 4083 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087825 4083 flags.go:64] FLAG: --eviction-soft=""
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087829 4083 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087833 4083 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087837 4083 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087841 4083 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087847 4083 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087851 4083 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087855 4083 flags.go:64] FLAG: --feature-gates=""
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087860 4083 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087864 4083 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087868 4083 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087872 4083 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087876 4083 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087880 4083 flags.go:64] FLAG: --help="false"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087885 4083 flags.go:64] FLAG: --hostname-override=""
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087889 4083 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087893 4083 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 17:14:18.093105 master-0 kubenswrapper[4083]: I0216 17:14:18.087898 4083 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087902 4083 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087906 4083 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087910 4083 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087914 4083 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087917 4083 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087921 4083 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087926 4083 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087930 4083 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087934 4083 flags.go:64] FLAG: --kube-reserved=""
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087938 4083 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087942 4083 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087946 4083 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087950 4083 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087970 4083 flags.go:64] FLAG: --lock-file=""
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087975 4083 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087980 4083 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087986 4083 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087994 4083 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.087999 4083 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.088004 4083 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.088008 4083 flags.go:64] FLAG: --logging-format="text"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.088012 4083 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.088017 4083 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.088021 4083 flags.go:64] FLAG: --manifest-url=""
Feb 16 17:14:18.094568 master-0 kubenswrapper[4083]: I0216 17:14:18.088025 4083 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088030 4083 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088034 4083 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088040 4083 flags.go:64] FLAG: --max-pods="110"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088045 4083 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088051 4083 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088055 4083 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088059 4083 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088063 4083 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088067 4083 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088071 4083 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088084 4083 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088088 4083 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088092 4083 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088096 4083 flags.go:64] FLAG: --pod-cidr=""
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088100 4083 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088106 4083 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088110 4083 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088114 4083 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088118 4083 flags.go:64] FLAG: --port="10250"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088122 4083 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088126 4083 flags.go:64] FLAG: --provider-id=""
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088130 4083 flags.go:64] FLAG: --qos-reserved=""
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088134 4083 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 17:14:18.095438 master-0 kubenswrapper[4083]: I0216 17:14:18.088139 4083 flags.go:64] FLAG: --register-node="true"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088143 4083 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088146 4083 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088154 4083 flags.go:64] FLAG: --registry-burst="10"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088158 4083 flags.go:64] FLAG: --registry-qps="5"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088162 4083 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088166 4083 flags.go:64] FLAG: --reserved-memory=""
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088171 4083 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088175 4083 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088179 4083 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088183 4083 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088187 4083 flags.go:64] FLAG: --runonce="false"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088191 4083 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088196 4083 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088200 4083 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088204 4083 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088208 4083 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088212 4083 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088216 4083 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088221 4083 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088225 4083 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088229 4083 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088233 4083 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088237 4083 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088241 4083 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 17:14:18.096319 master-0 kubenswrapper[4083]: I0216 17:14:18.088245 4083 flags.go:64] FLAG: --system-cgroups=""
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088250 4083 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088256 4083 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088260 4083 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088264 4083 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088269 4083 flags.go:64] FLAG: --tls-min-version=""
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088273 4083 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088277 4083 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088281 4083 flags.go:64] FLAG: --topology-manager-policy-options=""
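The flags.go:64 block is a verbose (--v=2) dump of every registered command-line flag with its effective value, including the deprecated ones from the banner at the top: --system-reserved, --register-with-taints and --pod-infra-container-image all reappear here with the values the node configuration rendered. A sketch of the same pattern using Go's standard flag package (the kubelet itself uses spf13/pflag, but the visiting mechanism is the same idea):

```go
package main

import (
	"flag"
	"log"
)

func main() {
	// A couple of stand-in flags; the kubelet registers hundreds.
	flag.String("node-ip", "192.168.32.10", "IP address of the node")
	flag.Int("max-pods", 110, "maximum number of pods per node")
	flag.Parse()

	// VisitAll walks every registered flag, set or not, which is why
	// the dump above shows defaults alongside values from the unit file.
	flag.VisitAll(func(f *flag.Flag) {
		log.Printf("FLAG: --%s=%q", f.Name, f.Value.String())
	})
}
```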
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088285 4083 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088289 4083 flags.go:64] FLAG: --v="2"
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088294 4083 flags.go:64] FLAG: --version="false"
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088300 4083 flags.go:64] FLAG: --vmodule=""
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088304 4083 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: I0216 17:14:18.088309 4083 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: W0216 17:14:18.088399 4083 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: W0216 17:14:18.088411 4083 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: W0216 17:14:18.088416 4083 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: W0216 17:14:18.088435 4083 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: W0216 17:14:18.088441 4083 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: W0216 17:14:18.088445 4083 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: W0216 17:14:18.088449 4083 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: W0216 17:14:18.088454 4083 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:14:18.097409 master-0 kubenswrapper[4083]: W0216 17:14:18.088457 4083 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088461 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088465 4083 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088468 4083 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088472 4083 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088476 4083 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088479 4083 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088483 4083 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088488 4083 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088492 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088496 4083 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088502 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088507 4083 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088511 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088516 4083 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088520 4083 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088525 4083 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088529 4083 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088534 4083 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:14:18.098094 master-0 kubenswrapper[4083]: W0216 17:14:18.088538 4083 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088545 4083 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
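The same unrecognized-gate list now repeats with fresh klog timestamps (18.088399 onward): the gate map appears to be applied more than once during startup, plausibly once for the command line and again when the featureGates section of /etc/kubernetes/kubelet.conf is overlaid, and each pass re-logs the same warnings. A hedged sketch of that overlay step; merge and both maps are hypothetical names for illustration, not kubelet code:

```go
package main

import "fmt"

// merge overlays the config file's featureGates onto the defaults;
// re-applying a map like this is the kind of second pass that would
// re-trigger the warnings above.
func merge(defaults, fromConfig map[string]bool) map[string]bool {
	out := make(map[string]bool, len(defaults)+len(fromConfig))
	for k, v := range defaults {
		out[k] = v
	}
	for k, v := range fromConfig {
		out[k] = v // the config file wins on conflict
	}
	return out
}

func main() {
	defaults := map[string]bool{"KMSv1": true}
	fromConfig := map[string]bool{"CloudDualStackNodeIPs": true}
	fmt.Println(merge(defaults, fromConfig))
}
```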
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088550 4083 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088555 4083 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088560 4083 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088564 4083 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088570 4083 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088574 4083 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088579 4083 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088583 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088589 4083 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088593 4083 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088597 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088602 4083 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088606 4083 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088611 4083 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088615 4083 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088619 4083 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088622 4083 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088626 4083 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:14:18.098595 master-0 kubenswrapper[4083]: W0216 17:14:18.088629 4083 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088633 4083 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088636 4083 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088640 4083 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088643 4083 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088647 4083 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088650 4083 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088654 4083 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088659 4083 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088662 4083 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088666 4083 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088669 4083 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088673 4083 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088676 4083 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088680 4083 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088683 4083 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088688 4083 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088693 4083 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088697 4083 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088701 4083 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:14:18.099169 master-0 kubenswrapper[4083]: W0216 17:14:18.088704 4083 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:14:18.099651 master-0 kubenswrapper[4083]: W0216 17:14:18.088708 4083 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:14:18.099651 master-0 kubenswrapper[4083]: W0216 17:14:18.088712 4083 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:14:18.099651 master-0 kubenswrapper[4083]: W0216 17:14:18.088716 4083 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:14:18.099651 master-0 kubenswrapper[4083]: W0216 17:14:18.088719 4083 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:14:18.099651 master-0 kubenswrapper[4083]: I0216 17:14:18.088725 4083 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: I0216 17:14:18.099916 4083 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: I0216 17:14:18.099967 4083 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100102 4083 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100113 4083 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100119 4083 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100126 4083 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100132 4083 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100137 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100143 4083 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100148 4083 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100153 4083 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100158 4083 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100163 4083 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100169 4083 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100175 4083 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:14:18.100156 master-0 kubenswrapper[4083]: W0216 17:14:18.100180 4083 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100186 4083 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100191 4083 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100196 4083 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100201 4083 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100206 4083 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100211 4083 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100217 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100222 4083 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100228 4083 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100233 4083 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100238 4083 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100243 4083 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100248 4083 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100254 4083 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100259 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100265 4083 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100270 4083 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100278 4083 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:14:18.101207 master-0 kubenswrapper[4083]: W0216 17:14:18.100287 4083 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100293 4083 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100299 4083 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100304 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100311 4083 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100317 4083 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100324 4083 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100332 4083 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
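Between the warning floods, feature_gate.go:386 prints the gate map that actually took effect, and server.go:491/493 record the kubelet build (v1.31.14) and its Go runtime knobs; the empty GOGC/GOMAXPROCS/GOTRACEBACK strings mean the environment left the runtime defaults in place. A small sketch of how such a line can be produced (the formatting is approximate, not server.go's exact code):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	// Empty strings mean the variable is unset and the Go runtime
	// default applies, matching the "Golang settings" line above.
	fmt.Printf("Golang settings GOGC=%q GOMAXPROCS=%q GOTRACEBACK=%q\n",
		os.Getenv("GOGC"), os.Getenv("GOMAXPROCS"), os.Getenv("GOTRACEBACK"))
	fmt.Println("effective GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```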
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100339 4083 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100344 4083 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100349 4083 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100355 4083 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100360 4083 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100365 4083 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100370 4083 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100376 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100381 4083 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100386 4083 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100391 4083 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:14:18.101999 master-0 kubenswrapper[4083]: W0216 17:14:18.100396 4083 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100402 4083 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100406 4083 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100412 4083 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100418 4083 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100423 4083 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100428 4083 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100433 4083 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100438 4083 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100443 4083 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100448 4083 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100453 4083 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100458 4083 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100463 4083 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100468 4083 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100473 4083 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100478 4083 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100483 4083 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100489 4083 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100494 4083 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:14:18.103225 master-0 kubenswrapper[4083]: W0216 17:14:18.100499 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: I0216 17:14:18.100508 4083 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100665 4083 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100674 4083 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100679 4083 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100685 4083 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100690 4083 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100695 4083 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100701 4083 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100707 4083 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100712 4083 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100718 4083 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100723 4083 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100729 4083 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100734 4083 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:14:18.103775 master-0 kubenswrapper[4083]: W0216 17:14:18.100739 4083 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100744 4083 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100751 4083 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100757 4083 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100762 4083 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100769 4083 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100776 4083 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100783 4083 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100789 4083 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100795 4083 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100800 4083 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100805 4083 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100811 4083 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100816 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100821 4083 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100827 4083 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100832 4083 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100837 4083 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100842 4083 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:14:18.104203 master-0 kubenswrapper[4083]: W0216 17:14:18.100848 4083 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100853 4083 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100858 4083 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100865 4083 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100873 4083 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100879 4083 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100884 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100890 4083 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100895 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100901 4083 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100906 4083 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100912 4083 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100917 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100923 4083 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100927 4083 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100933 4083 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100938 4083 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100943 4083 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100948 4083 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100953 4083 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:14:18.104666 master-0 kubenswrapper[4083]: W0216 17:14:18.100973 4083 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.100979 4083 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.100984 4083 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.100989 4083 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.100993 4083 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.100998 4083 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101004 4083 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101009 4083 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101013 4083 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101018 4083 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101025 4083 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101030 4083 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101035 4083 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101040 4083 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101045 4083 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101050 4083 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101055 4083 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101060 4083 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101065 4083 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:14:18.105383 master-0 kubenswrapper[4083]: W0216 17:14:18.101070 4083 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:14:18.106253 master-0 kubenswrapper[4083]: I0216 17:14:18.101080 4083 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:14:18.106253 master-0 kubenswrapper[4083]: I0216 17:14:18.101284 4083 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 17:14:18.106253 master-0 kubenswrapper[4083]: I0216 17:14:18.103264 4083 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 17:14:18.106253 master-0 kubenswrapper[4083]: I0216 17:14:18.103355 4083 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 17:14:18.106253 master-0 kubenswrapper[4083]: I0216 17:14:18.103632 4083 server.go:997] "Starting client certificate rotation" Feb 16 17:14:18.106253 master-0 kubenswrapper[4083]: I0216 17:14:18.103644 4083 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 17:14:18.106253 master-0 kubenswrapper[4083]: I0216 17:14:18.103837 4083 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 14:24:13.633251468 +0000 UTC Feb 16 17:14:18.106253 master-0 kubenswrapper[4083]: I0216 17:14:18.103910 4083 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 21h9m55.529344508s for next certificate rotation Feb 16 17:14:18.106253 master-0 kubenswrapper[4083]: I0216 17:14:18.104430 4083 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:14:18.107221 master-0 kubenswrapper[4083]: I0216 17:14:18.107188 4083 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:14:18.111203 master-0 kubenswrapper[4083]: I0216 17:14:18.110980 4083 log.go:25] "Validated CRI v1 runtime API" Feb 16 17:14:18.116214 master-0 kubenswrapper[4083]: I0216 17:14:18.116179 4083 log.go:25] "Validated CRI v1 image API" Feb 16 17:14:18.117351 master-0 kubenswrapper[4083]: I0216 17:14:18.117246 4083 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 17:14:18.128303 master-0 kubenswrapper[4083]: I0216 17:14:18.128193 4083 fs.go:135] Filesystem UUIDs: map[35a0b0cc-84b1-4374-a18a-0f49ad7a8333:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 16 17:14:18.129002 master-0 kubenswrapper[4083]: I0216 17:14:18.128290 4083 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06ddc2da13c0775c0e8f0acf19c817f8072a1fcf961d84d30040d9ab97e3ada6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06ddc2da13c0775c0e8f0acf19c817f8072a1fcf961d84d30040d9ab97e3ada6/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/24414ee08d96c92b5e5a2987ea123de0d7bf6e29180b3cd5f05b52963a1027d2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/24414ee08d96c92b5e5a2987ea123de0d7bf6e29180b3cd5f05b52963a1027d2/userdata/shm major:0 minor:226 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c/userdata/shm major:0 minor:56 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/343b05919e1f64786e2254ca5f9bc68bb46285c032cc50fdd759ba021022cc78/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/343b05919e1f64786e2254ca5f9bc68bb46285c032cc50fdd759ba021022cc78/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/350fa8046d93176987309e508424a1570fc1eed5f18ae89cb9ac0b90ba3cb70f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/350fa8046d93176987309e508424a1570fc1eed5f18ae89cb9ac0b90ba3cb70f/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3ba5b55cdc513202565393d69d57718508e29795dfd1cdb87d49dc9c14489665/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3ba5b55cdc513202565393d69d57718508e29795dfd1cdb87d49dc9c14489665/userdata/shm major:0 minor:219 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b/userdata/shm major:0 minor:193 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/63964f47111b36b48ad0828624ccf523359d31129423bb7917fb1cd01aad8c04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/63964f47111b36b48ad0828624ccf523359d31129423bb7917fb1cd01aad8c04/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b896923757b268325f5828b39f28a688ac9f66638ad3480c6f941e1ecce93ee/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b896923757b268325f5828b39f28a688ac9f66638ad3480c6f941e1ecce93ee/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d/userdata/shm major:0 minor:70 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c7d736050372998764922f31dd6cf88581d3803877d64338fea850cc359f7da3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c7d736050372998764922f31dd6cf88581d3803877d64338fea850cc359f7da3/userdata/shm major:0 minor:297 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d0c55c98db8491069414beee715fb0df1d28a36c886f9583fb6e5f20a3fd1076/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d0c55c98db8491069414beee715fb0df1d28a36c886f9583fb6e5f20a3fd1076/userdata/shm major:0 minor:64 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc503dfcd2e24f7be396a4259fc22362db61ad7785d0bcd4d3225ebdfd2e1f72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc503dfcd2e24f7be396a4259fc22362db61ad7785d0bcd4d3225ebdfd2e1f72/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e5383f33102a9898e2ed29273a78d4b119c0cc0618f01a3ae943b24be1f2db07/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e5383f33102a9898e2ed29273a78d4b119c0cc0618f01a3ae943b24be1f2db07/userdata/shm major:0 minor:259 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e8d809425731cc2967cdb379e53f1be7eba9e51662dfb79330c03d92562a8e44/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e8d809425731cc2967cdb379e53f1be7eba9e51662dfb79330c03d92562a8e44/userdata/shm major:0 minor:230 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95/userdata/shm major:0 minor:201 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f119761284235b155a5550b379cddae2d59c4785ac83d6f2e1cabbb819959d1e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f119761284235b155a5550b379cddae2d59c4785ac83d6f2e1cabbb819959d1e/userdata/shm major:0 minor:241 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa/userdata/shm major:0 minor:206 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn:{mountpoint:/var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn major:0 minor:200 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q:{mountpoint:/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q major:0 minor:190 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~empty-dir/config-out major:0 minor:178 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~projected/kube-api-access-lgm4p:{mountpoint:/var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~projected/kube-api-access-lgm4p major:0 minor:199 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld:{mountpoint:/var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld major:0 minor:187 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2:{mountpoint:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 major:0 minor:293 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert major:0 minor:170 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x:{mountpoint:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x major:0 minor:188 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls major:0 minor:173 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x:{mountpoint:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x major:0 minor:273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg:{mountpoint:/var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg major:0 minor:189 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw major:0 minor:192 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:181 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls major:0 minor:179 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:177 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs major:0 minor:180 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:171 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g:{mountpoint:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g major:0 minor:283 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2 major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:174 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:169 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm:{mountpoint:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm major:0 minor:202 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl major:0 minor:298 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:175 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5:{mountpoint:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb:{mountpoint:/var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb major:0 minor:276 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg:{mountpoint:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:285 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert major:0 minor:183 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455:{mountpoint:/var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455 major:0 minor:186 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:182 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp major:0 minor:172 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52:{mountpoint:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 major:0 minor:290 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67:{mountpoint:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~empty-dir/config-out major:0 minor:167 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~projected/kube-api-access-tjpvn:{mountpoint:/var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~projected/kube-api-access-tjpvn major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz major:0 minor:191 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate major:0 minor:168 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs major:0 minor:176 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth major:0 minor:184 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz:{mountpoint:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz major:0 minor:185 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl:{mountpoint:/var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl major:0 minor:212 fsType:tmpfs blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/09f5836d214995127556cc63cae727f45dd64f338d468d3b2445aa558ab8d5e8/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/4730131787eddad749fc1dc0f20ea38937957f70f28d392e79196c8fa37c8ef6/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/8c7f87c4a3f6c3c580bc0fc7fbbd13e0d7d2b4ae53bb36402e20d28f605b1e69/merged major:0 minor:125 fsType:overlay blockSize:0} overlay_0-130:{mountpoint:/var/lib/containers/storage/overlay/0da4438975335e7243afc2a4f3e2a4f8796d170774df09174999b6c09b8c4d4f/merged major:0 minor:130 fsType:overlay blockSize:0} overlay_0-135:{mountpoint:/var/lib/containers/storage/overlay/9d103eefb9cd6e634cc6e8e22695241fb1883f77169c805f47273565f116e25b/merged major:0 minor:135 fsType:overlay blockSize:0} overlay_0-137:{mountpoint:/var/lib/containers/storage/overlay/3070d2a7aaa9b542fb066276d7bd3ab89f151ef2a983437c6ea145ec7cf95490/merged major:0 minor:137 fsType:overlay blockSize:0} overlay_0-142:{mountpoint:/var/lib/containers/storage/overlay/63492e05694adcbdc99297bfe0ea001bfbf93f57508ee414e0720638257875bc/merged major:0 minor:142 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/94a0fdd4dc96d79bf953c352cd97eecde7d4b0e24c37325ba16db0a33727b9c5/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/69eb6442e41dde7d72cb954446ea25da0c5cd141f76e4dbbb3814302b6b53917/merged major:0 minor:152 fsType:overlay blockSize:0} 
overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/3a2c0df119dc5d1243a314c040b14ced4bccbcbb4159ca2254f48529ca3dd27c/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/12f8ba5318372c4b7f1cf9087c35f19a7148877123566a714a340028b8fee625/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-197:{mountpoint:/var/lib/containers/storage/overlay/b3d4af7f554ddfe2f2fcd32535e883761e6647c53a1a3c8665582158409ba4e5/merged major:0 minor:197 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/9e88e470abf2b7d410743312c933211475395e7ae1f9acacef03884c9045a9da/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/d52e468d25b5758f775699b0fbc016dcc8aedfe7d94487aacbe1d317f840793b/merged major:0 minor:208 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/702510cfe6ddeefbabe3a2111460f6278b35c164b8ae1a1a751ca22c69d7d900/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-214:{mountpoint:/var/lib/containers/storage/overlay/186aa6666ab889b7e4e4dce2937f24f89bc0f24eff15b1774acdab8a060ba6b4/merged major:0 minor:214 fsType:overlay blockSize:0} overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/aa9ecae4d1a4ceec48ec88a3015b3684af8932a8b6a41206798f041e3e225855/merged major:0 minor:221 fsType:overlay blockSize:0} overlay_0-224:{mountpoint:/var/lib/containers/storage/overlay/d496fa0a7fc542531c39318b15650827419755e1bcc42743c2e01c6e4bf6af88/merged major:0 minor:224 fsType:overlay blockSize:0} overlay_0-228:{mountpoint:/var/lib/containers/storage/overlay/40a17c793913c806b296c6fe0a05380b7258d3ba3ce4310cba28ce3cb135121b/merged major:0 minor:228 fsType:overlay blockSize:0} overlay_0-231:{mountpoint:/var/lib/containers/storage/overlay/5eba6c4f6844ea21a23157c212613d6f4fa74585a9970e7a548dc4d894617d30/merged major:0 minor:231 fsType:overlay blockSize:0} overlay_0-234:{mountpoint:/var/lib/containers/storage/overlay/b589cb15051206449ae81ac298f303e2a5bcf7c12c28982a954808b0e5637f4e/merged major:0 minor:234 fsType:overlay blockSize:0} overlay_0-239:{mountpoint:/var/lib/containers/storage/overlay/7283f153293e56f4e755c9366dc70339c452e6f39607e4b659e7c9832c151272/merged major:0 minor:239 fsType:overlay blockSize:0} overlay_0-243:{mountpoint:/var/lib/containers/storage/overlay/d48c4a6d8c87e1fa076654fe491ff87b4f38c03eda1debb0919bd2351ae7d92b/merged major:0 minor:243 fsType:overlay blockSize:0} overlay_0-246:{mountpoint:/var/lib/containers/storage/overlay/9e63d7a4326b5ec8153955e318c55bcd66365d859a792003aedf538437f84a48/merged major:0 minor:246 fsType:overlay blockSize:0} overlay_0-248:{mountpoint:/var/lib/containers/storage/overlay/8f9ea854e5ed931e6baa0721e67da5b513d9fe212795bc0a1a06107226f9fb30/merged major:0 minor:248 fsType:overlay blockSize:0} overlay_0-250:{mountpoint:/var/lib/containers/storage/overlay/22c17e58a3400fda2ffa08fc8ec3fb7c6db371aa1d43eaaed16c7507eb81190f/merged major:0 minor:250 fsType:overlay blockSize:0} overlay_0-257:{mountpoint:/var/lib/containers/storage/overlay/4f65194683b4d68e378da4ff2e164613df7aaf6a4e152b85b4e265b32d4a2bf1/merged major:0 minor:257 fsType:overlay blockSize:0} overlay_0-262:{mountpoint:/var/lib/containers/storage/overlay/c8983bff7342de01d004106809e1f7ec6088ac989d6304d9e9e21694f0ff4261/merged major:0 minor:262 fsType:overlay blockSize:0} overlay_0-264:{mountpoint:/var/lib/containers/storage/overlay/60ef59b21d2f1430b898a6240be68b903a9c7dd4c8bcb6cf99bd2d26d67584b2/merged 
major:0 minor:264 fsType:overlay blockSize:0} overlay_0-268:{mountpoint:/var/lib/containers/storage/overlay/3c12b0317fc3ac12ad9a58d26bb1e85ce31dc060a7225c14e3a37c21117b1168/merged major:0 minor:268 fsType:overlay blockSize:0} overlay_0-270:{mountpoint:/var/lib/containers/storage/overlay/b5502720c8eeeec36b2411158213ea83f00091cc4205b92c2aeed3a1e35d781a/merged major:0 minor:270 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/2b46a752bc02876ba1a17fe70953dc6871d6676bead98e3b9c3c96f87c595637/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/932f409502bbac9d2faa99878047bc99797602d9274acb464b88b4d68c141352/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/8101b6e5669626295dd45290c7d8011b6e4a2729ac5bc8b2ccceb3d088ca7c2f/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/7ebb794291f510af155e6e9c13bf2106cb095056a7ba626fe9795effeaebb392/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/402059b3fd7c3ad82996a07cedfaf3d7fb9d6c5d1fd3dc9d933c742d808e308a/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/eb652cf6aa8f44fb5e3e9b866463ff90960f6c64e3f00bd20327792f8c3740e9/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/c98113b31333a24f181a526312000345ac72ccb1d864a8974a6c7277557c089f/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/7d635bcdbae5453e68be3fdf574ce463e8265695e310cf7dce2a5fc4a66adad4/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/9c5817251ab07013dd91ac724e3c3ef758f61d95260f8b239615b6d7a011d842/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/b8dc0f7dde04f7bdc6e3ed09beb2babaf65726c6c20c72c60ebe4b91ceb41d1e/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/b313f99ae5563b3c4cad66321570c8e1a1722b1a6ad83021898e8b7ecf8a5a54/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/10d7a5d142a47fdbd7bc2f0bc3dae43d175f5bd9b90f0828d5c32fdfdc346ac9/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/1b1e25640d1b5f7e1a6979e2d742f949d48a7a3e3b807df7357d7e6d0105226c/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/c1ad7b1e75ff8121a52451a872293c913d884fd56a8fb4a6efadb6f825495965/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/deef5c0bf28a7d0eb182bb332188ac4514e387dc82da41ef5d0de8e69ad30028/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/6f3d72195335cbe00283a829cbdcf37291165f9eb21ec43a7a98d254c2195096/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/dea8e56242f771c3f4240437ea872b5aa69cbcdc4c69adb681db7faa81964b01/merged major:0 minor:62 fsType:overlay blockSize:0} 
overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/15ddce899bf7025b006817f4fbf95920343b8c52fab4bba0df3bb822c0c022a7/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/d5a5bcb18bf2f392ce545cb9c9fc95f21a26dcad394e19ec2c176101a87ae10c/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/954320faee93ae31aa937b1cf69c00a727a2d2b41738c5d4fa9c335e7ebb20fe/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/eb9ac9029da0ad246d4f31dc18005da99e3a772c91a341f3ecbbb3dc266c210a/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/56e3064af552e12ea4899e2fbeaf3af74bafd51955a190e60d5ade4ba9e5aa8b/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/3f3b685a049e712cb33406c009189a55043a93c6c1f96dfe66fed2874b411f2e/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/c5b7c7203c8cc83b7cabe0e03d31156007762ec9128369df76fbc0b0a99124f9/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/c07d0d65df8268c2304231f4096c419ce813309f8b1ed76d971660164daca72b/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/dc9f8b50c0dd0e6ca4a6e1309bb74385c767326cd61a9990591642570f674d1f/merged major:0 minor:88 fsType:overlay blockSize:0} overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/d4726534cd1288f550660be9c544e9d0c0f564826956ed6334054d34a6121e37/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/7b7911ca28754b283d31d3daede99d5c25cba1c3fe1b5459cd5689ea168858e4/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/290ddae35fdfdf1e60f56b2e5e44cd9ee7fdc81fe4982ead3bef10512b2809f7/merged major:0 minor:94 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/8905486c7eef215b0e527317624001d34ae7bf74f408a7666d9c0b72ed9b4194/merged major:0 minor:96 fsType:overlay blockSize:0}] Feb 16 17:14:18.174277 master-0 kubenswrapper[4083]: I0216 17:14:18.173160 4083 manager.go:217] Machine: {Timestamp:2026-02-16 17:14:18.171387816 +0000 UTC m=+0.133965126 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f BootID:16009b8c-6511-4dd4-9a27-539c3ce647e4 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d0c55c98db8491069414beee715fb0df1d28a36c886f9583fb6e5f20a3fd1076/userdata/shm DeviceMajor:0 DeviceMinor:64 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n DeviceMajor:0 DeviceMinor:254 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:178 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/343b05919e1f64786e2254ca5f9bc68bb46285c032cc50fdd759ba021022cc78/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-243 DeviceMajor:0 DeviceMinor:243 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg DeviceMajor:0 DeviceMinor:245 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-257 DeviceMajor:0 DeviceMinor:257 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x DeviceMajor:0 DeviceMinor:273 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95/userdata/shm DeviceMajor:0 DeviceMinor:201 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl DeviceMajor:0 DeviceMinor:253 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:177 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:182 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg DeviceMajor:0 DeviceMinor:189 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2 DeviceMajor:0 DeviceMinor:216 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x DeviceMajor:0 DeviceMinor:252 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q DeviceMajor:0 DeviceMinor:190 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~projected/kube-api-access-lgm4p DeviceMajor:0 DeviceMinor:199 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-262 DeviceMajor:0 DeviceMinor:262 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:170 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:173 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455 DeviceMajor:0 DeviceMinor:186 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-268 DeviceMajor:0 DeviceMinor:268 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz DeviceMajor:0 DeviceMinor:191 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 DeviceMajor:0 DeviceMinor:213 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f119761284235b155a5550b379cddae2d59c4785ac83d6f2e1cabbb819959d1e/userdata/shm DeviceMajor:0 DeviceMinor:241 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:180 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b/userdata/shm DeviceMajor:0 DeviceMinor:193 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt DeviceMajor:0 DeviceMinor:217 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:183 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x DeviceMajor:0 DeviceMinor:188 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:261 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-270 DeviceMajor:0 DeviceMinor:270 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:167 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz DeviceMajor:0 DeviceMinor:185 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 DeviceMajor:0 DeviceMinor:293 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 DeviceMajor:0 DeviceMinor:223 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c7d736050372998764922f31dd6cf88581d3803877d64338fea850cc359f7da3/userdata/shm DeviceMajor:0 DeviceMinor:297 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:174 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-234 DeviceMajor:0 DeviceMinor:234 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-239 DeviceMajor:0 DeviceMinor:239 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-250 DeviceMajor:0 DeviceMinor:250 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g DeviceMajor:0 DeviceMinor:283 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65/userdata/shm DeviceMajor:0 DeviceMinor:50 
Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d/userdata/shm DeviceMajor:0 DeviceMinor:70 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa/userdata/shm DeviceMajor:0 DeviceMinor:206 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3ba5b55cdc513202565393d69d57718508e29795dfd1cdb87d49dc9c14489665/userdata/shm DeviceMajor:0 DeviceMinor:219 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-221 DeviceMajor:0 DeviceMinor:221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-231 DeviceMajor:0 DeviceMinor:231 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-248 DeviceMajor:0 DeviceMinor:248 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:285 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-135 DeviceMajor:0 DeviceMinor:135 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:176 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-197 DeviceMajor:0 DeviceMinor:197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-228 DeviceMajor:0 DeviceMinor:228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~projected/kube-api-access-tjpvn DeviceMajor:0 DeviceMinor:235 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b896923757b268325f5828b39f28a688ac9f66638ad3480c6f941e1ecce93ee/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e8d809425731cc2967cdb379e53f1be7eba9e51662dfb79330c03d92562a8e44/userdata/shm DeviceMajor:0 DeviceMinor:230 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-246 DeviceMajor:0 DeviceMinor:246 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 DeviceMajor:0 DeviceMinor:272 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06ddc2da13c0775c0e8f0acf19c817f8072a1fcf961d84d30040d9ab97e3ada6/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c/userdata/shm DeviceMajor:0 DeviceMinor:56 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:168 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl DeviceMajor:0 DeviceMinor:212 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld DeviceMajor:0 DeviceMinor:218 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-264 DeviceMajor:0 DeviceMinor:264 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb DeviceMajor:0 DeviceMinor:276 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 DeviceMajor:0 DeviceMinor:290 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:175 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:181 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw DeviceMajor:0 DeviceMinor:192 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:139 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:179 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-214 DeviceMajor:0 DeviceMinor:214 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/24414ee08d96c92b5e5a2987ea123de0d7bf6e29180b3cd5f05b52963a1027d2/userdata/shm DeviceMajor:0 DeviceMinor:226 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:169 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:171 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl DeviceMajor:0 DeviceMinor:298 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-142 DeviceMajor:0 DeviceMinor:142 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e5383f33102a9898e2ed29273a78d4b119c0cc0618f01a3ae943b24be1f2db07/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:184 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld DeviceMajor:0 DeviceMinor:187 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-224 DeviceMajor:0 DeviceMinor:224 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-130 DeviceMajor:0 DeviceMinor:130 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s DeviceMajor:0 DeviceMinor:274 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/350fa8046d93176987309e508424a1570fc1eed5f18ae89cb9ac0b90ba3cb70f/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:172 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn DeviceMajor:0 DeviceMinor:200 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-137 DeviceMajor:0 DeviceMinor:137 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm DeviceMajor:0 DeviceMinor:202 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dc503dfcd2e24f7be396a4259fc22362db61ad7785d0bcd4d3225ebdfd2e1f72/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:275 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/63964f47111b36b48ad0828624ccf523359d31129423bb7917fb1cd01aad8c04/userdata/shm DeviceMajor:0 DeviceMinor:237 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:52:47:03:db:66:8a Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:b9:e2 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:4a:2e:ce Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:02:c0:82:fb:4a:f4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 
Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 17:14:18.174277 master-0 kubenswrapper[4083]: I0216 17:14:18.174244 4083 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 16 17:14:18.175109 master-0 kubenswrapper[4083]: I0216 17:14:18.174659 4083 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 17:14:18.175414 master-0 kubenswrapper[4083]: I0216 17:14:18.175373 4083 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 17:14:18.175671 master-0 kubenswrapper[4083]: I0216 17:14:18.175608 4083 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 17:14:18.175992 master-0 kubenswrapper[4083]: I0216 17:14:18.175667 4083 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 17:14:18.176135 master-0 kubenswrapper[4083]: I0216 17:14:18.175999 4083 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 17:14:18.176135 master-0 kubenswrapper[4083]: I0216 17:14:18.176017 4083 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 17:14:18.176135 master-0 kubenswrapper[4083]: I0216 17:14:18.176029 4083 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:14:18.176135 master-0 kubenswrapper[4083]: I0216 17:14:18.176055 4083 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:14:18.176258 master-0 kubenswrapper[4083]: I0216 17:14:18.176231 4083 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:14:18.176347 master-0 kubenswrapper[4083]: I0216 17:14:18.176328 4083 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 17:14:18.176419 master-0 kubenswrapper[4083]: I0216 17:14:18.176403 4083 kubelet.go:418] "Attempting to sync node with API server" Feb 16 
17:14:18.176455 master-0 kubenswrapper[4083]: I0216 17:14:18.176421 4083 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 17:14:18.176455 master-0 kubenswrapper[4083]: I0216 17:14:18.176437 4083 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 17:14:18.176455 master-0 kubenswrapper[4083]: I0216 17:14:18.176450 4083 kubelet.go:324] "Adding apiserver pod source" Feb 16 17:14:18.176532 master-0 kubenswrapper[4083]: I0216 17:14:18.176460 4083 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 17:14:18.178377 master-0 kubenswrapper[4083]: I0216 17:14:18.178326 4083 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 17:14:18.178553 master-0 kubenswrapper[4083]: I0216 17:14:18.178519 4083 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 17:14:18.178872 master-0 kubenswrapper[4083]: I0216 17:14:18.178836 4083 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 17:14:18.179047 master-0 kubenswrapper[4083]: I0216 17:14:18.179019 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 17:14:18.179098 master-0 kubenswrapper[4083]: I0216 17:14:18.179049 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 17:14:18.179098 master-0 kubenswrapper[4083]: I0216 17:14:18.179059 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 17:14:18.179098 master-0 kubenswrapper[4083]: I0216 17:14:18.179069 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 17:14:18.179098 master-0 kubenswrapper[4083]: I0216 17:14:18.179077 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 17:14:18.179098 master-0 kubenswrapper[4083]: I0216 17:14:18.179087 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 17:14:18.179098 master-0 kubenswrapper[4083]: I0216 17:14:18.179096 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 17:14:18.179098 master-0 kubenswrapper[4083]: I0216 17:14:18.179106 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 17:14:18.179343 master-0 kubenswrapper[4083]: I0216 17:14:18.179117 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 17:14:18.179343 master-0 kubenswrapper[4083]: I0216 17:14:18.179126 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 17:14:18.179343 master-0 kubenswrapper[4083]: I0216 17:14:18.179138 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 17:14:18.179343 master-0 kubenswrapper[4083]: I0216 17:14:18.179153 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 17:14:18.179343 master-0 kubenswrapper[4083]: I0216 17:14:18.179180 4083 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 17:14:18.179759 master-0 kubenswrapper[4083]: I0216 17:14:18.179726 4083 server.go:1280] "Started kubelet" Feb 16 17:14:18.180287 master-0 kubenswrapper[4083]: I0216 17:14:18.180229 4083 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 17:14:18.180343 master-0 kubenswrapper[4083]: I0216 17:14:18.180223 4083 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 17:14:18.180384 master-0 kubenswrapper[4083]: I0216 17:14:18.180368 4083 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 16 17:14:18.181341 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 16 17:14:18.182111 master-0 kubenswrapper[4083]: I0216 17:14:18.182051 4083 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 17:14:18.187490 master-0 kubenswrapper[4083]: I0216 17:14:18.187116 4083 server.go:449] "Adding debug handlers to kubelet server" Feb 16 17:14:18.188664 master-0 kubenswrapper[4083]: I0216 17:14:18.188634 4083 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:18.191582 master-0 kubenswrapper[4083]: I0216 17:14:18.191523 4083 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:18.192716 master-0 kubenswrapper[4083]: E0216 17:14:18.192680 4083 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 16 17:14:18.193814 master-0 kubenswrapper[4083]: I0216 17:14:18.193777 4083 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 17:14:18.193856 master-0 kubenswrapper[4083]: I0216 17:14:18.193827 4083 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 17:14:18.193896 master-0 kubenswrapper[4083]: I0216 17:14:18.193853 4083 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 12:05:18.627973019 +0000 UTC Feb 16 17:14:18.193896 master-0 kubenswrapper[4083]: I0216 17:14:18.193886 4083 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h51m0.434088444s for next certificate rotation Feb 16 17:14:18.194069 master-0 kubenswrapper[4083]: I0216 17:14:18.194044 4083 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 16 17:14:18.195087 master-0 kubenswrapper[4083]: I0216 17:14:18.195058 4083 factory.go:55] Registering systemd factory Feb 16 17:14:18.195087 master-0 kubenswrapper[4083]: I0216 17:14:18.195083 4083 factory.go:221] Registration of the systemd container factory successfully Feb 16 17:14:18.195345 master-0 kubenswrapper[4083]: I0216 17:14:18.195297 4083 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 17:14:18.195393 master-0 kubenswrapper[4083]: I0216 17:14:18.195361 4083 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 17:14:18.195437 master-0 kubenswrapper[4083]: I0216 17:14:18.195418 4083 factory.go:153] Registering CRI-O factory Feb 16 17:14:18.195437 master-0 kubenswrapper[4083]: I0216 17:14:18.195435 4083 factory.go:221] Registration of the crio container factory successfully Feb 16 17:14:18.195497 master-0 kubenswrapper[4083]: I0216 17:14:18.195451 4083 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:18.195531 master-0 kubenswrapper[4083]: I0216 17:14:18.195507 4083 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 17:14:18.195561 master-0 kubenswrapper[4083]: I0216 17:14:18.195535 
4083 factory.go:103] Registering Raw factory Feb 16 17:14:18.195561 master-0 kubenswrapper[4083]: I0216 17:14:18.195551 4083 manager.go:1196] Started watching for new ooms in manager Feb 16 17:14:18.197691 master-0 kubenswrapper[4083]: I0216 17:14:18.197636 4083 manager.go:319] Starting recovery of all containers Feb 16 17:14:18.204645 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 17:14:18.214297 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 17:14:18.214639 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 17:14:18.237888 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 17:14:18.306009 master-0 kubenswrapper[4167]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:14:18.306009 master-0 kubenswrapper[4167]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 17:14:18.306009 master-0 kubenswrapper[4167]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:14:18.306009 master-0 kubenswrapper[4167]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:14:18.306009 master-0 kubenswrapper[4167]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 17:14:18.306009 master-0 kubenswrapper[4167]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 17:14:18.307117 master-0 kubenswrapper[4167]: I0216 17:14:18.306031 4167 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 17:14:18.310992 master-0 kubenswrapper[4167]: W0216 17:14:18.310930 4167 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:14:18.311145 master-0 kubenswrapper[4167]: W0216 17:14:18.311131 4167 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:14:18.311219 master-0 kubenswrapper[4167]: W0216 17:14:18.311206 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:14:18.311285 master-0 kubenswrapper[4167]: W0216 17:14:18.311274 4167 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:14:18.311357 master-0 kubenswrapper[4167]: W0216 17:14:18.311346 4167 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:14:18.311424 master-0 kubenswrapper[4167]: W0216 17:14:18.311413 4167 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:14:18.311488 master-0 kubenswrapper[4167]: W0216 17:14:18.311477 4167 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:14:18.311570 master-0 kubenswrapper[4167]: W0216 17:14:18.311556 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:14:18.311660 master-0 kubenswrapper[4167]: W0216 17:14:18.311646 4167 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:14:18.311733 master-0 kubenswrapper[4167]: W0216 17:14:18.311721 4167 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:14:18.311816 master-0 kubenswrapper[4167]: W0216 17:14:18.311803 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:14:18.311905 master-0 kubenswrapper[4167]: W0216 17:14:18.311890 4167 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:14:18.312003 master-0 kubenswrapper[4167]: W0216 17:14:18.311990 4167 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:14:18.312073 master-0 kubenswrapper[4167]: W0216 17:14:18.312062 4167 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:14:18.312139 master-0 kubenswrapper[4167]: W0216 17:14:18.312127 4167 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:14:18.312221 master-0 kubenswrapper[4167]: W0216 17:14:18.312209 4167 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:14:18.312294 master-0 kubenswrapper[4167]: W0216 17:14:18.312282 4167 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:14:18.312367 master-0 kubenswrapper[4167]: W0216 17:14:18.312355 4167 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:14:18.312439 master-0 kubenswrapper[4167]: W0216 17:14:18.312428 4167 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:14:18.312509 master-0 kubenswrapper[4167]: W0216 17:14:18.312497 4167 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:14:18.312590 master-0 kubenswrapper[4167]: W0216 17:14:18.312575 4167 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:14:18.312662 master-0 kubenswrapper[4167]: W0216 17:14:18.312651 4167 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 
17:14:18.312729 master-0 kubenswrapper[4167]: W0216 17:14:18.312717 4167 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:14:18.312797 master-0 kubenswrapper[4167]: W0216 17:14:18.312784 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:14:18.312888 master-0 kubenswrapper[4167]: W0216 17:14:18.312875 4167 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:14:18.312983 master-0 kubenswrapper[4167]: W0216 17:14:18.312950 4167 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 17:14:18.313059 master-0 kubenswrapper[4167]: W0216 17:14:18.313045 4167 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:14:18.313128 master-0 kubenswrapper[4167]: W0216 17:14:18.313116 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:14:18.313195 master-0 kubenswrapper[4167]: W0216 17:14:18.313184 4167 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:14:18.313261 master-0 kubenswrapper[4167]: W0216 17:14:18.313249 4167 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:14:18.313329 master-0 kubenswrapper[4167]: W0216 17:14:18.313318 4167 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:14:18.313407 master-0 kubenswrapper[4167]: W0216 17:14:18.313395 4167 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:14:18.313475 master-0 kubenswrapper[4167]: W0216 17:14:18.313464 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:14:18.313542 master-0 kubenswrapper[4167]: W0216 17:14:18.313530 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:14:18.313636 master-0 kubenswrapper[4167]: W0216 17:14:18.313622 4167 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:14:18.313707 master-0 kubenswrapper[4167]: W0216 17:14:18.313696 4167 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:14:18.313773 master-0 kubenswrapper[4167]: W0216 17:14:18.313762 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:14:18.313840 master-0 kubenswrapper[4167]: W0216 17:14:18.313829 4167 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:14:18.313906 master-0 kubenswrapper[4167]: W0216 17:14:18.313894 4167 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:14:18.313993 master-0 kubenswrapper[4167]: W0216 17:14:18.313980 4167 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:14:18.314062 master-0 kubenswrapper[4167]: W0216 17:14:18.314050 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:14:18.314139 master-0 kubenswrapper[4167]: W0216 17:14:18.314127 4167 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:14:18.314207 master-0 kubenswrapper[4167]: W0216 17:14:18.314195 4167 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:14:18.314273 master-0 kubenswrapper[4167]: W0216 17:14:18.314261 4167 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:14:18.314338 master-0 kubenswrapper[4167]: W0216 17:14:18.314327 4167 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 
17:14:18.314402 master-0 kubenswrapper[4167]: W0216 17:14:18.314391 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:14:18.314468 master-0 kubenswrapper[4167]: W0216 17:14:18.314457 4167 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:14:18.314532 master-0 kubenswrapper[4167]: W0216 17:14:18.314521 4167 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:14:18.314641 master-0 kubenswrapper[4167]: W0216 17:14:18.314627 4167 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:14:18.314714 master-0 kubenswrapper[4167]: W0216 17:14:18.314703 4167 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:14:18.314780 master-0 kubenswrapper[4167]: W0216 17:14:18.314769 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:14:18.314853 master-0 kubenswrapper[4167]: W0216 17:14:18.314841 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:14:18.314919 master-0 kubenswrapper[4167]: W0216 17:14:18.314908 4167 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:14:18.315009 master-0 kubenswrapper[4167]: W0216 17:14:18.314996 4167 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:14:18.315079 master-0 kubenswrapper[4167]: W0216 17:14:18.315068 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:14:18.315162 master-0 kubenswrapper[4167]: W0216 17:14:18.315150 4167 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:14:18.315231 master-0 kubenswrapper[4167]: W0216 17:14:18.315220 4167 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:14:18.315298 master-0 kubenswrapper[4167]: W0216 17:14:18.315286 4167 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:14:18.315364 master-0 kubenswrapper[4167]: W0216 17:14:18.315353 4167 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:14:18.315431 master-0 kubenswrapper[4167]: W0216 17:14:18.315420 4167 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 17:14:18.315503 master-0 kubenswrapper[4167]: W0216 17:14:18.315490 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:14:18.315581 master-0 kubenswrapper[4167]: W0216 17:14:18.315567 4167 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:14:18.315667 master-0 kubenswrapper[4167]: W0216 17:14:18.315655 4167 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:14:18.315837 master-0 kubenswrapper[4167]: W0216 17:14:18.315822 4167 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:14:18.315936 master-0 kubenswrapper[4167]: W0216 17:14:18.315921 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:14:18.316041 master-0 kubenswrapper[4167]: W0216 17:14:18.316029 4167 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:14:18.316111 master-0 kubenswrapper[4167]: W0216 17:14:18.316099 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:14:18.316191 master-0 kubenswrapper[4167]: W0216 17:14:18.316178 4167 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:14:18.316259 master-0 kubenswrapper[4167]: W0216 17:14:18.316247 4167 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:14:18.316331 master-0 kubenswrapper[4167]: W0216 17:14:18.316319 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:14:18.316399 master-0 kubenswrapper[4167]: W0216 17:14:18.316388 4167 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:14:18.316488 master-0 kubenswrapper[4167]: W0216 17:14:18.316476 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:14:18.316722 master-0 kubenswrapper[4167]: I0216 17:14:18.316699 4167 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 17:14:18.316818 master-0 kubenswrapper[4167]: I0216 17:14:18.316796 4167 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 17:14:18.316893 master-0 kubenswrapper[4167]: I0216 17:14:18.316878 4167 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 17:14:18.316987 master-0 kubenswrapper[4167]: I0216 17:14:18.316949 4167 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 17:14:18.317074 master-0 kubenswrapper[4167]: I0216 17:14:18.317060 4167 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 17:14:18.317147 master-0 kubenswrapper[4167]: I0216 17:14:18.317132 4167 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 17:14:18.317216 master-0 kubenswrapper[4167]: I0216 17:14:18.317202 4167 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 17:14:18.317283 master-0 kubenswrapper[4167]: I0216 17:14:18.317271 4167 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 17:14:18.317350 master-0 kubenswrapper[4167]: I0216 17:14:18.317337 4167 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 17:14:18.317417 master-0 kubenswrapper[4167]: I0216 17:14:18.317404 4167 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 17:14:18.317488 master-0 kubenswrapper[4167]: I0216 17:14:18.317475 4167 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 17:14:18.317570 master-0 kubenswrapper[4167]: I0216 17:14:18.317554 4167 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 17:14:18.317660 master-0 kubenswrapper[4167]: I0216 17:14:18.317644 4167 flags.go:64] FLAG: 
--cgroup-driver="cgroupfs" Feb 16 17:14:18.317735 master-0 kubenswrapper[4167]: I0216 17:14:18.317722 4167 flags.go:64] FLAG: --cgroup-root="" Feb 16 17:14:18.317811 master-0 kubenswrapper[4167]: I0216 17:14:18.317798 4167 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 17:14:18.317879 master-0 kubenswrapper[4167]: I0216 17:14:18.317867 4167 flags.go:64] FLAG: --client-ca-file="" Feb 16 17:14:18.317945 master-0 kubenswrapper[4167]: I0216 17:14:18.317933 4167 flags.go:64] FLAG: --cloud-config="" Feb 16 17:14:18.318043 master-0 kubenswrapper[4167]: I0216 17:14:18.318030 4167 flags.go:64] FLAG: --cloud-provider="" Feb 16 17:14:18.318129 master-0 kubenswrapper[4167]: I0216 17:14:18.318112 4167 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 17:14:18.318199 master-0 kubenswrapper[4167]: I0216 17:14:18.318187 4167 flags.go:64] FLAG: --cluster-domain="" Feb 16 17:14:18.318266 master-0 kubenswrapper[4167]: I0216 17:14:18.318255 4167 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 17:14:18.318332 master-0 kubenswrapper[4167]: I0216 17:14:18.318321 4167 flags.go:64] FLAG: --config-dir="" Feb 16 17:14:18.318399 master-0 kubenswrapper[4167]: I0216 17:14:18.318387 4167 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 17:14:18.318468 master-0 kubenswrapper[4167]: I0216 17:14:18.318453 4167 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 17:14:18.318566 master-0 kubenswrapper[4167]: I0216 17:14:18.318532 4167 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 17:14:18.318659 master-0 kubenswrapper[4167]: I0216 17:14:18.318643 4167 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 17:14:18.318731 master-0 kubenswrapper[4167]: I0216 17:14:18.318719 4167 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 17:14:18.318798 master-0 kubenswrapper[4167]: I0216 17:14:18.318786 4167 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 17:14:18.318866 master-0 kubenswrapper[4167]: I0216 17:14:18.318854 4167 flags.go:64] FLAG: --contention-profiling="false" Feb 16 17:14:18.318932 master-0 kubenswrapper[4167]: I0216 17:14:18.318920 4167 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 17:14:18.319035 master-0 kubenswrapper[4167]: I0216 17:14:18.319021 4167 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 17:14:18.319108 master-0 kubenswrapper[4167]: I0216 17:14:18.319096 4167 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 17:14:18.319191 master-0 kubenswrapper[4167]: I0216 17:14:18.319175 4167 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 17:14:18.319267 master-0 kubenswrapper[4167]: I0216 17:14:18.319253 4167 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 17:14:18.319336 master-0 kubenswrapper[4167]: I0216 17:14:18.319323 4167 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 17:14:18.319402 master-0 kubenswrapper[4167]: I0216 17:14:18.319390 4167 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 17:14:18.319468 master-0 kubenswrapper[4167]: I0216 17:14:18.319456 4167 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 17:14:18.319544 master-0 kubenswrapper[4167]: I0216 17:14:18.319530 4167 flags.go:64] FLAG: --enable-server="true" Feb 16 17:14:18.319628 master-0 kubenswrapper[4167]: I0216 17:14:18.319612 4167 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 17:14:18.319709 master-0 kubenswrapper[4167]: I0216 17:14:18.319696 4167 flags.go:64] FLAG: --event-burst="100" Feb 16 17:14:18.319784 master-0 
kubenswrapper[4167]: I0216 17:14:18.319772 4167 flags.go:64] FLAG: --event-qps="50" Feb 16 17:14:18.319851 master-0 kubenswrapper[4167]: I0216 17:14:18.319840 4167 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 17:14:18.319930 master-0 kubenswrapper[4167]: I0216 17:14:18.319918 4167 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 17:14:18.320024 master-0 kubenswrapper[4167]: I0216 17:14:18.320009 4167 flags.go:64] FLAG: --eviction-hard="" Feb 16 17:14:18.320107 master-0 kubenswrapper[4167]: I0216 17:14:18.320093 4167 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 17:14:18.320175 master-0 kubenswrapper[4167]: I0216 17:14:18.320163 4167 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 17:14:18.320250 master-0 kubenswrapper[4167]: I0216 17:14:18.320238 4167 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 17:14:18.320320 master-0 kubenswrapper[4167]: I0216 17:14:18.320307 4167 flags.go:64] FLAG: --eviction-soft="" Feb 16 17:14:18.320388 master-0 kubenswrapper[4167]: I0216 17:14:18.320376 4167 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 17:14:18.320456 master-0 kubenswrapper[4167]: I0216 17:14:18.320443 4167 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 17:14:18.320558 master-0 kubenswrapper[4167]: I0216 17:14:18.320542 4167 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 17:14:18.320656 master-0 kubenswrapper[4167]: I0216 17:14:18.320641 4167 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 17:14:18.320727 master-0 kubenswrapper[4167]: I0216 17:14:18.320715 4167 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 17:14:18.320795 master-0 kubenswrapper[4167]: I0216 17:14:18.320782 4167 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 17:14:18.320985 master-0 kubenswrapper[4167]: I0216 17:14:18.320939 4167 flags.go:64] FLAG: --feature-gates="" Feb 16 17:14:18.321063 master-0 kubenswrapper[4167]: I0216 17:14:18.321050 4167 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 17:14:18.321146 master-0 kubenswrapper[4167]: I0216 17:14:18.321133 4167 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 17:14:18.321228 master-0 kubenswrapper[4167]: I0216 17:14:18.321216 4167 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 17:14:18.321313 master-0 kubenswrapper[4167]: I0216 17:14:18.321298 4167 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 17:14:18.321510 master-0 kubenswrapper[4167]: I0216 17:14:18.321492 4167 flags.go:64] FLAG: --healthz-port="10248" Feb 16 17:14:18.321607 master-0 kubenswrapper[4167]: I0216 17:14:18.321589 4167 flags.go:64] FLAG: --help="false" Feb 16 17:14:18.321689 master-0 kubenswrapper[4167]: I0216 17:14:18.321674 4167 flags.go:64] FLAG: --hostname-override="" Feb 16 17:14:18.321759 master-0 kubenswrapper[4167]: I0216 17:14:18.321746 4167 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 17:14:18.321826 master-0 kubenswrapper[4167]: I0216 17:14:18.321814 4167 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 17:14:18.321894 master-0 kubenswrapper[4167]: I0216 17:14:18.321882 4167 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 17:14:18.321986 master-0 kubenswrapper[4167]: I0216 17:14:18.321950 4167 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 17:14:18.322072 master-0 kubenswrapper[4167]: I0216 17:14:18.322058 4167 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 17:14:18.322142 master-0 kubenswrapper[4167]: I0216 17:14:18.322131 4167 
flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 17:14:18.322210 master-0 kubenswrapper[4167]: I0216 17:14:18.322198 4167 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 17:14:18.322276 master-0 kubenswrapper[4167]: I0216 17:14:18.322265 4167 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 17:14:18.322351 master-0 kubenswrapper[4167]: I0216 17:14:18.322339 4167 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 17:14:18.322420 master-0 kubenswrapper[4167]: I0216 17:14:18.322406 4167 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 17:14:18.322489 master-0 kubenswrapper[4167]: I0216 17:14:18.322477 4167 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 17:14:18.322586 master-0 kubenswrapper[4167]: I0216 17:14:18.322572 4167 flags.go:64] FLAG: --kube-reserved="" Feb 16 17:14:18.322665 master-0 kubenswrapper[4167]: I0216 17:14:18.322650 4167 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 17:14:18.322757 master-0 kubenswrapper[4167]: I0216 17:14:18.322740 4167 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 17:14:18.322850 master-0 kubenswrapper[4167]: I0216 17:14:18.322834 4167 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 17:14:18.322954 master-0 kubenswrapper[4167]: I0216 17:14:18.322937 4167 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 17:14:18.323100 master-0 kubenswrapper[4167]: I0216 17:14:18.323084 4167 flags.go:64] FLAG: --lock-file="" Feb 16 17:14:18.323172 master-0 kubenswrapper[4167]: I0216 17:14:18.323160 4167 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 17:14:18.323248 master-0 kubenswrapper[4167]: I0216 17:14:18.323235 4167 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 17:14:18.323322 master-0 kubenswrapper[4167]: I0216 17:14:18.323305 4167 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 17:14:18.323394 master-0 kubenswrapper[4167]: I0216 17:14:18.323382 4167 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 17:14:18.323462 master-0 kubenswrapper[4167]: I0216 17:14:18.323450 4167 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 17:14:18.323532 master-0 kubenswrapper[4167]: I0216 17:14:18.323521 4167 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 17:14:18.323618 master-0 kubenswrapper[4167]: I0216 17:14:18.323604 4167 flags.go:64] FLAG: --logging-format="text" Feb 16 17:14:18.323692 master-0 kubenswrapper[4167]: I0216 17:14:18.323679 4167 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 17:14:18.323769 master-0 kubenswrapper[4167]: I0216 17:14:18.323756 4167 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 17:14:18.323844 master-0 kubenswrapper[4167]: I0216 17:14:18.323832 4167 flags.go:64] FLAG: --manifest-url="" Feb 16 17:14:18.323915 master-0 kubenswrapper[4167]: I0216 17:14:18.323900 4167 flags.go:64] FLAG: --manifest-url-header="" Feb 16 17:14:18.324018 master-0 kubenswrapper[4167]: I0216 17:14:18.324005 4167 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 17:14:18.324105 master-0 kubenswrapper[4167]: I0216 17:14:18.324091 4167 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 17:14:18.324176 master-0 kubenswrapper[4167]: I0216 17:14:18.324164 4167 flags.go:64] FLAG: --max-pods="110" Feb 16 17:14:18.324244 master-0 kubenswrapper[4167]: I0216 17:14:18.324232 4167 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 17:14:18.324310 master-0 kubenswrapper[4167]: I0216 17:14:18.324298 4167 flags.go:64] FLAG: 
--maximum-dead-containers-per-container="1" Feb 16 17:14:18.324385 master-0 kubenswrapper[4167]: I0216 17:14:18.324373 4167 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 17:14:18.324461 master-0 kubenswrapper[4167]: I0216 17:14:18.324448 4167 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 17:14:18.324529 master-0 kubenswrapper[4167]: I0216 17:14:18.324517 4167 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 17:14:18.324597 master-0 kubenswrapper[4167]: I0216 17:14:18.324584 4167 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 16 17:14:18.324673 master-0 kubenswrapper[4167]: I0216 17:14:18.324652 4167 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 17:14:18.324743 master-0 kubenswrapper[4167]: I0216 17:14:18.324731 4167 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 17:14:18.324811 master-0 kubenswrapper[4167]: I0216 17:14:18.324798 4167 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 17:14:18.324885 master-0 kubenswrapper[4167]: I0216 17:14:18.324872 4167 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 17:14:18.324953 master-0 kubenswrapper[4167]: I0216 17:14:18.324941 4167 flags.go:64] FLAG: --pod-cidr="" Feb 16 17:14:18.325049 master-0 kubenswrapper[4167]: I0216 17:14:18.325033 4167 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541" Feb 16 17:14:18.325120 master-0 kubenswrapper[4167]: I0216 17:14:18.325108 4167 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 17:14:18.325187 master-0 kubenswrapper[4167]: I0216 17:14:18.325175 4167 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 17:14:18.325255 master-0 kubenswrapper[4167]: I0216 17:14:18.325243 4167 flags.go:64] FLAG: --pods-per-core="0" Feb 16 17:14:18.325335 master-0 kubenswrapper[4167]: I0216 17:14:18.325322 4167 flags.go:64] FLAG: --port="10250" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325402 4167 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325414 4167 flags.go:64] FLAG: --provider-id="" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325421 4167 flags.go:64] FLAG: --qos-reserved="" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325428 4167 flags.go:64] FLAG: --read-only-port="10255" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325436 4167 flags.go:64] FLAG: --register-node="true" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325443 4167 flags.go:64] FLAG: --register-schedulable="true" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325449 4167 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325461 4167 flags.go:64] FLAG: --registry-burst="10" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325467 4167 flags.go:64] FLAG: --registry-qps="5" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325476 4167 flags.go:64] FLAG: --reserved-cpus="" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325482 4167 flags.go:64] FLAG: --reserved-memory="" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325490 4167 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 17:14:18.326173 
master-0 kubenswrapper[4167]: I0216 17:14:18.325497 4167 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325504 4167 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325511 4167 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325517 4167 flags.go:64] FLAG: --runonce="false" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325523 4167 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325530 4167 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325536 4167 flags.go:64] FLAG: --seccomp-default="false" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325542 4167 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325548 4167 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325555 4167 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325561 4167 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325569 4167 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 17:14:18.326173 master-0 kubenswrapper[4167]: I0216 17:14:18.325575 4167 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325581 4167 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325587 4167 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325594 4167 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325600 4167 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325607 4167 flags.go:64] FLAG: --system-cgroups="" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325613 4167 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325624 4167 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325630 4167 flags.go:64] FLAG: --tls-cert-file="" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325637 4167 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325645 4167 flags.go:64] FLAG: --tls-min-version="" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325651 4167 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325657 4167 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325663 4167 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325670 4167 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 17:14:18.326831 master-0 
kubenswrapper[4167]: I0216 17:14:18.325676 4167 flags.go:64] FLAG: --v="2" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325696 4167 flags.go:64] FLAG: --version="false" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325704 4167 flags.go:64] FLAG: --vmodule="" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325711 4167 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: I0216 17:14:18.325718 4167 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: W0216 17:14:18.325865 4167 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: W0216 17:14:18.325873 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: W0216 17:14:18.325879 4167 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: W0216 17:14:18.325886 4167 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:14:18.326831 master-0 kubenswrapper[4167]: W0216 17:14:18.325892 4167 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325898 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325903 4167 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325908 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325913 4167 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325919 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325924 4167 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325930 4167 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325936 4167 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325941 4167 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325946 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325952 4167 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325976 4167 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325982 4167 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325988 4167 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325993 4167 feature_gate.go:330] unrecognized feature 
gate: OVNObservability Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.325998 4167 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.326005 4167 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.326011 4167 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.326016 4167 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:14:18.327409 master-0 kubenswrapper[4167]: W0216 17:14:18.326021 4167 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326027 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326032 4167 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326037 4167 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326042 4167 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326048 4167 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326053 4167 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326058 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326063 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326069 4167 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326074 4167 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326080 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326086 4167 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326094 4167 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326100 4167 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326105 4167 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326111 4167 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326117 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326122 4167 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326127 4167 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:14:18.327887 master-0 kubenswrapper[4167]: W0216 17:14:18.326132 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326137 4167 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326143 4167 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326148 4167 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326153 4167 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326158 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326164 4167 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326169 4167 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326174 4167 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326180 4167 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326185 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326190 4167 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326196 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326201 4167 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326206 4167 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326211 4167 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326217 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326222 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326227 4167 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:14:18.328434 master-0 kubenswrapper[4167]: W0216 17:14:18.326232 4167 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:14:18.328869 master-0 kubenswrapper[4167]: W0216 17:14:18.326239 4167 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:14:18.328869 master-0 kubenswrapper[4167]: W0216 17:14:18.326246 4167 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:14:18.328869 master-0 kubenswrapper[4167]: W0216 17:14:18.326253 4167 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:14:18.328869 master-0 kubenswrapper[4167]: W0216 17:14:18.326261 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:14:18.328869 master-0 kubenswrapper[4167]: W0216 17:14:18.326267 4167 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:14:18.328869 master-0 kubenswrapper[4167]: W0216 17:14:18.326272 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:14:18.328869 master-0 kubenswrapper[4167]: W0216 17:14:18.326277 4167 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:14:18.328869 master-0 kubenswrapper[4167]: W0216 17:14:18.326282 4167 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:14:18.328869 master-0 kubenswrapper[4167]: I0216 17:14:18.326292 4167 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:14:18.334828 master-0 kubenswrapper[4167]: I0216 17:14:18.334369 4167 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 16 17:14:18.334828 master-0 kubenswrapper[4167]: I0216 17:14:18.334762 4167 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 17:14:18.334912 master-0 kubenswrapper[4167]: W0216 17:14:18.334894 4167 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:14:18.334912 master-0 kubenswrapper[4167]: W0216 17:14:18.334908 4167 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:14:18.335018 master-0 kubenswrapper[4167]: W0216 17:14:18.334918 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:14:18.335018 master-0 kubenswrapper[4167]: W0216 17:14:18.334929 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:14:18.335018 master-0 kubenswrapper[4167]: W0216 17:14:18.334937 4167 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:14:18.335018 master-0 kubenswrapper[4167]: W0216 17:14:18.334946 4167 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:14:18.335018 master-0 kubenswrapper[4167]: W0216 17:14:18.334954 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:14:18.335018 master-0 kubenswrapper[4167]: W0216 17:14:18.334986 4167 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:14:18.335018 master-0 kubenswrapper[4167]: W0216 17:14:18.334994 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:14:18.335018 master-0 kubenswrapper[4167]: W0216 17:14:18.335006 4167 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:14:18.335018 master-0 kubenswrapper[4167]: W0216 17:14:18.335017 4167 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335027 4167 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335037 4167 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335046 4167 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335055 4167 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335063 4167 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335074 4167 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
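An annotation on the warning storm above and below: these entries come from the kubelet resolving a cluster-supplied feature-gate list against the gate table compiled into upstream Kubernetes, so names the kubelet does not register (here, apparently OpenShift-level gates) are warned about and skipped rather than treated as fatal. A minimal sketch of that warn-and-continue pattern, with an illustrative gate table that is not the kubelet's real registry:

package main

import "log"

// knownGates stands in for the kubelet's registered gate table; the real
// table lives in k8s.io/component-base/featuregate.
var knownGates = map[string]bool{
	"CloudDualStackNodeIPs": true,
	"KMSv1":                 true,
}

// setGates applies overrides, mirroring the warn-and-continue behavior
// visible in the log: unknown keys are logged and ignored, known keys
// are recorded as the effective override set.
func setGates(overrides map[string]bool) map[string]bool {
	effective := map[string]bool{}
	for name, val := range overrides {
		if _, ok := knownGates[name]; !ok {
			log.Printf("W unrecognized feature gate: %s", name)
			continue
		}
		effective[name] = val
	}
	return effective
}

func main() {
	eff := setGates(map[string]bool{
		"CloudDualStackNodeIPs": true, // known upstream gate: applied
		"BuildCSIVolumes":       true, // not registered here: warned, ignored
	})
	log.Printf("I feature gates: %v", eff)
}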
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335084 4167 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335093 4167 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335101 4167 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335111 4167 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335120 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335128 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335136 4167 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335146 4167 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335154 4167 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335162 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335169 4167 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335177 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335185 4167 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:14:18.335360 master-0 kubenswrapper[4167]: W0216 17:14:18.335193 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335201 4167 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335209 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335217 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335226 4167 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335235 4167 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335243 4167 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335254 4167 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335263 4167 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335272 4167 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335282 4167 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335293 4167 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335302 4167 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335311 4167 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335320 4167 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335328 4167 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335336 4167 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335347 4167 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335355 4167 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:14:18.335876 master-0 kubenswrapper[4167]: W0216 17:14:18.335364 4167 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335372 4167 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335381 4167 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335389 4167 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335397 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335405 4167 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335413 4167 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335421 4167 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335428 4167 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335436 4167 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335444 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335452 4167 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335460 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335467 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335475 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335483 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335490 4167 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335499 4167 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335506 4167 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335514 4167 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:14:18.336386 master-0 kubenswrapper[4167]: W0216 17:14:18.335522 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335531 4167 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335540 4167 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: I0216 17:14:18.335553 4167 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335776 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335789 4167 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335800 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335811 4167 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
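The feature_gate.go:351 and feature_gate.go:353 entries above (and recurring below) differ from the unrecognized-gate warnings: the gate is known, but it is deprecated or already GA, so explicitly setting it earns a removal notice while the override still takes effect. A sketch of that stage check, with an invented stage table standing in for the real per-gate prerelease metadata:

package main

import "fmt"

type stage string

const (
	alpha      stage = "ALPHA"
	beta       stage = "BETA"
	ga         stage = "GA"
	deprecated stage = "DEPRECATED"
)

// stages is an illustrative table for this example; the real prerelease
// data is registered alongside each gate upstream.
var stages = map[string]stage{
	"KMSv1":                                  deprecated,
	"CloudDualStackNodeIPs":                  ga,
	"DisableKubeletCloudCredentialProviders": ga,
}

// warnOnOverride emits the two warning shapes seen in the log when a
// soon-to-be-removed gate is set explicitly; the override still applies.
func warnOnOverride(name string, value bool) {
	switch stages[name] {
	case deprecated:
		fmt.Printf("W Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, value)
	case ga:
		fmt.Printf("W Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, value)
	}
}

func main() {
	warnOnOverride("KMSv1", true)
	warnOnOverride("CloudDualStackNodeIPs", true)
}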
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335823 4167 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335832 4167 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335840 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335849 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335857 4167 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335865 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:14:18.336849 master-0 kubenswrapper[4167]: W0216 17:14:18.335873 4167 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335881 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335889 4167 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335898 4167 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335909 4167 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335918 4167 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335927 4167 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335936 4167 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335945 4167 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335953 4167 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335984 4167 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.335993 4167 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.336001 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.336009 4167 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.336018 4167 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.336025 4167 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.336033 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.336041 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.336049 4167 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.336057 4167 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:14:18.337226 master-0 kubenswrapper[4167]: W0216 17:14:18.336065 4167 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336073 4167 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336080 4167 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336089 4167 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336098 4167 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336106 4167 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336114 4167 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336122 4167 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336130 4167 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336138 4167 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336146 4167 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336154 4167 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336161 4167 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336169 4167 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336177 4167 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336185 4167 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336193 4167 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336201 4167 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336209 4167 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336217 4167 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:14:18.337691 master-0 kubenswrapper[4167]: W0216 17:14:18.336225 4167 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336233 4167 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336243 4167 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336253 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336262 4167 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336272 4167 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336280 4167 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336288 4167 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336296 4167 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336304 4167 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336312 4167 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336320 4167 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336328 4167 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336336 4167 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336343 4167 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336351 4167 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336359 4167 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336369 4167 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
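The I-level feature_gate.go:386 summaries (two above, one more below) print the same override map each time the gate set is re-parsed; at runtime every gate check is just that map consulted ahead of the compiled-in defaults. A hedged sketch of the lookup, with made-up default values:

package main

import "fmt"

// defaults stands in for the built-in gate defaults compiled into the binary.
var defaults = map[string]bool{
	"NodeSwap":              false,
	"CloudDualStackNodeIPs": false,
}

// overrides corresponds to the explicit map echoed by the summary lines.
var overrides = map[string]bool{
	"CloudDualStackNodeIPs": true, // pinned on, as in the summaries in this log
}

// enabled answers the question every call site asks at runtime:
// overrides win, otherwise the compiled-in default applies.
func enabled(name string) bool {
	if v, ok := overrides[name]; ok {
		return v
	}
	return defaults[name]
}

func main() {
	fmt.Println("NodeSwap:", enabled("NodeSwap"))                           // false (default)
	fmt.Println("CloudDualStackNodeIPs:", enabled("CloudDualStackNodeIPs")) // true (override)
}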
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336378 4167 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:14:18.338237 master-0 kubenswrapper[4167]: W0216 17:14:18.336388 4167 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:14:18.338857 master-0 kubenswrapper[4167]: W0216 17:14:18.336398 4167 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:14:18.338857 master-0 kubenswrapper[4167]: W0216 17:14:18.336407 4167 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:14:18.338857 master-0 kubenswrapper[4167]: I0216 17:14:18.336418 4167 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:14:18.338857 master-0 kubenswrapper[4167]: I0216 17:14:18.336675 4167 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 17:14:18.339826 master-0 kubenswrapper[4167]: I0216 17:14:18.339783 4167 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 17:14:18.339972 master-0 kubenswrapper[4167]: I0216 17:14:18.339926 4167 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 17:14:18.340413 master-0 kubenswrapper[4167]: I0216 17:14:18.340378 4167 server.go:997] "Starting client certificate rotation"
Feb 16 17:14:18.340413 master-0 kubenswrapper[4167]: I0216 17:14:18.340404 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 17:14:18.340773 master-0 kubenswrapper[4167]: I0216 17:14:18.340625 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 11:30:58.320032905 +0000 UTC
Feb 16 17:14:18.340830 master-0 kubenswrapper[4167]: I0216 17:14:18.340768 4167 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h16m39.979271936s for next certificate rotation
Feb 16 17:14:18.341485 master-0 kubenswrapper[4167]: I0216 17:14:18.341445 4167 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 17:14:18.345232 master-0 kubenswrapper[4167]: I0216 17:14:18.345192 4167 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 17:14:18.349507 master-0 kubenswrapper[4167]: I0216 17:14:18.349461 4167 log.go:25] "Validated CRI v1 runtime API"
Feb 16 17:14:18.356876 master-0 kubenswrapper[4167]: I0216 17:14:18.356834 4167 log.go:25] "Validated CRI v1 image API"
Feb 16 17:14:18.358711 master-0 kubenswrapper[4167]: I0216 17:14:18.358662 4167 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
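The certificate_manager.go:356 entries above report the client certificate's expiry, a rotation deadline picked inside the validity window (jittered, so a fleet does not rotate in lockstep), and the resulting sleep. The reported wait is plain timestamp arithmetic, reproduced below; the deadline value is copied from the log and the "now" value is back-derived from it, since the exact jitter fraction is a kubelet implementation detail:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Deadline taken from the log entry above; "now" is back-derived so the
	// example reproduces the logged wait exactly (parse errors ignored for
	// brevity in this sketch).
	deadline, _ := time.Parse(time.RFC3339Nano, "2026-02-17T11:30:58.320032905Z")
	now, _ := time.Parse(time.RFC3339Nano, "2026-02-16T17:14:18.340760969Z")

	// The manager simply sleeps until the precomputed deadline.
	fmt.Printf("Waiting %s for next certificate rotation\n", deadline.Sub(now))
	// Output: Waiting 18h16m39.979271936s for next certificate rotation
}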
Feb 16 17:14:18.365449 master-0 kubenswrapper[4167]: I0216 17:14:18.365348 4167 fs.go:135] Filesystem UUIDs: map[35a0b0cc-84b1-4374-a18a-0f49ad7a8333:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 16 17:14:18.365953 master-0 kubenswrapper[4167]: I0216 17:14:18.365400 4167 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06ddc2da13c0775c0e8f0acf19c817f8072a1fcf961d84d30040d9ab97e3ada6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06ddc2da13c0775c0e8f0acf19c817f8072a1fcf961d84d30040d9ab97e3ada6/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/24414ee08d96c92b5e5a2987ea123de0d7bf6e29180b3cd5f05b52963a1027d2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/24414ee08d96c92b5e5a2987ea123de0d7bf6e29180b3cd5f05b52963a1027d2/userdata/shm major:0 minor:226 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c/userdata/shm major:0 minor:56 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/343b05919e1f64786e2254ca5f9bc68bb46285c032cc50fdd759ba021022cc78/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/343b05919e1f64786e2254ca5f9bc68bb46285c032cc50fdd759ba021022cc78/userdata/shm major:0 minor:289 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/350fa8046d93176987309e508424a1570fc1eed5f18ae89cb9ac0b90ba3cb70f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/350fa8046d93176987309e508424a1570fc1eed5f18ae89cb9ac0b90ba3cb70f/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3ba5b55cdc513202565393d69d57718508e29795dfd1cdb87d49dc9c14489665/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3ba5b55cdc513202565393d69d57718508e29795dfd1cdb87d49dc9c14489665/userdata/shm major:0 minor:219 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b/userdata/shm major:0 minor:193 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/63964f47111b36b48ad0828624ccf523359d31129423bb7917fb1cd01aad8c04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/63964f47111b36b48ad0828624ccf523359d31129423bb7917fb1cd01aad8c04/userdata/shm major:0 minor:237 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b896923757b268325f5828b39f28a688ac9f66638ad3480c6f941e1ecce93ee/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b896923757b268325f5828b39f28a688ac9f66638ad3480c6f941e1ecce93ee/userdata/shm major:0 minor:255 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d/userdata/shm major:0 minor:70 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c7d736050372998764922f31dd6cf88581d3803877d64338fea850cc359f7da3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c7d736050372998764922f31dd6cf88581d3803877d64338fea850cc359f7da3/userdata/shm major:0 minor:297 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d0c55c98db8491069414beee715fb0df1d28a36c886f9583fb6e5f20a3fd1076/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d0c55c98db8491069414beee715fb0df1d28a36c886f9583fb6e5f20a3fd1076/userdata/shm major:0 minor:64 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dc503dfcd2e24f7be396a4259fc22362db61ad7785d0bcd4d3225ebdfd2e1f72/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dc503dfcd2e24f7be396a4259fc22362db61ad7785d0bcd4d3225ebdfd2e1f72/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e5383f33102a9898e2ed29273a78d4b119c0cc0618f01a3ae943b24be1f2db07/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e5383f33102a9898e2ed29273a78d4b119c0cc0618f01a3ae943b24be1f2db07/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e8d809425731cc2967cdb379e53f1be7eba9e51662dfb79330c03d92562a8e44/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e8d809425731cc2967cdb379e53f1be7eba9e51662dfb79330c03d92562a8e44/userdata/shm major:0 minor:230 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95/userdata/shm major:0 minor:201 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f119761284235b155a5550b379cddae2d59c4785ac83d6f2e1cabbb819959d1e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f119761284235b155a5550b379cddae2d59c4785ac83d6f2e1cabbb819959d1e/userdata/shm major:0 minor:241 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa/userdata/shm major:0 minor:206 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn:{mountpoint:/var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn major:0 minor:200 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q:{mountpoint:/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q major:0 minor:190 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~empty-dir/config-out major:0 minor:178 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~projected/kube-api-access-lgm4p:{mountpoint:/var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~projected/kube-api-access-lgm4p major:0 minor:199 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld:{mountpoint:/var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld major:0 minor:187 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2:{mountpoint:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 major:0 minor:293 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert major:0 minor:170 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x:{mountpoint:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x major:0 minor:188 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls major:0 minor:173 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x:{mountpoint:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x major:0 minor:273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg:{mountpoint:/var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg major:0 minor:189 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw 
major:0 minor:192 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:181 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:275 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x major:0 minor:252 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls major:0 minor:179 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:177 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s major:0 minor:274 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs major:0 minor:180 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:171 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g:{mountpoint:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g major:0 minor:283 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2 major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:174 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:169 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm:{mountpoint:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm major:0 minor:202 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl major:0 minor:298 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:175 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5:{mountpoint:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb:{mountpoint:/var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb major:0 minor:276 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg:{mountpoint:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:285 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert major:0 minor:183 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455:{mountpoint:/var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455 major:0 minor:186 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:182 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp major:0 minor:172 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52:{mountpoint:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 major:0 minor:290 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67:{mountpoint:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~empty-dir/config-out major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~projected/kube-api-access-tjpvn:{mountpoint:/var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~projected/kube-api-access-tjpvn major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz major:0 minor:191 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate major:0 minor:168 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs major:0 minor:176 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth major:0 minor:184 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz:{mountpoint:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz major:0 minor:185 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl:{mountpoint:/var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl major:0 minor:212 fsType:tmpfs blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/09f5836d214995127556cc63cae727f45dd64f338d468d3b2445aa558ab8d5e8/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/4730131787eddad749fc1dc0f20ea38937957f70f28d392e79196c8fa37c8ef6/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-125:{mountpoint:/var/lib/containers/storage/overlay/8c7f87c4a3f6c3c580bc0fc7fbbd13e0d7d2b4ae53bb36402e20d28f605b1e69/merged major:0 minor:125 fsType:overlay blockSize:0} overlay_0-130:{mountpoint:/var/lib/containers/storage/overlay/0da4438975335e7243afc2a4f3e2a4f8796d170774df09174999b6c09b8c4d4f/merged major:0 minor:130 fsType:overlay blockSize:0} overlay_0-135:{mountpoint:/var/lib/containers/storage/overlay/9d103eefb9cd6e634cc6e8e22695241fb1883f77169c805f47273565f116e25b/merged major:0 minor:135 fsType:overlay blockSize:0} overlay_0-137:{mountpoint:/var/lib/containers/storage/overlay/3070d2a7aaa9b542fb066276d7bd3ab89f151ef2a983437c6ea145ec7cf95490/merged major:0 minor:137 fsType:overlay blockSize:0} overlay_0-142:{mountpoint:/var/lib/containers/storage/overlay/63492e05694adcbdc99297bfe0ea001bfbf93f57508ee414e0720638257875bc/merged major:0 minor:142 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/94a0fdd4dc96d79bf953c352cd97eecde7d4b0e24c37325ba16db0a33727b9c5/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/69eb6442e41dde7d72cb954446ea25da0c5cd141f76e4dbbb3814302b6b53917/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-157:{mountpoint:/var/lib/containers/storage/overlay/3a2c0df119dc5d1243a314c040b14ced4bccbcbb4159ca2254f48529ca3dd27c/merged major:0 minor:157 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/12f8ba5318372c4b7f1cf9087c35f19a7148877123566a714a340028b8fee625/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-197:{mountpoint:/var/lib/containers/storage/overlay/b3d4af7f554ddfe2f2fcd32535e883761e6647c53a1a3c8665582158409ba4e5/merged major:0 minor:197 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/9e88e470abf2b7d410743312c933211475395e7ae1f9acacef03884c9045a9da/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/d52e468d25b5758f775699b0fbc016dcc8aedfe7d94487aacbe1d317f840793b/merged major:0 minor:208 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/702510cfe6ddeefbabe3a2111460f6278b35c164b8ae1a1a751ca22c69d7d900/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-214:{mountpoint:/var/lib/containers/storage/overlay/186aa6666ab889b7e4e4dce2937f24f89bc0f24eff15b1774acdab8a060ba6b4/merged major:0 minor:214 fsType:overlay blockSize:0} overlay_0-221:{mountpoint:/var/lib/containers/storage/overlay/aa9ecae4d1a4ceec48ec88a3015b3684af8932a8b6a41206798f041e3e225855/merged major:0 minor:221 fsType:overlay blockSize:0} overlay_0-224:{mountpoint:/var/lib/containers/storage/overlay/d496fa0a7fc542531c39318b15650827419755e1bcc42743c2e01c6e4bf6af88/merged major:0 minor:224 fsType:overlay blockSize:0} 
overlay_0-228:{mountpoint:/var/lib/containers/storage/overlay/40a17c793913c806b296c6fe0a05380b7258d3ba3ce4310cba28ce3cb135121b/merged major:0 minor:228 fsType:overlay blockSize:0} overlay_0-231:{mountpoint:/var/lib/containers/storage/overlay/5eba6c4f6844ea21a23157c212613d6f4fa74585a9970e7a548dc4d894617d30/merged major:0 minor:231 fsType:overlay blockSize:0} overlay_0-234:{mountpoint:/var/lib/containers/storage/overlay/b589cb15051206449ae81ac298f303e2a5bcf7c12c28982a954808b0e5637f4e/merged major:0 minor:234 fsType:overlay blockSize:0} overlay_0-239:{mountpoint:/var/lib/containers/storage/overlay/7283f153293e56f4e755c9366dc70339c452e6f39607e4b659e7c9832c151272/merged major:0 minor:239 fsType:overlay blockSize:0} overlay_0-243:{mountpoint:/var/lib/containers/storage/overlay/d48c4a6d8c87e1fa076654fe491ff87b4f38c03eda1debb0919bd2351ae7d92b/merged major:0 minor:243 fsType:overlay blockSize:0} overlay_0-246:{mountpoint:/var/lib/containers/storage/overlay/9e63d7a4326b5ec8153955e318c55bcd66365d859a792003aedf538437f84a48/merged major:0 minor:246 fsType:overlay blockSize:0} overlay_0-248:{mountpoint:/var/lib/containers/storage/overlay/8f9ea854e5ed931e6baa0721e67da5b513d9fe212795bc0a1a06107226f9fb30/merged major:0 minor:248 fsType:overlay blockSize:0} overlay_0-250:{mountpoint:/var/lib/containers/storage/overlay/22c17e58a3400fda2ffa08fc8ec3fb7c6db371aa1d43eaaed16c7507eb81190f/merged major:0 minor:250 fsType:overlay blockSize:0} overlay_0-257:{mountpoint:/var/lib/containers/storage/overlay/4f65194683b4d68e378da4ff2e164613df7aaf6a4e152b85b4e265b32d4a2bf1/merged major:0 minor:257 fsType:overlay blockSize:0} overlay_0-262:{mountpoint:/var/lib/containers/storage/overlay/c8983bff7342de01d004106809e1f7ec6088ac989d6304d9e9e21694f0ff4261/merged major:0 minor:262 fsType:overlay blockSize:0} overlay_0-264:{mountpoint:/var/lib/containers/storage/overlay/60ef59b21d2f1430b898a6240be68b903a9c7dd4c8bcb6cf99bd2d26d67584b2/merged major:0 minor:264 fsType:overlay blockSize:0} overlay_0-268:{mountpoint:/var/lib/containers/storage/overlay/3c12b0317fc3ac12ad9a58d26bb1e85ce31dc060a7225c14e3a37c21117b1168/merged major:0 minor:268 fsType:overlay blockSize:0} overlay_0-270:{mountpoint:/var/lib/containers/storage/overlay/b5502720c8eeeec36b2411158213ea83f00091cc4205b92c2aeed3a1e35d781a/merged major:0 minor:270 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/2b46a752bc02876ba1a17fe70953dc6871d6676bead98e3b9c3c96f87c595637/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/932f409502bbac9d2faa99878047bc99797602d9274acb464b88b4d68c141352/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/8101b6e5669626295dd45290c7d8011b6e4a2729ac5bc8b2ccceb3d088ca7c2f/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/7ebb794291f510af155e6e9c13bf2106cb095056a7ba626fe9795effeaebb392/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/402059b3fd7c3ad82996a07cedfaf3d7fb9d6c5d1fd3dc9d933c742d808e308a/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/eb652cf6aa8f44fb5e3e9b866463ff90960f6c64e3f00bd20327792f8c3740e9/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/c98113b31333a24f181a526312000345ac72ccb1d864a8974a6c7277557c089f/merged 
major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/7d635bcdbae5453e68be3fdf574ce463e8265695e310cf7dce2a5fc4a66adad4/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/9c5817251ab07013dd91ac724e3c3ef758f61d95260f8b239615b6d7a011d842/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/b8dc0f7dde04f7bdc6e3ed09beb2babaf65726c6c20c72c60ebe4b91ceb41d1e/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/b313f99ae5563b3c4cad66321570c8e1a1722b1a6ad83021898e8b7ecf8a5a54/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/10d7a5d142a47fdbd7bc2f0bc3dae43d175f5bd9b90f0828d5c32fdfdc346ac9/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/1b1e25640d1b5f7e1a6979e2d742f949d48a7a3e3b807df7357d7e6d0105226c/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/c1ad7b1e75ff8121a52451a872293c913d884fd56a8fb4a6efadb6f825495965/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/deef5c0bf28a7d0eb182bb332188ac4514e387dc82da41ef5d0de8e69ad30028/merged major:0 minor:58 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/6f3d72195335cbe00283a829cbdcf37291165f9eb21ec43a7a98d254c2195096/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/dea8e56242f771c3f4240437ea872b5aa69cbcdc4c69adb681db7faa81964b01/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/15ddce899bf7025b006817f4fbf95920343b8c52fab4bba0df3bb822c0c022a7/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/d5a5bcb18bf2f392ce545cb9c9fc95f21a26dcad394e19ec2c176101a87ae10c/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/954320faee93ae31aa937b1cf69c00a727a2d2b41738c5d4fa9c335e7ebb20fe/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/eb9ac9029da0ad246d4f31dc18005da99e3a772c91a341f3ecbbb3dc266c210a/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/56e3064af552e12ea4899e2fbeaf3af74bafd51955a190e60d5ade4ba9e5aa8b/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/3f3b685a049e712cb33406c009189a55043a93c6c1f96dfe66fed2874b411f2e/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-80:{mountpoint:/var/lib/containers/storage/overlay/c5b7c7203c8cc83b7cabe0e03d31156007762ec9128369df76fbc0b0a99124f9/merged major:0 minor:80 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/c07d0d65df8268c2304231f4096c419ce813309f8b1ed76d971660164daca72b/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/dc9f8b50c0dd0e6ca4a6e1309bb74385c767326cd61a9990591642570f674d1f/merged major:0 minor:88 fsType:overlay blockSize:0} 
overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/d4726534cd1288f550660be9c544e9d0c0f564826956ed6334054d34a6121e37/merged major:0 minor:90 fsType:overlay blockSize:0} overlay_0-92:{mountpoint:/var/lib/containers/storage/overlay/7b7911ca28754b283d31d3daede99d5c25cba1c3fe1b5459cd5689ea168858e4/merged major:0 minor:92 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/290ddae35fdfdf1e60f56b2e5e44cd9ee7fdc81fe4982ead3bef10512b2809f7/merged major:0 minor:94 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/8905486c7eef215b0e527317624001d34ae7bf74f408a7666d9c0b72ed9b4194/merged major:0 minor:96 fsType:overlay blockSize:0}] Feb 16 17:14:18.399410 master-0 kubenswrapper[4167]: I0216 17:14:18.398736 4167 manager.go:217] Machine: {Timestamp:2026-02-16 17:14:18.39731178 +0000 UTC m=+0.127758198 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514153472 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f BootID:16009b8c-6511-4dd4-9a27-539c3ce647e4 Filesystems:[{Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:178 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-234 DeviceMajor:0 DeviceMinor:234 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:275 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:174 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm DeviceMajor:0 DeviceMinor:202 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:261 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-80 DeviceMajor:0 DeviceMinor:80 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:170 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-92 DeviceMajor:0 DeviceMinor:92 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn DeviceMajor:0 DeviceMinor:200 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102833152 Type:vfs Inodes:819200 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/06ddc2da13c0775c0e8f0acf19c817f8072a1fcf961d84d30040d9ab97e3ada6/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld DeviceMajor:0 DeviceMinor:187 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-125 DeviceMajor:0 DeviceMinor:125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f119761284235b155a5550b379cddae2d59c4785ac83d6f2e1cabbb819959d1e/userdata/shm DeviceMajor:0 DeviceMinor:241 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-268 DeviceMajor:0 DeviceMinor:268 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-137 DeviceMajor:0 DeviceMinor:137 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-228 DeviceMajor:0 DeviceMinor:228 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n DeviceMajor:0 DeviceMinor:254 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/343b05919e1f64786e2254ca5f9bc68bb46285c032cc50fdd759ba021022cc78/userdata/shm DeviceMajor:0 DeviceMinor:289 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-142 DeviceMajor:0 DeviceMinor:142 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:184 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-214 DeviceMajor:0 DeviceMinor:214 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-264 DeviceMajor:0 DeviceMinor:264 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:139 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:169 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:173 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:176 
Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw DeviceMajor:0 DeviceMinor:192 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e979fd391b550f805e511fcc06c4da51e87eefebf9f2469af331306f8e129b95/userdata/shm DeviceMajor:0 DeviceMinor:201 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg DeviceMajor:0 DeviceMinor:245 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-246 DeviceMajor:0 DeviceMinor:246 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257074688 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-231 DeviceMajor:0 DeviceMinor:231 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 DeviceMajor:0 DeviceMinor:272 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-250 DeviceMajor:0 DeviceMinor:250 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:167 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl DeviceMajor:0 DeviceMinor:212 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/63964f47111b36b48ad0828624ccf523359d31129423bb7917fb1cd01aad8c04/userdata/shm DeviceMajor:0 DeviceMinor:237 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 DeviceMajor:0 DeviceMinor:290 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl DeviceMajor:0 DeviceMinor:253 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d0c55c98db8491069414beee715fb0df1d28a36c886f9583fb6e5f20a3fd1076/userdata/shm DeviceMajor:0 DeviceMinor:64 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d/userdata/shm DeviceMajor:0 DeviceMinor:70 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-157 DeviceMajor:0 DeviceMinor:157 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:171 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:177 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-224 DeviceMajor:0 DeviceMinor:224 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c/userdata/shm DeviceMajor:0 DeviceMinor:56 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:180 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg DeviceMajor:0 DeviceMinor:189 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz DeviceMajor:0 DeviceMinor:191 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:285 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257078784 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3ba5b55cdc513202565393d69d57718508e29795dfd1cdb87d49dc9c14489665/userdata/shm DeviceMajor:0 DeviceMinor:219 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-243 DeviceMajor:0 DeviceMinor:243 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 DeviceMajor:0 DeviceMinor:293 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl DeviceMajor:0 DeviceMinor:298 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes/kubernetes.io~projected/kube-api-access-tjpvn DeviceMajor:0 DeviceMinor:235 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:172 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:175 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:183 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f38395ae743150d868e0e9f52251b36fa3cd386c02f6210a01132b4d3e9b83fa/userdata/shm DeviceMajor:0 DeviceMinor:206 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 DeviceMajor:0 DeviceMinor:213 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-257 DeviceMajor:0 DeviceMinor:257 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455 DeviceMajor:0 DeviceMinor:186 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x DeviceMajor:0 DeviceMinor:188 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-221 DeviceMajor:0 DeviceMinor:221 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-248 DeviceMajor:0 DeviceMinor:248 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb DeviceMajor:0 DeviceMinor:276 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c7d736050372998764922f31dd6cf88581d3803877d64338fea850cc359f7da3/userdata/shm DeviceMajor:0 DeviceMinor:297 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:181 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-130 DeviceMajor:0 DeviceMinor:130 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-135 DeviceMajor:0 DeviceMinor:135 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q DeviceMajor:0 DeviceMinor:190 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes/kubernetes.io~projected/kube-api-access-lgm4p DeviceMajor:0 DeviceMinor:199 Capacity:49335549952 Type:vfs Inodes:6166278 
HasInodes:true} {Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x DeviceMajor:0 DeviceMinor:252 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:168 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:179 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 DeviceMajor:0 DeviceMinor:223 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-239 DeviceMajor:0 DeviceMinor:239 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-262 DeviceMajor:0 DeviceMinor:262 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-197 DeviceMajor:0 DeviceMinor:197 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2 DeviceMajor:0 DeviceMinor:216 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/350fa8046d93176987309e508424a1570fc1eed5f18ae89cb9ac0b90ba3cb70f/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:182 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt DeviceMajor:0 DeviceMinor:217 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e8d809425731cc2967cdb379e53f1be7eba9e51662dfb79330c03d92562a8e44/userdata/shm DeviceMajor:0 DeviceMinor:230 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/6b896923757b268325f5828b39f28a688ac9f66638ad3480c6f941e1ecce93ee/userdata/shm DeviceMajor:0 DeviceMinor:255 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/49352b0546742089f6d27ebdb79f9e6f209f38640843957969c3a7f0cde5300b/userdata/shm DeviceMajor:0 DeviceMinor:193 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld DeviceMajor:0 DeviceMinor:218 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-270 DeviceMajor:0 DeviceMinor:270 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz DeviceMajor:0 DeviceMinor:185 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/24414ee08d96c92b5e5a2987ea123de0d7bf6e29180b3cd5f05b52963a1027d2/userdata/shm DeviceMajor:0 DeviceMinor:226 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x DeviceMajor:0 DeviceMinor:273 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s DeviceMajor:0 DeviceMinor:274 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g DeviceMajor:0 DeviceMinor:283 Capacity:49335549952 Type:vfs Inodes:6166278 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e5383f33102a9898e2ed29273a78d4b119c0cc0618f01a3ae943b24be1f2db07/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dc503dfcd2e24f7be396a4259fc22362db61ad7785d0bcd4d3225ebdfd2e1f72/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166278 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:52:47:03:db:66:8a Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:b9:e2 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:4a:2e:ce Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 
Speed:0 Mtu:8900} {Name:ovs-system MacAddress:02:c0:82:fb:4a:f4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514153472 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} 
{Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 17:14:18.399410 master-0 kubenswrapper[4167]: I0216 17:14:18.399375 4167 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 17:14:18.399410 master-0 kubenswrapper[4167]: I0216 17:14:18.399433 4167 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 17:14:18.399951 master-0 kubenswrapper[4167]: I0216 17:14:18.399652 4167 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 17:14:18.399951 master-0 kubenswrapper[4167]: I0216 17:14:18.399829 4167 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 17:14:18.400093 master-0 kubenswrapper[4167]: I0216 17:14:18.399864 4167 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 17:14:18.400156 master-0 kubenswrapper[4167]: I0216 17:14:18.400110 4167 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 17:14:18.400156 master-0 kubenswrapper[4167]: I0216 17:14:18.400122 4167 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 17:14:18.400156 master-0 kubenswrapper[4167]: I0216 17:14:18.400132 4167 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:14:18.400156 master-0 kubenswrapper[4167]: I0216 17:14:18.400157 4167 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:14:18.400375 master-0 kubenswrapper[4167]: I0216 17:14:18.400196 4167 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:14:18.400375 master-0 kubenswrapper[4167]: I0216 17:14:18.400281 4167 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 17:14:18.400375 master-0 kubenswrapper[4167]: I0216 17:14:18.400343 4167 kubelet.go:418] "Attempting to sync node with API server" Feb 16 17:14:18.400375 master-0 kubenswrapper[4167]: I0216 17:14:18.400357 4167 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 17:14:18.400375 master-0 kubenswrapper[4167]: I0216 17:14:18.400373 4167 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 17:14:18.400565 master-0 kubenswrapper[4167]: I0216 17:14:18.400388 4167 kubelet.go:324] "Adding apiserver pod source" Feb 16 17:14:18.400565 master-0 kubenswrapper[4167]: I0216 17:14:18.400406 4167 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 17:14:18.401384 master-0 kubenswrapper[4167]: I0216 17:14:18.401335 4167 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 17:14:18.401537 master-0 kubenswrapper[4167]: I0216 17:14:18.401511 4167 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 17:14:18.401856 master-0 kubenswrapper[4167]: I0216 17:14:18.401830 4167 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 17:14:18.401987 master-0 kubenswrapper[4167]: I0216 17:14:18.401946 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 17:14:18.402051 master-0 kubenswrapper[4167]: I0216 17:14:18.401992 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 17:14:18.402051 master-0 kubenswrapper[4167]: I0216 17:14:18.402006 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 17:14:18.402051 master-0 kubenswrapper[4167]: I0216 17:14:18.402014 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 17:14:18.402051 master-0 kubenswrapper[4167]: I0216 17:14:18.402021 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 17:14:18.402051 master-0 kubenswrapper[4167]: I0216 17:14:18.402029 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 17:14:18.402051 master-0 kubenswrapper[4167]: I0216 17:14:18.402037 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 17:14:18.402051 master-0 kubenswrapper[4167]: I0216 17:14:18.402043 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 17:14:18.402051 master-0 kubenswrapper[4167]: I0216 17:14:18.402054 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 17:14:18.402344 master-0 kubenswrapper[4167]: I0216 17:14:18.402062 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 17:14:18.402344 master-0 kubenswrapper[4167]: I0216 17:14:18.402073 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 
17:14:18.402344 master-0 kubenswrapper[4167]: I0216 17:14:18.402089 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 17:14:18.402344 master-0 kubenswrapper[4167]: I0216 17:14:18.402124 4167 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 17:14:18.402482 master-0 kubenswrapper[4167]: I0216 17:14:18.402424 4167 server.go:1280] "Started kubelet" Feb 16 17:14:18.403383 master-0 kubenswrapper[4167]: I0216 17:14:18.402685 4167 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 17:14:18.403383 master-0 kubenswrapper[4167]: I0216 17:14:18.402801 4167 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 16 17:14:18.403383 master-0 kubenswrapper[4167]: I0216 17:14:18.402733 4167 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 17:14:18.403286 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 16 17:14:18.407082 master-0 kubenswrapper[4167]: I0216 17:14:18.404272 4167 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 17:14:18.407082 master-0 kubenswrapper[4167]: I0216 17:14:18.406119 4167 server.go:449] "Adding debug handlers to kubelet server" Feb 16 17:14:18.413469 master-0 kubenswrapper[4167]: I0216 17:14:18.413316 4167 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:18.414644 master-0 kubenswrapper[4167]: I0216 17:14:18.414569 4167 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:18.416947 master-0 kubenswrapper[4167]: E0216 17:14:18.416895 4167 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 16 17:14:18.417727 master-0 kubenswrapper[4167]: I0216 17:14:18.417256 4167 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 17:14:18.417727 master-0 kubenswrapper[4167]: I0216 17:14:18.417285 4167 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 17:14:18.417727 master-0 kubenswrapper[4167]: I0216 17:14:18.417310 4167 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 13:03:03.296043925 +0000 UTC Feb 16 17:14:18.417727 master-0 kubenswrapper[4167]: I0216 17:14:18.417346 4167 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h48m44.878700883s for next certificate rotation Feb 16 17:14:18.417727 master-0 kubenswrapper[4167]: I0216 17:14:18.417465 4167 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 17:14:18.417727 master-0 kubenswrapper[4167]: I0216 17:14:18.417494 4167 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 17:14:18.417727 master-0 kubenswrapper[4167]: I0216 17:14:18.417564 4167 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 16 17:14:18.420468 master-0 kubenswrapper[4167]: I0216 17:14:18.420250 4167 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 17:14:18.420468 master-0 kubenswrapper[4167]: I0216 17:14:18.420279 4167 factory.go:55] Registering systemd factory Feb 16 17:14:18.420468 master-0 kubenswrapper[4167]: I0216 17:14:18.420288 4167 factory.go:221] Registration of the systemd container factory successfully Feb 16 17:14:18.420686 master-0 kubenswrapper[4167]: I0216 17:14:18.420539 4167 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:18.420764 master-0 kubenswrapper[4167]: I0216 17:14:18.420726 4167 factory.go:153] Registering CRI-O factory Feb 16 17:14:18.420764 master-0 kubenswrapper[4167]: I0216 17:14:18.420736 4167 factory.go:221] Registration of the crio container factory successfully Feb 16 17:14:18.420764 master-0 kubenswrapper[4167]: I0216 17:14:18.420754 4167 factory.go:103] Registering Raw factory Feb 16 17:14:18.420764 master-0 kubenswrapper[4167]: I0216 17:14:18.420767 4167 manager.go:1196] Started watching for new ooms in manager Feb 16 17:14:18.421247 master-0 kubenswrapper[4167]: I0216 17:14:18.421210 4167 manager.go:319] Starting recovery of all containers Feb 16 17:14:18.433918 master-0 kubenswrapper[4167]: I0216 17:14:18.433822 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.433918 master-0 kubenswrapper[4167]: I0216 17:14:18.433910 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 17:14:18.433918 master-0 kubenswrapper[4167]: I0216 17:14:18.433930 
4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs" seLinuxMountContext="" Feb 16 17:14:18.433918 master-0 kubenswrapper[4167]: I0216 17:14:18.433946 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434023 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434041 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434057 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434073 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434090 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434104 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434120 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434136 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434151 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca" 
seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434170 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434184 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434200 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434215 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434233 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images" seLinuxMountContext="" Feb 16 17:14:18.434247 master-0 kubenswrapper[4167]: I0216 17:14:18.434249 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434264 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434283 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434299 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434316 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434332 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" 
volumeName="kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434349 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434367 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434412 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434429 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434444 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434460 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434476 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434490 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434505 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434521 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434537 4167 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434575 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434592 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434608 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434650 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434666 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434685 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434701 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434717 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434733 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.434735 master-0 kubenswrapper[4167]: I0216 17:14:18.434749 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config" 
seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434766 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434782 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434798 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434816 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434841 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434859 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434877 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434901 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434918 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434936 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434953 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.434997 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435015 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435031 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435047 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435063 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435079 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435095 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435112 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435129 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435144 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 
kubenswrapper[4167]: I0216 17:14:18.435162 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435177 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435194 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435209 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435226 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435243 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435261 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435275 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435293 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435308 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435325 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" 
volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435340 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435355 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435371 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435386 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435428 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435447 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435466 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435482 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435497 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435513 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435529 4167 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435547 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435563 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435579 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435594 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435611 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435625 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435639 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435655 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435671 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435685 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" 
volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435701 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435716 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 16 17:14:18.435672 master-0 kubenswrapper[4167]: I0216 17:14:18.435732 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435747 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435764 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435791 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435822 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435841 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435861 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435878 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435897 
4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435915 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435933 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435950 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.435989 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436009 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436028 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436046 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436062 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436125 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ff68421-1741-41c1-93d5-5c722dfd295e" volumeName="kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436142 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" 
volumeName="kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436161 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436181 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436197 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436214 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08a90dc5-b0d8-4aad-a002-736492b6c1a9" volumeName="kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436230 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436254 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436272 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436289 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436305 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436321 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436337 4167 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436353 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436369 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436384 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436400 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436416 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436432 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436449 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436467 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436483 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436499 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 
kubenswrapper[4167]: I0216 17:14:18.436515 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" volumeName="kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436534 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436550 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436565 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436584 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436600 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436616 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436632 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436648 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="970d4376-f299-412c-a8ee-90aa980c689e" volumeName="kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436665 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436682 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" 
volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436697 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436714 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436730 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436746 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436764 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436778 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="544c6815-81d7-422a-9e4a-5fcbfabe8da8" volumeName="kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436792 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436807 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436823 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436841 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 
17:14:18.436856 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436871 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436887 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436901 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436917 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436932 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.436948 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437000 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437020 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437036 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437052 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" 
volumeName="kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437068 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437084 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437098 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437110 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437122 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437133 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437144 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437156 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437168 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437180 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437194 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437205 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437219 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437233 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437246 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437257 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437269 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437281 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437293 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437305 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437317 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: 
I0216 17:14:18.437328 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437339 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437352 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437364 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437375 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437386 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437398 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437411 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437423 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437437 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437449 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" 
volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437463 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437475 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437486 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437498 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437515 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437529 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437542 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437556 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437569 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437581 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437594 4167 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437607 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437622 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437636 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437648 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437663 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437680 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54fba066-0e9e-49f6-8a86-34d5b4b660df" volumeName="kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437695 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc29f4-557f-4a75-8b78-6ca425c81b81" volumeName="kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437711 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c303189e-adae-4fe2-8dd7-cc9b80f73e66" volumeName="kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437725 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437742 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z" 
seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437759 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437776 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437793 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437810 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437828 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437843 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437860 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca" seLinuxMountContext="" Feb 16 17:14:18.437734 master-0 kubenswrapper[4167]: I0216 17:14:18.437878 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.437895 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.437913 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.437929 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.437944 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.437979 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.437996 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438012 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438032 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438046 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438061 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438075 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438089 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438104 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438119 4167 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438134 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438152 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438168 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438182 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438198 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438213 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438229 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438244 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438261 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438278 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" 
volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438296 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438311 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438328 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438344 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438359 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438377 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438393 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438411 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438431 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438449 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 
17:14:18.438466 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438482 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438498 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438514 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438529 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438545 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438562 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438577 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438592 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438607 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438751 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" 
volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438770 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80d3b238-70c3-4e71-96a1-99405352033f" volumeName="kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438786 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438806 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438822 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438836 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438852 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1443fb7-cb1e-4105-b604-b88c749620c4" volumeName="kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438868 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438885 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438899 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438914 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438932 4167 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438948 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.438984 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1cd29be8-2b2a-49f7-badd-ff53c686a63d" volumeName="kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.439000 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.439018 4167 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert" seLinuxMountContext="" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.439033 4167 reconstruct.go:97] "Volume reconstruction finished" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.439043 4167 reconciler.go:26] "Reconciler: start to sync state" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.441529 4167 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 17:14:18.442187 master-0 kubenswrapper[4167]: I0216 17:14:18.442070 4167 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 16 17:14:18.444000 master-0 kubenswrapper[4167]: I0216 17:14:18.443795 4167 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 16 17:14:18.444000 master-0 kubenswrapper[4167]: I0216 17:14:18.443863 4167 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 17:14:18.444000 master-0 kubenswrapper[4167]: I0216 17:14:18.443894 4167 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 17:14:18.444087 master-0 kubenswrapper[4167]: E0216 17:14:18.443990 4167 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 17:14:18.445311 master-0 kubenswrapper[4167]: I0216 17:14:18.445280 4167 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:14:18.457354 master-0 kubenswrapper[4167]: I0216 17:14:18.457304 4167 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee" exitCode=0 Feb 16 17:14:18.460026 master-0 kubenswrapper[4167]: I0216 17:14:18.459985 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-check-endpoints/0.log" Feb 16 17:14:18.462165 master-0 kubenswrapper[4167]: I0216 17:14:18.462117 4167 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a" exitCode=255 Feb 16 17:14:18.462165 master-0 kubenswrapper[4167]: I0216 17:14:18.462159 4167 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08" exitCode=0 Feb 16 17:14:18.469167 master-0 kubenswrapper[4167]: I0216 17:14:18.469125 4167 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="a2626c4f0d02f1b08ed65857f9aece164559757f234922a3555b40b68623d959" exitCode=0 Feb 16 17:14:18.475602 master-0 kubenswrapper[4167]: I0216 17:14:18.475570 4167 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" exitCode=1 Feb 16 17:14:18.480941 master-0 kubenswrapper[4167]: I0216 17:14:18.480897 4167 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="6ac11f88c295e3e046b26ff9ae40dab9b0eceda0cac3f32bc505dc4f4fee5314" exitCode=0 Feb 16 17:14:18.480941 master-0 kubenswrapper[4167]: I0216 17:14:18.480924 4167 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="29b286fe19ef6218902f26309c43d4a3f1b4a5c65b84d854a9532c01548c0948" exitCode=0 Feb 16 17:14:18.480941 master-0 kubenswrapper[4167]: I0216 17:14:18.480933 4167 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="113e4c68695ecb798e25862ac1123977a0b09805e3743cfdc83f64ff3d629fe8" exitCode=0 Feb 16 17:14:18.544141 master-0 kubenswrapper[4167]: E0216 17:14:18.544081 4167 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 16 17:14:18.552004 master-0 kubenswrapper[4167]: I0216 17:14:18.551968 4167 manager.go:324] Recovery completed Feb 16 17:14:18.580838 master-0 kubenswrapper[4167]: I0216 17:14:18.580777 4167 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 
17:14:18.580838 master-0 kubenswrapper[4167]: I0216 17:14:18.580805 4167 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 17:14:18.580838 master-0 kubenswrapper[4167]: I0216 17:14:18.580832 4167 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:14:18.581083 master-0 kubenswrapper[4167]: I0216 17:14:18.581054 4167 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 16 17:14:18.581140 master-0 kubenswrapper[4167]: I0216 17:14:18.581074 4167 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 16 17:14:18.581140 master-0 kubenswrapper[4167]: I0216 17:14:18.581097 4167 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 16 17:14:18.581140 master-0 kubenswrapper[4167]: I0216 17:14:18.581106 4167 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 16 17:14:18.581140 master-0 kubenswrapper[4167]: I0216 17:14:18.581115 4167 policy_none.go:49] "None policy: Start" Feb 16 17:14:18.582182 master-0 kubenswrapper[4167]: I0216 17:14:18.582154 4167 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 17:14:18.582231 master-0 kubenswrapper[4167]: I0216 17:14:18.582193 4167 state_mem.go:35] "Initializing new in-memory state store" Feb 16 17:14:18.582435 master-0 kubenswrapper[4167]: I0216 17:14:18.582417 4167 state_mem.go:75] "Updated machine memory state" Feb 16 17:14:18.582435 master-0 kubenswrapper[4167]: I0216 17:14:18.582432 4167 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 16 17:14:18.590831 master-0 kubenswrapper[4167]: I0216 17:14:18.590753 4167 manager.go:334] "Starting Device Plugin manager" Feb 16 17:14:18.591022 master-0 kubenswrapper[4167]: I0216 17:14:18.590998 4167 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 17:14:18.591022 master-0 kubenswrapper[4167]: I0216 17:14:18.591017 4167 server.go:79] "Starting device plugin registration server" Feb 16 17:14:18.591456 master-0 kubenswrapper[4167]: I0216 17:14:18.591426 4167 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 17:14:18.591504 master-0 kubenswrapper[4167]: I0216 17:14:18.591445 4167 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 17:14:18.591595 master-0 kubenswrapper[4167]: I0216 17:14:18.591571 4167 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 17:14:18.591696 master-0 kubenswrapper[4167]: I0216 17:14:18.591660 4167 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 17:14:18.591696 master-0 kubenswrapper[4167]: I0216 17:14:18.591671 4167 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 17:14:18.592181 master-0 kubenswrapper[4167]: E0216 17:14:18.592144 4167 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
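At this point the kubelet has reached steady initialization: volume reconstruction is complete, the reconciler and the iptables rules (IPv4 and IPv6) are in place, the PLEG has replayed the containers that finished while the kubelet was down, and the CPU manager ("none" policy), memory manager ("None" policy), device-plugin, eviction, and plugin managers are all running. The one blocking condition is the last entry: the runtime network is not ready because no CNI configuration file exists yet under /etc/kubernetes/cni/net.d/. The sketch below checks the same precondition the kubelet is waiting on; it is a minimal diagnostic, not kubelet source. The directory path comes from the log entry above, while the file name and the extension filter are illustrative assumptions.

// cnicheck.go - report whether a CNI network config is present, i.e. the
// condition behind the "Container runtime network not ready" entry above.
// Minimal diagnostic sketch: the directory comes from the log; the file
// name and the extension filter are assumptions, not kubelet source.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		// CNI accepts .conf, .conflist and (legacy) .json network configs.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Printf("found CNI config: %s\n", filepath.Join(dir, e.Name()))
			found = true
		}
	}
	if !found {
		fmt.Println("no CNI configuration file present; the node will stay NotReady")
	}
}

On a node in this state the directory is expected to be empty until the network operator renders a config, at which point the NotReady condition in the entries that follow should clear on the next sync.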
Feb 16 17:14:18.692082 master-0 kubenswrapper[4167]: I0216 17:14:18.691987 4167 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:14:18.693923 master-0 kubenswrapper[4167]: I0216 17:14:18.693860 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:14:18.693923 master-0 kubenswrapper[4167]: I0216 17:14:18.693915 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:14:18.693923 master-0 kubenswrapper[4167]: I0216 17:14:18.693927 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:14:18.694728 master-0 kubenswrapper[4167]: I0216 17:14:18.694059 4167 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:14:18.704638 master-0 kubenswrapper[4167]: I0216 17:14:18.704576 4167 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 16 17:14:18.704896 master-0 kubenswrapper[4167]: I0216 17:14:18.704853 4167 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 17:14:18.706168 master-0 kubenswrapper[4167]: I0216 17:14:18.706103 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:18.706301 master-0 kubenswrapper[4167]: I0216 17:14:18.706145 4167 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:18Z","lastTransitionTime":"2026-02-16T17:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:14:18.713509 master-0 kubenswrapper[4167]: E0216 17:14:18.713441 4167 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:18.715991 master-0 kubenswrapper[4167]: I0216 17:14:18.715922 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:18.716063 master-0 kubenswrapper[4167]: I0216 17:14:18.715995 4167 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:18Z","lastTransitionTime":"2026-02-16T17:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:14:18.723485 master-0 kubenswrapper[4167]: E0216 17:14:18.723337 4167 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:18.726256 master-0 kubenswrapper[4167]: I0216 17:14:18.726217 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:18.726328 master-0 kubenswrapper[4167]: I0216 17:14:18.726250 4167 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:18Z","lastTransitionTime":"2026-02-16T17:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:14:18.733139 master-0 kubenswrapper[4167]: E0216 17:14:18.733079 4167 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:18.736582 master-0 kubenswrapper[4167]: I0216 17:14:18.736544 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:18.736638 master-0 kubenswrapper[4167]: I0216 17:14:18.736585 4167 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:18Z","lastTransitionTime":"2026-02-16T17:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:14:18.744221 master-0 kubenswrapper[4167]: E0216 17:14:18.744149 4167 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:18.744417 master-0 kubenswrapper[4167]: I0216 17:14:18.744224 4167 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 17:14:18.745058 master-0 kubenswrapper[4167]: I0216 17:14:18.744975 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93"} Feb 16 17:14:18.745099 master-0 kubenswrapper[4167]: I0216 17:14:18.745059 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56"} Feb 16 17:14:18.745099 master-0 kubenswrapper[4167]: I0216 17:14:18.745071 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"f79c4f95d84447d2060b3a9bdc40c88a369d3407543549b9827cfbca809475bb"} Feb 16 17:14:18.745099 master-0 kubenswrapper[4167]: I0216 17:14:18.745082 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee"} Feb 16 17:14:18.745207 master-0 kubenswrapper[4167]: I0216 17:14:18.745168 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"2700d64446e8244b9b674cd60afd215140645d2edcc3782a0dea4459ce56db2c"} Feb 16 17:14:18.745207 master-0 kubenswrapper[4167]: I0216 17:14:18.745184 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793"} Feb 16 17:14:18.745207 master-0 kubenswrapper[4167]: I0216 17:14:18.745199 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a"} Feb 16 17:14:18.745296 master-0 kubenswrapper[4167]: I0216 17:14:18.745213 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48"} Feb 16 17:14:18.745296 master-0 kubenswrapper[4167]: I0216 17:14:18.745224 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"1f9ccdae401f5d6081eacec1d36e73c0010b8afd14d410faf565b70a3ef4d59d"} Feb 16 17:14:18.745296 master-0 kubenswrapper[4167]: I0216 17:14:18.745234 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1"} Feb 16 17:14:18.745296 master-0 kubenswrapper[4167]: I0216 17:14:18.745246 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"4992c22e831121a95bc649c439bd4c2706d23e304f9780a4ff56be58ca6cb875"} Feb 16 17:14:18.745296 master-0 kubenswrapper[4167]: I0216 17:14:18.745255 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08"} Feb 16 17:14:18.745296 master-0 kubenswrapper[4167]: I0216 17:14:18.745265 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"c643d1a6bd2bbdb9a152ec5acdf256c8c4044ba37ff73d78c6f2993bc96d4a77"} Feb 16 17:14:18.745296 master-0 kubenswrapper[4167]: I0216 17:14:18.745278 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"d7c38e55f71867938246c19521c872dc2168e928e2d36640288dfca85978e020"} Feb 16 17:14:18.745296 master-0 kubenswrapper[4167]: I0216 17:14:18.745291 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"32286c81635de6de1cf7f328273c1a49","Type":"ContainerStarted","Data":"06ddc2da13c0775c0e8f0acf19c817f8072a1fcf961d84d30040d9ab97e3ada6"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745302 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"9cac97f2a7ed5b660f1fe9defbb77e1823cc7917bedfcd9a1ee2cf3d27a5413c"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745316 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"a2626c4f0d02f1b08ed65857f9aece164559757f234922a3555b40b68623d959"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745330 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"d0c55c98db8491069414beee715fb0df1d28a36c886f9583fb6e5f20a3fd1076"} Feb 
16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745346 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745359 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745370 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745381 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"4ba4eba49a66193e7786c85a4578333fc95c4bc9a7a4bb4ef1dbbff27d009c65"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745394 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"aedfff1b9149a45317431b294572c56015ded146f7b3ff9ec7a263bc47a383b3"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745405 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"04067ed4187486197b26e1ae13951b566ae8dc6eabd9679686fe0234c3137a4b"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745415 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"482b77f255da8dbdc1be0e1707334e04261602c426501a77264ce29439b9c9bd"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745425 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"21bbf2dd3dd984198b802df6fcf5c5633dc7d3f5d39543c470dd12c4b0604853"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745436 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"c7588158dd7d7dabaa4f447c2b9bdd6aa5e276f1eca5073f8e69a9eb9b31cfba"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745446 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"6ac11f88c295e3e046b26ff9ae40dab9b0eceda0cac3f32bc505dc4f4fee5314"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745459 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"29b286fe19ef6218902f26309c43d4a3f1b4a5c65b84d854a9532c01548c0948"} Feb 16 
17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745470 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"113e4c68695ecb798e25862ac1123977a0b09805e3743cfdc83f64ff3d629fe8"} Feb 16 17:14:18.745517 master-0 kubenswrapper[4167]: I0216 17:14:18.745482 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"b76fc34b4f11f5d3f3dd2290c7e69bd90116ec4cac1df909fdf5c5e5f8bf960d"} Feb 16 17:14:18.749328 master-0 kubenswrapper[4167]: I0216 17:14:18.749266 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:18.749328 master-0 kubenswrapper[4167]: I0216 17:14:18.749301 4167 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:18Z","lastTransitionTime":"2026-02-16T17:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:14:18.754265 master-0 kubenswrapper[4167]: E0216 17:14:18.754068 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.755685 master-0 kubenswrapper[4167]: E0216 17:14:18.755645 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.756250 master-0 kubenswrapper[4167]: E0216 17:14:18.756215 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:14:18.756546 master-0 kubenswrapper[4167]: E0216 17:14:18.756484 4167 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179252Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330228Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"16009b8c-6511-4dd4-9a27-539c3ce647e4\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:18.756546 master-0 kubenswrapper[4167]: E0216 17:14:18.756534 4167 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:14:18.762154 master-0 kubenswrapper[4167]: E0216 17:14:18.762116 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:14:18.763338 master-0 kubenswrapper[4167]: E0216 17:14:18.763298 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.843298 master-0 kubenswrapper[4167]: I0216 17:14:18.843048 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:14:18.843298 master-0 kubenswrapper[4167]: I0216 17:14:18.843129 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.843298 master-0 kubenswrapper[4167]: I0216 17:14:18.843190 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.843298 master-0 kubenswrapper[4167]: I0216 17:14:18.843216 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.843298 master-0 kubenswrapper[4167]: I0216 17:14:18.843309 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843390 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843487 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843561 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843595 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843665 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843695 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843725 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843757 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843787 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.843823 master-0 kubenswrapper[4167]: I0216 17:14:18.843818 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.844469 master-0 kubenswrapper[4167]: I0216 17:14:18.843846 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.844469 master-0 kubenswrapper[4167]: I0216 17:14:18.843867 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.844469 master-0 kubenswrapper[4167]: I0216 17:14:18.843892 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:14:18.844469 master-0 kubenswrapper[4167]: I0216 17:14:18.843917 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.844469 master-0 kubenswrapper[4167]: I0216 17:14:18.844031 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.844469 master-0 kubenswrapper[4167]: I0216 17:14:18.844062 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.844469 master-0 kubenswrapper[4167]: I0216 17:14:18.844146 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.844469 master-0 kubenswrapper[4167]: I0216 17:14:18.844177 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.945219 master-0 kubenswrapper[4167]: I0216 17:14:18.945081 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.945219 master-0 kubenswrapper[4167]: I0216 17:14:18.945179 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.945219 master-0 kubenswrapper[4167]: I0216 17:14:18.945215 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.945678 master-0 kubenswrapper[4167]: I0216 17:14:18.945247 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.945678 master-0 kubenswrapper[4167]: I0216 17:14:18.945279 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.945678 master-0 kubenswrapper[4167]: I0216 17:14:18.945312 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.945678 master-0 kubenswrapper[4167]: I0216 17:14:18.945450 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.945678 master-0 kubenswrapper[4167]: I0216 17:14:18.945560 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:14:18.945678 master-0 kubenswrapper[4167]: I0216 17:14:18.945618 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.945678 master-0 kubenswrapper[4167]: I0216 17:14:18.945639 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:14:18.946230 master-0 kubenswrapper[4167]: I0216 17:14:18.945661 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.946353 master-0 kubenswrapper[4167]: I0216 17:14:18.946297 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.946353 master-0 kubenswrapper[4167]: I0216 17:14:18.945652 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.946491 master-0 kubenswrapper[4167]: I0216 17:14:18.946360 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.946491 master-0 kubenswrapper[4167]: I0216 17:14:18.946388 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.946491 master-0 kubenswrapper[4167]: I0216 17:14:18.946441 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.946680 master-0 kubenswrapper[4167]: I0216 17:14:18.946551 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.946680 master-0 kubenswrapper[4167]: I0216 17:14:18.946564 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: 
\"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.946680 master-0 kubenswrapper[4167]: I0216 17:14:18.946664 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.946910 master-0 kubenswrapper[4167]: I0216 17:14:18.946738 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.946910 master-0 kubenswrapper[4167]: I0216 17:14:18.946840 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.946910 master-0 kubenswrapper[4167]: I0216 17:14:18.946981 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947017 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947022 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947042 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947173 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947204 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947241 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947246 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947089 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947304 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.947365 master-0 kubenswrapper[4167]: I0216 17:14:18.947350 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947299 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947429 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947436 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947515 4167 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947494 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947593 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947610 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947661 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947677 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947735 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947793 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947809 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.948335 
master-0 kubenswrapper[4167]: I0216 17:14:18.947856 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:18.948335 master-0 kubenswrapper[4167]: I0216 17:14:18.947880 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:14:19.401303 master-0 kubenswrapper[4167]: I0216 17:14:19.401207 4167 apiserver.go:52] "Watching apiserver" Feb 16 17:14:19.435728 master-0 kubenswrapper[4167]: I0216 17:14:19.435628 4167 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 17:14:19.438791 master-0 kubenswrapper[4167]: I0216 17:14:19.438692 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9","openshift-kube-apiserver/installer-4-master-0","openshift-network-operator/iptables-alerter-czzz2","openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf","openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw","openshift-ovn-kubernetes/ovnkube-node-flr86","openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8","openshift-marketplace/community-operators-7w4km","openshift-marketplace/redhat-marketplace-4kd66","openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk","openshift-monitoring/prometheus-k8s-0","openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs","openshift-dns/node-resolver-vfxj4","openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8","openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl","openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h","openshift-multus/multus-admission-controller-6d678b8d67-5n9cl","openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr","openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d","openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6","openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48","openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg","openshift-service-ca/service-ca-676cd8b9b5-cp9rb","openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf","openshift-console/downloads-dcd7b7d95-dhhfh","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm","openshift-monitoring/node-exporter-8256c","openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w","openshift-marketplace/certified-operators-z69zq","openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2","openshift-authentication-operator/authentication-operator-755d954778-lf4cb","openshift-console-operator/console-operator-7777d5cc66-64vhv","openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd","openshift-dns/dns-default-qcgxx","openshift-kube-apiserver/installer-3-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","assisted-installer/assisted-installer-controller-thhq2","o
penshift-ingress-canary/ingress-canary-qqvg4","openshift-kube-scheduler/installer-4-master-0","openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd","openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx","openshift-multus/network-metrics-daemon-279g6","openshift-monitoring/prometheus-operator-7485d645b8-zxxwd","openshift-multus/multus-6r7wj","openshift-multus/multus-additional-cni-plugins-rjdlk","openshift-network-diagnostics/network-check-target-vwvwx","openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b","openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl","openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz","openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn","openshift-machine-config-operator/machine-config-daemon-98q6v","openshift-monitoring/alertmanager-main-0","openshift-monitoring/metrics-server-745bd8d89b-qr4zh","openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz","openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv","openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9","openshift-etcd/etcd-master-0","openshift-machine-config-operator/machine-config-server-2ws9r","openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp","openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2","openshift-dns-operator/dns-operator-86b8869b79-nhxlp","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv","openshift-monitoring/monitoring-plugin-555857f695-nlrnr","openshift-network-node-identity/network-node-identity-hhcpr","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz","openshift-apiserver/apiserver-fc4bf7f79-tqnlw","openshift-kube-controller-manager/installer-2-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc","openshift-network-operator/network-operator-6fcf4c966-6bmf9","openshift-cluster-node-tuning-operator/tuned-l5kbz","openshift-etcd/installer-2-master-0","openshift-etcd/installer-2-retry-1-master-0","openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v","openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb","openshift-kube-apiserver/installer-1-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6","openshift-marketplace/redhat-operators-lnzfx","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk","openshift-ingress/router-default-864ddd5f56-pm4rt","openshift-insights/insights-operator-cb4f7b4cf-6qrw5","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf","openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"] Feb 16 17:14:19.439081 master-0 kubenswrapper[4167]: I0216 17:14:19.439038 4167 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 17:14:19.439272 master-0 kubenswrapper[4167]: I0216 17:14:19.439239 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:19.439371 master-0 kubenswrapper[4167]: E0216 17:14:19.439306 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:14:19.439836 master-0 kubenswrapper[4167]: I0216 17:14:19.439712 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:19.439836 master-0 kubenswrapper[4167]: I0216 17:14:19.439789 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:19.440140 master-0 kubenswrapper[4167]: I0216 17:14:19.439830 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:19.440140 master-0 kubenswrapper[4167]: E0216 17:14:19.439880 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:14:19.440140 master-0 kubenswrapper[4167]: E0216 17:14:19.440013 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:14:19.440318 master-0 kubenswrapper[4167]: E0216 17:14:19.440146 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:19.440318 master-0 kubenswrapper[4167]: I0216 17:14:19.440288 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:19.440713 master-0 kubenswrapper[4167]: E0216 17:14:19.440441 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:14:19.440713 master-0 kubenswrapper[4167]: I0216 17:14:19.440567 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:19.440713 master-0 kubenswrapper[4167]: E0216 17:14:19.440630 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:14:19.440878 master-0 kubenswrapper[4167]: I0216 17:14:19.440734 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:19.440878 master-0 kubenswrapper[4167]: E0216 17:14:19.440819 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:14:19.441213 master-0 kubenswrapper[4167]: I0216 17:14:19.440879 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:19.441213 master-0 kubenswrapper[4167]: E0216 17:14:19.441018 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:14:19.441314 master-0 kubenswrapper[4167]: I0216 17:14:19.441251 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:19.441936 master-0 kubenswrapper[4167]: E0216 17:14:19.441339 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:19.441936 master-0 kubenswrapper[4167]: I0216 17:14:19.441571 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:19.441936 master-0 kubenswrapper[4167]: I0216 17:14:19.441578 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:19.441936 master-0 kubenswrapper[4167]: E0216 17:14:19.441642 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:19.441936 master-0 kubenswrapper[4167]: I0216 17:14:19.441712 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:19.441936 master-0 kubenswrapper[4167]: E0216 17:14:19.441792 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:19.441936 master-0 kubenswrapper[4167]: I0216 17:14:19.441826 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:19.441936 master-0 kubenswrapper[4167]: E0216 17:14:19.441883 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:19.441936 master-0 kubenswrapper[4167]: I0216 17:14:19.441853 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:19.442730 master-0 kubenswrapper[4167]: E0216 17:14:19.441983 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:19.442730 master-0 kubenswrapper[4167]: E0216 17:14:19.442044 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:19.442730 master-0 kubenswrapper[4167]: I0216 17:14:19.442240 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:19.442730 master-0 kubenswrapper[4167]: I0216 17:14:19.442272 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 17:14:19.442730 master-0 kubenswrapper[4167]: E0216 17:14:19.442282 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:19.442730 master-0 kubenswrapper[4167]: I0216 17:14:19.442560 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:19.442730 master-0 kubenswrapper[4167]: I0216 17:14:19.442651 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:19.443314 master-0 kubenswrapper[4167]: E0216 17:14:19.442747 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:19.443314 master-0 kubenswrapper[4167]: I0216 17:14:19.442871 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:19.443314 master-0 kubenswrapper[4167]: E0216 17:14:19.442924 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:19.443314 master-0 kubenswrapper[4167]: E0216 17:14:19.443031 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:19.443314 master-0 kubenswrapper[4167]: I0216 17:14:19.443185 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:19.443945 master-0 kubenswrapper[4167]: E0216 17:14:19.443311 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:14:19.443945 master-0 kubenswrapper[4167]: I0216 17:14:19.443335 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 17:14:19.443945 master-0 kubenswrapper[4167]: I0216 17:14:19.443516 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:14:19.444581 master-0 kubenswrapper[4167]: I0216 17:14:19.444521 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:19.445689 master-0 kubenswrapper[4167]: I0216 17:14:19.444723 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:19.445689 master-0 kubenswrapper[4167]: E0216 17:14:19.444816 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:14:19.445689 master-0 kubenswrapper[4167]: I0216 17:14:19.445077 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:19.445689 master-0 kubenswrapper[4167]: I0216 17:14:19.445089 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:19.445689 master-0 kubenswrapper[4167]: E0216 17:14:19.445143 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:14:19.445689 master-0 kubenswrapper[4167]: E0216 17:14:19.445168 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.447307 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.447443 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: E0216 17:14:19.447543 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.447953 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.447319 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.447643 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: E0216 17:14:19.448158 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.448209 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.447763 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.448244 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.447771 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.448318 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: I0216 17:14:19.447849 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: E0216 17:14:19.448401 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:14:19.449053 master-0 kubenswrapper[4167]: E0216 17:14:19.448506 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:14:19.454408 master-0 kubenswrapper[4167]: I0216 17:14:19.449121 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 17:14:19.454408 master-0 kubenswrapper[4167]: E0216 17:14:19.449241 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:19.454408 master-0 kubenswrapper[4167]: I0216 17:14:19.450575 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.454408 master-0 kubenswrapper[4167]: E0216 17:14:19.450683 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:14:19.454408 master-0 kubenswrapper[4167]: I0216 17:14:19.451689 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:19.454408 master-0 kubenswrapper[4167]: E0216 17:14:19.451780 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:14:19.454408 master-0 kubenswrapper[4167]: I0216 17:14:19.452161 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.454408 master-0 kubenswrapper[4167]: E0216 17:14:19.452325 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:14:19.454408 master-0 kubenswrapper[4167]: I0216 17:14:19.453510 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.454764 master-0 kubenswrapper[4167]: E0216 17:14:19.453994 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:19.456460 master-0 kubenswrapper[4167]: I0216 17:14:19.455204 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 17:14:19.456460 master-0 kubenswrapper[4167]: I0216 17:14:19.455523 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:14:19.456460 master-0 kubenswrapper[4167]: I0216 17:14:19.455700 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:14:19.456460 master-0 kubenswrapper[4167]: I0216 17:14:19.455771 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 17:14:19.456460 master-0 kubenswrapper[4167]: I0216 17:14:19.455856 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:14:19.456460 master-0 kubenswrapper[4167]: I0216 17:14:19.456061 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 17:14:19.456460 master-0 kubenswrapper[4167]: I0216 17:14:19.456149 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:14:19.456460 master-0 kubenswrapper[4167]: I0216 17:14:19.456278 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 17:14:19.456460 master-0 kubenswrapper[4167]: I0216 17:14:19.456460 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.456590 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.456751 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: 
I0216 17:14:19.456600 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.457116 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.457517 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.457629 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.457944 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.458121 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.458259 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.458307 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 17:14:19.458808 master-0 kubenswrapper[4167]: I0216 17:14:19.458428 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 17:14:19.459214 master-0 kubenswrapper[4167]: I0216 17:14:19.458845 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 17:14:19.459214 master-0 kubenswrapper[4167]: I0216 17:14:19.458920 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 17:14:19.459547 master-0 kubenswrapper[4167]: I0216 17:14:19.459497 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:19.459815 master-0 kubenswrapper[4167]: E0216 17:14:19.459771 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:14:19.459890 master-0 kubenswrapper[4167]: I0216 17:14:19.459845 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:14:19.460004 master-0 kubenswrapper[4167]: I0216 17:14:19.459854 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 17:14:19.462225 master-0 kubenswrapper[4167]: I0216 17:14:19.462183 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:19.462337 master-0 kubenswrapper[4167]: I0216 17:14:19.462298 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:14:19.462390 master-0 kubenswrapper[4167]: E0216 17:14:19.462321 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:14:19.462390 master-0 kubenswrapper[4167]: I0216 17:14:19.462190 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 17:14:19.462765 master-0 kubenswrapper[4167]: I0216 17:14:19.462712 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:19.462863 master-0 kubenswrapper[4167]: E0216 17:14:19.462820 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:14:19.463026 master-0 kubenswrapper[4167]: I0216 17:14:19.462997 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:19.463068 master-0 kubenswrapper[4167]: E0216 17:14:19.463053 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:14:19.463528 master-0 kubenswrapper[4167]: I0216 17:14:19.463477 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:19.463584 master-0 kubenswrapper[4167]: E0216 17:14:19.463560 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:14:19.463628 master-0 kubenswrapper[4167]: I0216 17:14:19.463612 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:19.463680 master-0 kubenswrapper[4167]: I0216 17:14:19.463620 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:19.463723 master-0 kubenswrapper[4167]: E0216 17:14:19.463671 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:14:19.463723 master-0 kubenswrapper[4167]: I0216 17:14:19.463692 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:14:19.463799 master-0 kubenswrapper[4167]: E0216 17:14:19.463741 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:14:19.464556 master-0 kubenswrapper[4167]: I0216 17:14:19.464517 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 17:14:19.465209 master-0 kubenswrapper[4167]: I0216 17:14:19.465071 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:14:19.465209 master-0 kubenswrapper[4167]: I0216 17:14:19.465112 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:19.465209 master-0 kubenswrapper[4167]: I0216 17:14:19.465154 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:19.465209 master-0 kubenswrapper[4167]: E0216 17:14:19.465164 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:14:19.465410 master-0 kubenswrapper[4167]: E0216 17:14:19.465211 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:14:19.465410 master-0 kubenswrapper[4167]: I0216 17:14:19.465158 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:19.465410 master-0 kubenswrapper[4167]: E0216 17:14:19.465266 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:19.465410 master-0 kubenswrapper[4167]: I0216 17:14:19.465314 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:19.465410 master-0 kubenswrapper[4167]: I0216 17:14:19.465392 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:19.465410 master-0 kubenswrapper[4167]: E0216 17:14:19.465388 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:14:19.465640 master-0 kubenswrapper[4167]: E0216 17:14:19.465420 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:14:19.465989 master-0 kubenswrapper[4167]: I0216 17:14:19.465877 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:19.465989 master-0 kubenswrapper[4167]: E0216 17:14:19.465923 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:19.465989 master-0 kubenswrapper[4167]: I0216 17:14:19.465935 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:19.465989 master-0 kubenswrapper[4167]: I0216 17:14:19.465940 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:14:19.465989 master-0 kubenswrapper[4167]: E0216 17:14:19.465985 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:14:19.466877 master-0 kubenswrapper[4167]: I0216 17:14:19.466851 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:19.466930 master-0 kubenswrapper[4167]: E0216 17:14:19.466898 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:14:19.468570 master-0 kubenswrapper[4167]: I0216 17:14:19.468095 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:19.468570 master-0 kubenswrapper[4167]: E0216 17:14:19.468278 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:14:19.469066 master-0 kubenswrapper[4167]: I0216 17:14:19.469029 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 17:14:19.469166 master-0 kubenswrapper[4167]: I0216 17:14:19.469147 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t" Feb 16 17:14:19.470284 master-0 kubenswrapper[4167]: I0216 17:14:19.469244 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:19.470284 master-0 kubenswrapper[4167]: I0216 17:14:19.469296 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:14:19.470284 master-0 kubenswrapper[4167]: E0216 17:14:19.469336 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:14:19.470284 master-0 kubenswrapper[4167]: I0216 17:14:19.469368 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:14:19.470284 master-0 kubenswrapper[4167]: I0216 17:14:19.469783 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:14:19.470284 master-0 kubenswrapper[4167]: I0216 17:14:19.469829 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:19.470284 master-0 kubenswrapper[4167]: E0216 17:14:19.469874 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:14:19.470841 master-0 kubenswrapper[4167]: I0216 17:14:19.470508 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:19.470841 master-0 kubenswrapper[4167]: E0216 17:14:19.470570 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:14:19.470841 master-0 kubenswrapper[4167]: I0216 17:14:19.470628 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:14:19.471858 master-0 kubenswrapper[4167]: I0216 17:14:19.470980 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:14:19.471858 master-0 kubenswrapper[4167]: I0216 17:14:19.471347 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7" Feb 16 17:14:19.471858 master-0 kubenswrapper[4167]: I0216 17:14:19.471562 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:14:19.471858 master-0 kubenswrapper[4167]: I0216 17:14:19.471645 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:14:19.471858 master-0 kubenswrapper[4167]: I0216 17:14:19.471661 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:19.471858 master-0 kubenswrapper[4167]: E0216 17:14:19.471704 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:19.471858 master-0 kubenswrapper[4167]: I0216 17:14:19.471807 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:14:19.471858 master-0 kubenswrapper[4167]: I0216 17:14:19.471840 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:19.472443 master-0 kubenswrapper[4167]: E0216 17:14:19.471873 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:14:19.472443 master-0 kubenswrapper[4167]: I0216 17:14:19.472075 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:19.472443 master-0 kubenswrapper[4167]: E0216 17:14:19.472120 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:14:19.472443 master-0 kubenswrapper[4167]: I0216 17:14:19.472204 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:14:19.472594 master-0 kubenswrapper[4167]: I0216 17:14:19.472515 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:19.472666 master-0 kubenswrapper[4167]: I0216 17:14:19.472628 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:14:19.472666 master-0 kubenswrapper[4167]: I0216 17:14:19.472664 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:19.472917 master-0 kubenswrapper[4167]: E0216 17:14:19.472837 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:14:19.473173 master-0 kubenswrapper[4167]: E0216 17:14:19.473114 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:14:19.474060 master-0 kubenswrapper[4167]: I0216 17:14:19.473761 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2" Feb 16 17:14:19.474060 master-0 kubenswrapper[4167]: I0216 17:14:19.473834 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:14:19.474060 master-0 kubenswrapper[4167]: I0216 17:14:19.473836 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 17:14:19.474712 master-0 kubenswrapper[4167]: I0216 17:14:19.474210 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 17:14:19.474712 master-0 kubenswrapper[4167]: I0216 17:14:19.474352 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:14:19.474712 master-0 kubenswrapper[4167]: I0216 17:14:19.474486 4167 status_manager.go:875] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29402454-a920-471e-895e-764235d16eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db8e6dde9089415ec50ea395cc6048bd2122d36d369cf40adfb691513d4759ff\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:03:16Z\\\",\\\"message\\\":\\\"I0216 17:02:46.746715 1 cmd.go:253] Using service-serving-cert provided certificates\\\\nI0216 17:02:46.746894 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0216 17:02:46.747307 1 observer_polling.go:159] Starting file observer\\\\nI0216 17:02:46.747774 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0216 17:02:46.747787 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0216 17:02:46.749369 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://172.30.0.1:443/api/v1/namespaces/openshift-service-ca-operator/pods\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\nI0216 17:02:46.749553 1 builder.go:304] service-ca-operator version -\\\\nI0216 17:02:46.750456 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key\\\\\\\"\\\\nF0216 17:03:16.915481 1 cmd.go:182] failed checking apiserver connectivity: Get \\\\\\\"https://172.30.0.1:443/apis/coordination.k8s.io/v1/namespaces/openshift-service-ca-operator/leases/service-ca-operator-lock\\\\\\\": dial tcp 172.30.0.1:443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:02:46Z\\\"}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-5dc4688546-pl7r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:19.475232 master-0 kubenswrapper[4167]: I0216 17:14:19.475110 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 17:14:19.475584 master-0 kubenswrapper[4167]: I0216 17:14:19.475454 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:19.475584 master-0 kubenswrapper[4167]: E0216 17:14:19.475515 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: I0216 17:14:19.475993 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: I0216 17:14:19.476200 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: I0216 17:14:19.476581 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: E0216 17:14:19.476626 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: I0216 17:14:19.476630 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: E0216 17:14:19.476681 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: I0216 17:14:19.478132 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: I0216 17:14:19.478156 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: E0216 17:14:19.478182 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: E0216 17:14:19.478248 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: I0216 17:14:19.478356 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.478482 master-0 kubenswrapper[4167]: E0216 17:14:19.478432 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:14:19.480092 master-0 kubenswrapper[4167]: I0216 17:14:19.479727 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.480092 master-0 kubenswrapper[4167]: I0216 17:14:19.479794 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 17:14:19.480092 master-0 kubenswrapper[4167]: I0216 17:14:19.480027 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 17:14:19.480247 master-0 kubenswrapper[4167]: I0216 17:14:19.480141 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:19.480247 master-0 kubenswrapper[4167]: I0216 17:14:19.480242 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 17:14:19.480788 master-0 kubenswrapper[4167]: I0216 17:14:19.480424 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 17:14:19.480788 master-0 kubenswrapper[4167]: I0216 17:14:19.480608 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 17:14:19.480788 master-0 kubenswrapper[4167]: E0216 17:14:19.480608 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:14:19.480934 master-0 kubenswrapper[4167]: E0216 17:14:19.480822 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" Feb 16 17:14:19.481317 master-0 kubenswrapper[4167]: I0216 17:14:19.481288 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:19.488127 master-0 kubenswrapper[4167]: I0216 17:14:19.487999 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:19.491136 master-0 kubenswrapper[4167]: I0216 17:14:19.491024 4167 status_manager.go:875] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7390ccc6-dfbe-4f51-960c-7628f49bffb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/audit\\\",\\\"name\\\":\\\"audit-policies\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-serving-ca\\\",\\\"name\\\":\\\"etcd-serving-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/encryption-config\\\",\\\"name\\\":\\\"encryption-config\\\"},{\\\"mountPath\\\":\\\"/var/log/oauth-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-66788cb45c-dp9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:19.497109 master-0 kubenswrapper[4167]: I0216 17:14:19.497062 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:19.503451 master-0 kubenswrapper[4167]: E0216 17:14:19.502832 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 16 17:14:19.503451 master-0 kubenswrapper[4167]: E0216 17:14:19.502861 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-startup-monitor-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:19.503451 master-0 kubenswrapper[4167]: E0216 17:14:19.502864 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:19.503451 master-0 kubenswrapper[4167]: E0216 17:14:19.502883 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:14:19.503736 master-0 kubenswrapper[4167]: E0216 17:14:19.503602 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:14:19.503736 master-0 kubenswrapper[4167]: E0216 17:14:19.503622 4167 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:19.504040 master-0 kubenswrapper[4167]: I0216 17:14:19.503862 4167 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/installer-1-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"86c571b6-0f65-41f0-b1be-f63d7a974782\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver\"/\"installer-1-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:19.514352 master-0 kubenswrapper[4167]: I0216 17:14:19.514292 4167 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/installer-2-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b1b4fccc-6bf6-47ac-8ae1-32cad23734da\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd\"/\"installer-2-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:19.524361 master-0 kubenswrapper[4167]: I0216 17:14:19.524266 4167 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"188e42e5-9f9c-42af-ba15-5548c4fa4b52\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: [catalog-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"catalog-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/srv-cert\\\",\\\"name\\\":\\\"srv-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/profile-collector-cert\\\",\\\"name\\\":\\\"profile-collector-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-25g7f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"catalog-operator-588944557d-5drhs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:19.524642 master-0 kubenswrapper[4167]: I0216 17:14:19.524594 4167 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 17:14:19.533844 master-0 kubenswrapper[4167]: I0216 17:14:19.533718 4167 status_manager.go:875] "Failed to update status for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0517b180-00ee-47fe-a8e7-36a3931b7e72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91710f3ffed9e691771d9c2df1c7410a137cbac4dc029e63df70fc8620c4721e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T17:04:00Z\\\",\\\"message\\\":\\\"W0216 17:04:00.752201 1 cmd.go:254] Using insecure, self-signed certificates\\\\nF0216 17:04:00.752428 1 cmd.go:179] mkdir /tmp/serving-cert-2413816709: read-only file system\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:04:00Z\\\"}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbrtz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-7777d5cc66-64vhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:19.544759 master-0 kubenswrapper[4167]: I0216 17:14:19.544684 4167 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"676e24eb-bc42-4b39-8762-94da3ed718e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"message\\\":null,\\\"reason\\\":null,\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"message\\\":null,\\\"reason\\\":null,\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:18Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f79c4f95d84447d2060b3a9bdc40c88a369d3407543549b9827cfbca809475bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:13:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:13:53Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:13:53Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:13:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-02-16T17:13:52Z\\\"}}}],\\\"startTime\\\":\\\"2026-02-16T17:14:18Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:19.552355 master-0 kubenswrapper[4167]: I0216 17:14:19.552254 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:19.552355 master-0 kubenswrapper[4167]: I0216 17:14:19.552296 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:19.552355 master-0 kubenswrapper[4167]: I0216 17:14:19.552319 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:19.552355 master-0 kubenswrapper[4167]: I0216 17:14:19.552341 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.552355 master-0 kubenswrapper[4167]: I0216 17:14:19.552362 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:19.552355 master-0 kubenswrapper[4167]: I0216 17:14:19.552384 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: I0216 17:14:19.552405 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: E0216 17:14:19.552512 4167 configmap.go:193] 
Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: E0216 17:14:19.552530 4167 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: E0216 17:14:19.552571 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.052552839 +0000 UTC m=+1.782999227 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: E0216 17:14:19.552588 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.052578769 +0000 UTC m=+1.783025167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: E0216 17:14:19.552637 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: E0216 17:14:19.552665 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.052656942 +0000 UTC m=+1.783103330 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: I0216 17:14:19.552689 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: E0216 17:14:19.552698 4167 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: E0216 17:14:19.552721 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.052714023 +0000 UTC m=+1.783160421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:19.553031 master-0 kubenswrapper[4167]: I0216 17:14:19.552848 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.553658 master-0 kubenswrapper[4167]: I0216 17:14:19.552877 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:19.553658 master-0 kubenswrapper[4167]: I0216 17:14:19.553554 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.553658 master-0 kubenswrapper[4167]: I0216 17:14:19.553347 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.553832 master-0 kubenswrapper[4167]: I0216 17:14:19.553685 4167 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:19.553832 master-0 kubenswrapper[4167]: I0216 17:14:19.553704 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:19.554015 master-0 kubenswrapper[4167]: I0216 17:14:19.553970 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:19.554015 master-0 kubenswrapper[4167]: E0216 17:14:19.553985 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:19.554137 master-0 kubenswrapper[4167]: E0216 17:14:19.554079 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.054052069 +0000 UTC m=+1.784498507 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:19.554137 master-0 kubenswrapper[4167]: I0216 17:14:19.554000 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:19.554279 master-0 kubenswrapper[4167]: I0216 17:14:19.553919 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:19.554279 master-0 kubenswrapper[4167]: I0216 17:14:19.554154 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:19.554279 master-0 kubenswrapper[4167]: I0216 17:14:19.554206 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.554279 master-0 kubenswrapper[4167]: I0216 17:14:19.554248 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.554507 master-0 kubenswrapper[4167]: I0216 17:14:19.554284 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.554507 master-0 kubenswrapper[4167]: I0216 17:14:19.554317 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:19.554507 master-0 kubenswrapper[4167]: I0216 
17:14:19.554359 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:14:19.554507 master-0 kubenswrapper[4167]: I0216 17:14:19.554401 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:14:19.554507 master-0 kubenswrapper[4167]: I0216 17:14:19.554424 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:14:19.554507 master-0 kubenswrapper[4167]: I0216 17:14:19.554441 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:19.554507 master-0 kubenswrapper[4167]: E0216 17:14:19.554213 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Feb 16 17:14:19.554507 master-0 kubenswrapper[4167]: I0216 17:14:19.554481 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:14:19.554507 master-0 kubenswrapper[4167]: E0216 17:14:19.554507 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.054497591 +0000 UTC m=+1.784943969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
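Interleaved with the failures, the I-level lines trace the volume manager's reconcile loop: reconciler_common.go:245 (VerifyControllerAttachedVolume) admits a volume into the kubelet's tracking, reconciler_common.go:218 (MountVolume started) kicks off setup, and operation_generator.go:637 reports SetUp succeeded. Volumes that need no API object, such as the empty-dir alertmanager-main-db, succeed immediately; only volumes sourced from configMaps and secrets fail with "not registered". A rough sketch of that desired-state versus actual-state pattern follows, with hypothetical types; the real reconciler lives in the kubelet's volumemanager package.

```go
// Sketch of the reconciliation these lines trace: compare what pods need
// mounted (desired) against what is set up (actual), and retry anything
// whose backing object has not registered yet. Hypothetical types.
package main

import "fmt"

type volume struct{ pod, name string }

type states struct {
	desired []volume        // volumes pods need mounted
	actual  map[volume]bool // volumes already set up
	sources map[string]bool // configMaps/secrets "registered" so far
}

func (s *states) reconcile() {
	for _, v := range s.desired {
		if s.actual[v] {
			continue
		}
		fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
		if !s.sources[v.name] {
			// Comparable to: object "ns"/"name" not registered; the
			// operation is re-queued with backoff rather than dropped.
			fmt.Printf("MountVolume.SetUp failed: object %q not registered\n", v.name)
			continue
		}
		s.actual[v] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", v.name, v.pod)
	}
}

func main() {
	s := &states{
		desired: []volume{
			{"prometheus-k8s-0", "serving-certs-ca-bundle"},
			{"alertmanager-main-0", "config-out"},
		},
		actual:  map[volume]bool{},
		sources: map[string]bool{"config-out": true}, // emptyDir-like: no API object needed
	}
	s.reconcile() // first pass: one success, one "not registered"
	s.sources["serving-certs-ca-bundle"] = true
	s.reconcile() // once the object registers, the retry succeeds
}
```

The point of the pattern is that a failure leaves the desired state untouched, so a later reconcile pass simply retries once the source object registers.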
Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554521 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554542 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554559 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554576 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554592 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554609 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554625 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554640 4167 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554656 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554674 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554692 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554707 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554722 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554738 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554753 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554770 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: 
\"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554786 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: I0216 17:14:19.554801 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: E0216 17:14:19.554813 4167 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: E0216 17:14:19.554839 4167 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: E0216 17:14:19.554872 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.054856761 +0000 UTC m=+1.785303199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:19.555065 master-0 kubenswrapper[4167]: E0216 17:14:19.554935 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.054921013 +0000 UTC m=+1.785367401 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.554234 4167 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555218 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.055208771 +0000 UTC m=+1.785655149 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.554478 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555234 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555244 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555268 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.055261762 +0000 UTC m=+1.785708140 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
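The projected.go entries above show why a single volume can carry several errors: a projected volume such as ca-certs merges multiple sources (here the operator-controller-trusted-ca-bundle configMap and openshift-service-ca.crt), preparation fails if any source is still unregistered, and the per-source errors are aggregated into the bracketed list on the nestedpendingoperations line. A minimal sketch of that aggregation, with hypothetical helper names; kubelet's own formatting of the aggregate differs slightly.

```go
// Sketch of the error aggregation visible in the ca-certs failure: a
// projected volume prepares each source and reports every missing one,
// not just the first. Hypothetical helpers; not kubelet's projected.go.
package main

import (
	"errors"
	"fmt"
)

var registered = map[string]bool{} // nothing registered yet at m=+1.78s

func prepareSource(ns, name string) error {
	if !registered[ns+"/"+name] {
		return fmt.Errorf("object %q/%q not registered", ns, name)
	}
	return nil
}

func prepareProjected(ns string, sources []string) error {
	var errs []error
	for _, src := range sources {
		if err := prepareSource(ns, src); err != nil {
			errs = append(errs, err)
		}
	}
	return errors.Join(errs...) // nil once every source is registered
}

func main() {
	err := prepareProjected("openshift-operator-controller",
		[]string{"operator-controller-trusted-ca-bundle", "openshift-service-ca.crt"})
	fmt.Println(err) // both sources reported, as in the log's bracketed list
}
```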
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555480 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.554816 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555537 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555565 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555583 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555599 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555592 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555622 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for
volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555708 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555721 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555770 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.055740845 +0000 UTC m=+1.786187313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555804 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555837 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.055829817 +0000 UTC m=+1.786276185 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555841 4167 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555877 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555922 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555931 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556029 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556092 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.056081304 +0000 UTC m=+1.786527682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556110 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.056103565 +0000 UTC m=+1.786549943 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.555805 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556195 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.056116165 +0000 UTC m=+1.786562543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556208 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.056203167 +0000 UTC m=+1.786649535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.555808 4167 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556222 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.556226 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556241 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.056236208 +0000 UTC m=+1.786682586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered
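At this volume of errors, the useful triage question is whether the "not registered" failures point at one genuinely missing object or span every namespace, which would indicate the kubelet simply has not finished re-syncing after its restart. A rough sketch that tallies distinct failing objects per namespace from a saved journal; the kubelet.log file name and the capture command in the comment are assumptions.

```go
// Rough triage sketch: tally `object "ns"/"name" not registered` errors
// per namespace from a saved journal (e.g. journalctl -u kubelet > kubelet.log).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var notRegistered = regexp.MustCompile(`object "([^"]+)"/"([^"]+)" not registered`)

func main() {
	f, err := os.Open("kubelet.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	perNS := map[string]map[string]bool{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		for _, m := range notRegistered.FindAllStringSubmatch(sc.Text(), -1) {
			ns, name := m[1], m[2]
			if perNS[ns] == nil {
				perNS[ns] = map[string]bool{}
			}
			perNS[ns][name] = true
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	for ns, objs := range perNS {
		fmt.Printf("%-45s %d distinct objects\n", ns, len(objs))
	}
}
```

A spread across openshift-monitoring, openshift-apiserver, openshift-machine-api and the rest, as in this capture, points at kubelet-side sync state rather than any single missing configMap or secret.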
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556277 4167 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556314 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.05630434 +0000 UTC m=+1.786750718 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556245 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556317 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556360 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.056350111 +0000 UTC m=+1.786796589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kube-rbac-proxy" not registered
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.556377 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556397 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.056381582 +0000 UTC m=+1.786828050 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: E0216 17:14:19.556420 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0 podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.056408653 +0000 UTC m=+1.786855071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.556447 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.556465 master-0 kubenswrapper[4167]: I0216 17:14:19.556496 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556531 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556617 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556605 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556650 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: 
\"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556670 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556689 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556708 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556724 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556755 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556773 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556790 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556809 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod 
\"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556827 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556843 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556859 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556877 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556877 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556895 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556916 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.556945 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557005 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557033 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557060 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.05704471 +0000 UTC m=+1.787491228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557094 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557136 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557175 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557217 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557222 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:19.558808 master-0 
kubenswrapper[4167]: E0216 17:14:19.557143 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557294 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.057286617 +0000 UTC m=+1.787732995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557208 4167 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557317 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.057311277 +0000 UTC m=+1.787757755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557328 4167 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557352 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557389 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557258 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557362 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557448 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557459 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557229 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557378 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.057364129 +0000 UTC m=+1.787810597 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557223 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557558 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.057516653 +0000 UTC m=+1.787963041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: E0216 17:14:19.557585 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.057567504 +0000 UTC m=+1.788013892 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557637 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:14:19.558808 master-0 kubenswrapper[4167]: I0216 17:14:19.557671 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.557761 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.057748609 +0000 UTC m=+1.788194987 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.560300 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.060283898 +0000 UTC m=+1.790730376 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.560328 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.060316719 +0000 UTC m=+1.790763217 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.560348 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.060338889 +0000 UTC m=+1.790785387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560374 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560404 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560431 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560457 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560500 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560529 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: 
\"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560557 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560582 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560608 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560634 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560658 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560682 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560709 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560746 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: 
\"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560771 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560802 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560828 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560853 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560880 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560903 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560929 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560951 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.560997 4167 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561023 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561049 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561073 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561098 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561122 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561148 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561172 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561198 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod 
\"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561223 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561247 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561260 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561288 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561314 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561336 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561362 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561446 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.562285 master-0 
kubenswrapper[4167]: I0216 17:14:19.561460 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.558004 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.561706 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.561802 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.061750848 +0000 UTC m=+1.792197306 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.561852 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.561901 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.561919 4167 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.561933 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.561945 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.561985 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle 
podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.061932633 +0000 UTC m=+1.792379031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.562041 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.062028935 +0000 UTC m=+1.792475323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.562103 4167 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.562163 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.562178 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.062154129 +0000 UTC m=+1.792600587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.562231 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.06219291 +0000 UTC m=+1.792639418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.562300 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.562318 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: E0216 17:14:19.562332 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.062322493 +0000 UTC m=+1.792769021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:19.562285 master-0 kubenswrapper[4167]: I0216 17:14:19.562379 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562441 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562496 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562550 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562600 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562652 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.562696 4167 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562706 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.562773 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.062757845 +0000 UTC m=+1.793204343 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562762 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562854 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562913 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.562108 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.562974 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.06294723 +0000 UTC m=+1.793393738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.563030 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.563058 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.063049843 +0000 UTC m=+1.793496371 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.563055 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.563075 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.563084 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.063077724 +0000 UTC m=+1.793524232 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.563129 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.063115315 +0000 UTC m=+1.793561723 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562555 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.563173 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.562858 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.563218 4167 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.563750 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564133 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564195 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564227 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564259 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: 
\"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564285 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564311 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564313 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564398 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.564412 4167 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564426 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564452 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.564513 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.064455731 +0000 UTC m=+1.794902189 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564556 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564611 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564676 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564716 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564717 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564727 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.564749 4167 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564779 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.564793 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.564930 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.064915253 +0000 UTC m=+1.795361741 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565003 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565025 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565070 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.065049817 +0000 UTC m=+1.795496275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565079 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565104 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.065086628 +0000 UTC m=+1.795533126 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565130 4167 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565145 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565170 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.06515743 +0000 UTC m=+1.795603828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565207 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565204 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565245 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565376 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565437 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565465 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565500 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565524 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565569 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565583 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565604 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.065584881 +0000 UTC m=+1.796031389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565645 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.065631493 +0000 UTC m=+1.796077991 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565643 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565684 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.065674354 +0000 UTC m=+1.796120832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565730 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565759 4167 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565784 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565812 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.065795027 +0000 UTC m=+1.796241445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.565844 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.065827628 +0000 UTC m=+1.796274066 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565853 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565883 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.565924 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: E0216 17:14:19.566052 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:19.565914 master-0 kubenswrapper[4167]: I0216 17:14:19.566069 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566098 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566117 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.066097435 +0000 UTC m=+1.796543863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566156 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566155 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566184 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.066173897 +0000 UTC m=+1.796620285 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566212 4167 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566217 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566234 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.066227019 +0000 UTC m=+1.796673407 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566288 4167 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566282 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566325 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.066307881 +0000 UTC m=+1.796754299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566365 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566363 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566387 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.066379503 +0000 UTC m=+1.796825881 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566248 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566417 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:20.066410654 +0000 UTC m=+1.796857042 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566284 4167 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b2561b-933b-4c58-a63a-7a8c671d0ae9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:14:19Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://05aafc42942edec512935971aa649b31b51a37bb17ad3e45d4a47e5503a28fae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:07:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:03:27Z\\\"}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"marketplace-trusted-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"marketplace-operator-metrics\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kx9vc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-6cc5b65c6b-s4gp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566415 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566471 4167 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566497 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566507 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566520 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566558 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.066541577 +0000 UTC m=+1.796987995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566602 4167 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: E0216 17:14:19.566667 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.06665244 +0000 UTC m=+1.797098908 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566781 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566597 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566851 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566887 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566921 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.566977 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.567020 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.567054 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: 
\"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.567089 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.567122 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.567154 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:19.570712 master-0 kubenswrapper[4167]: I0216 17:14:19.567189 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574313 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574394 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574446 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574502 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.577056 master-0 
kubenswrapper[4167]: I0216 17:14:19.574556 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574607 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574658 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574710 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574761 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574810 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574861 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.574913 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575002 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575058 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575114 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575168 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575220 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575271 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575351 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575401 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575461 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575520 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575576 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575626 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575677 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575728 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575780 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575835 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575888 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.575938 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576031 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576087 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576139 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576197 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576252 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576307 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576360 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576409 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576468 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576524 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576579 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576633 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576687 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576739 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576826 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576883 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.576939 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.577035 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:19.577056 master-0 kubenswrapper[4167]: I0216 17:14:19.577093 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.577149 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.577205 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.570000 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.577260 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " 
pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.577376 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.570072 4167 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.577432 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.577478 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.077447642 +0000 UTC m=+1.807894040 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.577537 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.577599 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.577717 4167 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.577766 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.077749701 +0000 UTC m=+1.808196159 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.570538 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.572845 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.573458 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.577891 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.577944 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578005 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.077939216 +0000 UTC m=+1.808385664 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578039 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578046 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.078025118 +0000 UTC m=+1.808471596 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578094 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.078079909 +0000 UTC m=+1.808526367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578224 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578265 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.078251744 +0000 UTC m=+1.808698202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578315 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578352 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.078341027 +0000 UTC m=+1.808787505 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578413 4167 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578450 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578485 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.578491 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578505 4167 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578562 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578462 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.078438759 +0000 UTC m=+1.808885227 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578583 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578599 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578612 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578616 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.078594613 +0000 UTC m=+1.809041111 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578672 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.078655315 +0000 UTC m=+1.809101833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578695 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.078684216 +0000 UTC m=+1.809130684 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578706 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578763 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.078742447 +0000 UTC m=+1.809188925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.578909 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579067 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579080 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579120 4167 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579139 4167 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579143 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079124108 +0000 UTC m=+1.809570606 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579153 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.579194 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579211 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079186999 +0000 UTC m=+1.809633467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.577390 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579270 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079258251 +0000 UTC m=+1.809704749 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579279 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579348 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:20.079335753 +0000 UTC m=+1.809782141 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579362 4167 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579381 4167 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579395 4167 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579425 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.570159 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579482 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079468787 +0000 UTC m=+1.809915305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.572989 4167 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579512 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079497018 +0000 UTC m=+1.809943596 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579536 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079525439 +0000 UTC m=+1.809971957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: I0216 17:14:19.579546 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579564 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079552029 +0000 UTC m=+1.809998537 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.570209 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.572424 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579089 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579732 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579738 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579735 4167 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579811 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579759 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079745205 +0000 UTC m=+1.810191703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579848 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079835537 +0000 UTC m=+1.810282035 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579872 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079860328 +0000 UTC m=+1.810306806 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579887 4167 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579893 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079884458 +0000 UTC m=+1.810330866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579922 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079909499 +0000 UTC m=+1.810356017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579945 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.07993452 +0000 UTC m=+1.810381018 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579992 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:19.579886 master-0 kubenswrapper[4167]: E0216 17:14:19.579995 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.079983491 +0000 UTC m=+1.810429899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580061 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.080039402 +0000 UTC m=+1.810485910 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580161 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580174 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580241 4167 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580279 4167 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:19.589141 master-0 
kubenswrapper[4167]: E0216 17:14:19.580302 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.080285679 +0000 UTC m=+1.810732077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580323 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580362 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.080350561 +0000 UTC m=+1.810797069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580395 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580406 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.080396112 +0000 UTC m=+1.810842630 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580468 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580524 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580539 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580542 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580552 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580556 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.080543006 +0000 UTC m=+1.810989404 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.572326 4167 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580584 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.080573557 +0000 UTC m=+1.811020045 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580528 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580587 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580609 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.080595688 +0000 UTC m=+1.811042186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580491 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580641 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.080632779 +0000 UTC m=+1.811079297 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580640 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580665 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.080654149 +0000 UTC m=+1.811100657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580697 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580736 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580780 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580798 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580818 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " 
pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580857 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580897 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580897 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.580914 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.580949 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581006 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.581021 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.081007419 +0000 UTC m=+1.811453817 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581053 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581080 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581135 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgm4p\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581186 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581223 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581227 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581283 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581321 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " 
pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.581379 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.581422 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.08140752 +0000 UTC m=+1.811853918 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581451 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581493 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581533 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581572 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581610 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581651 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581689 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581726 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581760 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581794 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581829 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581870 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.581891 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582002 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.081984385 +0000 UTC m=+1.812430753 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.581908 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjpvn\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582092 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582142 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582180 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582217 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582258 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582295 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582331 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582364 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582365 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582423 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582426 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582444 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582467 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.082453008 +0000 UTC m=+1.812899406 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582479 4167 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582496 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582504 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.082497209 +0000 UTC m=+1.812943587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582534 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582544 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582555 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.08254941 +0000 UTC m=+1.812995788 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582581 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582621 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582662 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582690 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582707 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582750 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582781 4167 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582792 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " 
pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.582942 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.583639 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582803 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.082796777 +0000 UTC m=+1.813243155 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.583716 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.583736 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.583741 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.583753 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.583772 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.589141 master-0 
kubenswrapper[4167]: E0216 17:14:19.583799 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.083782914 +0000 UTC m=+1.814229382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.583836 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.583880 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.583923 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.583986 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584029 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584065 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584104 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod 
\"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584136 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584175 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584217 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584257 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584295 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584336 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584374 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584405 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582869 4167 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584413 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.584458 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.084447432 +0000 UTC m=+1.814893810 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.584499 4167 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.584519 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.084513484 +0000 UTC m=+1.814959862 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: I0216 17:14:19.584487 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.584553 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.584572 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:20.084566965 +0000 UTC m=+1.815013343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.582904 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:19.589141 master-0 kubenswrapper[4167]: E0216 17:14:19.584598 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.084592776 +0000 UTC m=+1.815039154 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.584618 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.584638 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.584656 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.582925 4167 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.582977 4167 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.583536 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.584703 4167 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.084696019 +0000 UTC m=+1.815142397 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.584715 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.084709999 +0000 UTC m=+1.815156377 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.584757 4167 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.584800 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.084786191 +0000 UTC m=+1.815232699 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.584863 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.584877 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.584897 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.583589 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.584941 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.084934935 +0000 UTC m=+1.815381313 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.583628 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.583655 4167 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.583692 4167 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.584981 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585004 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.084999457 +0000 UTC m=+1.815445835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585002 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585028 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085023997 +0000 UTC m=+1.815470375 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585046 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585056 4167 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585133 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585245 4167 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585261 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585300 4167 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585385 4167 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585079 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085073439 +0000 UTC m=+1.815519817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585422 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085415358 +0000 UTC m=+1.815861736 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585433 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085428098 +0000 UTC m=+1.815874616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585443 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085438719 +0000 UTC m=+1.815885097 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585443 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585472 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585485 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585504 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585511 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585535 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585551 4167 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod 
openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585647 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.585661 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585666 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585692 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585457 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085449929 +0000 UTC m=+1.815896307 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585727 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085720266 +0000 UTC m=+1.816166644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585742 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085737257 +0000 UTC m=+1.816183635 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585749 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585759 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085753617 +0000 UTC m=+1.816199995 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585769 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.585779 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585782 4167 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585812 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085807279 +0000 UTC m=+1.816253657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585827 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:20.085822389 +0000 UTC m=+1.816268767 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585841 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085832709 +0000 UTC m=+1.816279087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585853 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.08584788 +0000 UTC m=+1.816294258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585866 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.08586047 +0000 UTC m=+1.816306848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585878 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.08587202 +0000 UTC m=+1.816318398 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.585890 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.085885721 +0000 UTC m=+1.816332099 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.585908 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.585924 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.585942 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586047 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.586143 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586159 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.595476 master-0 
kubenswrapper[4167]: E0216 17:14:19.586184 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.086169958 +0000 UTC m=+1.816616356 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586311 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.586331 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.586402 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.08624018 +0000 UTC m=+1.816686678 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586448 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586492 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586530 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586573 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586611 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586648 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586688 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586729 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586768 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586807 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586843 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586881 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586918 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.586952 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587423 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587449 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587465 4167 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod 
openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587513 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.087497634 +0000 UTC m=+1.817944132 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587581 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587620 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.087607947 +0000 UTC m=+1.818054345 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587724 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.08771234 +0000 UTC m=+1.818158748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587784 4167 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587820 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.087809143 +0000 UTC m=+1.818255551 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.587977 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.588020 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.088007708 +0000 UTC m=+1.818454106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.588343 4167 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.588383 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.088371788 +0000 UTC m=+1.818818186 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.588448 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.588483 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.088472231 +0000 UTC m=+1.818918639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.588819 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: E0216 17:14:19.588862 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.088848921 +0000 UTC m=+1.819295329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.588927 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589008 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589053 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589096 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589137 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: 
I0216 17:14:19.589178 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589216 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589260 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589297 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589335 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589373 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589415 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589455 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589496 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589531 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589569 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589607 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589644 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589721 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589763 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589801 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589850 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.595476 master-0 kubenswrapper[4167]: I0216 17:14:19.589892 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.589933 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.589994 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590059 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590119 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.090105515 +0000 UTC m=+1.820551923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590299 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590348 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.090332571 +0000 UTC m=+1.820779069 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590398 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590432 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.090421383 +0000 UTC m=+1.820867861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.590462 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590478 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590551 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.090536897 +0000 UTC m=+1.820983385 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590781 4167 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590824 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.090811044 +0000 UTC m=+1.821257442 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.590831 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590897 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.590934 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.090923007 +0000 UTC m=+1.821369525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.591213 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.591232 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.591242 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.591274 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.091264656 +0000 UTC m=+1.821711034 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.591406 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.591461 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.091445821 +0000 UTC m=+1.821892269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.591528 4167 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.591625 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.091594925 +0000 UTC m=+1.822041413 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.591771 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.591845 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.591855 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.591892 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.591936 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.591996 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592031 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592040 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592092 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.092076548 +0000 UTC m=+1.822522976 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592169 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592199 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592281 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.092266683 +0000 UTC m=+1.822713081 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592282 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592362 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592394 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.092382957 +0000 UTC m=+1.822829325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592423 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592451 4167 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592471 4167 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592472 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592486 4167 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592514 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592528 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.09251443 +0000 UTC m=+1.822960918 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592566 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592597 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592612 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592631 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592642 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.092630443 +0000 UTC m=+1.823076931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592671 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592688 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.092678995 +0000 UTC m=+1.823125383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592668 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592709 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.092697375 +0000 UTC m=+1.823143773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592738 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592773 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592794 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592807 4167 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592772 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592863 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: 
\"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592876 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592921 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592936 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.092925141 +0000 UTC m=+1.823371539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592922 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592942 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592987 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.592999 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.593014 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.093006413 +0000 UTC m=+1.823452811 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.593033 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.093022754 +0000 UTC m=+1.823469122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.593049 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.593055 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.592995 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.593079 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.093069865 +0000 UTC m=+1.823516323 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.593100 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.593166 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.593208 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.093195618 +0000 UTC m=+1.823642016 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.597347 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.597364 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.597377 4167 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.597409 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.097398192 +0000 UTC m=+1.827844680 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.597868 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.597885 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.597895 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.597931 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.097920406 +0000 UTC m=+1.828366884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.597930 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.598033 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.598107 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.598130 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object 
"openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.598145 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.598189 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.098174723 +0000 UTC m=+1.828621201 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.598586 4167 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.598603 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.598614 4167 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: E0216 17:14:19.598648 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.098637376 +0000 UTC m=+1.829083764 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.599050 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.600054 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:19.601116 master-0 kubenswrapper[4167]: I0216 17:14:19.600941 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:19.619509 master-0 kubenswrapper[4167]: I0216 17:14:19.619469 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:19.631043 master-0 kubenswrapper[4167]: E0216 17:14:19.630877 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.631043 master-0 kubenswrapper[4167]: E0216 17:14:19.631029 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.631043 master-0 kubenswrapper[4167]: E0216 17:14:19.631041 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.631259 master-0 kubenswrapper[4167]: E0216 17:14:19.631091 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.131075364 +0000 UTC m=+1.861521742 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.654823 master-0 kubenswrapper[4167]: E0216 17:14:19.654752 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.654823 master-0 kubenswrapper[4167]: E0216 17:14:19.654783 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.654823 master-0 kubenswrapper[4167]: E0216 17:14:19.654798 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.654985 master-0 kubenswrapper[4167]: E0216 17:14:19.654863 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.154842657 +0000 UTC m=+1.885289085 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.674696 master-0 kubenswrapper[4167]: E0216 17:14:19.674607 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:19.674696 master-0 kubenswrapper[4167]: E0216 17:14:19.674641 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.674696 master-0 kubenswrapper[4167]: E0216 17:14:19.674656 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.674891 master-0 kubenswrapper[4167]: E0216 17:14:19.674727 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.174696924 +0000 UTC m=+1.905143312 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.691994 master-0 kubenswrapper[4167]: I0216 17:14:19.691894 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:14:19.694205 master-0 kubenswrapper[4167]: I0216 17:14:19.694168 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.694323 master-0 kubenswrapper[4167]: I0216 17:14:19.694235 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.694394 master-0 kubenswrapper[4167]: I0216 17:14:19.694374 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.694467 master-0 kubenswrapper[4167]: I0216 17:14:19.694416 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.694622 master-0 kubenswrapper[4167]: I0216 17:14:19.694574 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.694622 master-0 kubenswrapper[4167]: I0216 17:14:19.694587 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.694763 master-0 kubenswrapper[4167]: I0216 17:14:19.694646 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.694763 master-0 kubenswrapper[4167]: I0216 17:14:19.694731 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.694763 master-0 kubenswrapper[4167]: I0216 17:14:19.694757 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.694948 master-0 kubenswrapper[4167]: I0216 17:14:19.694831 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.694948 master-0 kubenswrapper[4167]: I0216 17:14:19.694843 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.695118 master-0 kubenswrapper[4167]: I0216 17:14:19.694982 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:19.695118 master-0 kubenswrapper[4167]: I0216 17:14:19.695026 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.695118 master-0 kubenswrapper[4167]: I0216 17:14:19.695053 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.695118 master-0 kubenswrapper[4167]: I0216 17:14:19.695026 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:19.695118 master-0 kubenswrapper[4167]: I0216 17:14:19.695063 4167 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.695118 master-0 kubenswrapper[4167]: I0216 17:14:19.695089 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.695118 master-0 kubenswrapper[4167]: I0216 17:14:19.695107 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.695609 master-0 kubenswrapper[4167]: I0216 17:14:19.695191 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.695609 master-0 kubenswrapper[4167]: I0216 17:14:19.695197 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.695609 master-0 kubenswrapper[4167]: I0216 17:14:19.695236 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.695989 master-0 kubenswrapper[4167]: I0216 17:14:19.695892 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.696692 master-0 kubenswrapper[4167]: I0216 17:14:19.696613 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.696920 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.697470 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.697947 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.698402 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.698483 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.698616 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.698755 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.698940 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.699176 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.699920 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.700310 4167 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.700403 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.700473 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.700494 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.700547 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.700611 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.700263 master-0 kubenswrapper[4167]: I0216 17:14:19.700651 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.701077 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.701151 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " 
pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.701224 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.701311 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.701362 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.701317 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.701545 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.701648 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.702053 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.702216 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.703056 master-0 kubenswrapper[4167]: I0216 17:14:19.703074 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703123 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703153 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703179 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703197 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703245 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703289 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703354 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703361 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703417 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod 
\"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703603 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703733 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703848 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703866 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703905 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.703944 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704084 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704172 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704254 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 
kubenswrapper[4167]: I0216 17:14:19.704283 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704322 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704353 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704374 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704409 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704426 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704431 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704435 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704466 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704504 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704623 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704643 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704708 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704752 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704830 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704834 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.704862 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.705002 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 
17:14:19.705124 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.705162 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.705271 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.705274 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.705311 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.705311 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.705370 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.705466 master-0 kubenswrapper[4167]: I0216 17:14:19.705442 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.705548 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.705597 4167 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.705698 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.705845 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.705914 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.705993 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706040 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706139 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706189 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706223 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " 
pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706318 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706412 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706452 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706513 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706665 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706695 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706751 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706818 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706862 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 
17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706917 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706937 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.706999 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.707032 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.707081 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.707118 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.707252 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.709341 master-0 kubenswrapper[4167]: I0216 17:14:19.707402 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:14:19.721768 master-0 kubenswrapper[4167]: E0216 17:14:19.721728 4167 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.721768 master-0 kubenswrapper[4167]: E0216 17:14:19.721771 4167 projected.go:288] Couldn't get configMap 
openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.722032 master-0 kubenswrapper[4167]: E0216 17:14:19.721785 4167 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.722032 master-0 kubenswrapper[4167]: E0216 17:14:19.721856 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.221829759 +0000 UTC m=+1.952276207 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.734362 master-0 kubenswrapper[4167]: I0216 17:14:19.734311 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:14:19.753557 master-0 kubenswrapper[4167]: I0216 17:14:19.753392 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:19.780511 master-0 kubenswrapper[4167]: E0216 17:14:19.780374 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:19.780511 master-0 kubenswrapper[4167]: E0216 17:14:19.780412 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.780511 master-0 kubenswrapper[4167]: E0216 17:14:19.780466 4167 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.782800 master-0 kubenswrapper[4167]: E0216 17:14:19.780559 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.280529037 +0000 UTC m=+2.010975425 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.799846 master-0 kubenswrapper[4167]: I0216 17:14:19.799792 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:19.817842 master-0 kubenswrapper[4167]: I0216 17:14:19.817762 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:14:19.863371 master-0 kubenswrapper[4167]: E0216 17:14:19.863310 4167 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.863371 master-0 kubenswrapper[4167]: E0216 17:14:19.863364 4167 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.863586 master-0 kubenswrapper[4167]: E0216 17:14:19.863383 4167 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.863586 master-0 kubenswrapper[4167]: E0216 17:14:19.863460 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.363427251 +0000 UTC m=+2.093873639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.876621 master-0 kubenswrapper[4167]: E0216 17:14:19.876586 4167 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.876758 master-0 kubenswrapper[4167]: E0216 17:14:19.876630 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.876758 master-0 kubenswrapper[4167]: E0216 17:14:19.876690 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.376672939 +0000 UTC m=+2.107119317 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.902485 master-0 kubenswrapper[4167]: E0216 17:14:19.902457 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:19.902485 master-0 kubenswrapper[4167]: E0216 17:14:19.902485 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.902617 master-0 kubenswrapper[4167]: E0216 17:14:19.902499 4167 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.902617 master-0 kubenswrapper[4167]: E0216 17:14:19.902552 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.402534809 +0000 UTC m=+2.132981187 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.913657 master-0 kubenswrapper[4167]: E0216 17:14:19.913616 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.913657 master-0 kubenswrapper[4167]: E0216 17:14:19.913658 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.913769 master-0 kubenswrapper[4167]: E0216 17:14:19.913675 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.913769 master-0 kubenswrapper[4167]: E0216 17:14:19.913752 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.413732442 +0000 UTC m=+2.144178820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.942619 master-0 kubenswrapper[4167]: E0216 17:14:19.942338 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.942619 master-0 kubenswrapper[4167]: E0216 17:14:19.942368 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.942619 master-0 kubenswrapper[4167]: E0216 17:14:19.942379 4167 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.942619 master-0 kubenswrapper[4167]: E0216 17:14:19.942430 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.442412608 +0000 UTC m=+2.172858976 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.956365 master-0 kubenswrapper[4167]: I0216 17:14:19.956310 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:19.977411 master-0 kubenswrapper[4167]: E0216 17:14:19.977376 4167 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:19.977471 master-0 kubenswrapper[4167]: E0216 17:14:19.977413 4167 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:19.977471 master-0 kubenswrapper[4167]: E0216 17:14:19.977425 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:19.977543 master-0 kubenswrapper[4167]: E0216 17:14:19.977479 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.477462396 +0000 UTC m=+2.207908774 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.002519 master-0 kubenswrapper[4167]: E0216 17:14:20.002490 4167 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:14:20.002519 master-0 kubenswrapper[4167]: E0216 17:14:20.002521 4167 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.002622 master-0 kubenswrapper[4167]: E0216 17:14:20.002533 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.002622 master-0 kubenswrapper[4167]: E0216 17:14:20.002586 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.502569246 +0000 UTC m=+2.233015624 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.013601 master-0 kubenswrapper[4167]: I0216 17:14:20.013567 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:20.036903 master-0 kubenswrapper[4167]: I0216 17:14:20.036846 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:20.051931 master-0 kubenswrapper[4167]: I0216 17:14:20.047032 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:14:20.056196 master-0 kubenswrapper[4167]: E0216 17:14:20.056123 4167 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.056196 master-0 kubenswrapper[4167]: E0216 17:14:20.056159 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.056457 master-0 kubenswrapper[4167]: E0216 17:14:20.056226 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.556197967 +0000 UTC m=+2.286644355 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.085539 master-0 kubenswrapper[4167]: W0216 17:14:20.085484 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab80e0fb_09dd_4c93_b235_1487024105d2.slice/crio-0f552d93f93fd9732af72437d235c69c813e130e35741b2d3290feef041fca23 WatchSource:0}: Error finding container 0f552d93f93fd9732af72437d235c69c813e130e35741b2d3290feef041fca23: Status 404 returned error can't find the container with id 0f552d93f93fd9732af72437d235c69c813e130e35741b2d3290feef041fca23 Feb 16 17:14:20.091522 master-0 kubenswrapper[4167]: E0216 17:14:20.091452 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:20.091522 master-0 kubenswrapper[4167]: E0216 17:14:20.091492 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.091522 master-0 kubenswrapper[4167]: E0216 17:14:20.091508 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.091993 master-0 kubenswrapper[4167]: E0216 17:14:20.091574 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.591550873 +0000 UTC m=+2.321997251 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.097991 master-0 kubenswrapper[4167]: I0216 17:14:20.097439 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:14:20.112688 master-0 kubenswrapper[4167]: I0216 17:14:20.112641 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:20.119457 master-0 kubenswrapper[4167]: I0216 17:14:20.119401 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:20.119457 master-0 kubenswrapper[4167]: I0216 17:14:20.119455 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:20.119555 master-0 kubenswrapper[4167]: I0216 17:14:20.119479 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:20.119555 master-0 kubenswrapper[4167]: I0216 17:14:20.119502 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.119668 master-0 kubenswrapper[4167]: E0216 17:14:20.119616 4167 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:20.119709 master-0 kubenswrapper[4167]: I0216 17:14:20.119691 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:20.119754 master-0 kubenswrapper[4167]: E0216 17:14:20.119710 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.119691665 +0000 UTC m=+2.850138103 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:20.119807 master-0 kubenswrapper[4167]: I0216 17:14:20.119752 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:20.119807 master-0 kubenswrapper[4167]: E0216 17:14:20.119765 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:20.119807 master-0 kubenswrapper[4167]: I0216 17:14:20.119799 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:20.119907 master-0 kubenswrapper[4167]: E0216 17:14:20.119816 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.119803258 +0000 UTC m=+2.850249636 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:20.119907 master-0 kubenswrapper[4167]: I0216 17:14:20.119855 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:20.119907 master-0 kubenswrapper[4167]: E0216 17:14:20.119877 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:20.119907 master-0 kubenswrapper[4167]: I0216 17:14:20.119891 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:20.120053 master-0 kubenswrapper[4167]: E0216 17:14:20.119899 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.11989299 +0000 UTC m=+2.850339368 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:20.120053 master-0 kubenswrapper[4167]: E0216 17:14:20.120030 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:20.120053 master-0 kubenswrapper[4167]: I0216 17:14:20.120038 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:20.120053 master-0 kubenswrapper[4167]: E0216 17:14:20.120054 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120046614 +0000 UTC m=+2.850492992 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: I0216 17:14:20.120076 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.120096 4167 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.120135 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120125186 +0000 UTC m=+2.850571614 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.120143 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: I0216 17:14:20.120158 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.120189 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120159267 +0000 UTC m=+2.850605645 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: I0216 17:14:20.120206 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.120224 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: I0216 17:14:20.120230 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.119940 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: I0216 17:14:20.120251 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.120273 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.12026448 +0000 UTC m=+2.850710928 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.120289 4167 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.120310 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:21.120304331 +0000 UTC m=+2.850750709 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:20.120300 master-0 kubenswrapper[4167]: E0216 17:14:20.119980 4167 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: I0216 17:14:20.120329 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120349 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120338542 +0000 UTC m=+2.850785010 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120367 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120360313 +0000 UTC m=+2.850806811 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120368 4167 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120379 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120396 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120402 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120395204 +0000 UTC m=+2.850841672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.119980 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120417 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120410754 +0000 UTC m=+2.850857132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120430 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120424885 +0000 UTC m=+2.850871253 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: I0216 17:14:20.120446 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120476 4167 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120491 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: I0216 17:14:20.120497 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120513 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120504167 +0000 UTC m=+2.850950645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120533 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120521877 +0000 UTC m=+2.850968345 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120540 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120562 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120557138 +0000 UTC m=+2.851003516 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120567 4167 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120583 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120577929 +0000 UTC m=+2.851024307 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: I0216 17:14:20.120584 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120598 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120592999 +0000 UTC m=+2.851039377 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120630 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: I0216 17:14:20.120638 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120657 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120647611 +0000 UTC m=+2.851094109 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: I0216 17:14:20.120679 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120710 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: I0216 17:14:20.120716 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120741 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120732293 +0000 UTC m=+2.851178771 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120757 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: I0216 17:14:20.120764 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:20.120759 master-0 kubenswrapper[4167]: E0216 17:14:20.120786 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120777764 +0000 UTC m=+2.851224142 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.120819 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.120829 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.120846 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120838796 +0000 UTC m=+2.851285164 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.120861 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.120852596 +0000 UTC m=+2.851299084 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.120881 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.120916 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.120944 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.120993 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121019 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121045 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121093 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121117 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121140 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121171 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121195 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121223 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121246 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121269 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121291 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.121318 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" 
(UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121417 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121431 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121453 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.121443522 +0000 UTC m=+2.851889900 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121470 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.121462073 +0000 UTC m=+2.851908511 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121500 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121530 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.121521494 +0000 UTC m=+2.851967962 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121558 4167 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121586 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.121577656 +0000 UTC m=+2.852024034 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121609 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121710 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121767 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121815 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121863 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121903 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121981 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.121994 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122007 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod 
openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122057 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122101 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122139 4167 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122191 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122243 4167 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122256 4167 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122265 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122321 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.121624517 +0000 UTC m=+2.852070995 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122343 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122333886 +0000 UTC m=+2.852780324 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122374 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122363167 +0000 UTC m=+2.852809645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122389 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122382378 +0000 UTC m=+2.852828756 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.122412 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.122447 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.122480 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122489 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.122514 
4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122540 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122525981 +0000 UTC m=+2.852972369 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.122567 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122575 4167 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122607 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122597553 +0000 UTC m=+2.853043931 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122621 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122613934 +0000 UTC m=+2.853060392 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122647 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:21.122628934 +0000 UTC m=+2.853075312 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122662 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122655025 +0000 UTC m=+2.853101523 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122675 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122677 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122669265 +0000 UTC m=+2.853115643 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122697 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122690196 +0000 UTC m=+2.853136574 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122711 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122704546 +0000 UTC m=+2.853151014 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122724 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122717927 +0000 UTC m=+2.853164305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122736 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122730297 +0000 UTC m=+2.853176675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122749 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122742887 +0000 UTC m=+2.853189335 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122772 4167 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122796 4167 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122812 4167 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122844 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122774 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122893 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122905 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122906 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122924 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122850 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.12283874 +0000 UTC m=+2.853285178 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.122995 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.122982754 +0000 UTC m=+2.853429182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.123018 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.123008944 +0000 UTC m=+2.853455432 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123090 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123279 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: E0216 17:14:20.123364 4167 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123447 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: 
\"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123499 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123553 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123598 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123672 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123756 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123798 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123841 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123885 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123923 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.123996 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.124043 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.124085 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.124124 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.124161 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.124204 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.124245 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: 
\"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.124282 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.124326 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:20.124257 master-0 kubenswrapper[4167]: I0216 17:14:20.124373 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124413 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124457 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124497 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124537 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124585 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124623 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124660 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124698 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124736 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124776 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124820 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124860 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.124914 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" 
(UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.124995 4167 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.125013 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125038 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125028049 +0000 UTC m=+2.855474427 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.125068 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.125107 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125114 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125166 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125195 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125153192 +0000 UTC m=+2.855599580 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125220 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125209754 +0000 UTC m=+2.855656242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125234 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0 podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125226764 +0000 UTC m=+2.855673262 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125198 4167 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125314 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125303957 +0000 UTC m=+2.855750435 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125320 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125351 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125341498 +0000 UTC m=+2.855787876 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125354 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125411 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125395979 +0000 UTC m=+2.855842367 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125479 4167 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125497 4167 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125507 4167 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125539 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125527673 +0000 UTC m=+2.855974141 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125569 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125587 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125600 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125635 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125623885 +0000 UTC m=+2.856070273 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125659 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125692 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125683677 +0000 UTC m=+2.856130255 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125768 4167 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125816 4167 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125824 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.12581427 +0000 UTC m=+2.856260648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125860 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125853001 +0000 UTC m=+2.856299379 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125860 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125881 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125890 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125882392 +0000 UTC m=+2.856328890 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125909 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:21.125901773 +0000 UTC m=+2.856348151 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125934 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125945 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.125954 4167 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126000 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.125992625 +0000 UTC m=+2.856439113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126024 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126109 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126050877 +0000 UTC m=+2.856497255 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126128 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126151 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126165 4167 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126202 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126191471 +0000 UTC m=+2.856637859 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126217 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126229 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126237 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126260 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126253582 +0000 UTC m=+2.856699960 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126296 4167 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126315 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126309104 +0000 UTC m=+2.856755482 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126346 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126364 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126358815 +0000 UTC m=+2.856805193 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126417 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126433 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126442 4167 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126453 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126476 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126464718 +0000 UTC m=+2.856911146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126489 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126496 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126487959 +0000 UTC m=+2.856934447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126512 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:21.126504819 +0000 UTC m=+2.856951317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126526 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126546 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.12653977 +0000 UTC m=+2.856986148 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126583 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126617 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126605962 +0000 UTC m=+2.857052350 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126623 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126658 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126645863 +0000 UTC m=+2.857092331 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126675 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126688 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126698 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126721 4167 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126732 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126720235 +0000 UTC m=+2.857166693 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126753 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126745076 +0000 UTC m=+2.857191554 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126808 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126823 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126833 4167 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126859 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126851568 +0000 UTC m=+2.857298046 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126890 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126917 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.1269105 +0000 UTC m=+2.857356868 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126917 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126969 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126978 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126989 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.126983682 +0000 UTC m=+2.857430060 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.126991 4167 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127015 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127034 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127029433 +0000 UTC m=+2.857475811 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127048 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:21.127041484 +0000 UTC m=+2.857487862 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.127089 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127124 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127137 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127150 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127162 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127151627 +0000 UTC m=+2.857598095 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.127129 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127198 4167 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127205 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127174267 +0000 UTC m=+2.857620655 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127224 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127215478 +0000 UTC m=+2.857661996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.127249 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127256 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127327 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127337 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.127310 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127287 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127328 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127318381 +0000 UTC m=+2.857764839 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127396 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127418 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127395143 +0000 UTC m=+2.857841561 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127437 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127428384 +0000 UTC m=+2.857874782 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127305 4167 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: I0216 17:14:20.127462 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127467 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.130378 master-0 kubenswrapper[4167]: E0216 17:14:20.127502 4167 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127486 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127473095 +0000 UTC m=+2.857919613 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127533 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127550 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127554 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127560 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127550 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127541457 +0000 UTC m=+2.857987955 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.127534 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127591 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127579808 +0000 UTC m=+2.858026196 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127650 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127619419 +0000 UTC m=+2.858065897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.127678 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.127709 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.127737 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127806 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127848 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128084 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.127870186 +0000 UTC m=+2.858316554 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128118 4167 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128131 4167 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.127804 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.127849 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128171 4167 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128179 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.128143633 +0000 UTC m=+2.858590081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128199 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.128189815 +0000 UTC m=+2.858636273 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128223 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128285 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128319 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128373 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128428 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128472 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128523 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128613 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 
17:14:20.128642 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.128633767 +0000 UTC m=+2.859080215 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128645 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128686 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128689 4167 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128717 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128753 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.128730759 +0000 UTC m=+2.859177167 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128782 4167 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128793 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128835 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.128803261 +0000 UTC m=+2.859249639 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128695 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128859 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.128846532 +0000 UTC m=+2.859292940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.128892 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128912 4167 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128929 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128939 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128931 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.128919684 +0000 UTC m=+2.859366102 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129005 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.128994346 +0000 UTC m=+2.859440774 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129005 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129018 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129035 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.129027167 +0000 UTC m=+2.859473645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.128938 4167 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129059 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.129048288 +0000 UTC m=+2.859494696 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129077 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129081 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.129070528 +0000 UTC m=+2.859516946 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129123 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129150 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.12911748 +0000 UTC m=+2.859563918 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129175 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129204 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129229 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129250 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129270 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129289 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129311 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129334 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod 
\"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129356 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129379 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129397 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129416 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129437 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129455 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129473 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129494 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129628 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129654 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129672 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129707 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129712 4167 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129731 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.129724186 +0000 UTC m=+2.860170564 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129750 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.129740677 +0000 UTC m=+2.860187105 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129770 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129772 4167 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129812 4167 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129831 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129788 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.129782638 +0000 UTC m=+2.860229006 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129860 4167 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.129782 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129879 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.12986425 +0000 UTC m=+2.860310658 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129897 4167 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129906 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.129892411 +0000 UTC m=+2.860338829 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129799 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129933 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.129923042 +0000 UTC m=+2.860369450 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129878 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129987 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.129951082 +0000 UTC m=+2.860397490 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.129998 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130008 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130010 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130000244 +0000 UTC m=+2.860446652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130033 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130025444 +0000 UTC m=+2.860471822 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.130061 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130082 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130074816 +0000 UTC m=+2.860521184 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130096 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:21.130088546 +0000 UTC m=+2.860535014 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130107 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130101216 +0000 UTC m=+2.860547594 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130118 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130113467 +0000 UTC m=+2.860559835 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130217 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130260 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.13025074 +0000 UTC m=+2.860697308 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.130315 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130344 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130400 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130444 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130413595 +0000 UTC m=+2.860860103 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130478 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130543 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130608 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: I0216 17:14:20.130351 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130667 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130683 4167 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130562 4167 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130465956 +0000 UTC m=+2.860912404 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130703 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130448 4167 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130726 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130709633 +0000 UTC m=+2.861156201 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130737 4167 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130745 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130736294 +0000 UTC m=+2.861182782 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130762 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130752264 +0000 UTC m=+2.861198652 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130788 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130770644 +0000 UTC m=+2.861217113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:20.135837 master-0 kubenswrapper[4167]: E0216 17:14:20.130819 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130805045 +0000 UTC m=+2.861251573 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.130848 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130834116 +0000 UTC m=+2.861280594 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.130977 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.130952379 +0000 UTC m=+2.861398917 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.130943 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.130996 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.13098845 +0000 UTC m=+2.861434928 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131051 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131125 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131184 4167 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131242 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131317 4167 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131336 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.131280368 +0000 UTC m=+2.861726826 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131368 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131399 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131426 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.131409332 +0000 UTC m=+2.861855790 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131446 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131451 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131463 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.131448163 +0000 UTC m=+2.861894641 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131486 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.131476564 +0000 UTC m=+2.861923032 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131509 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131565 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131594 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131622 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131676 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131704 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131732 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " 
pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131760 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131787 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131815 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.131840 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.131844 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.132083 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.132167 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132209 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.132185643 +0000 UTC m=+2.862632111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132224 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132247 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132290 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132290 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.132275855 +0000 UTC m=+2.862722323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132326 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.132315186 +0000 UTC m=+2.862761654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132330 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132359 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.132350787 +0000 UTC m=+2.862797185 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132384 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132397 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.132390308 +0000 UTC m=+2.862836696 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132412 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.132405499 +0000 UTC m=+2.862851897 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.132383 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132464 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.13244281 +0000 UTC m=+2.862889268 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132486 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132512 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.132503811 +0000 UTC m=+2.862950209 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132539 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.132544 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132579 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.132570903 +0000 UTC m=+2.863017301 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132467 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132650 4167 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132473 4167 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133006 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.132651 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.132628825 +0000 UTC m=+2.863075293 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133076 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133063387 +0000 UTC m=+2.863509765 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133115 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133124 4167 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133145 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133184 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133163729 +0000 UTC m=+2.863610187 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133208 4167 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133221 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.13320543 +0000 UTC m=+2.863651858 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133237 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133296 4167 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133252 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133238131 +0000 UTC m=+2.863684559 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133261 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133325 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133316103 +0000 UTC m=+2.863762601 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133351 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133333954 +0000 UTC m=+2.863780332 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133368 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133361085 +0000 UTC m=+2.863807463 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133400 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133436 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133456 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133467 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133458007 +0000 UTC m=+2.863904465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133494 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133518 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133511659 +0000 UTC m=+2.863958027 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133491 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133533 4167 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133560 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133570 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.13356178 +0000 UTC m=+2.864008258 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133589 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133581641 +0000 UTC m=+2.864028069 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133612 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133621 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133644 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133637362 +0000 UTC m=+2.864083830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133661 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133679 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133708 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133700194 +0000 UTC m=+2.864146572 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133683 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133723 4167 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133733 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133739 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133752 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133746705 +0000 UTC m=+2.864193083 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133768 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133783 4167 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133801 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133796766 +0000 UTC m=+2.864243144 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133799 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133825 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133846 4167 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133873 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133865928 +0000 UTC m=+2.864312306 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133875 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133899 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.133847 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133902 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133895649 +0000 UTC m=+2.864342117 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133952 4167 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.133940 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.134035 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.13394834 +0000 UTC m=+2.864394778 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.134071 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.134101 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.134127 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.134129 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.134161 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.134137926 +0000 UTC m=+2.864584304 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.134186 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.134177287 +0000 UTC m=+2.864623745 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.134210 4167 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.134221 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.134227 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.134247 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.134238888 +0000 UTC m=+2.864685336 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: I0216 17:14:20.134270 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.134286 4167 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:20.141100 master-0 kubenswrapper[4167]: E0216 17:14:20.134317 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.13430822 +0000 UTC m=+2.864754678 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:20.145449 master-0 kubenswrapper[4167]: E0216 17:14:20.134332 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.134325201 +0000 UTC m=+2.864771649 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:20.145449 master-0 kubenswrapper[4167]: E0216 17:14:20.134346 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:20.145449 master-0 kubenswrapper[4167]: E0216 17:14:20.134375 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.134366782 +0000 UTC m=+2.864813230 (durationBeforeRetry 1s). 
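
The tls-assets failures for prometheus-k8s-0 and alertmanager-main-0 above involve projected volumes rather than plain Secret mounts: a projected volume merges several Secret/ConfigMap sources into a single directory, and data preparation fails as a whole if any one source is unavailable, which is why projected.go logs both a per-source error ("Couldn't get secret ...") and a volume-level one ("Error preparing data for projected volume ..."). A rough Go sketch of that all-or-nothing aggregation, with hypothetical names:

    package main

    import (
        "errors"
        "fmt"
    )

    // source models one entry of a projected volume (a Secret or ConfigMap).
    // fetch returns the files it contributes, or an error if the backing
    // object is not (yet) known to the kubelet. Names are hypothetical.
    type source struct {
        name  string
        fetch func() (map[string][]byte, error)
    }

    // prepareProjected mirrors the all-or-nothing behaviour in the log:
    // every per-source error is collected, and any single failure makes
    // the whole projected volume unmountable.
    func prepareProjected(sources []source) (map[string][]byte, error) {
        files := map[string][]byte{}
        var errs []error
        for _, s := range sources {
            data, err := s.fetch()
            if err != nil {
                errs = append(errs, fmt.Errorf("%s: %w", s.name, err))
                continue
            }
            for k, v := range data {
                files[k] = v
            }
        }
        if len(errs) > 0 {
            return nil, errors.Join(errs...)
        }
        return files, nil
    }

    func main() {
        notRegistered := errors.New("object not registered")
        _, err := prepareProjected([]source{
            {name: "secret/prometheus-k8s-tls-assets-0",
                fetch: func() (map[string][]byte, error) { return nil, notRegistered }},
        })
        fmt.Println("error preparing data for projected volume:", err)
    }
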
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:20.145449 master-0 kubenswrapper[4167]: E0216 17:14:20.134376 4167 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:20.145449 master-0 kubenswrapper[4167]: E0216 17:14:20.134409 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.134400633 +0000 UTC m=+2.864847101 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:20.145449 master-0 kubenswrapper[4167]: I0216 17:14:20.143540 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:20.171616 master-0 kubenswrapper[4167]: E0216 17:14:20.171579 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:20.171779 master-0 kubenswrapper[4167]: E0216 17:14:20.171624 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.171779 master-0 kubenswrapper[4167]: E0216 17:14:20.171642 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.171779 master-0 kubenswrapper[4167]: E0216 17:14:20.171734 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.671713742 +0000 UTC m=+2.402160120 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.180835 master-0 kubenswrapper[4167]: I0216 17:14:20.180483 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:20.208511 master-0 kubenswrapper[4167]: I0216 17:14:20.208373 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgm4p\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:20.215726 master-0 kubenswrapper[4167]: I0216 17:14:20.215532 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:14:20.237266 master-0 kubenswrapper[4167]: I0216 17:14:20.237171 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:20.237731 master-0 kubenswrapper[4167]: E0216 17:14:20.237393 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:20.237731 master-0 kubenswrapper[4167]: E0216 17:14:20.237426 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.237731 master-0 kubenswrapper[4167]: E0216 17:14:20.237438 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.237731 master-0 kubenswrapper[4167]: I0216 17:14:20.237556 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:20.237731 master-0 kubenswrapper[4167]: E0216 17:14:20.237624 4167 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.237598205 +0000 UTC m=+2.968044583 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.238037 master-0 kubenswrapper[4167]: E0216 17:14:20.237804 4167 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.238037 master-0 kubenswrapper[4167]: E0216 17:14:20.237819 4167 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.238037 master-0 kubenswrapper[4167]: E0216 17:14:20.237831 4167 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.238037 master-0 kubenswrapper[4167]: E0216 17:14:20.237876 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.237866862 +0000 UTC m=+2.968313310 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.238037 master-0 kubenswrapper[4167]: I0216 17:14:20.238016 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:20.238552 master-0 kubenswrapper[4167]: E0216 17:14:20.238189 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.238552 master-0 kubenswrapper[4167]: E0216 17:14:20.238212 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.238552 master-0 kubenswrapper[4167]: E0216 17:14:20.238223 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.238552 master-0 kubenswrapper[4167]: E0216 17:14:20.238276 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.238255313 +0000 UTC m=+2.968701761 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.239178 master-0 kubenswrapper[4167]: I0216 17:14:20.238967 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:20.239423 master-0 kubenswrapper[4167]: E0216 17:14:20.239279 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.239423 master-0 kubenswrapper[4167]: E0216 17:14:20.239311 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.239423 master-0 kubenswrapper[4167]: E0216 17:14:20.239322 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.239423 master-0 kubenswrapper[4167]: E0216 17:14:20.239526 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.239505087 +0000 UTC m=+2.969951465 (durationBeforeRetry 1s). 
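
From 17:14:20.171 onward the same pattern reaches the auto-generated kube-api-access-* volumes. Kubelet builds one of these projected volumes for every pod out of the bound service-account token plus the kube-root-ca.crt ConfigMap, and on OpenShift the injected openshift-service-ca.crt ConfigMap as well, which is why each failure lists both ConfigMaps as unregistered. A sketch of roughly what such a volume looks like in API terms, using the real k8s.io/api/core/v1 types (requires the k8s.io/api dependency; the exact items, paths, and token lifetime kubelet generates may differ, and the downward-API namespace source is omitted here):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        expiry := int64(3607) // bound-token lifetime; illustrative value
        vol := corev1.Volume{
            Name: "kube-api-access-dzpnw",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        // the kubelet-managed bound service-account token
                        {ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
                            ExpirationSeconds: &expiry,
                            Path:              "token",
                        }},
                        // the cluster CA bundle every namespace carries
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
                        }},
                        // OpenShift additionally injects the service CA bundle
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
                        }},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }

Because both ConfigMaps live in the pod's own namespace, every namespace whose objects have not yet been re-registered fails in the same way, which accounts for the long run of near-identical errors across operator namespaces below.
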
Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.247202 master-0 kubenswrapper[4167]: I0216 17:14:20.247157 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjpvn\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:20.252407 master-0 kubenswrapper[4167]: E0216 17:14:20.252291 4167 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:14:20.252407 master-0 kubenswrapper[4167]: E0216 17:14:20.252318 4167 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.252407 master-0 kubenswrapper[4167]: E0216 17:14:20.252331 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.252407 master-0 kubenswrapper[4167]: E0216 17:14:20.252392 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.752377365 +0000 UTC m=+2.482823743 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.277063 master-0 kubenswrapper[4167]: E0216 17:14:20.277005 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.277063 master-0 kubenswrapper[4167]: E0216 17:14:20.277046 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.277063 master-0 kubenswrapper[4167]: E0216 17:14:20.277062 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.277364 master-0 kubenswrapper[4167]: E0216 17:14:20.277130 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.777110584 +0000 UTC m=+2.507556962 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.293582 master-0 kubenswrapper[4167]: I0216 17:14:20.293534 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:14:20.323298 master-0 kubenswrapper[4167]: I0216 17:14:20.323244 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:20.332920 master-0 kubenswrapper[4167]: E0216 17:14:20.332863 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:20.332920 master-0 kubenswrapper[4167]: E0216 17:14:20.332898 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.332920 master-0 kubenswrapper[4167]: E0216 17:14:20.332915 4167 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.333342 master-0 kubenswrapper[4167]: E0216 17:14:20.333007 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.832987356 +0000 UTC m=+2.563433744 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.345595 master-0 kubenswrapper[4167]: I0216 17:14:20.345520 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:20.345879 master-0 kubenswrapper[4167]: E0216 17:14:20.345822 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:20.345993 master-0 kubenswrapper[4167]: E0216 17:14:20.345878 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.345993 master-0 kubenswrapper[4167]: E0216 17:14:20.345903 4167 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.346203 master-0 kubenswrapper[4167]: E0216 17:14:20.346146 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.346096361 +0000 UTC m=+3.076542879 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.355864 master-0 kubenswrapper[4167]: I0216 17:14:20.355810 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:14:20.376527 master-0 kubenswrapper[4167]: E0216 17:14:20.376461 4167 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.376527 master-0 kubenswrapper[4167]: E0216 17:14:20.376520 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.376658 master-0 kubenswrapper[4167]: E0216 17:14:20.376609 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.876579316 +0000 UTC m=+2.607025684 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.400445 master-0 kubenswrapper[4167]: I0216 17:14:20.400368 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:20.443213 master-0 kubenswrapper[4167]: E0216 17:14:20.443179 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.443213 master-0 kubenswrapper[4167]: E0216 17:14:20.443214 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.443898 master-0 kubenswrapper[4167]: E0216 17:14:20.443235 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.443898 master-0 kubenswrapper[4167]: E0216 17:14:20.443292 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.943273039 +0000 UTC m=+2.673719417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.443898 master-0 kubenswrapper[4167]: I0216 17:14:20.443618 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:14:20.444273 master-0 kubenswrapper[4167]: I0216 17:14:20.444246 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:20.444383 master-0 kubenswrapper[4167]: E0216 17:14:20.444354 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:20.449048 master-0 kubenswrapper[4167]: I0216 17:14:20.449003 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:20.449732 master-0 kubenswrapper[4167]: E0216 17:14:20.449675 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.449773 master-0 kubenswrapper[4167]: E0216 17:14:20.449732 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.449921 master-0 kubenswrapper[4167]: I0216 17:14:20.449891 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:20.449988 master-0 kubenswrapper[4167]: I0216 17:14:20.449933 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:20.450084 master-0 kubenswrapper[4167]: E0216 17:14:20.449747 4167 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.450170 master-0 kubenswrapper[4167]: E0216 17:14:20.450139 4167 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.450202 master-0 kubenswrapper[4167]: E0216 17:14:20.450172 4167 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.450202 master-0 kubenswrapper[4167]: E0216 17:14:20.450186 4167 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.450257 master-0 kubenswrapper[4167]: E0216 17:14:20.450201 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object 
"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:20.450257 master-0 kubenswrapper[4167]: E0216 17:14:20.450147 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.450122785 +0000 UTC m=+3.180569163 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.450343 master-0 kubenswrapper[4167]: E0216 17:14:20.450319 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.450373 master-0 kubenswrapper[4167]: E0216 17:14:20.450346 4167 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.450405 master-0 kubenswrapper[4167]: E0216 17:14:20.450399 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.450382882 +0000 UTC m=+3.180829260 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.450436 master-0 kubenswrapper[4167]: E0216 17:14:20.450419 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.450408532 +0000 UTC m=+3.180854910 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.450744 master-0 kubenswrapper[4167]: I0216 17:14:20.450718 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:20.450789 master-0 kubenswrapper[4167]: I0216 17:14:20.450759 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:20.451471 master-0 kubenswrapper[4167]: E0216 17:14:20.451448 4167 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.451560 master-0 kubenswrapper[4167]: E0216 17:14:20.451467 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.451603 master-0 kubenswrapper[4167]: E0216 17:14:20.451584 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.451569284 +0000 UTC m=+3.182015662 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.451670 master-0 kubenswrapper[4167]: E0216 17:14:20.451649 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.451717 master-0 kubenswrapper[4167]: E0216 17:14:20.451670 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.451717 master-0 kubenswrapper[4167]: E0216 17:14:20.451684 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.451775 master-0 kubenswrapper[4167]: E0216 17:14:20.451743 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.451734738 +0000 UTC m=+3.182181116 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.460786 master-0 kubenswrapper[4167]: I0216 17:14:20.460766 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:20.476472 master-0 kubenswrapper[4167]: E0216 17:14:20.476435 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:20.476472 master-0 kubenswrapper[4167]: E0216 17:14:20.476468 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.476580 master-0 kubenswrapper[4167]: E0216 17:14:20.476481 4167 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.476580 master-0 kubenswrapper[4167]: E0216 17:14:20.476541 4167 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:20.976521029 +0000 UTC m=+2.706967417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.497235 master-0 kubenswrapper[4167]: I0216 17:14:20.497163 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"68f33191bbdd9baf1095b1769d81979c4da11a7d920a2c849ce21980a92a7ecd"} Feb 16 17:14:20.498778 master-0 kubenswrapper[4167]: I0216 17:14:20.498739 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" event={"ID":"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db","Type":"ContainerStarted","Data":"7c0f3d13513bd6c6e77c15eb6e5a3adbbc887557bb6861520939736f5b53a319"} Feb 16 17:14:20.500629 master-0 kubenswrapper[4167]: I0216 17:14:20.500589 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" event={"ID":"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a","Type":"ContainerStarted","Data":"2b957c1b138060c3f9a90460ab5c917baa8b9b7947048853c149a12a64c8d256"} Feb 16 17:14:20.502379 master-0 kubenswrapper[4167]: I0216 17:14:20.502286 4167 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="c55642f3cd4757427d8ce15c64da7ae2428162fcd1938bdc539b69bc96d63577" exitCode=0 Feb 16 17:14:20.502379 master-0 kubenswrapper[4167]: I0216 17:14:20.502341 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"c55642f3cd4757427d8ce15c64da7ae2428162fcd1938bdc539b69bc96d63577"} Feb 16 17:14:20.506215 master-0 kubenswrapper[4167]: I0216 17:14:20.506097 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"c92187707a62247f952801b025b307303a897139059d70c3a1774dbd038d340c"} Feb 16 17:14:20.506215 master-0 kubenswrapper[4167]: I0216 17:14:20.506125 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"9a063b7eebbf7665801f6c436290f4be7bb969d600b58e4ec089553b0db33f2c"} Feb 16 17:14:20.506215 master-0 kubenswrapper[4167]: I0216 17:14:20.506134 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"0f552d93f93fd9732af72437d235c69c813e130e35741b2d3290feef041fca23"} Feb 16 17:14:20.508093 master-0 kubenswrapper[4167]: I0216 17:14:20.508029 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" event={"ID":"b6ad958f-25e4-40cb-89ec-5da9cb6395c7","Type":"ContainerStarted","Data":"69978caa7c6df44b4132b190aa96ee90fadd8b7bb9d61222abed2f640a709a92"} Feb 16 17:14:20.509300 master-0 kubenswrapper[4167]: I0216 17:14:20.509277 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"5be425293550e3c23f77af0361bf1f6e5f1b68ae077b612972bafc5cd8d78142"} Feb 16 17:14:20.509300 master-0 kubenswrapper[4167]: I0216 17:14:20.509298 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"7f8330c3bb76d22d354e24a41e8d51cbaaa63368eaca8d8e23a100303a48a87c"} Feb 16 17:14:20.509421 master-0 kubenswrapper[4167]: I0216 17:14:20.509307 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"e44b7278b2810a6e423fa7a03723078be8b30f08c40fc76aacee28ced6ab10ee"} Feb 16 17:14:20.510567 master-0 kubenswrapper[4167]: I0216 17:14:20.510543 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r7wj" event={"ID":"43f65f23-4ddd-471a-9cb3-b0945382d83c","Type":"ContainerStarted","Data":"b5340dba206575465123430a978854c8cf594ab3774f619d7e0a78e45bbafe6d"} Feb 16 17:14:20.512327 master-0 kubenswrapper[4167]: I0216 17:14:20.512300 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2ws9r" event={"ID":"9c48005e-c4df-4332-87fc-ec028f2c6921","Type":"ContainerStarted","Data":"bdc020491a9803f4c1dec5b73c04013b94c7a0d002b33673a1496e7041e90131"} Feb 16 17:14:20.512838 master-0 kubenswrapper[4167]: E0216 17:14:20.512814 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.512838 master-0 kubenswrapper[4167]: E0216 17:14:20.512833 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.512948 master-0 kubenswrapper[4167]: E0216 17:14:20.512845 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.512948 master-0 kubenswrapper[4167]: E0216 17:14:20.512891 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.012877003 +0000 UTC m=+2.743323381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.513851 master-0 kubenswrapper[4167]: E0216 17:14:20.513830 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.513851 master-0 kubenswrapper[4167]: E0216 17:14:20.513847 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.513995 master-0 kubenswrapper[4167]: E0216 17:14:20.513855 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.513995 master-0 kubenswrapper[4167]: E0216 17:14:20.513881 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.01387305 +0000 UTC m=+2.744319428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.514339 master-0 kubenswrapper[4167]: I0216 17:14:20.514320 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"fb0eb9a2d89e5a8c05107e52dac05cad382239c1fb6091b590b472abd41b5f6c"} Feb 16 17:14:20.514422 master-0 kubenswrapper[4167]: I0216 17:14:20.514340 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"563e966c42d8d2240f31748f653afb3d8d4e662a2fa21d902c34b0ec3ca19cea"} Feb 16 17:14:20.516749 master-0 kubenswrapper[4167]: I0216 17:14:20.516717 4167 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="bf19cbebe470aeab36871967cd73f47519dfde9649e2d46d2611b8f4d328df8a" exitCode=0 Feb 16 17:14:20.516831 master-0 kubenswrapper[4167]: I0216 17:14:20.516810 4167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:14:20.517664 master-0 kubenswrapper[4167]: I0216 17:14:20.517451 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"bf19cbebe470aeab36871967cd73f47519dfde9649e2d46d2611b8f4d328df8a"} Feb 16 17:14:20.518240 master-0 kubenswrapper[4167]: I0216 
17:14:20.518212 4167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:14:20.518323 master-0 kubenswrapper[4167]: I0216 17:14:20.518244 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:14:20.538410 master-0 kubenswrapper[4167]: I0216 17:14:20.537542 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:14:20.552866 master-0 kubenswrapper[4167]: I0216 17:14:20.552818 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:20.553024 master-0 kubenswrapper[4167]: E0216 17:14:20.552969 4167 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.553024 master-0 kubenswrapper[4167]: E0216 17:14:20.552993 4167 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.553024 master-0 kubenswrapper[4167]: E0216 17:14:20.553004 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.553133 master-0 kubenswrapper[4167]: E0216 17:14:20.553048 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.553033649 +0000 UTC m=+3.283480017 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.553833 master-0 kubenswrapper[4167]: I0216 17:14:20.553698 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:20.554027 master-0 kubenswrapper[4167]: E0216 17:14:20.553845 4167 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:14:20.554027 master-0 kubenswrapper[4167]: E0216 17:14:20.553861 4167 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.554027 master-0 kubenswrapper[4167]: E0216 17:14:20.553879 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.554027 master-0 kubenswrapper[4167]: E0216 17:14:20.553914 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.553902083 +0000 UTC m=+3.284348461 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.559045 master-0 kubenswrapper[4167]: I0216 17:14:20.559007 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:20.571357 master-0 kubenswrapper[4167]: E0216 17:14:20.571076 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:20.571357 master-0 kubenswrapper[4167]: E0216 17:14:20.571102 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.571357 master-0 kubenswrapper[4167]: E0216 17:14:20.571114 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.571357 master-0 kubenswrapper[4167]: E0216 17:14:20.571163 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.07115021 +0000 UTC m=+2.801596588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.593470 master-0 kubenswrapper[4167]: E0216 17:14:20.593412 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:20.593470 master-0 kubenswrapper[4167]: E0216 17:14:20.593449 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.593470 master-0 kubenswrapper[4167]: E0216 17:14:20.593459 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.593866 master-0 kubenswrapper[4167]: E0216 17:14:20.593545 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.093523945 +0000 UTC m=+2.823970333 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.600203 master-0 kubenswrapper[4167]: I0216 17:14:20.600170 4167 request.go:700] Waited for 1.008686343s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/serviceaccounts/openshift-state-metrics/token Feb 16 17:14:20.606382 master-0 kubenswrapper[4167]: I0216 17:14:20.606331 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Feb 16 17:14:20.624736 master-0 kubenswrapper[4167]: I0216 17:14:20.624677 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:20.633880 master-0 kubenswrapper[4167]: E0216 17:14:20.633828 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.633880 master-0 kubenswrapper[4167]: E0216 17:14:20.633870 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.633880 master-0 kubenswrapper[4167]: E0216 
17:14:20.633889 4167 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.634157 master-0 kubenswrapper[4167]: E0216 17:14:20.633991 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.133935888 +0000 UTC m=+2.864382276 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.660781 master-0 kubenswrapper[4167]: I0216 17:14:20.660747 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:20.660938 master-0 kubenswrapper[4167]: I0216 17:14:20.660917 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:20.661236 master-0 kubenswrapper[4167]: E0216 17:14:20.660939 4167 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.661236 master-0 kubenswrapper[4167]: E0216 17:14:20.661114 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.661236 master-0 kubenswrapper[4167]: E0216 17:14:20.661014 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:20.661236 master-0 kubenswrapper[4167]: E0216 17:14:20.661166 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.661236 master-0 kubenswrapper[4167]: E0216 17:14:20.661186 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.661762 master-0 kubenswrapper[4167]: E0216 17:14:20.661722 
4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.661549566 +0000 UTC m=+3.391995944 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.661940 master-0 kubenswrapper[4167]: E0216 17:14:20.661926 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.661858864 +0000 UTC m=+3.392305252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.662552 master-0 kubenswrapper[4167]: I0216 17:14:20.662305 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:14:20.675090 master-0 kubenswrapper[4167]: E0216 17:14:20.675049 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:20.675090 master-0 kubenswrapper[4167]: E0216 17:14:20.675083 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.675236 master-0 kubenswrapper[4167]: E0216 17:14:20.675097 4167 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.675236 master-0 kubenswrapper[4167]: E0216 17:14:20.675150 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.175132773 +0000 UTC m=+2.905579141 (durationBeforeRetry 500ms). 
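The "Waited for 1.008686343s due to client-side throttling, not priority and fairness" entry just above is client-go's local token-bucket limiter, not server-side flow control: at startup the kubelet fires a burst of TokenRequest POSTs (one per kube-api-access volume) and the excess queues behind its own QPS/burst budget. A sketch of that queueing with a token bucket; the 5 QPS / burst 10 figures are illustrative, not this kubelet's configuration:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"golang.org/x/time/rate"
    )

    func main() {
    	// Illustrative limits; the kubelet's actual client QPS/burst
    	// are configuration-dependent.
    	limiter := rate.NewLimiter(rate.Limit(5), 10)
    	start := time.Now()
    	for i := 0; i < 20; i++ {
    		// Wait blocks once the burst is exhausted; that queueing time
    		// is what surfaces in the log as "Waited for ... due to
    		// client-side throttling".
    		if err := limiter.Wait(context.Background()); err != nil {
    			fmt.Println("limiter:", err)
    			return
    		}
    	}
    	fmt.Printf("20 requests admitted in %v\n", time.Since(start).Round(time.Millisecond))
    }
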
Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.695719 master-0 kubenswrapper[4167]: E0216 17:14:20.695672 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.695719 master-0 kubenswrapper[4167]: E0216 17:14:20.695708 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.695832 master-0 kubenswrapper[4167]: E0216 17:14:20.695724 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.695832 master-0 kubenswrapper[4167]: E0216 17:14:20.695775 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.195754781 +0000 UTC m=+2.926201169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.751858 master-0 kubenswrapper[4167]: I0216 17:14:20.751744 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:20.756337 master-0 kubenswrapper[4167]: I0216 17:14:20.756293 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:20.756337 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:20.756337 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:20.756337 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:20.756521 master-0 kubenswrapper[4167]: I0216 17:14:20.756342 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: I0216 17:14:20.771105 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") 
pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: I0216 17:14:20.771201 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: E0216 17:14:20.771434 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: E0216 17:14:20.771459 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: E0216 17:14:20.771472 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: E0216 17:14:20.771522 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.771506891 +0000 UTC m=+3.501953259 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: E0216 17:14:20.771653 4167 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: E0216 17:14:20.771681 4167 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: E0216 17:14:20.771694 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.773116 master-0 kubenswrapper[4167]: E0216 17:14:20.771757 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.771730037 +0000 UTC m=+3.502176415 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.873537 master-0 kubenswrapper[4167]: I0216 17:14:20.873487 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:20.873537 master-0 kubenswrapper[4167]: I0216 17:14:20.873534 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:20.874221 master-0 kubenswrapper[4167]: E0216 17:14:20.874189 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.874221 master-0 kubenswrapper[4167]: E0216 17:14:20.874217 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.874281 master-0 kubenswrapper[4167]: E0216 17:14:20.874228 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.874281 master-0 kubenswrapper[4167]: E0216 17:14:20.874272 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.874256831 +0000 UTC m=+3.604703209 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.874385 master-0 kubenswrapper[4167]: E0216 17:14:20.874340 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:20.874385 master-0 kubenswrapper[4167]: E0216 17:14:20.874356 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.874385 master-0 kubenswrapper[4167]: E0216 17:14:20.874366 4167 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.874459 master-0 kubenswrapper[4167]: E0216 17:14:20.874405 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.874393395 +0000 UTC m=+3.604839763 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: I0216 17:14:20.979369 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: I0216 17:14:20.979768 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: E0216 17:14:20.979580 4167 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: E0216 17:14:20.979822 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object 
"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: I0216 17:14:20.979848 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: E0216 17:14:20.979888 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.979867799 +0000 UTC m=+3.710314207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: E0216 17:14:20.979986 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: E0216 17:14:20.980006 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: E0216 17:14:20.980019 4167 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.980118 master-0 kubenswrapper[4167]: E0216 17:14:20.980071 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.980053964 +0000 UTC m=+3.710500412 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.980599 master-0 kubenswrapper[4167]: E0216 17:14:20.980325 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:14:20.980599 master-0 kubenswrapper[4167]: E0216 17:14:20.980382 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:14:20.980599 master-0 kubenswrapper[4167]: E0216 17:14:20.980410 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:20.980599 master-0 kubenswrapper[4167]: E0216 17:14:20.980496 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:21.980467435 +0000 UTC m=+3.710913853 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.083025 master-0 kubenswrapper[4167]: I0216 17:14:21.082980 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:21.083201 master-0 kubenswrapper[4167]: I0216 17:14:21.083032 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:21.083201 master-0 kubenswrapper[4167]: I0216 17:14:21.083112 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:21.083420 master-0 kubenswrapper[4167]: E0216 
17:14:21.083391 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.083420 master-0 kubenswrapper[4167]: E0216 17:14:21.083417 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.083594 master-0 kubenswrapper[4167]: E0216 17:14:21.083432 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.083594 master-0 kubenswrapper[4167]: E0216 17:14:21.083391 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.083594 master-0 kubenswrapper[4167]: E0216 17:14:21.083483 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.083594 master-0 kubenswrapper[4167]: E0216 17:14:21.083491 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.083594 master-0 kubenswrapper[4167]: E0216 17:14:21.083505 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:22.083487903 +0000 UTC m=+3.813934281 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.083594 master-0 kubenswrapper[4167]: E0216 17:14:21.083528 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:22.083520494 +0000 UTC m=+3.813966872 (durationBeforeRetry 1s). 
Feb 16 17:14:21.084903 master-0 kubenswrapper[4167]: E0216 17:14:21.083600 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Feb 16 17:14:21.084903 master-0 kubenswrapper[4167]: E0216 17:14:21.083630 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:14:21.084903 master-0 kubenswrapper[4167]: E0216 17:14:21.083839 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:21.084903 master-0 kubenswrapper[4167]: E0216 17:14:21.084292 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:22.084258564 +0000 UTC m=+3.814704962 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:21.145308 master-0 kubenswrapper[4167]: I0216 17:14:21.145051 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:14:21.187332 master-0 kubenswrapper[4167]: I0216 17:14:21.187222 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:14:21.187332 master-0 kubenswrapper[4167]: I0216 17:14:21.187300 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:14:21.187472 master-0 kubenswrapper[4167]: E0216 17:14:21.187391 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Feb 16 17:14:21.187472 master-0 kubenswrapper[4167]: I0216 17:14:21.187439 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:14:21.187472 master-0 kubenswrapper[4167]: E0216 17:14:21.187448 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.187434105 +0000 UTC m=+4.917880483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: E0216 17:14:21.187484 4167 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: I0216 17:14:21.187492 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: E0216 17:14:21.187507 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.187499137 +0000 UTC m=+4.917945635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: I0216 17:14:21.187530 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: E0216 17:14:21.187554 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: E0216 17:14:21.187568 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: E0216 17:14:21.187577 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: E0216 17:14:21.187592 4167 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: E0216 17:14:21.187605 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.1875952 +0000 UTC m=+4.918041568 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:21.187604 master-0 kubenswrapper[4167]: E0216 17:14:21.187617 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.18761195 +0000 UTC m=+4.918058328 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187391 4167 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187628 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187566 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187646 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.187640171 +0000 UTC m=+4.918086549 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187630 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187655 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187672 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.187666952 +0000 UTC m=+4.918113430 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187670 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187699 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187713 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187720 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187737 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.187729313 +0000 UTC m=+4.918175761 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187749 4167 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187758 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187765 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.187759894 +0000 UTC m=+4.918206272 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187792 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187811 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187839 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187794 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187867 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187881 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.187874477 +0000 UTC m=+4.918320855 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"metrics-client-certs" not registered
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187895 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187915 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187937 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: I0216 17:14:21.187979 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:21.187986 master-0 kubenswrapper[4167]: E0216 17:14:21.187898 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188035 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188029001 +0000 UTC m=+4.918475379 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188012 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188054 4167 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188061 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188076 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188070443 +0000 UTC m=+4.918516821 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188091 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188101 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188110 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188118 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188112804 +0000 UTC m=+4.918559182 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.187936 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188130 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188137 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188132784 +0000 UTC m=+4.918579162 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.187855 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188150 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188154 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188150205 +0000 UTC m=+4.918596583 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188173 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188189 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188191 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188206 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188201146 +0000 UTC m=+4.918647524 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188221 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188231 4167 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188247 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188252 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188246067 +0000 UTC m=+4.918692445 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188295 4167 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188303 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188314 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188308409 +0000 UTC m=+4.918754777 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188329 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188340 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188349 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188359 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.18835202 +0000 UTC m=+4.918798398 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188374 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188390 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188391 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188385431 +0000 UTC m=+4.918831799 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188416 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188438 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188445 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188458 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188480 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188471183 +0000 UTC m=+4.918917561 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-generated" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188495 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188515 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188509394 +0000 UTC m=+4.918955772 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188514 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188543 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188550 4167 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188574 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188566476 +0000 UTC m=+4.919012854 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188604 4167 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188627 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188619947 +0000 UTC m=+4.919066425 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188604 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188654 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188680 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188707 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188786 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188817 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188844 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188869 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.188901 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188633 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188947 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188939996 +0000 UTC m=+4.919386374 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188947 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188984 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.188979307 +0000 UTC m=+4.919425685 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188665 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189003 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.188998778 +0000 UTC m=+4.919445156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189001 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.187983 4167 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189027 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189023018 +0000 UTC m=+4.919469396 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.187825 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188696 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188718 4167 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188743 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188759 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188782 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188791 4167 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188837 4167 secret.go:189] Couldn't get secret 
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189094 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188863 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188888 4167 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189129 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188911 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188010 4167 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189047 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189040459 +0000 UTC m=+4.919486837 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189208 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189208 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189189363 +0000 UTC m=+4.919635791 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189179 4167 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189238 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189239 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189233084 +0000 UTC m=+4.919679462 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.188807 4167 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189255 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189249104 +0000 UTC m=+4.919695482 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189266 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189261675 +0000 UTC m=+4.919708053 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189276 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189272005 +0000 UTC m=+4.919718383 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
No retries permitted until 2026-02-16 17:14:23.189272005 +0000 UTC m=+4.919718383 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189285 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189281155 +0000 UTC m=+4.919727533 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189295 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189290756 +0000 UTC m=+4.919737134 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189306 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189300016 +0000 UTC m=+4.919746394 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189315 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189310866 +0000 UTC m=+4.919757244 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.189333 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.189356 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.189375 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189394 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189383178 +0000 UTC m=+4.919829666 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189396 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189408 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189402959 +0000 UTC m=+4.919849337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189415 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189418 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189414109 +0000 UTC m=+4.919860487 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189429 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189425149 +0000 UTC m=+4.919871527 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189437 4167 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189440 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.1894353 +0000 UTC m=+4.919881668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189454 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.18944826 +0000 UTC m=+4.919894638 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189464 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.18946013 +0000 UTC m=+4.919906508 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189474 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.18947035 +0000 UTC m=+4.919916728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: E0216 17:14:21.189484 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.189479931 +0000 UTC m=+4.919926299 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.189500 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.189407 master-0 kubenswrapper[4167]: I0216 17:14:21.189522 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189563 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189604 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189627 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189655 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189674 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189699 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod 
\"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189726 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189749 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189798 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189821 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189844 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189863 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189879 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189897 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189917 4167 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189935 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189976 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.189995 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190013 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190030 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190050 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190071 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190089 4167 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190108 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190126 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190145 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190171 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190190 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190210 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190228 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 
17:14:21.190255 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190273 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190291 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190309 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190351 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190379 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190397 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190420 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190448 4167 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190466 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190486 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190504 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190523 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190541 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190565 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190593 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190621 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: 
\"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190647 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190668 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190686 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190707 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190735 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190760 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190782 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190802 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190820 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190837 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190862 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190880 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190898 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190917 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.190943 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191031 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: 
\"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191072 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191102 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191128 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191151 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191171 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191200 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191229 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191248 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191272 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191302 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191328 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191355 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191381 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191408 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191437 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191465 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191487 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191506 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191524 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191552 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191588 4167 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191598 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191623 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191613188 +0000 UTC m=+4.922059566 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191628 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191647 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191656 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191659 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191636 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191630139 +0000 UTC m=+4.922076517 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191687 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19167959 +0000 UTC m=+4.922125968 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191699 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191693121 +0000 UTC m=+4.922139499 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191712 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191706271 +0000 UTC m=+4.922152649 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191724 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191718291 +0000 UTC m=+4.922164669 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191731 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191739 4167 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191747 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191752 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191741 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191768 4167 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191736 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web 
podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191731072 +0000 UTC m=+4.922177450 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191792 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191786993 +0000 UTC m=+4.922233371 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191794 4167 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191803 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191797713 +0000 UTC m=+4.922244091 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191813 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191808994 +0000 UTC m=+4.922255372 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191824 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191818764 +0000 UTC m=+4.922265132 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191835 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191830414 +0000 UTC m=+4.922276792 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191842 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191851 4167 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191846 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191841245 +0000 UTC m=+4.922287623 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191874 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191885 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191872815 +0000 UTC m=+4.922319233 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191901 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191893696 +0000 UTC m=+4.922340144 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191907 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191919 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191911527 +0000 UTC m=+4.922357905 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191661 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191934 4167 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191936 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191929187 +0000 UTC m=+4.922375635 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191947 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191975 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: E0216 17:14:21.191985 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191945847 +0000 UTC m=+4.922392295 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:21.195498 master-0 kubenswrapper[4167]: I0216 17:14:21.191587 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192004 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.191996419 +0000 UTC m=+4.922442877 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192023 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.192016359 +0000 UTC m=+4.922462827 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192026 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192040 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19203338 +0000 UTC m=+4.922479838 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192047 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192066 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: I0216 17:14:21.192069 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192103 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192108 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192126 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192135 4167 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192146 4167 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.191989 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192173 4167 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192185 4167 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192067 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.192060261 +0000 UTC m=+4.922506639 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192211 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.192201694 +0000 UTC m=+4.922648122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192217 4167 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192231 4167 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192304 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192318 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192329 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192337 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192372 4167 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192391 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192400 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 
kubenswrapper[4167]: E0216 17:14:21.192406 4167 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192505 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192567 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192673 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192693 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192703 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192711 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192721 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192741 4167 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192771 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192800 4167 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192771 4167 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192821 4167 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: 
object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192827 4167 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192852 4167 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192858 4167 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192800 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192879 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192885 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192828 4167 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192899 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192925 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192931 4167 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192937 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192192 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object 
"openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192215 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192998 4167 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193006 4167 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193013 4167 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193016 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.192711 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193055 4167 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193074 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193083 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193100 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193133 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193150 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193190 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.191922 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object 
"openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193212 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: I0216 17:14:21.193261 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193300 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193278934 +0000 UTC m=+4.923725312 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193319 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193309644 +0000 UTC m=+4.923756092 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193334 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193327105 +0000 UTC m=+4.923773493 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193347 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193341665 +0000 UTC m=+4.923788113 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193362 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193355186 +0000 UTC m=+4.923801644 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193373 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193368416 +0000 UTC m=+4.923814784 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193384 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193379696 +0000 UTC m=+4.923826074 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193203 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193465 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193487 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193515 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193400 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193395647 +0000 UTC m=+4.923842025 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193516 4167 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193529 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193575 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193608 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193430 4167 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193620 4167 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193562361 +0000 UTC m=+4.924008749 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193610 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193452 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193658 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193646353 +0000 UTC m=+4.924092741 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193671 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193679 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193669674 +0000 UTC m=+4.924116062 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193701 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:22.193693025 +0000 UTC m=+3.924139413 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193535 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193717 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193710565 +0000 UTC m=+4.924156953 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193735 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:22.193727306 +0000 UTC m=+3.924173694 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193751 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193744666 +0000 UTC m=+4.924191054 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193766 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193759267 +0000 UTC m=+4.924205655 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193775 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193735 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193811 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193779 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193773107 +0000 UTC m=+4.924219495 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193825 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193838 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193826958 +0000 UTC m=+4.924273336 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193851 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193846269 +0000 UTC m=+4.924292647 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193590 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193862 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193857569 +0000 UTC m=+4.924303947 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193873 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19386903 +0000 UTC m=+4.924315408 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193749 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193885 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19388059 +0000 UTC m=+4.924326968 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193896 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19389105 +0000 UTC m=+4.924337428 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193546 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193911 4167 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193921 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19390142 +0000 UTC m=+4.924347808 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193536 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193645 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193830 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193857 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193942 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193932731 +0000 UTC m=+4.924379119 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193611 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194009 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194014 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194070 4167 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.193982 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.193976902 +0000 UTC m=+4.924423280 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194143 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194135777 +0000 UTC m=+4.924582145 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194126 4167 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194153 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194148827 +0000 UTC m=+4.924595195 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194169 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194163137 +0000 UTC m=+4.924609515 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194169 4167 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194181 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194175948 +0000 UTC m=+4.924622326 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194187 4167 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194191 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194186748 +0000 UTC m=+4.924633116 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194197 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194200 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194196168 +0000 UTC m=+4.924642546 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194211 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194205899 +0000 UTC m=+4.924652277 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194221 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194216559 +0000 UTC m=+4.924662927 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194227 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194245 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194257 4167 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194252 4167 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.200086 master-0 kubenswrapper[4167]: E0216 17:14:21.194231 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194226889 +0000 UTC m=+4.924673267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194298 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194289401 +0000 UTC m=+4.924735779 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194313 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194307191 +0000 UTC m=+4.924753569 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194325 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194319952 +0000 UTC m=+4.924766330 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194336 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194330512 +0000 UTC m=+4.924776890 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194348 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194342732 +0000 UTC m=+4.924789110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194360 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194354503 +0000 UTC m=+4.924800881 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194370 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194365873 +0000 UTC m=+4.924812251 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194381 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194376383 +0000 UTC m=+4.924822751 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194392 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0 podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194387044 +0000 UTC m=+4.924833412 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194402 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194397074 +0000 UTC m=+4.924843452 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194413 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.194408664 +0000 UTC m=+4.924855042 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194424 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194418874 +0000 UTC m=+4.924865252 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194436 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194430045 +0000 UTC m=+4.924876423 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194449 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194442845 +0000 UTC m=+4.924889223 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194461 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194455925 +0000 UTC m=+4.924902303 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194473 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194468216 +0000 UTC m=+4.924914594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194483 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194478056 +0000 UTC m=+4.924924434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.194501 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.194527 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194549 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194543438 +0000 UTC m=+4.924989816 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194561 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194556678 +0000 UTC m=+4.925003056 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194571 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194566728 +0000 UTC m=+4.925013106 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194581 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194576469 +0000 UTC m=+4.925022847 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194592 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194586349 +0000 UTC m=+4.925032727 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194603 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194598839 +0000 UTC m=+4.925045217 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194612 4167 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194615 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19461037 +0000 UTC m=+4.925056748 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194661 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194652631 +0000 UTC m=+4.925098999 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194579 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194674 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194667311 +0000 UTC m=+4.925113689 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194689 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194684572 +0000 UTC m=+4.925130950 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194705 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194699972 +0000 UTC m=+4.925146350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194721 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194715582 +0000 UTC m=+4.925161960 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194734 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194730333 +0000 UTC m=+4.925176711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194747 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.194742523 +0000 UTC m=+4.925188901 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194759 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:22.194755333 +0000 UTC m=+3.925201711 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194771 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194766494 +0000 UTC m=+4.925212872 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.194789 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194811 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194806155 +0000 UTC m=+4.925252533 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194818 4167 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194822 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194816585 +0000 UTC m=+4.925262953 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194840 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.194833176 +0000 UTC m=+4.925279554 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.194863 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.194893 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.194913 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194921 4167 projected.go:288] Couldn't get configMap 
openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194932 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.194933 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194939 4167 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.194974 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.194994 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19498071 +0000 UTC m=+4.925427168 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195022 4167 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195028 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195045 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.195038571 +0000 UTC m=+4.925485039 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195058 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.195052112 +0000 UTC m=+4.925498490 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195065 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195074 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195078 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195088 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.195081792 +0000 UTC m=+4.925528170 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195028 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195099 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.195093793 +0000 UTC m=+4.925540161 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195116 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195136 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195154 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195172 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195189 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195219 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195240 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195259 
4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195283 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195321 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195354 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195379 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195404 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195427 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195450 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195467 4167 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195484 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195506 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195528 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195551 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: I0216 17:14:21.195573 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195649 4167 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195670 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.195664458 +0000 UTC m=+4.926110836 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195709 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195717 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195725 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195742 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19573721 +0000 UTC m=+4.926183588 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195754 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19574909 +0000 UTC m=+4.926195468 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195787 4167 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195795 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195801 4167 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195821 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.195815192 +0000 UTC m=+4.926261570 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195841 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195858 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.195852993 +0000 UTC m=+4.926299371 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195878 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195893 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.195888864 +0000 UTC m=+4.926335242 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:21.205414 master-0 kubenswrapper[4167]: E0216 17:14:21.195923 4167 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.195939 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.195934805 +0000 UTC m=+4.926381183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.195977 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.195994 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.195989467 +0000 UTC m=+4.926435845 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196028 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196035 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196042 4167 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196060 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196055479 +0000 UTC m=+4.926501847 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196091 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196108 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.19610283 +0000 UTC m=+4.926549208 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196140 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196148 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196154 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196171 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196166682 +0000 UTC m=+4.926613060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196193 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196210 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196205103 +0000 UTC m=+4.926651471 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196237 4167 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196252 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.196248074 +0000 UTC m=+4.926694452 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196302 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196327 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196319326 +0000 UTC m=+4.926765704 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196353 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196390 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196383768 +0000 UTC m=+4.926830236 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196422 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196439 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196433129 +0000 UTC m=+4.926879507 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196472 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196480 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196487 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196503 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196498361 +0000 UTC m=+4.926944729 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196527 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196544 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196538972 +0000 UTC m=+4.926985350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196573 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196591 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.196585893 +0000 UTC m=+4.927032261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196616 4167 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196633 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196628234 +0000 UTC m=+4.927074612 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196652 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196667 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196662635 +0000 UTC m=+4.927109003 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196698 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196707 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196713 4167 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.209228 master-0 kubenswrapper[4167]: E0216 17:14:21.196729 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.196724227 +0000 UTC m=+4.927170595 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.261968 master-0 kubenswrapper[4167]: I0216 17:14:21.261914 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:14:21.297760 master-0 kubenswrapper[4167]: I0216 17:14:21.297728 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:21.297984 master-0 kubenswrapper[4167]: E0216 17:14:21.297936 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.298024 master-0 kubenswrapper[4167]: E0216 17:14:21.297987 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.298024 master-0 kubenswrapper[4167]: E0216 17:14:21.298005 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object 
"openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.298174 master-0 kubenswrapper[4167]: E0216 17:14:21.298067 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:22.298045618 +0000 UTC m=+4.028492056 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.298717 master-0 kubenswrapper[4167]: I0216 17:14:21.298662 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:21.298769 master-0 kubenswrapper[4167]: I0216 17:14:21.298717 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:21.298853 master-0 kubenswrapper[4167]: E0216 17:14:21.298831 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:21.298906 master-0 kubenswrapper[4167]: E0216 17:14:21.298854 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.298906 master-0 kubenswrapper[4167]: E0216 17:14:21.298866 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.298971 master-0 kubenswrapper[4167]: E0216 17:14:21.298906 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.298893331 +0000 UTC m=+5.029339699 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.298971 master-0 kubenswrapper[4167]: E0216 17:14:21.298948 4167 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.299028 master-0 kubenswrapper[4167]: E0216 17:14:21.298978 4167 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.299028 master-0 kubenswrapper[4167]: E0216 17:14:21.298989 4167 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.299028 master-0 kubenswrapper[4167]: I0216 17:14:21.299019 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:21.299129 master-0 kubenswrapper[4167]: E0216 17:14:21.299029 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.299017785 +0000 UTC m=+5.029464233 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.299129 master-0 kubenswrapper[4167]: I0216 17:14:21.299082 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:21.299215 master-0 kubenswrapper[4167]: E0216 17:14:21.299190 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.299261 master-0 kubenswrapper[4167]: E0216 17:14:21.299216 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.299261 master-0 kubenswrapper[4167]: E0216 17:14:21.299226 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.299311 master-0 kubenswrapper[4167]: E0216 17:14:21.299260 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.299249941 +0000 UTC m=+5.029696349 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.299311 master-0 kubenswrapper[4167]: E0216 17:14:21.299270 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.299311 master-0 kubenswrapper[4167]: E0216 17:14:21.299285 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.299311 master-0 kubenswrapper[4167]: E0216 17:14:21.299294 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.299455 master-0 kubenswrapper[4167]: E0216 17:14:21.299407 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.299392795 +0000 UTC m=+5.029839263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.402420 master-0 kubenswrapper[4167]: I0216 17:14:21.402370 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:21.402593 master-0 kubenswrapper[4167]: E0216 17:14:21.402502 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:21.402593 master-0 kubenswrapper[4167]: E0216 17:14:21.402527 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.402593 master-0 kubenswrapper[4167]: E0216 17:14:21.402540 4167 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 
17:14:21.402712 master-0 kubenswrapper[4167]: E0216 17:14:21.402680 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.402654359 +0000 UTC m=+5.133100747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.445104 master-0 kubenswrapper[4167]: I0216 17:14:21.445067 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:21.445104 master-0 kubenswrapper[4167]: I0216 17:14:21.445090 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445104 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445159 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445212 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445181 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: E0216 17:14:21.445225 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445238 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445264 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445264 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445251 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445274 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445306 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: E0216 17:14:21.445366 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445373 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445380 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445389 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445411 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445420 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445427 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445413 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445444 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445396 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445422 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445463 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445472 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445477 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445433 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445498 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445384 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445451 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445393 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445484 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445464 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445546 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: E0216 17:14:21.445612 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445637 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445851 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445857 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445873 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445881 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445900 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445875 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445886 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445929 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445934 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445938 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445930 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445938 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445983 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445919 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445948 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445997 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445918 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: E0216 17:14:21.446034 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445915 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.446069 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445888 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445976 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.445954 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.446109 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.446070 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.446114 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.446114 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: E0216 17:14:21.446167 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.446201 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: I0216 17:14:21.446215 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:21.446457 master-0 kubenswrapper[4167]: E0216 17:14:21.446312 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.446538 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.446632 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.446722 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.446934 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.447020 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.447090 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.447165 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.447371 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.447566 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.447719 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.447877 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.447930 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.448015 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.448075 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.448153 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.448233 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.448313 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.448380 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.448479 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.448656 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:14:21.448723 master-0 kubenswrapper[4167]: E0216 17:14:21.448733 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:14:21.449528 master-0 kubenswrapper[4167]: E0216 17:14:21.448803 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:14:21.449528 master-0 kubenswrapper[4167]: E0216 17:14:21.448877 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:14:21.449528 master-0 kubenswrapper[4167]: E0216 17:14:21.448943 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:21.449528 master-0 kubenswrapper[4167]: E0216 17:14:21.449088 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:21.449528 master-0 kubenswrapper[4167]: E0216 17:14:21.449220 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:14:21.449528 master-0 kubenswrapper[4167]: E0216 17:14:21.449368 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:14:21.449528 master-0 kubenswrapper[4167]: E0216 17:14:21.449486 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:14:21.449739 master-0 kubenswrapper[4167]: E0216 17:14:21.449609 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:14:21.449739 master-0 kubenswrapper[4167]: E0216 17:14:21.449712 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:14:21.449823 master-0 kubenswrapper[4167]: E0216 17:14:21.449796 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:14:21.449879 master-0 kubenswrapper[4167]: E0216 17:14:21.449842 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:14:21.450603 master-0 kubenswrapper[4167]: E0216 17:14:21.449951 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:14:21.450603 master-0 kubenswrapper[4167]: E0216 17:14:21.450132 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" Feb 16 17:14:21.450603 master-0 kubenswrapper[4167]: E0216 17:14:21.450397 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" Feb 16 17:14:21.450603 master-0 kubenswrapper[4167]: E0216 17:14:21.450497 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:14:21.450603 master-0 kubenswrapper[4167]: E0216 17:14:21.450570 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:21.450786 master-0 kubenswrapper[4167]: E0216 17:14:21.450627 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:21.450786 master-0 kubenswrapper[4167]: E0216 17:14:21.450720 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:21.450866 master-0 kubenswrapper[4167]: E0216 17:14:21.450788 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:14:21.450866 master-0 kubenswrapper[4167]: E0216 17:14:21.450850 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:21.450996 master-0 kubenswrapper[4167]: E0216 17:14:21.450945 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:21.451082 master-0 kubenswrapper[4167]: E0216 17:14:21.451050 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:14:21.451155 master-0 kubenswrapper[4167]: E0216 17:14:21.451128 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:14:21.451284 master-0 kubenswrapper[4167]: E0216 17:14:21.451252 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:21.451365 master-0 kubenswrapper[4167]: E0216 17:14:21.451341 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:21.451432 master-0 kubenswrapper[4167]: E0216 17:14:21.451406 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:14:21.451535 master-0 kubenswrapper[4167]: E0216 17:14:21.451502 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:14:21.451664 master-0 kubenswrapper[4167]: E0216 17:14:21.451637 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:21.452212 master-0 kubenswrapper[4167]: E0216 17:14:21.451791 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:14:21.452212 master-0 kubenswrapper[4167]: E0216 17:14:21.451860 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:14:21.452212 master-0 kubenswrapper[4167]: E0216 17:14:21.451996 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:14:21.452212 master-0 kubenswrapper[4167]: E0216 17:14:21.452159 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:14:21.452473 master-0 kubenswrapper[4167]: E0216 17:14:21.452307 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:14:21.452473 master-0 kubenswrapper[4167]: E0216 17:14:21.452450 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:14:21.452580 master-0 kubenswrapper[4167]: E0216 17:14:21.452546 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:14:21.505714 master-0 kubenswrapper[4167]: I0216 17:14:21.505656 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:21.505902 master-0 kubenswrapper[4167]: I0216 17:14:21.505756 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:21.505951 master-0 kubenswrapper[4167]: E0216 17:14:21.505893 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.505951 master-0 kubenswrapper[4167]: E0216 17:14:21.505921 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.505951 master-0 kubenswrapper[4167]: E0216 17:14:21.505932 4167 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.506110 master-0 kubenswrapper[4167]: E0216 17:14:21.505997 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.505978545 +0000 UTC m=+5.236424963 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.506110 master-0 kubenswrapper[4167]: I0216 17:14:21.505993 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:21.506110 master-0 kubenswrapper[4167]: E0216 17:14:21.506084 4167 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.506110 master-0 kubenswrapper[4167]: E0216 17:14:21.506101 4167 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.506110 master-0 kubenswrapper[4167]: E0216 17:14:21.506111 4167 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.506324 master-0 kubenswrapper[4167]: E0216 17:14:21.506126 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:21.506324 master-0 kubenswrapper[4167]: E0216 17:14:21.506137 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.506324 master-0 kubenswrapper[4167]: E0216 17:14:21.506144 4167 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.506324 master-0 kubenswrapper[4167]: E0216 17:14:21.506183 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.50617638 +0000 UTC m=+5.236622758 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.506324 master-0 kubenswrapper[4167]: E0216 17:14:21.506289 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.506275573 +0000 UTC m=+5.236721951 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.506544 master-0 kubenswrapper[4167]: I0216 17:14:21.506339 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:21.506544 master-0 kubenswrapper[4167]: I0216 17:14:21.506369 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:21.507015 master-0 kubenswrapper[4167]: E0216 17:14:21.506974 4167 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.507015 master-0 kubenswrapper[4167]: E0216 17:14:21.507015 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.507129 master-0 kubenswrapper[4167]: E0216 17:14:21.507016 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.507129 master-0 kubenswrapper[4167]: E0216 17:14:21.507033 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.507129 master-0 kubenswrapper[4167]: E0216 17:14:21.507040 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, 
object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.507129 master-0 kubenswrapper[4167]: E0216 17:14:21.507071 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.507064614 +0000 UTC m=+5.237510992 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.507129 master-0 kubenswrapper[4167]: E0216 17:14:21.507100 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.507079815 +0000 UTC m=+5.237526233 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.522203 master-0 kubenswrapper[4167]: I0216 17:14:21.521995 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"86772b4198b375da8b261ca9d985412c379fd2e5f2c105fbe6e6bbfb8a419784"} Feb 16 17:14:21.522203 master-0 kubenswrapper[4167]: I0216 17:14:21.522029 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"1a8885af29cb94472b23ade7b86cbcb90ba289e387a2780efabf272cfe37dbff"} Feb 16 17:14:21.523785 master-0 kubenswrapper[4167]: I0216 17:14:21.523756 4167 generic.go:334] "Generic (PLEG): container finished" podID="a94f9b8e-b020-4aab-8373-6c056ec07464" containerID="32bcf8b4221ade916814c37734f932ec999365c00cca010a1877fafe51a23d7e" exitCode=0 Feb 16 17:14:21.523845 master-0 kubenswrapper[4167]: I0216 17:14:21.523795 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerDied","Data":"32bcf8b4221ade916814c37734f932ec999365c00cca010a1877fafe51a23d7e"} Feb 16 17:14:21.527147 master-0 kubenswrapper[4167]: I0216 17:14:21.526466 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"17104543b9da3593ba424383349543390a7b869403edf6ab8bc4ba2652888980"} Feb 16 17:14:21.531007 master-0 kubenswrapper[4167]: I0216 17:14:21.530859 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" 
event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"f43833d221ea9f4a9e4534c7ac99a93808c280574641dc386194df75a19f49c9"} Feb 16 17:14:21.531007 master-0 kubenswrapper[4167]: I0216 17:14:21.530907 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"7c3093aff1f8ebccbe96292ac184022bc15d8eeb25fdec354abddcd0eccfb95a"} Feb 16 17:14:21.531007 master-0 kubenswrapper[4167]: I0216 17:14:21.530920 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"b7d42dce3d54f9d3e617b618c16e9ef08c99739ea91e000b4b1d99443db8553d"} Feb 16 17:14:21.531007 master-0 kubenswrapper[4167]: I0216 17:14:21.530932 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"cc397b3d5cc6cbc9a903e00354fc794059c1f176e81929eee27fef44e0ed535b"} Feb 16 17:14:21.531007 master-0 kubenswrapper[4167]: I0216 17:14:21.530944 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"783acd5adfcf8bec6e5a632c51786877d81220b6ece8f11093f1631f55f8aab9"} Feb 16 17:14:21.531007 master-0 kubenswrapper[4167]: I0216 17:14:21.530956 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"7011e9e23652e9ae0c244b5fce826a0e0485e2276d5484734e7bac8ba4afe778"} Feb 16 17:14:21.532745 master-0 kubenswrapper[4167]: I0216 17:14:21.532682 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"96393b87f1556d3dbadb7885f2d7eb14d8ecd4df9a7dfa6725c3fcb9e35d3601"} Feb 16 17:14:21.534736 master-0 kubenswrapper[4167]: I0216 17:14:21.534600 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerStarted","Data":"2be53eae191398924971d0097300b007201955ba0357c275cdcb7ace2ef35cd4"} Feb 16 17:14:21.536621 master-0 kubenswrapper[4167]: I0216 17:14:21.536588 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vfxj4" event={"ID":"a6fe41b0-1a42-4f07-8220-d9aaa50788ad","Type":"ContainerStarted","Data":"9bb121c20d32f58b029c9f769238c1b163f3d4ad3991ada34d435c50d5d6208a"} Feb 16 17:14:21.609346 master-0 kubenswrapper[4167]: I0216 17:14:21.609302 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:21.609528 master-0 kubenswrapper[4167]: E0216 17:14:21.609405 4167 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.609528 master-0 
kubenswrapper[4167]: E0216 17:14:21.609426 4167 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.609528 master-0 kubenswrapper[4167]: E0216 17:14:21.609440 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.609661 master-0 kubenswrapper[4167]: I0216 17:14:21.609605 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:21.609661 master-0 kubenswrapper[4167]: E0216 17:14:21.609624 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.609606329 +0000 UTC m=+5.340052717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.609969 master-0 kubenswrapper[4167]: E0216 17:14:21.609693 4167 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:14:21.609969 master-0 kubenswrapper[4167]: E0216 17:14:21.609720 4167 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.609969 master-0 kubenswrapper[4167]: E0216 17:14:21.609735 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.610139 master-0 kubenswrapper[4167]: E0216 17:14:21.609997 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.609978949 +0000 UTC m=+5.340425377 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.617050 master-0 kubenswrapper[4167]: I0216 17:14:21.617016 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 16 17:14:21.628045 master-0 kubenswrapper[4167]: I0216 17:14:21.628010 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 17:14:21.715059 master-0 kubenswrapper[4167]: I0216 17:14:21.715009 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:21.715059 master-0 kubenswrapper[4167]: I0216 17:14:21.715056 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:21.715203 master-0 kubenswrapper[4167]: E0216 17:14:21.715186 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:21.715248 master-0 kubenswrapper[4167]: E0216 17:14:21.715195 4167 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.715248 master-0 kubenswrapper[4167]: E0216 17:14:21.715230 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.715333 master-0 kubenswrapper[4167]: E0216 17:14:21.715206 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.715333 master-0 kubenswrapper[4167]: E0216 17:14:21.715309 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.715457 master-0 kubenswrapper[4167]: E0216 17:14:21.715284 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.715262418 +0000 UTC m=+5.445708876 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.716271 master-0 kubenswrapper[4167]: E0216 17:14:21.716197 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.716182343 +0000 UTC m=+5.446628721 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.754722 master-0 kubenswrapper[4167]: I0216 17:14:21.754658 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:21.754722 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:21.754722 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:21.754722 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:21.754722 master-0 kubenswrapper[4167]: I0216 17:14:21.754716 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:21.818919 master-0 kubenswrapper[4167]: I0216 17:14:21.818718 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:21.819126 master-0 kubenswrapper[4167]: E0216 17:14:21.818876 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:21.819126 master-0 kubenswrapper[4167]: E0216 17:14:21.818993 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.819126 master-0 kubenswrapper[4167]: E0216 17:14:21.819007 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.819126 master-0 kubenswrapper[4167]: E0216 17:14:21.819058 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw 
podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.819040946 +0000 UTC m=+5.549487324 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.819126 master-0 kubenswrapper[4167]: I0216 17:14:21.819101 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:21.819280 master-0 kubenswrapper[4167]: E0216 17:14:21.819244 4167 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:14:21.819307 master-0 kubenswrapper[4167]: E0216 17:14:21.819280 4167 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.819307 master-0 kubenswrapper[4167]: E0216 17:14:21.819294 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.819873 master-0 kubenswrapper[4167]: E0216 17:14:21.819713 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.819681163 +0000 UTC m=+5.550127551 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.923801 master-0 kubenswrapper[4167]: I0216 17:14:21.923749 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:21.924031 master-0 kubenswrapper[4167]: I0216 17:14:21.923847 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:21.924031 master-0 kubenswrapper[4167]: E0216 17:14:21.924005 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:21.924031 master-0 kubenswrapper[4167]: E0216 17:14:21.924029 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:21.924258 master-0 kubenswrapper[4167]: E0216 17:14:21.924041 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.924258 master-0 kubenswrapper[4167]: E0216 17:14:21.924050 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:21.924258 master-0 kubenswrapper[4167]: E0216 17:14:21.924054 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.924258 master-0 kubenswrapper[4167]: E0216 17:14:21.924063 4167 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.924258 master-0 kubenswrapper[4167]: E0216 17:14:21.924129 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:23.924108919 +0000 UTC m=+5.654555297 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:21.924258 master-0 kubenswrapper[4167]: E0216 17:14:21.924160 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:23.92414163 +0000 UTC m=+5.654588018 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.027947 master-0 kubenswrapper[4167]: I0216 17:14:22.027571 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:22.027947 master-0 kubenswrapper[4167]: E0216 17:14:22.027730 4167 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.027996 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: I0216 17:14:22.027978 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.028034 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:24.028021181 +0000 UTC m=+5.758467559 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.028044 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.028059 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.028069 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: I0216 17:14:22.028116 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.028187 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:24.028175065 +0000 UTC m=+5.758621443 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.028214 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.028223 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.028230 4167 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.028386 master-0 kubenswrapper[4167]: E0216 17:14:22.028260 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:24.028251577 +0000 UTC m=+5.758697955 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.131493 master-0 kubenswrapper[4167]: I0216 17:14:22.131449 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:22.131493 master-0 kubenswrapper[4167]: I0216 17:14:22.131487 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:22.131493 master-0 kubenswrapper[4167]: I0216 17:14:22.131509 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:22.131813 master-0 kubenswrapper[4167]: E0216 17:14:22.131687 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object 
"openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:22.131813 master-0 kubenswrapper[4167]: E0216 17:14:22.131702 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:22.131813 master-0 kubenswrapper[4167]: E0216 17:14:22.131711 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.131813 master-0 kubenswrapper[4167]: E0216 17:14:22.131750 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:24.131737467 +0000 UTC m=+5.862183835 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.131813 master-0 kubenswrapper[4167]: E0216 17:14:22.131749 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:22.131813 master-0 kubenswrapper[4167]: E0216 17:14:22.131790 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:22.131813 master-0 kubenswrapper[4167]: E0216 17:14:22.131789 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:22.131813 master-0 kubenswrapper[4167]: E0216 17:14:22.131809 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.131813 master-0 kubenswrapper[4167]: E0216 17:14:22.131820 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:22.132164 master-0 kubenswrapper[4167]: E0216 17:14:22.131831 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.132164 master-0 kubenswrapper[4167]: E0216 17:14:22.131888 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:24.13186381 +0000 UTC m=+5.862310218 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.132164 master-0 kubenswrapper[4167]: E0216 17:14:22.131920 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:24.131907962 +0000 UTC m=+5.862354380 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.233921 master-0 kubenswrapper[4167]: I0216 17:14:22.233871 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:22.233921 master-0 kubenswrapper[4167]: I0216 17:14:22.233914 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:22.234226 master-0 kubenswrapper[4167]: I0216 17:14:22.233995 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:22.234226 master-0 kubenswrapper[4167]: E0216 17:14:22.234169 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:22.234226 master-0 kubenswrapper[4167]: E0216 17:14:22.234212 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:22.234406 master-0 kubenswrapper[4167]: E0216 17:14:22.234233 4167 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object 
"openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.234406 master-0 kubenswrapper[4167]: E0216 17:14:22.234311 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:24.234282782 +0000 UTC m=+5.964729200 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.234406 master-0 kubenswrapper[4167]: E0216 17:14:22.234333 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:22.234406 master-0 kubenswrapper[4167]: E0216 17:14:22.234361 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:22.234406 master-0 kubenswrapper[4167]: E0216 17:14:22.234377 4167 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.234678 master-0 kubenswrapper[4167]: E0216 17:14:22.234424 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:24.234408455 +0000 UTC m=+5.964854843 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.234678 master-0 kubenswrapper[4167]: E0216 17:14:22.234523 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:22.234678 master-0 kubenswrapper[4167]: E0216 17:14:22.234544 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:22.234678 master-0 kubenswrapper[4167]: E0216 17:14:22.234556 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.234678 master-0 kubenswrapper[4167]: E0216 17:14:22.234590 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:24.23457899 +0000 UTC m=+5.965025378 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.338329 master-0 kubenswrapper[4167]: I0216 17:14:22.337862 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:22.339224 master-0 kubenswrapper[4167]: E0216 17:14:22.339196 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:22.339224 master-0 kubenswrapper[4167]: E0216 17:14:22.339221 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:22.339692 master-0 kubenswrapper[4167]: E0216 17:14:22.339233 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.339692 master-0 kubenswrapper[4167]: E0216 17:14:22.339279 4167 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:24.339263452 +0000 UTC m=+6.069709840 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:22.444266 master-0 kubenswrapper[4167]: I0216 17:14:22.444220 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:22.444506 master-0 kubenswrapper[4167]: E0216 17:14:22.444321 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:22.541737 master-0 kubenswrapper[4167]: I0216 17:14:22.541672 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/3.log" Feb 16 17:14:22.542534 master-0 kubenswrapper[4167]: I0216 17:14:22.542106 4167 generic.go:334] "Generic (PLEG): container finished" podID="702322ac-7610-4568-9a68-b6acbd1f0c12" containerID="86772b4198b375da8b261ca9d985412c379fd2e5f2c105fbe6e6bbfb8a419784" exitCode=255 Feb 16 17:14:22.542534 master-0 kubenswrapper[4167]: I0216 17:14:22.542143 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerDied","Data":"86772b4198b375da8b261ca9d985412c379fd2e5f2c105fbe6e6bbfb8a419784"} Feb 16 17:14:22.543150 master-0 kubenswrapper[4167]: I0216 17:14:22.543109 4167 scope.go:117] "RemoveContainer" containerID="86772b4198b375da8b261ca9d985412c379fd2e5f2c105fbe6e6bbfb8a419784" Feb 16 17:14:22.544643 master-0 kubenswrapper[4167]: I0216 17:14:22.544606 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerStarted","Data":"9f85722a140cc9cccf5e480580bd91d292bda981ade94e3a57f3d684826a3098"} Feb 16 17:14:22.544700 master-0 kubenswrapper[4167]: I0216 17:14:22.544647 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerStarted","Data":"d48ed75d01835cb567fe69a84180dd706bf0eb3a5743df83cbcb5e81501a9b38"} Feb 16 17:14:22.546990 master-0 kubenswrapper[4167]: I0216 17:14:22.546939 4167 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="96393b87f1556d3dbadb7885f2d7eb14d8ecd4df9a7dfa6725c3fcb9e35d3601" exitCode=0 Feb 16 17:14:22.547077 master-0 kubenswrapper[4167]: I0216 17:14:22.546989 4167 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"96393b87f1556d3dbadb7885f2d7eb14d8ecd4df9a7dfa6725c3fcb9e35d3601"} Feb 16 17:14:22.560642 master-0 kubenswrapper[4167]: I0216 17:14:22.560314 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 16 17:14:22.614924 master-0 kubenswrapper[4167]: I0216 17:14:22.613827 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:22.614924 master-0 kubenswrapper[4167]: I0216 17:14:22.614057 4167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:14:22.777347 master-0 kubenswrapper[4167]: I0216 17:14:22.777268 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:22.777347 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:22.777347 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:22.777347 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:22.777347 master-0 kubenswrapper[4167]: I0216 17:14:22.777336 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:23.290118 master-0 kubenswrapper[4167]: I0216 17:14:23.290026 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:23.290355 master-0 kubenswrapper[4167]: I0216 17:14:23.290161 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:23.290355 master-0 kubenswrapper[4167]: E0216 17:14:23.290208 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:23.290355 master-0 kubenswrapper[4167]: E0216 17:14:23.290237 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.290355 master-0 kubenswrapper[4167]: E0216 17:14:23.290257 4167 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.290355 master-0 kubenswrapper[4167]: I0216 17:14:23.290231 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:23.290355 master-0 kubenswrapper[4167]: E0216 17:14:23.290318 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.290297816 +0000 UTC m=+9.020744204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: I0216 17:14:23.290366 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: I0216 17:14:23.290432 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: E0216 17:14:23.290447 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: E0216 17:14:23.290473 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: E0216 17:14:23.290489 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: E0216 17:14:23.290510 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: E0216 17:14:23.290520 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.290506112 +0000 UTC m=+9.020952500 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: I0216 17:14:23.290522 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: E0216 17:14:23.290584 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.290558233 +0000 UTC m=+9.021004641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.290608 master-0 kubenswrapper[4167]: E0216 17:14:23.290597 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290644 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.290628455 +0000 UTC m=+9.021074903 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: I0216 17:14:23.290640 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: I0216 17:14:23.290699 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290585 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: I0216 17:14:23.290743 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290768 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290660 4167 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290786 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290800 4167 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290815 4167 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290726 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" 
not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290860 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290887 4167 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290827 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.29081385 +0000 UTC m=+9.021260288 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290910 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.291019 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.290935 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.290921073 +0000 UTC m=+9.021367531 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: I0216 17:14:23.291117 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.291177 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.29116556 +0000 UTC m=+9.021611948 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.291174 master-0 kubenswrapper[4167]: E0216 17:14:23.291189 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: I0216 17:14:23.291173 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291198 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.29118907 +0000 UTC m=+9.021635458 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291229 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291259 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.291243862 +0000 UTC m=+9.021690330 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: I0216 17:14:23.291337 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291395 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291409 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291420 4167 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: I0216 17:14:23.291449 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291463 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.291437387 +0000 UTC m=+9.021883795 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291530 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0 podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.291500239 +0000 UTC m=+9.021946687 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291535 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: I0216 17:14:23.291572 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291609 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.291588631 +0000 UTC m=+9.022035259 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291661 4167 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291692 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.291673633 +0000 UTC m=+9.022120151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: I0216 17:14:23.291660 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291723 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.291708044 +0000 UTC m=+9.022154532 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291726 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: I0216 17:14:23.291759 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291770 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.291759006 +0000 UTC m=+9.022205394 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291882 4167 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: I0216 17:14:23.291896 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: I0216 17:14:23.291938 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:23.291986 master-0 kubenswrapper[4167]: E0216 17:14:23.291954 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.29193309 +0000 UTC m=+9.022379508 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292034 4167 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292057 4167 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292067 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.292056884 +0000 UTC m=+9.022503272 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: I0216 17:14:23.292056 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292105 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.292090685 +0000 UTC m=+9.022537103 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: I0216 17:14:23.292137 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292178 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292214 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292242 4167 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292282 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.292269459 +0000 UTC m=+9.022715877 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: I0216 17:14:23.292184 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292396 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: I0216 17:14:23.292439 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292475 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.292452144 +0000 UTC m=+9.022898592 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292504 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: I0216 17:14:23.292530 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292555 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.292531857 +0000 UTC m=+9.022978265 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: I0216 17:14:23.292601 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: I0216 17:14:23.292668 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: I0216 17:14:23.292735 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292665 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292814 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.292797744 +0000 UTC m=+9.023244192 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292819 4167 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292819 4167 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292851 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292865 4167 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292874 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292740 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292924 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292935 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.293007 master-0 kubenswrapper[4167]: E0216 17:14:23.292908 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.292894296 +0000 UTC m=+9.023340754 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293034 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293091 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293092 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.293077731 +0000 UTC m=+9.023524149 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293127 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.293115622 +0000 UTC m=+9.023562040 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293173 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293213 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.293198805 +0000 UTC m=+9.023645213 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293236 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.293225775 +0000 UTC m=+9.023672193 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293287 4167 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293342 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.293327878 +0000 UTC m=+9.023774296 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293376 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293434 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293482 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293531 4167 secret.go:189] Couldn't get secret 
openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293563 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293581 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.293567615 +0000 UTC m=+9.024014023 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293620 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293634 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293671 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.293658207 +0000 UTC m=+9.024104625 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293702 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293743 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293746 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293766 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293783 4167 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293799 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293825 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.293811531 +0000 UTC m=+9.024257959 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293871 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293883 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.293911 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.293899244 +0000 UTC m=+9.024345662 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.293940 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.294009 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294028 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.294048 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294074 4167 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294061098 +0000 UTC m=+9.024507516 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.294114 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294130 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: I0216 17:14:23.294165 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294172 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294160041 +0000 UTC m=+9.024606449 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294240 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294261 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294277 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294264883 +0000 UTC m=+9.024711291 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294300 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294288174 +0000 UTC m=+9.024734592 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294311 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:23.294303 master-0 kubenswrapper[4167]: E0216 17:14:23.294346 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294334935 +0000 UTC m=+9.024781343 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294376 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.294385 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.294429 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.294467 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294395 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294535 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294572 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294595 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294611 4167 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294635 4167 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294457 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294708 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294725 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294747 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294575 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294562141 +0000 UTC m=+9.025008549 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294797 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294784387 +0000 UTC m=+9.025230805 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294818 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294808378 +0000 UTC m=+9.025254786 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294840 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294829039 +0000 UTC m=+9.025275457 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294494 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.294890 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294913 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.294897431 +0000 UTC m=+9.025343849 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.294948 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294994 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295012 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.295000133 +0000 UTC m=+9.025446541 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294709 4167 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295060 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295077 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.295064665 +0000 UTC m=+9.025511083 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295099 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.295087516 +0000 UTC m=+9.025533924 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295131 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295139 4167 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295171 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295177 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.295165678 +0000 UTC m=+9.025612096 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295223 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295251 4167 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295275 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295279 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295295 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295322 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295351 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.295331922 +0000 UTC m=+9.025778420 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295405 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295403 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295446 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.295434935 +0000 UTC m=+9.025881353 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295479 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295520 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295560 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295601 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295501 4167 configmap.go:193] Couldn't get configMap 
openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.294839 4167 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295724 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.295712233 +0000 UTC m=+9.026158641 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295641 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295839 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295882 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295925 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.295995 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.296039 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod 
\"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.296075 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295844 4167 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.295917 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296168 4167 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296070 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296141 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296125274 +0000 UTC m=+9.026571692 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296249 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296258 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296246017 +0000 UTC m=+9.026692425 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296323 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296380 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.2963513 +0000 UTC m=+9.026797728 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296417 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296417 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296398921 +0000 UTC m=+9.026845389 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296382 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296506 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296511 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296491704 +0000 UTC m=+9.026938122 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296530 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296545 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296551 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296533965 +0000 UTC m=+9.026980413 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296585 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296568976 +0000 UTC m=+9.027015394 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296598 4167 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296635 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296607367 +0000 UTC m=+9.027053795 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296676 4167 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296681 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296659378 +0000 UTC m=+9.027105816 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296717 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296703249 +0000 UTC m=+9.027149777 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296749 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.29673348 +0000 UTC m=+9.027179938 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296780 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296800 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.296772521 +0000 UTC m=+9.027218939 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296906 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.297103 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.296979 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: I0216 17:14:23.297179 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.297204 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.297179062 +0000 UTC m=+9.027625460 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:23.297073 master-0 kubenswrapper[4167]: E0216 17:14:23.297216 4167 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297024 4167 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297264 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.297249714 +0000 UTC m=+9.027696132 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.297256 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297298 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.297275505 +0000 UTC m=+9.027721983 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297334 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297342 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.297374 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297392 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.297379628 +0000 UTC m=+9.027826046 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297454 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297467 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.297445069 +0000 UTC m=+9.027891487 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297549 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.29748186 +0000 UTC m=+9.027928278 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297580 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.297566613 +0000 UTC m=+9.028013031 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297604 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.297591583 +0000 UTC m=+9.028037991 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.297653 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.297712 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.297781 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.297829 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297794 4167 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297903 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.297888511 +0000 UTC m=+9.028334929 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297846 4167 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297938 4167 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298033 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.297983 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.297945893 +0000 UTC m=+9.028392311 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.298198 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298261 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.298263 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298311 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.298295443 +0000 UTC m=+9.028741851 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.298348 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.298416 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.298453 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298492 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.298540 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298556 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.298537879 +0000 UTC m=+9.028984297 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298622 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.298622 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298674 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.298656712 +0000 UTC m=+9.029103150 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298718 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298762 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.298748225 +0000 UTC m=+9.029194643 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298762 4167 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298811 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.298799076 +0000 UTC m=+9.029245494 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.298717 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298838 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.298827207 +0000 UTC m=+9.029273625 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298846 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298866 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.298855598 +0000 UTC m=+9.029302016 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298883 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298869 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298938 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.298922359 +0000 UTC m=+9.029368857 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.298990 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.29895196 +0000 UTC m=+9.029398448 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.299023 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299063 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.299051083 +0000 UTC m=+9.029497561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299112 4167 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299164 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.299147006 +0000 UTC m=+9.029593424 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.299202 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.299250 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.299296 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299366 4167 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299388 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299413 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.299396832 +0000 UTC m=+9.029843230 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299441 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.299426003 +0000 UTC m=+9.029872411 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299504 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.299533 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299579 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.299557167 +0000 UTC m=+9.030003585 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299618 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.299634 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299674 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.299655249 +0000 UTC m=+9.030101667 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.299727 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299743 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299808 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.299791143 +0000 UTC m=+9.030237561 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.299799 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299913 4167 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.299934 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300022 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.300000139 +0000 UTC m=+9.030446577 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.300010 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300070 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.30004293 +0000 UTC m=+9.030489358 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300087 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300163 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.300141452 +0000 UTC m=+9.030587880 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.300228 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.300301 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.300369 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300414 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.300435 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300457 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.300500 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300512 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.300496932 +0000 UTC m=+9.030943330 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300564 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300572 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.300551004 +0000 UTC m=+9.030997462 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300591 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300601 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.300589685 +0000 UTC m=+9.031036173 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300663 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.300647796 +0000 UTC m=+9.031094214 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300693 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300753 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.300737049 +0000 UTC m=+9.031183507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.300816 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.300885 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.300977 4167 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.300999 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301027 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.301012996 +0000 UTC m=+9.031459454 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301097 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301104 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.301129 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301141 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.301127899 +0000 UTC m=+9.031574307 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301177 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.30116615 +0000 UTC m=+9.031612538 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301182 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301221 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.301209631 +0000 UTC m=+9.031656039 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.301280 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.301321 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.301361 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.301404 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.301443 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.301543 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: I0216 17:14:23.301583 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301606 4167 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301695 4167 configmap.go:193] Couldn't 
get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301739 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.301725825 +0000 UTC m=+9.032172243 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:23.301669 master-0 kubenswrapper[4167]: E0216 17:14:23.301808 4167 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.301849 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.301821328 +0000 UTC m=+9.032267746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.301879 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.301932 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.301940 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.301877219 +0000 UTC m=+9.032323687 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302085 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.301643 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302018 4167 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302196 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302396 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.302091215 +0000 UTC m=+9.032537643 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302468 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.302442585 +0000 UTC m=+9.032889013 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302507 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.302494236 +0000 UTC m=+9.032940724 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.302563 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302621 4167 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302638 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302673 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.302663131 +0000 UTC m=+9.033109519 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.302632 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.302721 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302733 4167 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302770 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.302737083 +0000 UTC m=+9.033183521 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302794 4167 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302819 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.302800174 +0000 UTC m=+9.033246592 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302854 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.302837675 +0000 UTC m=+9.033284103 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.302895 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.302879067 +0000 UTC m=+9.033325505 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.302949 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303056 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303081 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303111 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303147 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303129963 +0000 UTC m=+9.033576381 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303193 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303224 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303260 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303283 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303296 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303275367 +0000 UTC m=+9.033721825 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303323 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303313248 +0000 UTC m=+9.033759636 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303351 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303408 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303358 4167 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303437 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303445 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303510 4167 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303381 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303466 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303569 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303572 4167 secret.go:189] Couldn't get 
secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303511 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303497443 +0000 UTC m=+9.033943871 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303660 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303642467 +0000 UTC m=+9.034088885 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303414 4167 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303684 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303672888 +0000 UTC m=+9.034119306 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303733 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303812 4167 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303844 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303833512 +0000 UTC m=+9.034279900 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303860 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303853883 +0000 UTC m=+9.034300271 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303805 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303876 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303869613 +0000 UTC m=+9.034316001 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303881 4167 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303893 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303886134 +0000 UTC m=+9.034332522 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.303918 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.303933 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.303917525 +0000 UTC m=+9.034363943 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304008 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304047 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.304038598 +0000 UTC m=+9.034484986 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304053 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304129 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304151 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304180 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.304170161 +0000 UTC m=+9.034616549 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304199 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304240 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304266 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304294 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304310 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304361 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.304346206 +0000 UTC m=+9.034792644 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304374 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304420 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304450 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.304427288 +0000 UTC m=+9.034873766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304383 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304488 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.30446961 +0000 UTC m=+9.034916068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304502 4167 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304520 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.304506061 +0000 UTC m=+9.034952549 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304549 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.304533531 +0000 UTC m=+9.034979939 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304583 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304682 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304721 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304796 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304824 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304835 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.304823429 +0000 UTC m=+9.035269837 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304873 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304880 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.30486584 +0000 UTC m=+9.035312228 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304925 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.304993 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.304980913 +0000 UTC m=+9.035427301 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.304924 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305016 4167 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305071 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.305057495 +0000 UTC m=+9.035503993 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305082 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305119 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.305106237 +0000 UTC m=+9.035552655 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.305164 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.305222 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.305272 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305310 4167 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305356 4167 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.305364 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305401 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.305388694 +0000 UTC m=+9.035835102 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305406 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.305439 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305454 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.305437826 +0000 UTC m=+9.035884304 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305464 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305536 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.305560 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305611 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305685 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.30560868 +0000 UTC m=+9.036055118 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.305751 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.305729324 +0000 UTC m=+9.036175792 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.305992 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.306034 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.306051 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.306082 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.306067913 +0000 UTC m=+9.036514351 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.306115 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.306106764 +0000 UTC m=+9.036553152 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.306147 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.306125604 +0000 UTC m=+9.036572012 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.306169 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: E0216 17:14:23.306250 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.306228987 +0000 UTC m=+9.036675425 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:23.309219 master-0 kubenswrapper[4167]: I0216 17:14:23.306301 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.306376 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.306520 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.306582 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.306602 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.306578977 +0000 UTC m=+9.037025395 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.306642 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.306624508 +0000 UTC m=+9.037070956 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.306684 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.306740 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.306777 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.306841 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.306822503 +0000 UTC m=+9.037268911 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.306785 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.306905 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.306910 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.306933 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307025 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307070 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307112 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.30708388 +0000 UTC m=+9.037530308 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307129 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307165 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307166 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307175 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307217 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307194703 +0000 UTC m=+9.037641131 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307282 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307290 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307336 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307319787 +0000 UTC m=+9.037766225 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307371 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307412 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307420 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307437 4167 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307470 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307521 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307476841 +0000 UTC m=+9.037923229 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307528 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307553 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307539923 +0000 UTC m=+9.037986421 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307558 4167 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307573 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307563153 +0000 UTC m=+9.038009621 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307579 4167 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307594 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307582094 +0000 UTC m=+9.038028482 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307595 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307608 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307636 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307653 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:27.307636345 +0000 UTC m=+9.038082733 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307687 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307668196 +0000 UTC m=+9.038114624 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307709 4167 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307729 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307743 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307735158 +0000 UTC m=+9.038181546 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307772 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307759519 +0000 UTC m=+9.038205937 (durationBeforeRetry 4s). 
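Annotation: each failure arms a per-volume retry deadline ("No retries permitted until ... (durationBeforeRetry 4s)"). The uniform 4s waits are consistent with capped exponential backoff seeded at 500ms and doubled per consecutive failure, so a fourth failure yields 4s. The seed and 2m2s cap below mirror kubelet's exponential-backoff defaults but should be read as assumptions:

    // A minimal sketch of capped exponential backoff behind durationBeforeRetry.
    package main

    import (
        "fmt"
        "time"
    )

    func durationBeforeRetry(failures int) time.Duration {
        const (
            initial = 500 * time.Millisecond       // assumed seed
            cap     = 2*time.Minute + 2*time.Second // assumed ceiling
        )
        d := initial
        for i := 1; i < failures; i++ {
            d *= 2
            if d > cap {
                return cap
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 5; n++ {
            fmt.Printf("attempt %d -> wait %v\n", n, durationBeforeRetry(n))
        }
        // attempt 4 -> wait 4s, matching the "(durationBeforeRetry 4s)" entries.
    }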
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307793 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307816 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.30780967 +0000 UTC m=+9.038256058 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307842 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307873 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.307897 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307938 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.307987 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.307980155 +0000 UTC m=+9.038426543 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.308012 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.308038 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308060 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308089 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308113 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308138 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308159 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.308144579 +0000 UTC m=+9.038590997 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308184 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.30817262 +0000 UTC m=+9.038619038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308190 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308202 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308212 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308237 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.308227531 +0000 UTC m=+9.038673919 (durationBeforeRetry 4s). 
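Annotation: the nestedpendingoperations entries also show why retries are throttled per volume rather than per log line: operations are keyed by {volumeName, podName}, at most one runs at a time, and a failed key is locked out until its retry deadline passes. A toy model of that bookkeeping (illustrative only, not kubelet's implementation):

    package main

    import (
        "fmt"
        "time"
    )

    type pendingOps struct {
        retryAfter map[string]time.Time // volumeName -> earliest next attempt
    }

    func (p *pendingOps) tryStart(volume string, now time.Time) error {
        if t, ok := p.retryAfter[volume]; ok && now.Before(t) {
            return fmt.Errorf("no retries permitted until %s", t.Format(time.RFC3339Nano))
        }
        return nil // operation may start
    }

    func (p *pendingOps) markFailed(volume string, now time.Time, backoff time.Duration) {
        p.retryAfter[volume] = now.Add(backoff)
    }

    func main() {
        p := &pendingOps{retryAfter: map[string]time.Time{}}
        now := time.Now()
        p.markFailed("kubernetes.io/secret/example-metrics-certs", now, 4*time.Second)
        // An attempt one second later is rejected, as in the log:
        fmt.Println(p.tryStart("kubernetes.io/secret/example-metrics-certs", now.Add(time.Second)))
    }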
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308243 4167 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308117 4167 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308263 4167 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308271 4167 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308278 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.308267332 +0000 UTC m=+9.038713760 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.308064 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308294 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.308287713 +0000 UTC m=+9.038734101 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.308348 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.308391 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308530 4167 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308571 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.30855718 +0000 UTC m=+9.039003598 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308641 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308659 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308675 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: E0216 17:14:23.308785 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.308769846 +0000 UTC m=+9.039216264 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.314106 master-0 kubenswrapper[4167]: I0216 17:14:23.312810 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:23.411757 master-0 kubenswrapper[4167]: I0216 17:14:23.411695 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:23.411950 master-0 kubenswrapper[4167]: I0216 17:14:23.411793 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:23.412055 master-0 kubenswrapper[4167]: E0216 17:14:23.412012 4167 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.412098 master-0 kubenswrapper[4167]: E0216 
17:14:23.412059 4167 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.412098 master-0 kubenswrapper[4167]: E0216 17:14:23.412082 4167 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.412156 master-0 kubenswrapper[4167]: E0216 17:14:23.412079 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:23.412156 master-0 kubenswrapper[4167]: E0216 17:14:23.412146 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.412219 master-0 kubenswrapper[4167]: E0216 17:14:23.412160 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.412313 master-0 kubenswrapper[4167]: E0216 17:14:23.412278 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.412252906 +0000 UTC m=+9.142699324 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.412313 master-0 kubenswrapper[4167]: E0216 17:14:23.412313 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.412306838 +0000 UTC m=+9.142753216 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.412406 master-0 kubenswrapper[4167]: I0216 17:14:23.412387 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:23.412471 master-0 kubenswrapper[4167]: I0216 17:14:23.412454 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:23.412660 master-0 kubenswrapper[4167]: E0216 17:14:23.412609 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.412660 master-0 kubenswrapper[4167]: E0216 17:14:23.412651 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: I0216 17:14:23.412687 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.412666 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.412891 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.412913 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.412923 4167 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object 
"openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.412939 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.412914684 +0000 UTC m=+9.143361102 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.412984 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.412954495 +0000 UTC m=+9.143400873 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.413218 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.413232 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.413240 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.413354 master-0 kubenswrapper[4167]: E0216 17:14:23.413271 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.413261213 +0000 UTC m=+9.143707661 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.444924 master-0 kubenswrapper[4167]: I0216 17:14:23.444857 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:23.444924 master-0 kubenswrapper[4167]: I0216 17:14:23.444886 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:23.444924 master-0 kubenswrapper[4167]: I0216 17:14:23.444898 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: I0216 17:14:23.445021 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: E0216 17:14:23.445014 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: I0216 17:14:23.445075 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: I0216 17:14:23.445086 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: I0216 17:14:23.445098 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: E0216 17:14:23.445145 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: I0216 17:14:23.445169 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: I0216 17:14:23.445171 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: I0216 17:14:23.445190 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:23.445225 master-0 kubenswrapper[4167]: I0216 17:14:23.445205 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:23.445650 master-0 kubenswrapper[4167]: I0216 17:14:23.445248 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:23.445650 master-0 kubenswrapper[4167]: I0216 17:14:23.445269 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:23.445650 master-0 kubenswrapper[4167]: I0216 17:14:23.445286 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:23.445650 master-0 kubenswrapper[4167]: I0216 17:14:23.445304 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:23.445650 master-0 kubenswrapper[4167]: I0216 17:14:23.445301 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:23.445915 master-0 kubenswrapper[4167]: E0216 17:14:23.445881 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:23.445994 master-0 kubenswrapper[4167]: I0216 17:14:23.445920 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:23.445994 master-0 kubenswrapper[4167]: I0216 17:14:23.445926 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:23.445994 master-0 kubenswrapper[4167]: I0216 17:14:23.445971 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:23.445994 master-0 kubenswrapper[4167]: I0216 17:14:23.445939 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:23.445994 master-0 kubenswrapper[4167]: I0216 17:14:23.445990 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.445971 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.445998 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446014 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446043 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446045 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446046 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446065 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446072 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446083 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446095 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446100 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446117 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446120 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446123 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446134 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446123 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446144 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446150 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446160 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446175 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:23.446196 master-0 kubenswrapper[4167]: I0216 17:14:23.446175 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: E0216 17:14:23.446269 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446299 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446308 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446304 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446315 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446330 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446343 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446347 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446334 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446359 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446364 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446375 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446387 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: E0216 17:14:23.446424 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446453 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446469 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446492 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446491 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446505 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: E0216 17:14:23.446558 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446578 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446582 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: I0216 17:14:23.446603 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: E0216 17:14:23.446692 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: E0216 17:14:23.446761 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: E0216 17:14:23.446815 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:14:23.446883 master-0 kubenswrapper[4167]: E0216 17:14:23.446879 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447000 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447079 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447142 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447182 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447276 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447323 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447396 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447511 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447596 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447656 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447715 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447768 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:14:23.447827 master-0 kubenswrapper[4167]: E0216 17:14:23.447805 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:14:23.448216 master-0 kubenswrapper[4167]: E0216 17:14:23.447890 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:14:23.448216 master-0 kubenswrapper[4167]: E0216 17:14:23.447978 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:14:23.448216 master-0 kubenswrapper[4167]: E0216 17:14:23.448036 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:23.448216 master-0 kubenswrapper[4167]: E0216 17:14:23.448133 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:23.448354 master-0 kubenswrapper[4167]: E0216 17:14:23.448303 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:14:23.448390 master-0 kubenswrapper[4167]: E0216 17:14:23.448367 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:23.448514 master-0 kubenswrapper[4167]: E0216 17:14:23.448484 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:14:23.448609 master-0 kubenswrapper[4167]: E0216 17:14:23.448561 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:14:23.448678 master-0 kubenswrapper[4167]: E0216 17:14:23.448655 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:14:23.448777 master-0 kubenswrapper[4167]: E0216 17:14:23.448717 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:14:23.448809 master-0 kubenswrapper[4167]: E0216 17:14:23.448795 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:14:23.448874 master-0 kubenswrapper[4167]: E0216 17:14:23.448853 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:14:23.448932 master-0 kubenswrapper[4167]: E0216 17:14:23.448910 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:14:23.449034 master-0 kubenswrapper[4167]: E0216 17:14:23.449010 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:14:23.449141 master-0 kubenswrapper[4167]: E0216 17:14:23.449112 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:14:23.449340 master-0 kubenswrapper[4167]: E0216 17:14:23.449178 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:14:23.449340 master-0 kubenswrapper[4167]: E0216 17:14:23.449292 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:14:23.449427 master-0 kubenswrapper[4167]: E0216 17:14:23.449403 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:14:23.449459 master-0 kubenswrapper[4167]: E0216 17:14:23.449430 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:14:23.449630 master-0 kubenswrapper[4167]: E0216 17:14:23.449492 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:14:23.449630 master-0 kubenswrapper[4167]: E0216 17:14:23.449600 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:14:23.450104 master-0 kubenswrapper[4167]: E0216 17:14:23.449843 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" Feb 16 17:14:23.450104 master-0 kubenswrapper[4167]: E0216 17:14:23.450035 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" Feb 16 17:14:23.450210 master-0 kubenswrapper[4167]: E0216 17:14:23.450145 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:14:23.450262 master-0 kubenswrapper[4167]: E0216 17:14:23.450243 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:23.450437 master-0 kubenswrapper[4167]: E0216 17:14:23.450306 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:23.450437 master-0 kubenswrapper[4167]: E0216 17:14:23.450402 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:23.450551 master-0 kubenswrapper[4167]: E0216 17:14:23.450525 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:14:23.450723 master-0 kubenswrapper[4167]: E0216 17:14:23.450598 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:14:23.450723 master-0 kubenswrapper[4167]: E0216 17:14:23.450689 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:14:23.450834 master-0 kubenswrapper[4167]: E0216 17:14:23.450736 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:14:23.450834 master-0 kubenswrapper[4167]: E0216 17:14:23.450793 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:14:23.450913 master-0 kubenswrapper[4167]: E0216 17:14:23.450854 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:23.450952 master-0 kubenswrapper[4167]: E0216 17:14:23.450915 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:14:23.451007 master-0 kubenswrapper[4167]: E0216 17:14:23.450952 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:14:23.451054 master-0 kubenswrapper[4167]: E0216 17:14:23.451014 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:23.451099 master-0 kubenswrapper[4167]: E0216 17:14:23.451055 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:14:23.451142 master-0 kubenswrapper[4167]: E0216 17:14:23.451115 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:14:23.451183 master-0 kubenswrapper[4167]: E0216 17:14:23.451164 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:14:23.515854 master-0 kubenswrapper[4167]: I0216 17:14:23.515784 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:23.515854 master-0 kubenswrapper[4167]: I0216 17:14:23.515861 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:23.516417 master-0 kubenswrapper[4167]: E0216 17:14:23.516257 4167 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.516417 master-0 kubenswrapper[4167]: E0216 17:14:23.516279 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.516417 master-0 kubenswrapper[4167]: E0216 17:14:23.516350 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.516333682 +0000 UTC m=+9.246780060 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.516417 master-0 kubenswrapper[4167]: E0216 17:14:23.516402 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.516417 master-0 kubenswrapper[4167]: E0216 17:14:23.516422 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.516639 master-0 kubenswrapper[4167]: E0216 17:14:23.516433 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.516639 master-0 kubenswrapper[4167]: E0216 17:14:23.516473 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.516462456 +0000 UTC m=+9.246908834 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.518187 master-0 kubenswrapper[4167]: I0216 17:14:23.518154 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:23.518298 master-0 kubenswrapper[4167]: E0216 17:14:23.518270 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.518355 master-0 kubenswrapper[4167]: E0216 17:14:23.518303 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.518355 master-0 kubenswrapper[4167]: E0216 17:14:23.518321 4167 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.518446 master-0 kubenswrapper[4167]: E0216 
17:14:23.518366 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.518351327 +0000 UTC m=+9.248797725 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.518647 master-0 kubenswrapper[4167]: I0216 17:14:23.518609 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:23.518723 master-0 kubenswrapper[4167]: I0216 17:14:23.518668 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:23.518832 master-0 kubenswrapper[4167]: E0216 17:14:23.518799 4167 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.518907 master-0 kubenswrapper[4167]: E0216 17:14:23.518839 4167 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.518907 master-0 kubenswrapper[4167]: E0216 17:14:23.518855 4167 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.518907 master-0 kubenswrapper[4167]: E0216 17:14:23.518888 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:23.519065 master-0 kubenswrapper[4167]: E0216 17:14:23.518911 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.519065 master-0 kubenswrapper[4167]: E0216 17:14:23.518924 4167 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.519065 master-0 kubenswrapper[4167]: E0216 17:14:23.518986 4167 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.518972484 +0000 UTC m=+9.249418872 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.519065 master-0 kubenswrapper[4167]: E0216 17:14:23.519007 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.519000034 +0000 UTC m=+9.249446412 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.553616 master-0 kubenswrapper[4167]: I0216 17:14:23.553554 4167 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="119cdc0c316bb0c22884b04a73b0f44ee539b508b79917ed7ba1ed5c5cd65b51" exitCode=0 Feb 16 17:14:23.556363 master-0 kubenswrapper[4167]: I0216 17:14:23.553635 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"119cdc0c316bb0c22884b04a73b0f44ee539b508b79917ed7ba1ed5c5cd65b51"} Feb 16 17:14:23.556363 master-0 kubenswrapper[4167]: I0216 17:14:23.555698 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/4.log" Feb 16 17:14:23.556363 master-0 kubenswrapper[4167]: I0216 17:14:23.556187 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/3.log" Feb 16 17:14:23.557670 master-0 kubenswrapper[4167]: I0216 17:14:23.556751 4167 generic.go:334] "Generic (PLEG): container finished" podID="702322ac-7610-4568-9a68-b6acbd1f0c12" containerID="15f7dbfc42911fc9756e060ae848e6787711a1dbc91f05fce2dd151ae4191fab" exitCode=255 Feb 16 17:14:23.557670 master-0 kubenswrapper[4167]: I0216 17:14:23.556786 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerDied","Data":"15f7dbfc42911fc9756e060ae848e6787711a1dbc91f05fce2dd151ae4191fab"} Feb 16 17:14:23.557670 master-0 kubenswrapper[4167]: I0216 17:14:23.556837 4167 scope.go:117] "RemoveContainer" containerID="86772b4198b375da8b261ca9d985412c379fd2e5f2c105fbe6e6bbfb8a419784" Feb 16 17:14:23.557670 master-0 
kubenswrapper[4167]: I0216 17:14:23.556897 4167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:14:23.557670 master-0 kubenswrapper[4167]: I0216 17:14:23.557417 4167 scope.go:117] "RemoveContainer" containerID="15f7dbfc42911fc9756e060ae848e6787711a1dbc91f05fce2dd151ae4191fab" Feb 16 17:14:23.557670 master-0 kubenswrapper[4167]: E0216 17:14:23.557588 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-approver-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=machine-approver-controller pod=machine-approver-8569dd85ff-4vxmz_openshift-cluster-machine-approver(702322ac-7610-4568-9a68-b6acbd1f0c12)\"" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" podUID="702322ac-7610-4568-9a68-b6acbd1f0c12" Feb 16 17:14:23.561749 master-0 kubenswrapper[4167]: I0216 17:14:23.561628 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:14:23.593153 master-0 kubenswrapper[4167]: E0216 17:14:23.592814 4167 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:14:23.620399 master-0 kubenswrapper[4167]: I0216 17:14:23.620329 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:23.620828 master-0 kubenswrapper[4167]: I0216 17:14:23.620790 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:23.620903 master-0 kubenswrapper[4167]: E0216 17:14:23.620832 4167 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.620903 master-0 kubenswrapper[4167]: E0216 17:14:23.620866 4167 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.620903 master-0 kubenswrapper[4167]: E0216 17:14:23.620904 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.621099 master-0 kubenswrapper[4167]: E0216 17:14:23.621046 4167 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:14:23.621099 master-0 kubenswrapper[4167]: E0216 17:14:23.621076 4167 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object 
"openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.621099 master-0 kubenswrapper[4167]: E0216 17:14:23.621094 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.621286 master-0 kubenswrapper[4167]: E0216 17:14:23.621157 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.621135788 +0000 UTC m=+9.351582176 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.621361 master-0 kubenswrapper[4167]: E0216 17:14:23.621348 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.621337494 +0000 UTC m=+9.351783882 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.729016 master-0 kubenswrapper[4167]: I0216 17:14:23.728954 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:23.729016 master-0 kubenswrapper[4167]: I0216 17:14:23.729016 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:23.729238 master-0 kubenswrapper[4167]: E0216 17:14:23.729140 4167 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.729238 master-0 kubenswrapper[4167]: E0216 17:14:23.729161 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.729238 master-0 kubenswrapper[4167]: E0216 17:14:23.729223 4167 
projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:23.729238 master-0 kubenswrapper[4167]: E0216 17:14:23.729233 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.729238 master-0 kubenswrapper[4167]: E0216 17:14:23.729232 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.729190042 +0000 UTC m=+9.459636420 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:23.729238 master-0 kubenswrapper[4167]: E0216 17:14:23.729240 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.729423 master-0 kubenswrapper[4167]: E0216 17:14:23.729269 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.729262564 +0000 UTC m=+9.459708942 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.753830 master-0 kubenswrapper[4167]: I0216 17:14:23.753746 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:23.753830 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:23.753830 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:23.753830 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:23.753830 master-0 kubenswrapper[4167]: I0216 17:14:23.753825 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:23.834223 master-0 kubenswrapper[4167]: I0216 17:14:23.834165 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:23.834431 master-0 kubenswrapper[4167]: E0216 17:14:23.834325 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:23.834431 master-0 kubenswrapper[4167]: E0216 17:14:23.834357 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.834431 master-0 kubenswrapper[4167]: E0216 17:14:23.834366 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:23.834563 master-0 kubenswrapper[4167]: I0216 17:14:23.834450 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:23.834563 master-0 kubenswrapper[4167]: E0216 17:14:23.834536 4167 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:14:23.834563 master-0 kubenswrapper[4167]: E0216 17:14:23.834555 4167 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:14:23.834563 master-0 kubenswrapper[4167]: E0216 
17:14:23.834565 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:23.834721 master-0 kubenswrapper[4167]: E0216 17:14:23.834630 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.834608664 +0000 UTC m=+9.565055042 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:23.834721 master-0 kubenswrapper[4167]: E0216 17:14:23.834650 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.834644385 +0000 UTC m=+9.565090763 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:23.931110 master-0 kubenswrapper[4167]: I0216 17:14:23.930978 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=5.9309359310000005 podStartE2EDuration="5.930935931s" podCreationTimestamp="2026-02-16 17:14:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:14:23.930387186 +0000 UTC m=+5.660833574" watchObservedRunningTime="2026-02-16 17:14:23.930935931 +0000 UTC m=+5.661382309"
Feb 16 17:14:23.937785 master-0 kubenswrapper[4167]: I0216 17:14:23.937717 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:23.938020 master-0 kubenswrapper[4167]: I0216 17:14:23.937801 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:14:23.938250 master-0 kubenswrapper[4167]: E0216 17:14:23.938219 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Feb 16 17:14:23.938342 master-0 kubenswrapper[4167]: E0216 17:14:23.938329 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Feb 16 17:14:23.938422 master-0 kubenswrapper[4167]: E0216 17:14:23.938411 4167 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:23.938544 master-0 kubenswrapper[4167]: E0216 17:14:23.938506 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:23.938544 master-0 kubenswrapper[4167]: E0216 17:14:23.938539 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:23.938638 master-0 kubenswrapper[4167]: E0216 17:14:23.938554 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:23.938638 master-0 kubenswrapper[4167]: E0216 17:14:23.938620 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.938602068 +0000 UTC m=+9.669048466 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:23.938701 master-0 kubenswrapper[4167]: E0216 17:14:23.938645 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:27.938636049 +0000 UTC m=+9.669082437 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.042772 master-0 kubenswrapper[4167]: I0216 17:14:24.042707 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:14:24.043041 master-0 kubenswrapper[4167]: E0216 17:14:24.042902 4167 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:24.043041 master-0 kubenswrapper[4167]: E0216 17:14:24.042937 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:24.043127 master-0 kubenswrapper[4167]: I0216 17:14:24.043075 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:14:24.043127 master-0 kubenswrapper[4167]: E0216 17:14:24.043115 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.043089275 +0000 UTC m=+9.773535683 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
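
Every failure in this burst has the same shape: a generated kube-api-access-* projected volume cannot be set up because the namespace's kube-root-ca.crt and openshift-service-ca.crt ConfigMaps are not yet registered with the kubelet's volume manager, which is typically transient this early in node startup, before the kubelet's caches have synced. As a rough sketch of what the kubelet is trying to assemble (field names from k8s.io/api/core/v1; the exact composition of the generated volume is inferred from the log, not taken from kubelet source), such a volume projects the token plus the two ConfigMaps named in the errors:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeAPIAccessVolume is an illustrative reconstruction of a generated
// "kube-api-access-*" projected volume. The composition is inferred from
// the "not registered" errors above, not from the kubelet source.
func kubeAPIAccessVolume(name string) corev1.Volume {
	expiry := int64(3607) // assumption: ~1h token lifetime, as in generated pods
	return corev1.Volume{
		Name: name, // e.g. "kube-api-access-dzpnw"
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: &expiry,
					}},
					// The two objects the kubelet reports as "not registered":
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
					}},
				},
			},
		},
	}
}

func main() {
	v := kubeAPIAccessVolume("kube-api-access-dzpnw")
	fmt.Println(v.Name, "projects", len(v.VolumeSource.Projected.Sources), "sources")
}
```
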
Feb 16 17:14:24.043269 master-0 kubenswrapper[4167]: I0216 17:14:24.043241 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:14:24.043325 master-0 kubenswrapper[4167]: E0216 17:14:24.043264 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Feb 16 17:14:24.043325 master-0 kubenswrapper[4167]: E0216 17:14:24.043292 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:24.043325 master-0 kubenswrapper[4167]: E0216 17:14:24.043313 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.043442 master-0 kubenswrapper[4167]: E0216 17:14:24.043370 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Feb 16 17:14:24.043442 master-0 kubenswrapper[4167]: E0216 17:14:24.043388 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Feb 16 17:14:24.043442 master-0 kubenswrapper[4167]: E0216 17:14:24.043397 4167 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.043668 master-0 kubenswrapper[4167]: E0216 17:14:24.043452 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.043430804 +0000 UTC m=+9.773877182 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.043668 master-0 kubenswrapper[4167]: E0216 17:14:24.043484 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.043474255 +0000 UTC m=+9.773920633 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.147301 master-0 kubenswrapper[4167]: I0216 17:14:24.147219 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:14:24.147502 master-0 kubenswrapper[4167]: I0216 17:14:24.147291 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:14:24.147584 master-0 kubenswrapper[4167]: E0216 17:14:24.147480 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:24.147584 master-0 kubenswrapper[4167]: E0216 17:14:24.147521 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:24.147584 master-0 kubenswrapper[4167]: E0216 17:14:24.147551 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.147731 master-0 kubenswrapper[4167]: E0216 17:14:24.147613 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.147593362 +0000 UTC m=+9.878039750 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.147731 master-0 kubenswrapper[4167]: E0216 17:14:24.147619 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:24.147731 master-0 kubenswrapper[4167]: E0216 17:14:24.147650 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:24.147731 master-0 kubenswrapper[4167]: E0216 17:14:24.147669 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.147731 master-0 kubenswrapper[4167]: I0216 17:14:24.147605 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:24.147929 master-0 kubenswrapper[4167]: E0216 17:14:24.147726 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered
Feb 16 17:14:24.147929 master-0 kubenswrapper[4167]: E0216 17:14:24.147754 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:14:24.147929 master-0 kubenswrapper[4167]: E0216 17:14:24.147769 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.147929 master-0 kubenswrapper[4167]: E0216 17:14:24.147735 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.147714636 +0000 UTC m=+9.878161024 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.148110 master-0 kubenswrapper[4167]: E0216 17:14:24.147932 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.147915941 +0000 UTC m=+9.878362329 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.254041 master-0 kubenswrapper[4167]: I0216 17:14:24.253982 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:24.254177 master-0 kubenswrapper[4167]: I0216 17:14:24.254089 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:14:24.254221 master-0 kubenswrapper[4167]: E0216 17:14:24.254179 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Feb 16 17:14:24.254221 master-0 kubenswrapper[4167]: E0216 17:14:24.254205 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Feb 16 17:14:24.254221 master-0 kubenswrapper[4167]: E0216 17:14:24.254217 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:24.254341 master-0 kubenswrapper[4167]: I0216 17:14:24.254245 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:14:24.254341 master-0 kubenswrapper[4167]: E0216 17:14:24.254278 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.254259569 +0000 UTC m=+9.984706027 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
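
Each failed SetUp above is requeued rather than retried immediately: the operation is blocked until the logged deadline, with durationBeforeRetry at 4s here, and that delay grows on consecutive failures. A minimal sketch of that pacing, with assumed constants (the kubelet's real initial delay and cap may differ):

```go
package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous retry delay up to a cap, mimicking the
// backoff pattern visible in the durationBeforeRetry values. The 500ms
// seed and 2m cap are assumptions for illustration only.
func nextDelay(prev time.Duration) time.Duration {
	const max = 2 * time.Minute
	if prev == 0 {
		return 500 * time.Millisecond
	}
	if d := prev * 2; d < max {
		return d
	}
	return max
}

func main() {
	d := time.Duration(0)
	for i := 0; i < 6; i++ {
		d = nextDelay(d)
		fmt.Printf("retry %d after %v\n", i+1, d)
	}
}
```
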
"{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.254259569 +0000 UTC m=+9.984706027 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:24.254434 master-0 kubenswrapper[4167]: E0216 17:14:24.254397 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:24.254477 master-0 kubenswrapper[4167]: E0216 17:14:24.254435 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:24.254477 master-0 kubenswrapper[4167]: E0216 17:14:24.254450 4167 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:24.254659 master-0 kubenswrapper[4167]: E0216 17:14:24.254603 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.254547686 +0000 UTC m=+9.984994104 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:24.254724 master-0 kubenswrapper[4167]: E0216 17:14:24.254657 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:24.254724 master-0 kubenswrapper[4167]: E0216 17:14:24.254687 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:24.254724 master-0 kubenswrapper[4167]: E0216 17:14:24.254707 4167 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:24.254832 master-0 kubenswrapper[4167]: E0216 17:14:24.254773 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.254751302 +0000 UTC m=+9.985197720 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:24.357993 master-0 kubenswrapper[4167]: I0216 17:14:24.357934 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:24.358367 master-0 kubenswrapper[4167]: E0216 17:14:24.358188 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:24.358426 master-0 kubenswrapper[4167]: E0216 17:14:24.358376 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:24.358426 master-0 kubenswrapper[4167]: E0216 17:14:24.358417 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 
17:14:24.358670 master-0 kubenswrapper[4167]: E0216 17:14:24.358633 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:28.358605742 +0000 UTC m=+10.089052130 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:24.444478 master-0 kubenswrapper[4167]: I0216 17:14:24.444390 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:24.444720 master-0 kubenswrapper[4167]: E0216 17:14:24.444586 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:24.493103 master-0 kubenswrapper[4167]: I0216 17:14:24.492815 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=9.492776712 podStartE2EDuration="9.492776712s" podCreationTimestamp="2026-02-16 17:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:14:24.48308626 +0000 UTC m=+6.213532678" watchObservedRunningTime="2026-02-16 17:14:24.492776712 +0000 UTC m=+6.223223130" Feb 16 17:14:24.569214 master-0 kubenswrapper[4167]: I0216 17:14:24.569094 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"bc93650ebf3cc691f951d0213bee9685e2f6189244bf9df33a231ed89e535a91"} Feb 16 17:14:24.572100 master-0 kubenswrapper[4167]: I0216 17:14:24.572040 4167 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="519b24845f87f3d3fb23bb5bd08c3e4a0335b200d6852da76a28fd2e4e18e56c" exitCode=0 Feb 16 17:14:24.572282 master-0 kubenswrapper[4167]: I0216 17:14:24.572106 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"519b24845f87f3d3fb23bb5bd08c3e4a0335b200d6852da76a28fd2e4e18e56c"} Feb 16 17:14:24.573803 master-0 kubenswrapper[4167]: I0216 17:14:24.573711 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/4.log" Feb 16 17:14:24.574370 master-0 kubenswrapper[4167]: I0216 17:14:24.574323 4167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:14:24.754230 master-0 
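From here on, sandbox creation becomes the blocker: the container runtime reports NetworkReady=false because nothing has yet written a CNI config to /etc/kubernetes/cni/net.d/; on an OVN-Kubernetes cluster such as this one, the ovnkube-node container whose start is logged just above is what eventually provides it. A debugging-style sketch that watches for the file the runtime is waiting on (the directory path comes from the log; the polling loop is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// Poll the CNI config directory named in the kubelet error until a
// *.conf or *.conflist file appears. Illustrative only; the path is
// taken from the log, everything else is a debugging sketch.
func main() {
	const dir = "/etc/kubernetes/cni/net.d"
	for {
		confs, _ := filepath.Glob(filepath.Join(dir, "*.conf"))
		lists, _ := filepath.Glob(filepath.Join(dir, "*.conflist"))
		if len(confs)+len(lists) > 0 {
			fmt.Println("CNI config present:", append(confs, lists...))
			return
		}
		fmt.Fprintln(os.Stderr, "no CNI configuration file yet; waiting")
		time.Sleep(2 * time.Second)
	}
}
```
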
Feb 16 17:14:24.754230 master-0 kubenswrapper[4167]: I0216 17:14:24.754184 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:24.754230 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:24.754230 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:24.754230 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:24.754571 master-0 kubenswrapper[4167]: I0216 17:14:24.754257 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:24.771586 master-0 kubenswrapper[4167]: I0216 17:14:24.771504 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:14:24.779622 master-0 kubenswrapper[4167]: I0216 17:14:24.779580 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:14:25.445336 master-0 kubenswrapper[4167]: I0216 17:14:25.445268 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:14:25.445336 master-0 kubenswrapper[4167]: I0216 17:14:25.445277 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:14:25.445714 master-0 kubenswrapper[4167]: E0216 17:14:25.445544 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4"
Feb 16 17:14:25.445871 master-0 kubenswrapper[4167]: E0216 17:14:25.445817 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4"
Feb 16 17:14:25.445871 master-0 kubenswrapper[4167]: I0216 17:14:25.445841 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:14:25.445871 master-0 kubenswrapper[4167]: I0216 17:14:25.445866 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445862 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445867 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
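
The router probe output above follows the usual healthz convention: one [+]/[-] line per sub-check followed by an overall verdict, with a non-2xx status marking failure. A sketch of a probe client that surfaces the failing checks (the URL below is a placeholder, not the router's actual probe endpoint):

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

// probe performs an HTTP health check the way a startup probe does:
// non-2xx means failure, and the body lists per-check results in the
// [+]/[-] format seen in the log above.
func probe(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		if line := sc.Text(); strings.HasPrefix(line, "[-]") {
			fmt.Println("failing check:", line)
		}
	}
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// Placeholder endpoint; substitute the container's real health URL.
	if err := probe("http://127.0.0.1:8080/healthz"); err != nil {
		fmt.Println("Probe failed:", err)
	}
}
```
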
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445869 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445906 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445929 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445931 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445993 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445949 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445951 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.445911 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.446040 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.446050 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.446020 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: E0216 17:14:25.446063 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.446096 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.446090 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:25.446127 master-0 kubenswrapper[4167]: I0216 17:14:25.446087 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446197 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446223 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446243 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446264 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446286 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446313 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446353 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: E0216 17:14:25.446355 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446383 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446408 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446454 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446475 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: E0216 17:14:25.446467 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446501 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446532 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446550 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446566 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446601 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446582 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446625 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: E0216 17:14:25.446576 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446650 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: E0216 17:14:25.446657 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446689 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446718 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446772 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446797 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446834 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446874 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: E0216 17:14:25.446871 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446906 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446921 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446971 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.446993 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447005 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447028 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447065 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447107 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447169 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: E0216 17:14:25.447180 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447215 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447267 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447300 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447330 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447354 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: E0216 17:14:25.447352 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447374 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447392 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: I0216 17:14:25.447410 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:25.447405 master-0 kubenswrapper[4167]: E0216 17:14:25.447475 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.447572 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.447674 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.447751 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.447832 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448039 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448090 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448227 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448293 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448412 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448546 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448619 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448726 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448831 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.448937 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449046 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449145 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449227 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449300 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449440 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449529 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449615 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449674 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449865 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.449949 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450037 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450157 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450212 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450273 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450332 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450445 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450494 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450571 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450677 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450791 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450873 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.450945 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.451041 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.451103 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:14:25.451154 master-0 kubenswrapper[4167]: E0216 17:14:25.451153 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.451311 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.451362 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.451435 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.451506 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.451570 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.451666 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.451815 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.452018 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.452082 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.452186 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.452361 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:25.454870 master-0 kubenswrapper[4167]: E0216 17:14:25.452436 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:14:25.582395 master-0 kubenswrapper[4167]: I0216 17:14:25.582325 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"7da2b6b8b7edde6f232d515ed5c514f0070b24cb257f53fc8e388374c4a56e7a"} Feb 16 17:14:25.754844 master-0 kubenswrapper[4167]: I0216 17:14:25.754762 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:25.754844 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:25.754844 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:25.754844 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:25.755419 master-0 kubenswrapper[4167]: I0216 17:14:25.754843 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:25.843856 master-0 kubenswrapper[4167]: I0216 17:14:25.843769 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:26.445151 master-0 kubenswrapper[4167]: I0216 17:14:26.445110 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:26.445453 master-0 kubenswrapper[4167]: E0216 17:14:26.445422 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:26.478780 master-0 kubenswrapper[4167]: I0216 17:14:26.478640 4167 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 17:14:26.479151 master-0 kubenswrapper[4167]: I0216 17:14:26.479081 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor" containerID="cri-o://d7c38e55f71867938246c19521c872dc2168e928e2d36640288dfca85978e020" gracePeriod=5 Feb 16 17:14:26.597810 master-0 kubenswrapper[4167]: I0216 17:14:26.596672 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-czzz2" event={"ID":"b3fa6ac1-781f-446c-b6b4-18bdb7723c23","Type":"ContainerStarted","Data":"f20084f5737352054f19e5471f4cd7b6012bf7a4a364d64611d452b2501d140a"} Feb 16 17:14:26.606403 master-0 kubenswrapper[4167]: I0216 17:14:26.606293 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"89df1b736cd989f6bae1660e2384e5940c7dcd9c34b7c051214046710bb00dab"} Feb 16 17:14:26.606403 master-0 kubenswrapper[4167]: I0216 17:14:26.606328 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:26.644639 master-0 kubenswrapper[4167]: I0216 17:14:26.644602 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:26.740396 master-0 kubenswrapper[4167]: I0216 17:14:26.740312 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:26.756485 master-0 kubenswrapper[4167]: I0216 17:14:26.756417 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:26.756485 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:26.756485 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:26.756485 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:26.756666 master-0 kubenswrapper[4167]: I0216 17:14:26.756527 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:26.766168 master-0 kubenswrapper[4167]: I0216 17:14:26.766127 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:14:27.263897 master-0 kubenswrapper[4167]: I0216 17:14:27.263819 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:14:27.387815 master-0 kubenswrapper[4167]: I0216 17:14:27.387491 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") 
pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:27.387815 master-0 kubenswrapper[4167]: I0216 17:14:27.387815 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:27.388126 master-0 kubenswrapper[4167]: I0216 17:14:27.387894 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:27.388126 master-0 kubenswrapper[4167]: I0216 17:14:27.387916 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.388126 master-0 kubenswrapper[4167]: I0216 17:14:27.387935 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.388126 master-0 kubenswrapper[4167]: I0216 17:14:27.387989 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:27.388126 master-0 kubenswrapper[4167]: I0216 17:14:27.388015 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:27.388126 master-0 kubenswrapper[4167]: I0216 17:14:27.388033 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.388126 master-0 kubenswrapper[4167]: I0216 17:14:27.388071 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:27.388126 master-0 kubenswrapper[4167]: I0216 17:14:27.388097 4167 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388117 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388247 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388269 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388307 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388329 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388349 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388386 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388406 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388424 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388461 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388481 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.388496 master-0 kubenswrapper[4167]: I0216 17:14:27.388500 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388519 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388558 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388575 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388594 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: 
\"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388647 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388668 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388725 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388746 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388785 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388812 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388859 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:27.388925 master-0 kubenswrapper[4167]: I0216 17:14:27.388885 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:27.388925 master-0 
kubenswrapper[4167]: I0216 17:14:27.388911 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389004 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389024 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389066 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389088 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389111 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389154 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389182 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod 
\"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389201 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389253 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: I0216 17:14:27.389272 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: E0216 17:14:27.387734 4167 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: E0216 17:14:27.389337 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: E0216 17:14:27.389371 4167 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:27.389439 master-0 kubenswrapper[4167]: E0216 17:14:27.389446 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389479 4167 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389498 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389536 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389558 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389566 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object 
"openshift-apiserver"/"config" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389653 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389671 4167 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389704 4167 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389711 4167 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389742 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389774 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389780 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389830 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389835 4167 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389872 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389890 4167 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: I0216 17:14:27.389289 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389913 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389287 4167 secret.go:189] Couldn't get secret 
openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389984 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:27.389973 master-0 kubenswrapper[4167]: E0216 17:14:27.389991 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.389449 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.389538 4167 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390045 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390053 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.389657 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390089 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390099 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390123 4167 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390158 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390164 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
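All of these configmap.go:193 / secret.go:189 failures end in the same phrase, "not registered": the kubelet serves secrets and configMaps to volume mounts from a cache-based manager that only tracks objects explicitly registered for pods it is syncing, and in this window just after a kubelet restart the mount attempts appear to race ahead of that per-pod registration. A toy model of the gate; the type and method names below are illustrative, not the kubelet's:

```go
// Toy model of the registration gate behind `object "ns"/"name" not
// registered`. Names are illustrative; only the error text mirrors the log.
package main

import "fmt"

type objectStore struct {
	registered map[string]string // "ns/name" -> object payload
}

func (s *objectStore) Get(ns, name string) (string, error) {
	v, ok := s.registered[ns+"/"+name]
	if !ok {
		return "", fmt.Errorf("object %q/%q not registered", ns, name) // same text as the log
	}
	return v, nil
}

func main() {
	s := &objectStore{registered: map[string]string{}}
	if _, err := s.Get("openshift-service-ca", "signing-cabundle"); err != nil {
		fmt.Println(err) // object "openshift-service-ca"/"signing-cabundle" not registered
	}
	// Once the pod is (re)registered, the same lookup succeeds and the
	// blocked MountVolume operations can proceed on their next retry.
	s.registered["openshift-service-ca/signing-cabundle"] = "<ca bundle>"
	if v, err := s.Get("openshift-service-ca", "signing-cabundle"); err == nil {
		fmt.Println("mount can proceed:", v)
	}
}
```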
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390189 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390180552 +0000 UTC m=+17.120626930 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390193 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390201 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390195513 +0000 UTC m=+17.120641891 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390211 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390207433 +0000 UTC m=+17.120653811 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390220 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390216563 +0000 UTC m=+17.120662941 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390229 4167 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390248 4167 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390269 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390298 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390126 4167 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390317 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390346 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390361 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390230 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390226164 +0000 UTC m=+17.120672542 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390385 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390378628 +0000 UTC m=+17.120825006 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390395 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390389968 +0000 UTC m=+17.120836346 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390402 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390405 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390400588 +0000 UTC m=+17.120846956 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390417 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390413719 +0000 UTC m=+17.120860097 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390272 4167 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390433 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390422229 +0000 UTC m=+17.120868607 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390347 4167 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390446 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39044182 +0000 UTC m=+17.120888198 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390466 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39045867 +0000 UTC m=+17.120905048 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390483 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39047579 +0000 UTC m=+17.120922168 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390496 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390489921 +0000 UTC m=+17.120936299 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390508 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390502301 +0000 UTC m=+17.120948679 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390522 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390514622 +0000 UTC m=+17.120961000 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390533 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390528092 +0000 UTC m=+17.120974470 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390546 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390540942 +0000 UTC m=+17.120987320 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390558 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390552973 +0000 UTC m=+17.120999361 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390571 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390565743 +0000 UTC m=+17.121012121 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390584 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390578073 +0000 UTC m=+17.121024451 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: E0216 17:14:27.390597 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390591934 +0000 UTC m=+17.121038312 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: I0216 17:14:27.390630 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: I0216 17:14:27.390694 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: I0216 17:14:27.390734 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:27.390720 master-0 kubenswrapper[4167]: I0216 17:14:27.390760 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.390784 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.390811 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.390771 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.390892 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391025 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c 
nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.390997195 +0000 UTC m=+17.121443593 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391063 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391048176 +0000 UTC m=+17.121494644 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391089 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391076717 +0000 UTC m=+17.121523195 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391116 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391101797 +0000 UTC m=+17.121548245 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391122 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391141 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391129218 +0000 UTC m=+17.121575606 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391163 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391173 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391159559 +0000 UTC m=+17.121606017 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391194 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391198 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39118534 +0000 UTC m=+17.121631818 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391217 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39120836 +0000 UTC m=+17.121654738 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391229 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391224191 +0000 UTC m=+17.121670559 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391239 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391234401 +0000 UTC m=+17.121680779 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391247 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391243441 +0000 UTC m=+17.121689819 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391256 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391252541 +0000 UTC m=+17.121698919 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391266 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391261852 +0000 UTC m=+17.121708230 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391276 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:35.391271852 +0000 UTC m=+17.121718350 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391285 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391281592 +0000 UTC m=+17.121727960 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391294 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391290192 +0000 UTC m=+17.121736570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391304 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391299523 +0000 UTC m=+17.121745901 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391304 4167 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391312 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391308613 +0000 UTC m=+17.121754991 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391324 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391319853 +0000 UTC m=+17.121766231 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391333 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391329294 +0000 UTC m=+17.121775672 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391343 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391339614 +0000 UTC m=+17.121785992 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391352 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391348754 +0000 UTC m=+17.121795132 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391361 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391357524 +0000 UTC m=+17.121803892 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391371 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391366725 +0000 UTC m=+17.121813103 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391372 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391379 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391375445 +0000 UTC m=+17.121821823 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391424 4167 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391426 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391413206 +0000 UTC m=+17.121859684 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391474 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391480 4167 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391491 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391546 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391494728 +0000 UTC m=+17.121941106 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391596 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391588071 +0000 UTC m=+17.122034579 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: E0216 17:14:27.391618 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.391611111 +0000 UTC m=+17.122057609 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391669 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391720 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391787 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391812 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391838 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391887 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391911 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391938 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.391990 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392021 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392046 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392070 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392097 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392129 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392155 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392182 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") 
pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392209 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392234 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392275 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392302 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392325 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392363 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392388 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392414 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod 
\"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392461 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392486 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392509 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392545 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392582 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392609 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:27.392582 master-0 kubenswrapper[4167]: I0216 17:14:27.392642 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392667 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: 
\"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392696 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392722 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392749 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392776 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392802 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392858 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392884 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392910 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " 
pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392933 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.392972 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393002 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393040 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393075 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393100 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393130 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393156 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393180 4167 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393203 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393233 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393260 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393285 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393311 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393336 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393361 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393400 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393428 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393455 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393462 4167 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393502 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.393492202 +0000 UTC m=+17.123938580 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393525 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.393519033 +0000 UTC m=+17.123965411 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393537 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.393532153 +0000 UTC m=+17.123978531 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393662 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393687 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393712 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.393695768 +0000 UTC m=+17.124142236 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393753 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.393723818 +0000 UTC m=+17.124170276 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393778 4167 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393930 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.393920484 +0000 UTC m=+17.124366872 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393934 4167 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.393918 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393779 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.394048 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394066 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394103 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394124 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394135 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394140 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394148 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394079 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394067528 +0000 UTC m=+17.124514006 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393854 4167 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: I0216 17:14:27.394188 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393873 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394147 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393886 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393804 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394254 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393979 4167 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394292 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393984 4167 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394327 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394338 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393816 4167 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394202 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394376 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394212 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394193741 +0000 UTC m=+17.124640119 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394239 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394395 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394387386 +0000 UTC m=+17.124833764 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394408 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394402027 +0000 UTC m=+17.124848395 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394396 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394420 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394414997 +0000 UTC m=+17.124861375 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394438 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394433298 +0000 UTC m=+17.124879676 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394241 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394455 4167 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394450 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394444388 +0000 UTC m=+17.124890766 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.393887 4167 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394516 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394558 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394593 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394594 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394475159 +0000 UTC m=+17.124921657 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394614 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394606882 +0000 UTC m=+17.125053370 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394627 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394621453 +0000 UTC m=+17.125067971 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394660 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394652813 +0000 UTC m=+17.125099321 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394667 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394677 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394670314 +0000 UTC m=+17.125116822 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394692 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394685654 +0000 UTC m=+17.125132142 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394705 4167 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394704 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394697685 +0000 UTC m=+17.125144203 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394729 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394722985 +0000 UTC m=+17.125169363 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394740 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394734326 +0000 UTC m=+17.125180704 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394750 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394786 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394642 4167 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394827 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394835 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394841 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394884 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394841 4167 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394903 4167 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394751 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394745916 +0000 UTC m=+17.125192294 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394929 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394922241 +0000 UTC m=+17.125368619 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394943 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394937281 +0000 UTC m=+17.125383659 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394947 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394954 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394948741 +0000 UTC m=+17.125395119 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394982 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394977272 +0000 UTC m=+17.125423650 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.394994 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394989533 +0000 UTC m=+17.125435911 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395003 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.394999023 +0000 UTC m=+17.125445401 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395014 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395008773 +0000 UTC m=+17.125455151 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395023 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395025 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395019453 +0000 UTC m=+17.125465831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395038 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395034014 +0000 UTC m=+17.125480392 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395048 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395043664 +0000 UTC m=+17.125490042 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395059 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395054434 +0000 UTC m=+17.125500812 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395067 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395063395 +0000 UTC m=+17.125509773 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395070 4167 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395078 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395073765 +0000 UTC m=+17.125520143 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395087 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395083505 +0000 UTC m=+17.125529883 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395098 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395093505 +0000 UTC m=+17.125539883 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Feb 16 17:14:27.395354 master-0 kubenswrapper[4167]: E0216 17:14:27.395123 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395153 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395141147 +0000 UTC m=+17.125587635 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395178 4167 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395199 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395192438 +0000 UTC m=+17.125638946 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395241 4167 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395266 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39525813 +0000 UTC m=+17.125704508 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395308 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395334 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395325922 +0000 UTC m=+17.125772420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395379 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395405 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395397424 +0000 UTC m=+17.125843912 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395448 4167 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395472 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395464555 +0000 UTC m=+17.125911053 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395491 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395509 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395504337 +0000 UTC m=+17.125950715 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395551 4167 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395561 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395586 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395578779 +0000 UTC m=+17.126025257 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395631 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395657 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39564982 +0000 UTC m=+17.126096308 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395700 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395727 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395719702 +0000 UTC m=+17.126166230 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395770 4167 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395798 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395790324 +0000 UTC m=+17.126236802 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395833 4167 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395880 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395872346 +0000 UTC m=+17.126318814 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395915 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.395941 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.395932048 +0000 UTC m=+17.126378556 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396001 4167 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396027 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39601944 +0000 UTC m=+17.126465948 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396058 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396079 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396072422 +0000 UTC m=+17.126518920 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396112 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396131 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396125813 +0000 UTC m=+17.126572191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396158 4167 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396176 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396171545 +0000 UTC m=+17.126617913 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396195 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396212 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396206766 +0000 UTC m=+17.126653144 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396239 4167 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396246 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396263 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396258197 +0000 UTC m=+17.126704575 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396285 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396302 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396296728 +0000 UTC m=+17.126743106 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396339 4167 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396362 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39635503 +0000 UTC m=+17.126801538 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396392 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396416 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396408231 +0000 UTC m=+17.126854729 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396433 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396456 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396448972 +0000 UTC m=+17.126895460 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396494 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396520 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396511414 +0000 UTC m=+17.126957932 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396548 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396580 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396610 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396639 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396678 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396695 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396709 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396707 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396716 4167 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod 
openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396735 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396761 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396776 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396790 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396798 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396790971 +0000 UTC m=+17.127237349 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396815 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396838 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396864 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396873 4167 projected.go:288] Couldn't get configMap openshift-image-registry/kube-root-ca.crt: object "openshift-image-registry"/"kube-root-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396883 4167 projected.go:288] Couldn't get configMap openshift-image-registry/openshift-service-ca.crt: object "openshift-image-registry"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.396885 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396890 4167 projected.go:194] Error preparing data for projected volume kube-api-access-b5mwd for pod openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk: [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396921 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396913325 +0000 UTC m=+17.127359703 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-b5mwd" (UniqueName: "kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : [object "openshift-image-registry"/"kube-root-ca.crt" not registered, object "openshift-image-registry"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396928 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396948 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.396942065 +0000 UTC m=+17.127388443 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.396995 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397009 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397019 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397021 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397031 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397038 4167 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397045 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397036688 +0000 UTC m=+17.127483066 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397077 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397128 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397140 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397148 4167 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397199 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397248 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397305 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397329 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397354 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397406 4167 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397081 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397073269 +0000 UTC m=+17.127519647 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397441 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0 podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397432489 +0000 UTC m=+17.127878867 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397454 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397447549 +0000 UTC m=+17.127894037 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397466 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397460079 +0000 UTC m=+17.127906457 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397479 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.3974725 +0000 UTC m=+17.127918878 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397491 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39748486 +0000 UTC m=+17.127931238 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397502 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39749683 +0000 UTC m=+17.127943208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397514 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397508591 +0000 UTC m=+17.127954969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397525 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397519381 +0000 UTC m=+17.127965869 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397548 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397576 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397598 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397645 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397637664 +0000 UTC m=+17.128084042 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397645 4167 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397666 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397661445 +0000 UTC m=+17.128107813 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397667 4167 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397684 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397679965 +0000 UTC m=+17.128126343 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397604 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397737 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397699 4167 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397770 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397749 4167 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397799 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 
17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397810 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric podName:e1443fb7-cb1e-4105-b604-b88c749620c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.397802169 +0000 UTC m=+17.128248547 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397829 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397859 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397872 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: E0216 17:14:27.397884 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397887 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397912 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:27.400114 master-0 kubenswrapper[4167]: I0216 17:14:27.397933 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:27.404799 master-0 
kubenswrapper[4167]: I0216 17:14:27.397951 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.397986 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398006 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398027 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.397891 4167 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398072 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398081 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398117 4167 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398131 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398159 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398161 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object 
"openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398202 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398210 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398217 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.397924 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398237 4167 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.397983 4167 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398032 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398042 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398035605 +0000 UTC m=+17.128481983 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398266 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398275 4167 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398279 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398296 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398289002 +0000 UTC m=+17.128735380 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398314 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398321 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398330 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398336 4167 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 
kubenswrapper[4167]: E0216 17:14:27.398359 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398353104 +0000 UTC m=+17.128799482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398370 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398365004 +0000 UTC m=+17.128811372 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398381 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398375224 +0000 UTC m=+17.128821602 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398393 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398387435 +0000 UTC m=+17.128833813 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398396 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398405 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398399205 +0000 UTC m=+17.128845583 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398417 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398411145 +0000 UTC m=+17.128857523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398428 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398434 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398427086 +0000 UTC m=+17.128873554 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398446 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398440986 +0000 UTC m=+17.128887364 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398459 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398453416 +0000 UTC m=+17.128899794 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398467 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398472 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398465207 +0000 UTC m=+17.128911585 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398484 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398478327 +0000 UTC m=+17.128924805 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398496 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398490507 +0000 UTC m=+17.128936885 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398357 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398508 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398502478 +0000 UTC m=+17.128948856 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398436 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398535 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398536 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398529708 +0000 UTC m=+17.128976086 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398568 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398593 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39858587 +0000 UTC m=+17.129032358 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398614 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398654 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398698 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398713 4167 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398719 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398739 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls 
podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398731054 +0000 UTC m=+17.129177432 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398761 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398768 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398779 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398792 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398784495 +0000 UTC m=+17.129230873 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398812 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398847 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398873 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398874 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398897 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.398890208 +0000 UTC m=+17.129336586 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398924 4167 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.398971 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.3989423 +0000 UTC m=+17.129388788 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.398937 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.399014 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.399065 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.399085 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.399104 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.399144 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: I0216 17:14:27.399164 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399021 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object 
"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399590 4167 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399607 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.399596257 +0000 UTC m=+17.130042635 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399626 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.399618558 +0000 UTC m=+17.130065036 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399060 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399642 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399651 4167 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399657 4167 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399665 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399672 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, 
object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399252 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399674 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.399666809 +0000 UTC m=+17.130113187 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399720 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.39969357 +0000 UTC m=+17.130139948 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399732 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.399726791 +0000 UTC m=+17.130173169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399414 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399755 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.399749601 +0000 UTC m=+17.130195979 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399500 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399798 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.399774412 +0000 UTC m=+17.130220790 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399501 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399817 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399824 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399844 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.399837544 +0000 UTC m=+17.130283922 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399536 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399920 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.399913806 +0000 UTC m=+17.130360184 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399562 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399934 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399941 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.400022 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.399952567 +0000 UTC m=+17.130398945 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399570 4167 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.400144 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.400116991 +0000 UTC m=+17.130563369 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.399298 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:27.404799 master-0 kubenswrapper[4167]: E0216 17:14:27.400194 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.400185253 +0000 UTC m=+17.130631631 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:27.444891 master-0 kubenswrapper[4167]: I0216 17:14:27.444821 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:27.444891 master-0 kubenswrapper[4167]: I0216 17:14:27.444843 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: I0216 17:14:27.444825 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: E0216 17:14:27.444923 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: I0216 17:14:27.444945 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: I0216 17:14:27.444989 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: I0216 17:14:27.444978 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: I0216 17:14:27.444987 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: I0216 17:14:27.444945 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: E0216 17:14:27.445106 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: I0216 17:14:27.445137 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:27.445153 master-0 kubenswrapper[4167]: I0216 17:14:27.445147 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445143 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445155 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445170 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445194 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445173 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445199 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445217 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445233 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445205 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445220 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445249 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445248 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445131 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445274 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445337 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: E0216 17:14:27.445343 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445366 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445368 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445384 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445376 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445398 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445403 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445408 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445412 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445412 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445400 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445518 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445520 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: E0216 17:14:27.445523 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445546 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:27.445521 master-0 kubenswrapper[4167]: I0216 17:14:27.445536 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445559 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445542 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445558 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445561 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445543 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445560 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445638 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445721 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.445723 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445748 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445757 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445785 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445789 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445788 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445803 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445810 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445829 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.445860 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445882 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445921 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445933 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.445945 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.446004 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446013 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.446029 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: I0216 17:14:27.446043 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446090 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446140 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446245 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446321 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446382 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446485 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446550 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446659 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446745 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446794 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:27.447054 master-0 kubenswrapper[4167]: E0216 17:14:27.446987 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.447108 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.447189 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.447247 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.447322 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.447551 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/alertmanager-main-0" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.447608 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.447700 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.447804 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448031 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448136 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448199 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448237 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448287 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448330 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448375 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448419 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448478 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448533 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448605 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448645 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448737 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448830 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448899 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.448993 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:14:27.449081 master-0 kubenswrapper[4167]: E0216 17:14:27.449061 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449128 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449187 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449253 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449350 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449415 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449520 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449597 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449682 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449750 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449836 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449895 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:14:27.449954 master-0 kubenswrapper[4167]: E0216 17:14:27.449937 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:14:27.450499 master-0 kubenswrapper[4167]: E0216 17:14:27.450053 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:14:27.450499 master-0 kubenswrapper[4167]: E0216 17:14:27.450131 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:14:27.450499 master-0 kubenswrapper[4167]: E0216 17:14:27.450257 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:27.450499 master-0 kubenswrapper[4167]: E0216 17:14:27.450353 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:14:27.450499 master-0 kubenswrapper[4167]: E0216 17:14:27.450469 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:14:27.450687 master-0 kubenswrapper[4167]: E0216 17:14:27.450531 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:14:27.450687 master-0 kubenswrapper[4167]: E0216 17:14:27.450611 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:14:27.501984 master-0 kubenswrapper[4167]: I0216 17:14:27.501902 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:27.502197 master-0 kubenswrapper[4167]: I0216 17:14:27.502007 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:27.502197 master-0 kubenswrapper[4167]: E0216 17:14:27.502076 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:27.502197 master-0 kubenswrapper[4167]: E0216 17:14:27.502101 4167 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.502197 master-0 kubenswrapper[4167]: E0216 17:14:27.502114 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.502197 master-0 kubenswrapper[4167]: E0216 17:14:27.502169 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.502153082 +0000 UTC m=+17.232599450 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: I0216 17:14:27.502255 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502255 4167 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502312 4167 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502325 4167 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502341 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502353 4167 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502352 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502361 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502368 4167 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502377 4167 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object 
"openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: I0216 17:14:27.502287 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:27.502404 master-0 kubenswrapper[4167]: E0216 17:14:27.502397 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.502376748 +0000 UTC m=+17.232823196 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.502739 master-0 kubenswrapper[4167]: E0216 17:14:27.502425 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.502415509 +0000 UTC m=+17.232861977 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.502739 master-0 kubenswrapper[4167]: E0216 17:14:27.502451 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.50244292 +0000 UTC m=+17.232889418 (durationBeforeRetry 8s). 
Feb 16 17:14:27.502739 master-0 kubenswrapper[4167]: I0216 17:14:27.502597 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:14:27.502739 master-0 kubenswrapper[4167]: E0216 17:14:27.502700 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.502739 master-0 kubenswrapper[4167]: E0216 17:14:27.502712 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.502739 master-0 kubenswrapper[4167]: E0216 17:14:27.502720 4167 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.502905 master-0 kubenswrapper[4167]: E0216 17:14:27.502815 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.50280654 +0000 UTC m=+17.233252918 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.605093 master-0 kubenswrapper[4167]: I0216 17:14:27.604936 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:14:27.605093 master-0 kubenswrapper[4167]: I0216 17:14:27.605050 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:14:27.605093 master-0 kubenswrapper[4167]: E0216 17:14:27.605079 4167 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.605682 master-0 kubenswrapper[4167]: E0216 17:14:27.605106 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.605682 master-0 kubenswrapper[4167]: E0216 17:14:27.605152 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.605135908 +0000 UTC m=+17.335582356 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered
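Note: each nestedpendingoperations entry schedules the next attempt, and the fixed "durationBeforeRetry 8s" together with the m=+17.2... monotonic offsets shows per-volume exponential backoff at work. Upstream kubelet's exponentialbackoff package starts at 500ms and doubles up to a cap of roughly 2m2s (these constants are an assumption from upstream source, not stated in the log), which would make 8s the fifth consecutive failure of each operation. A sketch of that schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed constants, matching upstream kubelet's exponentialbackoff
        // package at the time of writing.
        const initial = 500 * time.Millisecond
        const maxDelay = 2*time.Minute + 2*time.Second

        d := initial
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("failure %d -> durationBeforeRetry %v\n", attempt, d)
            d *= 2
            if d > maxDelay {
                d = maxDelay
            }
        }
    }

Running it prints 500ms, 1s, 2s, 4s, 8s, ..., so the 8s retries above imply these volumes have already failed several rounds.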
Feb 16 17:14:27.605682 master-0 kubenswrapper[4167]: E0216 17:14:27.605272 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.605682 master-0 kubenswrapper[4167]: E0216 17:14:27.605317 4167 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.605682 master-0 kubenswrapper[4167]: E0216 17:14:27.605328 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.605682 master-0 kubenswrapper[4167]: E0216 17:14:27.605382 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.605365794 +0000 UTC m=+17.335812172 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.607040 master-0 kubenswrapper[4167]: I0216 17:14:27.607004 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:14:27.607222 master-0 kubenswrapper[4167]: E0216 17:14:27.607192 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.607265 master-0 kubenswrapper[4167]: E0216 17:14:27.607224 4167 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.607265 master-0 kubenswrapper[4167]: E0216 17:14:27.607238 4167 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.607265 master-0 kubenswrapper[4167]: I0216 17:14:27.607255 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:14:27.607371 master-0 kubenswrapper[4167]: E0216 17:14:27.607288 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.607272276 +0000 UTC m=+17.337718664 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.607371 master-0 kubenswrapper[4167]: I0216 17:14:27.607320 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:27.607371 master-0 kubenswrapper[4167]: E0216 17:14:27.607339 4167 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.607371 master-0 kubenswrapper[4167]: E0216 17:14:27.607354 4167 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.607371 master-0 kubenswrapper[4167]: E0216 17:14:27.607365 4167 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.607518 master-0 kubenswrapper[4167]: E0216 17:14:27.607399 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.607388209 +0000 UTC m=+17.337834647 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.607518 master-0 kubenswrapper[4167]: E0216 17:14:27.607459 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.607518 master-0 kubenswrapper[4167]: E0216 17:14:27.607473 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.607518 master-0 kubenswrapper[4167]: E0216 17:14:27.607482 4167 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.607518 master-0 kubenswrapper[4167]: E0216 17:14:27.607510 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.607501432 +0000 UTC m=+17.337947830 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.614913 master-0 kubenswrapper[4167]: I0216 17:14:27.611071 4167 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="7da2b6b8b7edde6f232d515ed5c514f0070b24cb257f53fc8e388374c4a56e7a" exitCode=0
Feb 16 17:14:27.614913 master-0 kubenswrapper[4167]: I0216 17:14:27.611255 4167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:14:27.614913 master-0 kubenswrapper[4167]: I0216 17:14:27.612223 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"7da2b6b8b7edde6f232d515ed5c514f0070b24cb257f53fc8e388374c4a56e7a"}
Feb 16 17:14:27.615237 master-0 kubenswrapper[4167]: I0216 17:14:27.614951 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:14:27.719251 master-0 kubenswrapper[4167]: I0216 17:14:27.719181 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
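Note: between the mount retries the node is making real progress. The PLEG lines just above record a container of openshift-multus/multus-additional-cni-plugins-rjdlk exiting with exitCode=0 (its CNI-install chain completing a step, which is exactly what the CNI errors earlier are waiting on), and the bootstrap kube-controller-manager's readiness probe just went ready. A simplified Go sketch of the event shape in the "SyncLoop (PLEG)" line (the names mirror kubelet's pleg package, but this is illustrative, not the actual implementation):

    package main

    import "fmt"

    type PodLifeCycleEventType string

    const (
        ContainerStarted PodLifeCycleEventType = "ContainerStarted"
        ContainerDied    PodLifeCycleEventType = "ContainerDied"
    )

    // PodLifecycleEvent mirrors the event={"ID":...,"Type":...,"Data":...}
    // structure printed in the SyncLoop (PLEG) log line above.
    type PodLifecycleEvent struct {
        ID   string // pod UID
        Type PodLifeCycleEventType
        Data interface{} // container ID for started/died events
    }

    func main() {
        ev := PodLifecycleEvent{
            ID:   "ab5760f1-b2e0-4138-9383-e4827154ac50",
            Type: ContainerDied,
            Data: "7da2b6b8b7edde6f232d515ed5c514f0070b24cb257f53fc8e388374c4a56e7a",
        }
        switch ev.Type {
        case ContainerDied:
            // exitCode=0 in the log marks a normal completion, not a crash.
            fmt.Printf("pod %s: container %v died, triggering pod sync\n", ev.ID, ev.Data)
        case ContainerStarted:
            fmt.Printf("pod %s: container %v started\n", ev.ID, ev.Data)
        }
    }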
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:27.719495 master-0 kubenswrapper[4167]: E0216 17:14:27.719354 4167 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:27.719495 master-0 kubenswrapper[4167]: E0216 17:14:27.719382 4167 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.719495 master-0 kubenswrapper[4167]: E0216 17:14:27.719395 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.719495 master-0 kubenswrapper[4167]: E0216 17:14:27.719434 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.71942034 +0000 UTC m=+17.449866718 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.720621 master-0 kubenswrapper[4167]: E0216 17:14:27.720570 4167 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:14:27.720621 master-0 kubenswrapper[4167]: E0216 17:14:27.720606 4167 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:14:27.720621 master-0 kubenswrapper[4167]: E0216 17:14:27.720615 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.720777 master-0 kubenswrapper[4167]: E0216 17:14:27.720655 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.720640453 +0000 UTC m=+17.451086831 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:14:27.720777 master-0 kubenswrapper[4167]: I0216 17:14:27.720547 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:27.761361 master-0 kubenswrapper[4167]: I0216 17:14:27.761295 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:27.761361 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:27.761361 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:27.761361 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:27.761688 master-0 kubenswrapper[4167]: I0216 17:14:27.761370 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:27.823279 master-0 kubenswrapper[4167]: I0216 17:14:27.823221 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:27.823279 master-0 kubenswrapper[4167]: I0216 17:14:27.823276 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:27.823505 master-0 kubenswrapper[4167]: E0216 17:14:27.823401 4167 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:27.823505 master-0 kubenswrapper[4167]: E0216 17:14:27.823426 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:27.823505 master-0 kubenswrapper[4167]: E0216 17:14:27.823475 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.823458415 +0000 UTC m=+17.553904793 (durationBeforeRetry 8s). 
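Note: the router entries show the anatomy of a startup-probe failure. The probe GET returned HTTP 500, and the kubelet logs the start of the response body, which enumerates named sub-checks ([-]backend-http and [-]has-synced failing, [+]process-running ok) before the overall "healthz check failed". That is consistent with the rest of this window: the router process is running, but it cannot sync state until networking and its API connections come up. A minimal Go sketch of a healthz aggregator that produces this output shape (illustrative, not the router's actual implementation):

    package main

    import (
        "fmt"
        "net/http"
    )

    type check struct {
        name string
        fn   func() error
    }

    // healthz runs each named check and reports [+]/[-] per check, returning
    // HTTP 500 if any failed -- the shape seen in the router log above.
    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError) // the 500 the probe saw
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        checks := []check{
            {"backend-http", func() error { return fmt.Errorf("no backends yet") }},
            {"has-synced", func() error { return fmt.Errorf("not synced") }},
            {"process-running", func() error { return nil }},
        }
        http.HandleFunc("/healthz", healthz(checks))
        _ = http.ListenAndServe(":8080", nil)
    }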
Feb 16 17:14:27.824056 master-0 kubenswrapper[4167]: E0216 17:14:27.824025 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.824056 master-0 kubenswrapper[4167]: E0216 17:14:27.824050 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.824193 master-0 kubenswrapper[4167]: E0216 17:14:27.824060 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.824193 master-0 kubenswrapper[4167]: E0216 17:14:27.824172 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.824161674 +0000 UTC m=+17.554608052 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.925706 master-0 kubenswrapper[4167]: I0216 17:14:27.925654 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:14:27.925706 master-0 kubenswrapper[4167]: I0216 17:14:27.925720 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:14:27.926336 master-0 kubenswrapper[4167]: E0216 17:14:27.926297 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.926336 master-0 kubenswrapper[4167]: E0216 17:14:27.926335 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.926408 master-0 kubenswrapper[4167]: E0216 17:14:27.926349 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.926408 master-0 kubenswrapper[4167]: E0216 17:14:27.926404 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.92638458 +0000 UTC m=+17.656830968 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.926527 master-0 kubenswrapper[4167]: E0216 17:14:27.926487 4167 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Feb 16 17:14:27.926572 master-0 kubenswrapper[4167]: E0216 17:14:27.926534 4167 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Feb 16 17:14:27.926572 master-0 kubenswrapper[4167]: E0216 17:14:27.926546 4167 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:27.926630 master-0 kubenswrapper[4167]: E0216 17:14:27.926609 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:35.926579396 +0000 UTC m=+17.657025774 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.029666 master-0 kubenswrapper[4167]: I0216 17:14:28.029599 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:14:28.029666 master-0 kubenswrapper[4167]: I0216 17:14:28.029650 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:28.030017 master-0 kubenswrapper[4167]: E0216 17:14:28.029866 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:28.030017 master-0 kubenswrapper[4167]: E0216 17:14:28.029907 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:28.030017 master-0 kubenswrapper[4167]: E0216 17:14:28.029925 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.030017 master-0 kubenswrapper[4167]: E0216 17:14:28.029986 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Feb 16 17:14:28.030017 master-0 kubenswrapper[4167]: E0216 17:14:28.030003 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Feb 16 17:14:28.030017 master-0 kubenswrapper[4167]: E0216 17:14:28.030011 4167 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.030355 master-0 kubenswrapper[4167]: E0216 17:14:28.030027 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.030001004 +0000 UTC m=+17.760447402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.030355 master-0 kubenswrapper[4167]: E0216 17:14:28.030058 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.030046035 +0000 UTC m=+17.760492473 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.136869 master-0 kubenswrapper[4167]: I0216 17:14:28.136797 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:14:28.137147 master-0 kubenswrapper[4167]: E0216 17:14:28.137051 4167 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:28.137147 master-0 kubenswrapper[4167]: E0216 17:14:28.137086 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:28.137147 master-0 kubenswrapper[4167]: I0216 17:14:28.137052 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:14:28.137147 master-0 kubenswrapper[4167]: E0216 17:14:28.137137 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Feb 16 17:14:28.137462 master-0 kubenswrapper[4167]: E0216 17:14:28.137164 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:28.137462 master-0 kubenswrapper[4167]: E0216 17:14:28.137178 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.137462 master-0 kubenswrapper[4167]: E0216 17:14:28.137141 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.137123823 +0000 UTC m=+17.867570251 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:28.137462 master-0 kubenswrapper[4167]: I0216 17:14:28.137278 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:14:28.137462 master-0 kubenswrapper[4167]: E0216 17:14:28.137324 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.137297877 +0000 UTC m=+17.867744275 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.137462 master-0 kubenswrapper[4167]: E0216 17:14:28.137343 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Feb 16 17:14:28.137462 master-0 kubenswrapper[4167]: E0216 17:14:28.137356 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Feb 16 17:14:28.137462 master-0 kubenswrapper[4167]: E0216 17:14:28.137366 4167 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.137462 master-0 kubenswrapper[4167]: E0216 17:14:28.137401 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.13738682 +0000 UTC m=+17.867833248 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.241760 master-0 kubenswrapper[4167]: I0216 17:14:28.241693 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:14:28.241760 master-0 kubenswrapper[4167]: I0216 17:14:28.241763 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:14:28.242067 master-0 kubenswrapper[4167]: E0216 17:14:28.241844 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered
Feb 16 17:14:28.242067 master-0 kubenswrapper[4167]: E0216 17:14:28.241864 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:14:28.242067 master-0 kubenswrapper[4167]: E0216 17:14:28.241875 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:14:28.242067 master-0 kubenswrapper[4167]: E0216 17:14:28.241909 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.241897068 +0000 UTC m=+17.972343446 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.242198 master-0 kubenswrapper[4167]: E0216 17:14:28.242059 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:28.242198 master-0 kubenswrapper[4167]: E0216 17:14:28.242099 4167 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:28.242198 master-0 kubenswrapper[4167]: E0216 17:14:28.242114 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.242284 master-0 kubenswrapper[4167]: E0216 17:14:28.242197 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:28.242284 master-0 kubenswrapper[4167]: E0216 17:14:28.242221 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.242181305 +0000 UTC m=+17.972627763 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.242284 master-0 kubenswrapper[4167]: E0216 17:14:28.242227 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:28.242284 master-0 kubenswrapper[4167]: I0216 17:14:28.242066 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:28.242430 master-0 kubenswrapper[4167]: E0216 17:14:28.242244 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.242430 master-0 kubenswrapper[4167]: E0216 17:14:28.242377 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.2423606 +0000 UTC m=+17.972806988 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.344855 master-0 kubenswrapper[4167]: I0216 17:14:28.344789 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:28.344855 master-0 kubenswrapper[4167]: I0216 17:14:28.344846 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:28.345100 master-0 kubenswrapper[4167]: E0216 17:14:28.345043 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:28.345100 master-0 kubenswrapper[4167]: E0216 17:14:28.345083 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:28.345100 master-0 kubenswrapper[4167]: E0216 17:14:28.345094 4167 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.345233 master-0 kubenswrapper[4167]: I0216 17:14:28.345111 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:28.345233 master-0 kubenswrapper[4167]: E0216 17:14:28.345157 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.345136661 +0000 UTC m=+18.075583059 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.345233 master-0 kubenswrapper[4167]: E0216 17:14:28.345207 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:28.345233 master-0 kubenswrapper[4167]: E0216 17:14:28.345227 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:28.345233 master-0 kubenswrapper[4167]: E0216 17:14:28.345235 4167 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.345442 master-0 kubenswrapper[4167]: E0216 17:14:28.345260 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.345252964 +0000 UTC m=+18.075699342 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.345442 master-0 kubenswrapper[4167]: E0216 17:14:28.345305 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:28.345442 master-0 kubenswrapper[4167]: E0216 17:14:28.345323 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:28.345442 master-0 kubenswrapper[4167]: E0216 17:14:28.345333 4167 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.345442 master-0 kubenswrapper[4167]: E0216 17:14:28.345441 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.345427269 +0000 UTC m=+18.075873647 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.447017 master-0 kubenswrapper[4167]: I0216 17:14:28.446871 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:28.448892 master-0 kubenswrapper[4167]: I0216 17:14:28.448852 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:28.449054 master-0 kubenswrapper[4167]: E0216 17:14:28.449012 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:28.449054 master-0 kubenswrapper[4167]: E0216 17:14:28.449052 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:28.449152 master-0 kubenswrapper[4167]: E0216 17:14:28.449070 4167 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.449152 master-0 kubenswrapper[4167]: E0216 17:14:28.449135 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:36.449114795 +0000 UTC m=+18.179561283 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:28.452996 master-0 kubenswrapper[4167]: E0216 17:14:28.452887 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:28.602709 master-0 kubenswrapper[4167]: E0216 17:14:28.602604 4167 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:14:28.617206 master-0 kubenswrapper[4167]: I0216 17:14:28.617143 4167 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="fa44e879e3a73f46980597ef18cf559cf0bf7029df3d01767e565e4b128ba414" exitCode=0 Feb 16 17:14:28.618250 master-0 kubenswrapper[4167]: I0216 17:14:28.617208 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"fa44e879e3a73f46980597ef18cf559cf0bf7029df3d01767e565e4b128ba414"} Feb 16 17:14:28.618250 master-0 kubenswrapper[4167]: I0216 17:14:28.617533 4167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:14:28.696005 master-0 kubenswrapper[4167]: I0216 17:14:28.694895 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:28.700118 master-0 kubenswrapper[4167]: I0216 17:14:28.700052 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:28.757996 master-0 kubenswrapper[4167]: I0216 17:14:28.755489 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:28.757996 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:28.757996 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:28.757996 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:28.757996 master-0 kubenswrapper[4167]: I0216 17:14:28.755551 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:29.139907 master-0 kubenswrapper[4167]: I0216 17:14:29.139554 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:14:29.139907 master-0 kubenswrapper[4167]: I0216 17:14:29.139857 4167 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:14:29Z","lastTransitionTime":"2026-02-16T17:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:14:29.445132 master-0 kubenswrapper[4167]: I0216 17:14:29.444937 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:29.445132 master-0 kubenswrapper[4167]: I0216 17:14:29.445000 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: E0216 17:14:29.445131 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445202 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445237 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445236 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445251 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445286 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445365 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445392 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445405 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: E0216 17:14:29.445321 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445473 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:29.445500 master-0 kubenswrapper[4167]: I0216 17:14:29.445506 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: I0216 17:14:29.445535 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: E0216 17:14:29.445535 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: I0216 17:14:29.445560 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: I0216 17:14:29.445561 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: E0216 17:14:29.445627 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: I0216 17:14:29.445670 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: E0216 17:14:29.445775 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: E0216 17:14:29.445850 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: E0216 17:14:29.445911 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:14:29.445974 master-0 kubenswrapper[4167]: E0216 17:14:29.445977 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446020 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: E0216 17:14:29.446115 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446126 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446135 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446145 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446160 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446147 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446177 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446161 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: E0216 17:14:29.446220 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446234 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446249 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446275 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446278 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446289 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446291 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446295 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: E0216 17:14:29.446343 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446357 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446353 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:29.446364 master-0 kubenswrapper[4167]: I0216 17:14:29.446378 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446495 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: E0216 17:14:29.446503 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446498 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446536 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446571 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446579 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446608 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446702 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446711 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446718 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446707 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446704 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446739 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446734 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446754 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446749 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446765 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446777 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446766 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446772 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446805 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: E0216 17:14:29.446841 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: E0216 17:14:29.446904 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.446938 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.447007 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.447010 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.447019 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.447057 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.447066 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.447037 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: E0216 17:14:29.447091 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:14:29.447227 master-0 kubenswrapper[4167]: I0216 17:14:29.447125 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.447288 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.447500 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.447637 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.447753 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.447818 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.447886 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.447933 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.448039 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.448118 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:14:29.448210 master-0 kubenswrapper[4167]: E0216 17:14:29.448162 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:14:29.448604 master-0 kubenswrapper[4167]: E0216 17:14:29.448279 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:14:29.448604 master-0 kubenswrapper[4167]: E0216 17:14:29.448460 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" Feb 16 17:14:29.448604 master-0 kubenswrapper[4167]: E0216 17:14:29.448540 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:29.448746 master-0 kubenswrapper[4167]: E0216 17:14:29.448601 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:14:29.448746 master-0 kubenswrapper[4167]: E0216 17:14:29.448673 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:14:29.448746 master-0 kubenswrapper[4167]: E0216 17:14:29.448719 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:14:29.448867 master-0 kubenswrapper[4167]: E0216 17:14:29.448791 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:29.448867 master-0 kubenswrapper[4167]: E0216 17:14:29.448845 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:29.449210 master-0 kubenswrapper[4167]: E0216 17:14:29.448932 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:14:29.449210 master-0 kubenswrapper[4167]: E0216 17:14:29.448981 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:14:29.449210 master-0 kubenswrapper[4167]: E0216 17:14:29.449081 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:14:29.449210 master-0 kubenswrapper[4167]: E0216 17:14:29.449138 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:14:29.449210 master-0 kubenswrapper[4167]: E0216 17:14:29.449187 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:14:29.449411 master-0 kubenswrapper[4167]: E0216 17:14:29.449270 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:29.449411 master-0 kubenswrapper[4167]: E0216 17:14:29.449373 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:14:29.449503 master-0 kubenswrapper[4167]: E0216 17:14:29.449421 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:14:29.449503 master-0 kubenswrapper[4167]: E0216 17:14:29.449483 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:29.449589 master-0 kubenswrapper[4167]: E0216 17:14:29.449550 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:14:29.449636 master-0 kubenswrapper[4167]: E0216 17:14:29.449610 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:14:29.449724 master-0 kubenswrapper[4167]: E0216 17:14:29.449686 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:29.449779 master-0 kubenswrapper[4167]: E0216 17:14:29.449723 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:14:29.449823 master-0 kubenswrapper[4167]: E0216 17:14:29.449794 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:14:29.449918 master-0 kubenswrapper[4167]: E0216 17:14:29.449885 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:29.450028 master-0 kubenswrapper[4167]: E0216 17:14:29.449996 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:29.450083 master-0 kubenswrapper[4167]: E0216 17:14:29.450052 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:14:29.450131 master-0 kubenswrapper[4167]: E0216 17:14:29.450113 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:29.450178 master-0 kubenswrapper[4167]: E0216 17:14:29.450161 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:14:29.450247 master-0 kubenswrapper[4167]: E0216 17:14:29.450220 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:14:29.450320 master-0 kubenswrapper[4167]: E0216 17:14:29.450301 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:14:29.450453 master-0 kubenswrapper[4167]: E0216 17:14:29.450363 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:14:29.450453 master-0 kubenswrapper[4167]: E0216 17:14:29.450415 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:29.450552 master-0 kubenswrapper[4167]: E0216 17:14:29.450533 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:14:29.450675 master-0 kubenswrapper[4167]: E0216 17:14:29.450633 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:29.450792 master-0 kubenswrapper[4167]: E0216 17:14:29.450768 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:29.450871 master-0 kubenswrapper[4167]: E0216 17:14:29.450845 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:14:29.450917 master-0 kubenswrapper[4167]: E0216 17:14:29.450896 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:14:29.450992 master-0 kubenswrapper[4167]: E0216 17:14:29.450949 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:29.626425 master-0 kubenswrapper[4167]: I0216 17:14:29.626317 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"dac15d7eada948ad47cd0f531b47c3157a1f1c84f8dd6641421d210893300fa3"} Feb 16 17:14:29.631074 master-0 kubenswrapper[4167]: I0216 17:14:29.631022 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:14:29.754001 master-0 kubenswrapper[4167]: I0216 17:14:29.753784 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:29.754001 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:29.754001 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:29.754001 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:29.754001 master-0 kubenswrapper[4167]: I0216 17:14:29.753860 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:30.444703 master-0 kubenswrapper[4167]: I0216 17:14:30.444600 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:30.444921 master-0 kubenswrapper[4167]: E0216 17:14:30.444865 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:30.754846 master-0 kubenswrapper[4167]: I0216 17:14:30.754645 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:30.754846 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:30.754846 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:30.754846 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:30.754846 master-0 kubenswrapper[4167]: I0216 17:14:30.754759 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:31.445006 master-0 kubenswrapper[4167]: I0216 17:14:31.444899 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:31.445006 master-0 kubenswrapper[4167]: I0216 17:14:31.444929 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:31.445006 master-0 kubenswrapper[4167]: I0216 17:14:31.444984 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:31.445006 master-0 kubenswrapper[4167]: I0216 17:14:31.444907 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:31.445006 master-0 kubenswrapper[4167]: I0216 17:14:31.444943 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445088 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445247 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: E0216 17:14:31.445218 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445288 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445290 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445328 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445330 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445364 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445371 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445341 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445399 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445369 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445411 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445353 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445438 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445419 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445424 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445397 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445319 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445631 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445640 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445673 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: E0216 17:14:31.445658 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445705 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445670 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445682 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445733 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445753 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445717 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445787 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445773 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445822 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445699 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445851 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445839 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445729 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445898 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445773 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445790 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445795 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445792 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445953 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445803 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445736 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.446003 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.446017 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445827 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445740 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.446056 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445758 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: E0216 17:14:31.445900 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445925 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445938 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.445947 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: E0216 17:14:31.446155 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.446176 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:31.446011 master-0 kubenswrapper[4167]: I0216 17:14:31.446052 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: I0216 17:14:31.445709 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: I0216 17:14:31.445802 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: I0216 17:14:31.445737 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: I0216 17:14:31.445800 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.446327 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.446476 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.446603 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.446679 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.446760 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.446884 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.447025 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.447122 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.447206 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.447309 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.447384 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.447464 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.447571 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.448090 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.448396 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.448673 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.448780 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.448860 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.448988 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449070 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449149 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449251 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449331 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449407 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449492 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449576 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449658 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449748 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.449879 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.450000 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.450282 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" Feb 16 17:14:31.450397 master-0 kubenswrapper[4167]: E0216 17:14:31.450364 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.450472 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.450554 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.450658 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.450733 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.450839 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.450919 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451058 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451166 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451261 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451334 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451409 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451490 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451588 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451721 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451822 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.451926 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.452058 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.452181 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.452256 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.452336 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.452485 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.452604 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.452682 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.452811 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.452889 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:14:31.452998 master-0 kubenswrapper[4167]: E0216 17:14:31.453015 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:14:31.632417 master-0 kubenswrapper[4167]: I0216 17:14:31.632323 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_32286c81635de6de1cf7f328273c1a49/startup-monitor/2.log" Feb 16 17:14:31.632417 master-0 kubenswrapper[4167]: I0216 17:14:31.632373 4167 generic.go:334] "Generic (PLEG): container finished" podID="32286c81635de6de1cf7f328273c1a49" containerID="d7c38e55f71867938246c19521c872dc2168e928e2d36640288dfca85978e020" exitCode=137 Feb 16 17:14:31.755859 master-0 kubenswrapper[4167]: I0216 17:14:31.755622 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:31.755859 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:31.755859 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:31.755859 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:31.755859 master-0 kubenswrapper[4167]: I0216 17:14:31.755698 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:32.048275 master-0 kubenswrapper[4167]: I0216 17:14:32.047836 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_32286c81635de6de1cf7f328273c1a49/startup-monitor/2.log" Feb 16 17:14:32.048275 master-0 kubenswrapper[4167]: I0216 17:14:32.047997 4167 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:32.097922 master-0 kubenswrapper[4167]: I0216 17:14:32.097587 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 17:14:32.098171 master-0 kubenswrapper[4167]: I0216 17:14:32.097980 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 17:14:32.098171 master-0 kubenswrapper[4167]: I0216 17:14:32.097744 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log" (OuterVolumeSpecName: "var-log") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:14:32.098171 master-0 kubenswrapper[4167]: I0216 17:14:32.098042 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 17:14:32.098385 master-0 kubenswrapper[4167]: I0216 17:14:32.098177 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock" (OuterVolumeSpecName: "var-lock") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:14:32.098385 master-0 kubenswrapper[4167]: I0216 17:14:32.098166 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:14:32.098385 master-0 kubenswrapper[4167]: I0216 17:14:32.098261 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 17:14:32.098385 master-0 kubenswrapper[4167]: I0216 17:14:32.098297 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") pod \"32286c81635de6de1cf7f328273c1a49\" (UID: \"32286c81635de6de1cf7f328273c1a49\") " Feb 16 17:14:32.098631 master-0 kubenswrapper[4167]: I0216 17:14:32.098381 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests" (OuterVolumeSpecName: "manifests") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:14:32.101119 master-0 kubenswrapper[4167]: I0216 17:14:32.101013 4167 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.101119 master-0 kubenswrapper[4167]: I0216 17:14:32.101042 4167 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.101119 master-0 kubenswrapper[4167]: I0216 17:14:32.101057 4167 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-manifests\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.101119 master-0 kubenswrapper[4167]: I0216 17:14:32.101070 4167 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-var-log\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.110668 master-0 kubenswrapper[4167]: I0216 17:14:32.110303 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "32286c81635de6de1cf7f328273c1a49" (UID: "32286c81635de6de1cf7f328273c1a49"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:14:32.205829 master-0 kubenswrapper[4167]: I0216 17:14:32.205779 4167 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/32286c81635de6de1cf7f328273c1a49-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.444339 master-0 kubenswrapper[4167]: I0216 17:14:32.444298 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:32.444523 master-0 kubenswrapper[4167]: E0216 17:14:32.444412 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:14:32.450473 master-0 kubenswrapper[4167]: I0216 17:14:32.450419 4167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32286c81635de6de1cf7f328273c1a49" path="/var/lib/kubelet/pods/32286c81635de6de1cf7f328273c1a49/volumes" Feb 16 17:14:32.451065 master-0 kubenswrapper[4167]: I0216 17:14:32.451032 4167 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Feb 16 17:14:32.464121 master-0 kubenswrapper[4167]: I0216 17:14:32.464050 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 17:14:32.464121 master-0 kubenswrapper[4167]: I0216 17:14:32.464097 4167 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="3e8413b2-3361-497a-9ac5-680dae58ae63" Feb 16 17:14:32.469000 master-0 kubenswrapper[4167]: I0216 17:14:32.468903 4167 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 17:14:32.469000 master-0 kubenswrapper[4167]: I0216 17:14:32.468950 4167 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="3e8413b2-3361-497a-9ac5-680dae58ae63" Feb 16 17:14:32.581177 master-0 kubenswrapper[4167]: I0216 17:14:32.578095 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:14:32.581177 master-0 kubenswrapper[4167]: I0216 17:14:32.578320 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:32.614907 master-0 kubenswrapper[4167]: I0216 17:14:32.609543 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:32.648139 master-0 kubenswrapper[4167]: I0216 17:14:32.647537 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_32286c81635de6de1cf7f328273c1a49/startup-monitor/2.log" Feb 16 17:14:32.648139 master-0 kubenswrapper[4167]: I0216 17:14:32.647657 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:14:32.648139 master-0 kubenswrapper[4167]: I0216 17:14:32.648049 4167 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:14:32.671661 master-0 kubenswrapper[4167]: I0216 17:14:32.671460 4167 scope.go:117] "RemoveContainer" containerID="d7c38e55f71867938246c19521c872dc2168e928e2d36640288dfca85978e020" Feb 16 17:14:32.731044 master-0 kubenswrapper[4167]: I0216 17:14:32.721899 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.731044 master-0 kubenswrapper[4167]: I0216 17:14:32.721947 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.731044 master-0 kubenswrapper[4167]: I0216 17:14:32.724098 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:14:32.731044 master-0 kubenswrapper[4167]: I0216 17:14:32.724389 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume" (OuterVolumeSpecName: "config-volume") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:14:32.756069 master-0 kubenswrapper[4167]: I0216 17:14:32.756013 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:32.756069 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:32.756069 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:32.756069 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:32.756776 master-0 kubenswrapper[4167]: I0216 17:14:32.756083 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:32.826397 master-0 kubenswrapper[4167]: I0216 17:14:32.826341 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.826583 master-0 kubenswrapper[4167]: I0216 17:14:32.826496 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.826583 master-0 kubenswrapper[4167]: I0216 17:14:32.826546 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.826644 master-0 kubenswrapper[4167]: I0216 17:14:32.826592 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.826644 master-0 kubenswrapper[4167]: I0216 17:14:32.826624 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.826712 master-0 kubenswrapper[4167]: I0216 17:14:32.826673 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.826712 master-0 kubenswrapper[4167]: I0216 17:14:32.826708 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: 
\"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.826768 master-0 kubenswrapper[4167]: I0216 17:14:32.826731 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.826768 master-0 kubenswrapper[4167]: I0216 17:14:32.826756 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjpvn\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.826822 master-0 kubenswrapper[4167]: I0216 17:14:32.826796 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") pod \"e1443fb7-cb1e-4105-b604-b88c749620c4\" (UID: \"e1443fb7-cb1e-4105-b604-b88c749620c4\") " Feb 16 17:14:32.827864 master-0 kubenswrapper[4167]: I0216 17:14:32.827826 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:14:32.828210 master-0 kubenswrapper[4167]: I0216 17:14:32.828184 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:14:32.828539 master-0 kubenswrapper[4167]: I0216 17:14:32.828510 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:14:32.828578 master-0 kubenswrapper[4167]: I0216 17:14:32.828564 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:14:32.828811 master-0 kubenswrapper[4167]: I0216 17:14:32.828785 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:14:32.829753 master-0 kubenswrapper[4167]: I0216 17:14:32.829731 4167 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.829753 master-0 kubenswrapper[4167]: I0216 17:14:32.829751 4167 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.829842 master-0 kubenswrapper[4167]: I0216 17:14:32.829762 4167 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e1443fb7-cb1e-4105-b604-b88c749620c4-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.829842 master-0 kubenswrapper[4167]: I0216 17:14:32.829773 4167 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.829842 master-0 kubenswrapper[4167]: I0216 17:14:32.829784 4167 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.829842 master-0 kubenswrapper[4167]: I0216 17:14:32.829795 4167 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.830043 master-0 kubenswrapper[4167]: I0216 17:14:32.830021 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:14:32.830142 master-0 kubenswrapper[4167]: I0216 17:14:32.830114 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:14:32.832685 master-0 kubenswrapper[4167]: I0216 17:14:32.832652 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn" (OuterVolumeSpecName: "kube-api-access-tjpvn") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "kube-api-access-tjpvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:14:32.841246 master-0 kubenswrapper[4167]: I0216 17:14:32.841160 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out" (OuterVolumeSpecName: "config-out") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:14:32.849456 master-0 kubenswrapper[4167]: I0216 17:14:32.848138 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config" (OuterVolumeSpecName: "web-config") pod "e1443fb7-cb1e-4105-b604-b88c749620c4" (UID: "e1443fb7-cb1e-4105-b604-b88c749620c4"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:14:32.932521 master-0 kubenswrapper[4167]: I0216 17:14:32.932449 4167 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.932521 master-0 kubenswrapper[4167]: I0216 17:14:32.932483 4167 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-tls-assets\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.932521 master-0 kubenswrapper[4167]: I0216 17:14:32.932495 4167 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjpvn\" (UniqueName: \"kubernetes.io/projected/e1443fb7-cb1e-4105-b604-b88c749620c4-kube-api-access-tjpvn\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.932521 master-0 kubenswrapper[4167]: I0216 17:14:32.932504 4167 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-web-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.932521 master-0 kubenswrapper[4167]: I0216 17:14:32.932514 4167 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/e1443fb7-cb1e-4105-b604-b88c749620c4-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:32.932521 master-0 kubenswrapper[4167]: I0216 17:14:32.932526 4167 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e1443fb7-cb1e-4105-b604-b88c749620c4-config-out\") on node \"master-0\" DevicePath \"\"" Feb 16 17:14:33.012661 master-0 kubenswrapper[4167]: I0216 17:14:33.012469 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:14:33.029032 master-0 kubenswrapper[4167]: I0216 17:14:33.028951 4167 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:14:33.444987 
Feb 16 17:14:33.444987 master-0 kubenswrapper[4167]: I0216 17:14:33.444774 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:14:33.444987 master-0 kubenswrapper[4167]: I0216 17:14:33.444774 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:14:33.444987 master-0 kubenswrapper[4167]: I0216 17:14:33.444792 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:14:33.444987 master-0 kubenswrapper[4167]: I0216 17:14:33.444912 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445080 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: E0216 17:14:33.445083 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445135 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445137 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445163 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445171 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445133 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445193 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445146 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445207 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445222 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445242 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445250 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445259 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445250 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445269 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445274 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:33.445369 master-0 kubenswrapper[4167]: I0216 17:14:33.445291 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445544 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445584 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445587 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445619 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: E0216 17:14:33.445584 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445639 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445652 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445637 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445588 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445618 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445618 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445676 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445693 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445680 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445657 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445714 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445733 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445740 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445703 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445744 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445684 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445699 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: E0216 17:14:33.445829 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445716 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445774 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445791 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445753 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445902 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445873 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445900 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445934 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445950 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445942 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: E0216 17:14:33.446020 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.445925 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.446048 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.446085 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.446076 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.446112 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:14:33.446098 master-0 kubenswrapper[4167]: I0216 17:14:33.446097 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:14:33.447563 master-0 kubenswrapper[4167]: I0216 17:14:33.446264 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:14:33.447563 master-0 kubenswrapper[4167]: I0216 17:14:33.446270 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:14:33.447563 master-0 kubenswrapper[4167]: E0216 17:14:33.446272 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4"
Feb 16 17:14:33.452524 master-0 kubenswrapper[4167]: E0216 17:14:33.452467 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1"
Feb 16 17:14:33.452721 master-0 kubenswrapper[4167]: E0216 17:14:33.452663 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f"
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:14:33.452832 master-0 kubenswrapper[4167]: E0216 17:14:33.452801 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:14:33.453023 master-0 kubenswrapper[4167]: E0216 17:14:33.452946 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:14:33.453186 master-0 kubenswrapper[4167]: E0216 17:14:33.453141 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:14:33.453329 master-0 kubenswrapper[4167]: E0216 17:14:33.453296 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:14:33.453434 master-0 kubenswrapper[4167]: E0216 17:14:33.453412 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:14:33.453551 master-0 kubenswrapper[4167]: E0216 17:14:33.453531 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:14:33.453643 master-0 kubenswrapper[4167]: E0216 17:14:33.453623 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:14:33.453809 master-0 kubenswrapper[4167]: E0216 17:14:33.453786 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:14:33.453909 master-0 kubenswrapper[4167]: E0216 17:14:33.453888 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:14:33.457205 master-0 kubenswrapper[4167]: E0216 17:14:33.457131 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:14:33.458557 master-0 kubenswrapper[4167]: E0216 17:14:33.458527 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:14:33.458639 master-0 kubenswrapper[4167]: E0216 17:14:33.458625 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:14:33.458712 master-0 kubenswrapper[4167]: E0216 17:14:33.458695 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:14:33.458878 master-0 kubenswrapper[4167]: E0216 17:14:33.458803 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:14:33.458974 master-0 kubenswrapper[4167]: E0216 17:14:33.458895 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:14:33.459152 master-0 kubenswrapper[4167]: E0216 17:14:33.459102 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:14:33.459213 master-0 kubenswrapper[4167]: E0216 17:14:33.457736 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:14:33.459311 master-0 kubenswrapper[4167]: E0216 17:14:33.459243 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:14:33.459520 master-0 kubenswrapper[4167]: E0216 17:14:33.459467 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:14:33.459646 master-0 kubenswrapper[4167]: E0216 17:14:33.459605 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:14:33.459859 master-0 kubenswrapper[4167]: E0216 17:14:33.459822 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:14:33.460124 master-0 kubenswrapper[4167]: E0216 17:14:33.460090 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:14:33.460233 master-0 kubenswrapper[4167]: E0216 17:14:33.460201 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:14:33.460348 master-0 kubenswrapper[4167]: E0216 17:14:33.460318 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:14:33.460431 master-0 kubenswrapper[4167]: E0216 17:14:33.460400 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:14:33.460597 master-0 kubenswrapper[4167]: E0216 17:14:33.460560 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:14:33.460711 master-0 kubenswrapper[4167]: E0216 17:14:33.460676 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:14:33.460860 master-0 kubenswrapper[4167]: E0216 17:14:33.460799 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:14:33.460946 master-0 kubenswrapper[4167]: E0216 17:14:33.460912 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:14:33.461164 master-0 kubenswrapper[4167]: E0216 17:14:33.461138 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:14:33.461501 master-0 kubenswrapper[4167]: E0216 17:14:33.461455 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" Feb 16 17:14:33.461583 master-0 kubenswrapper[4167]: E0216 17:14:33.461541 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:14:33.462111 master-0 kubenswrapper[4167]: E0216 17:14:33.462078 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:14:33.462172 master-0 kubenswrapper[4167]: E0216 17:14:33.462068 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:14:33.462230 master-0 kubenswrapper[4167]: E0216 17:14:33.462173 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:14:33.462280 master-0 kubenswrapper[4167]: E0216 17:14:33.462249 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:14:33.462411 master-0 kubenswrapper[4167]: E0216 17:14:33.462366 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:14:33.462519 master-0 kubenswrapper[4167]: E0216 17:14:33.462490 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:14:33.462694 master-0 kubenswrapper[4167]: E0216 17:14:33.462646 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:14:33.462801 master-0 kubenswrapper[4167]: E0216 17:14:33.462769 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:14:33.462909 master-0 kubenswrapper[4167]: E0216 17:14:33.462875 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:14:33.463097 master-0 kubenswrapper[4167]: E0216 17:14:33.463037 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:14:33.463165 master-0 kubenswrapper[4167]: E0216 17:14:33.463147 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:14:33.463359 master-0 kubenswrapper[4167]: E0216 17:14:33.463324 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:14:33.463484 master-0 kubenswrapper[4167]: E0216 17:14:33.463447 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:14:33.463581 master-0 kubenswrapper[4167]: E0216 17:14:33.463556 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:14:33.463785 master-0 kubenswrapper[4167]: E0216 17:14:33.463709 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:14:33.463861 master-0 kubenswrapper[4167]: E0216 17:14:33.463808 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:14:33.464060 master-0 kubenswrapper[4167]: E0216 17:14:33.464007 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:14:33.464132 master-0 kubenswrapper[4167]: E0216 17:14:33.464092 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:14:33.464218 master-0 kubenswrapper[4167]: E0216 17:14:33.464194 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:14:33.464301 master-0 kubenswrapper[4167]: E0216 17:14:33.464278 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:14:33.464380 master-0 kubenswrapper[4167]: E0216 17:14:33.464359 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:14:33.464495 master-0 kubenswrapper[4167]: E0216 17:14:33.464456 4167 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:14:33.754504 master-0 kubenswrapper[4167]: I0216 17:14:33.754027 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:33.754504 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:33.754504 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:33.754504 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:33.754504 master-0 kubenswrapper[4167]: I0216 17:14:33.754428 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:34.444363 master-0 kubenswrapper[4167]: I0216 17:14:34.444295 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:34.446833 master-0 kubenswrapper[4167]: I0216 17:14:34.446793 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 17:14:34.446955 master-0 kubenswrapper[4167]: I0216 17:14:34.446934 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 17:14:34.447954 master-0 kubenswrapper[4167]: I0216 17:14:34.447927 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 17:14:34.450702 master-0 kubenswrapper[4167]: I0216 17:14:34.450661 4167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1443fb7-cb1e-4105-b604-b88c749620c4" path="/var/lib/kubelet/pods/e1443fb7-cb1e-4105-b604-b88c749620c4/volumes" Feb 16 17:14:34.455023 master-0 kubenswrapper[4167]: I0216 17:14:34.454992 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 17:14:34.754207 master-0 kubenswrapper[4167]: I0216 17:14:34.754076 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:34.754207 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:34.754207 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:34.754207 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:34.754207 master-0 kubenswrapper[4167]: I0216 17:14:34.754142 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:35.417365 master-0 kubenswrapper[4167]: I0216 17:14:35.417308 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:35.417365 master-0 kubenswrapper[4167]: I0216 17:14:35.417356 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:35.417605 master-0 kubenswrapper[4167]: I0216 17:14:35.417394 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:35.417605 master-0 kubenswrapper[4167]: I0216 17:14:35.417425 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:35.417663 master-0 kubenswrapper[4167]: E0216 17:14:35.417589 4167 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:35.417663 master-0 kubenswrapper[4167]: E0216 17:14:35.417639 4167 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:35.417729 master-0 kubenswrapper[4167]: I0216 17:14:35.417595 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:35.417729 master-0 kubenswrapper[4167]: E0216 17:14:35.417679 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.417665562 +0000 UTC m=+33.148111940 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:14:35.417729 master-0 kubenswrapper[4167]: E0216 17:14:35.417717 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:35.417729 master-0 kubenswrapper[4167]: I0216 17:14:35.417721 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:35.417850 master-0 kubenswrapper[4167]: I0216 17:14:35.417752 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:35.417850 master-0 kubenswrapper[4167]: E0216 17:14:35.417766 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:51.417752525 +0000 UTC m=+33.148198893 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:14:35.417850 master-0 kubenswrapper[4167]: E0216 17:14:35.417783 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.417775505 +0000 UTC m=+33.148221883 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:14:35.417942 master-0 kubenswrapper[4167]: E0216 17:14:35.417856 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:35.417942 master-0 kubenswrapper[4167]: E0216 17:14:35.417885 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:35.417942 master-0 kubenswrapper[4167]: E0216 17:14:35.417911 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.417905499 +0000 UTC m=+33.148351877 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:35.417942 master-0 kubenswrapper[4167]: E0216 17:14:35.417940 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:35.418095 master-0 kubenswrapper[4167]: E0216 17:14:35.418007 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:35.418095 master-0 kubenswrapper[4167]: E0216 17:14:35.418006 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.417933739 +0000 UTC m=+33.148380167 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:14:35.418095 master-0 kubenswrapper[4167]: E0216 17:14:35.418045 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418037712 +0000 UTC m=+33.148484090 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:35.418095 master-0 kubenswrapper[4167]: I0216 17:14:35.418062 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:35.418095 master-0 kubenswrapper[4167]: I0216 17:14:35.418087 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:35.418095 master-0 kubenswrapper[4167]: E0216 17:14:35.418093 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:35.418264 master-0 kubenswrapper[4167]: I0216 17:14:35.418110 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:35.418264 master-0 kubenswrapper[4167]: E0216 17:14:35.418121 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418115094 +0000 UTC m=+33.148561472 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:35.418264 master-0 kubenswrapper[4167]: E0216 17:14:35.418154 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:35.418264 master-0 kubenswrapper[4167]: I0216 17:14:35.418159 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:35.418264 master-0 kubenswrapper[4167]: E0216 17:14:35.418175 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418169086 +0000 UTC m=+33.148615464 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:14:35.418264 master-0 kubenswrapper[4167]: E0216 17:14:35.418188 4167 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:35.418264 master-0 kubenswrapper[4167]: I0216 17:14:35.418192 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:35.418264 master-0 kubenswrapper[4167]: E0216 17:14:35.418208 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418203417 +0000 UTC m=+33.148649795 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:14:35.418264 master-0 kubenswrapper[4167]: E0216 17:14:35.418221 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418215217 +0000 UTC m=+33.148661595 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:14:35.418520 master-0 kubenswrapper[4167]: E0216 17:14:35.418330 4167 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:35.418520 master-0 kubenswrapper[4167]: I0216 17:14:35.418357 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.418520 master-0 kubenswrapper[4167]: I0216 17:14:35.418381 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.418520 master-0 kubenswrapper[4167]: E0216 17:14:35.418411 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:35.418520 master-0 kubenswrapper[4167]: E0216 17:14:35.418415 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418388922 +0000 UTC m=+33.148835360 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:14:35.418520 master-0 kubenswrapper[4167]: E0216 17:14:35.418450 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:35.418520 master-0 kubenswrapper[4167]: E0216 17:14:35.418470 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418464014 +0000 UTC m=+33.148910392 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:14:35.418520 master-0 kubenswrapper[4167]: I0216 17:14:35.418470 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:35.418520 master-0 kubenswrapper[4167]: E0216 17:14:35.418500 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:35.418773 master-0 kubenswrapper[4167]: E0216 17:14:35.418550 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:35.418773 master-0 kubenswrapper[4167]: E0216 17:14:35.418561 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418541236 +0000 UTC m=+33.148987684 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:14:35.418773 master-0 kubenswrapper[4167]: E0216 17:14:35.418603 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418590167 +0000 UTC m=+33.149036545 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:14:35.418773 master-0 kubenswrapper[4167]: I0216 17:14:35.418670 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:35.418773 master-0 kubenswrapper[4167]: I0216 17:14:35.418724 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:35.418773 master-0 kubenswrapper[4167]: E0216 17:14:35.418764 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418736521 +0000 UTC m=+33.149182949 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:14:35.418948 master-0 kubenswrapper[4167]: E0216 17:14:35.418777 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:35.418948 master-0 kubenswrapper[4167]: E0216 17:14:35.418768 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:35.418948 master-0 kubenswrapper[4167]: I0216 17:14:35.418824 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:35.418948 master-0 kubenswrapper[4167]: E0216 17:14:35.418842 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:35.418948 master-0 kubenswrapper[4167]: E0216 17:14:35.418847 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418833984 +0000 UTC m=+33.149280422 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:14:35.418948 master-0 kubenswrapper[4167]: E0216 17:14:35.418878 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418872425 +0000 UTC m=+33.149318793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:14:35.418948 master-0 kubenswrapper[4167]: I0216 17:14:35.418900 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:35.418948 master-0 kubenswrapper[4167]: I0216 17:14:35.418932 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: I0216 17:14:35.418976 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: E0216 17:14:35.419000 4167 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: E0216 17:14:35.419024 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.418996418 +0000 UTC m=+33.149442866 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: E0216 17:14:35.419026 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: I0216 17:14:35.419075 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: E0216 17:14:35.419107 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419090211 +0000 UTC m=+33.149536629 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: E0216 17:14:35.419030 4167 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: E0216 17:14:35.419141 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419131392 +0000 UTC m=+33.149577770 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: E0216 17:14:35.419164 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419151472 +0000 UTC m=+33.149597910 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:14:35.419185 master-0 kubenswrapper[4167]: E0216 17:14:35.419169 4167 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: E0216 17:14:35.419229 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419210484 +0000 UTC m=+33.149656902 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: I0216 17:14:35.419271 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: E0216 17:14:35.419316 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: E0216 17:14:35.419346 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419339048 +0000 UTC m=+33.149785426 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: I0216 17:14:35.419377 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: I0216 17:14:35.419401 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: I0216 17:14:35.419422 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: E0216 17:14:35.419491 4167 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: E0216 17:14:35.419541 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419526293 +0000 UTC m=+33.149972681 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: I0216 17:14:35.419567 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: I0216 17:14:35.419591 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:35.419694 master-0 kubenswrapper[4167]: I0216 17:14:35.419612 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.419719 4167 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: I0216 17:14:35.419792 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: I0216 17:14:35.419844 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.419857 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419847291 +0000 UTC m=+33.150293669 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.419882 4167 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.419909 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.419916 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419907343 +0000 UTC m=+33.150353851 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.419933 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419927063 +0000 UTC m=+33.150373441 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.419935 4167 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.419947 4167 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.419988 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419980925 +0000 UTC m=+33.150427303 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: E0216 17:14:35.420000 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.419995675 +0000 UTC m=+33.150442053 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: I0216 17:14:35.419884 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: I0216 17:14:35.420031 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:35.420047 master-0 kubenswrapper[4167]: I0216 17:14:35.420054 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: I0216 17:14:35.420079 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420139 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420167 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42015974 +0000 UTC m=+33.150606218 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420201 4167 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420211 4167 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420229 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420224121 +0000 UTC m=+33.150670499 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420259 4167 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420279 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420274083 +0000 UTC m=+33.150720461 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: I0216 17:14:35.420300 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: I0216 17:14:35.420321 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420391 4167 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420395 4167 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420414 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420407436 +0000 UTC m=+33.150853814 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420427 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420422067 +0000 UTC m=+33.150868445 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420436 4167 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: I0216 17:14:35.420442 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420458 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420450998 +0000 UTC m=+33.150897506 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: E0216 17:14:35.420484 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:35.420474 master-0 kubenswrapper[4167]: I0216 17:14:35.420486 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420504 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420499509 +0000 UTC m=+33.150945887 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: I0216 17:14:35.420520 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420528 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: I0216 17:14:35.420540 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420550 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42054351 +0000 UTC m=+33.150989888 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420569 4167 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: I0216 17:14:35.420581 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: I0216 17:14:35.420604 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420613 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420602792 +0000 UTC m=+33.151049260 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420627 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420673 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420658043 +0000 UTC m=+33.151104531 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420754 4167 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420821 4167 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420855 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420845978 +0000 UTC m=+33.151292456 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: I0216 17:14:35.420773 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420873 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.420863779 +0000 UTC m=+33.151310377 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: E0216 17:14:35.420923 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:35.420998 master-0 kubenswrapper[4167]: I0216 17:14:35.420929 4167 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.420951 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:51.420943101 +0000 UTC m=+33.151389629 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421043 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421089 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: I0216 17:14:35.420948 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421094 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421086465 +0000 UTC m=+33.151532843 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421130 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421124196 +0000 UTC m=+33.151570574 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: I0216 17:14:35.421154 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: I0216 17:14:35.421176 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421239 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421263 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421256859 +0000 UTC m=+33.151703237 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421270 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: I0216 17:14:35.421306 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421322 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421301541 +0000 UTC m=+33.151747929 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421354 4167 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421377 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421372053 +0000 UTC m=+33.151818431 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: I0216 17:14:35.421353 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421388 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: I0216 17:14:35.421405 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421419 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421409534 +0000 UTC m=+33.151855932 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: I0216 17:14:35.421443 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421460 4167 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: I0216 17:14:35.421473 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421483 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421476845 +0000 UTC m=+33.151923223 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: I0216 17:14:35.421503 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:35.421498 master-0 kubenswrapper[4167]: E0216 17:14:35.421517 4167 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.421530 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421536 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421531347 +0000 UTC m=+33.151977725 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421564 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421583 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421578048 +0000 UTC m=+33.152024426 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421612 4167 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421637 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421641 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42163338 +0000 UTC m=+33.152079768 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.421563 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421656 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42165157 +0000 UTC m=+33.152097948 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421616 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.421688 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421696 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:51.421691051 +0000 UTC m=+33.152137429 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.421755 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421757 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421781 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421798 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421790904 +0000 UTC m=+33.152237302 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.421816 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421839 4167 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421864 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421857236 +0000 UTC m=+33.152303634 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421869 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421883 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421875766 +0000 UTC m=+33.152322154 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.421837 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.421901 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.421893947 +0000 UTC m=+33.152340335 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.421920 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.421976 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.422054 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.422064 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.422087 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.422079082 +0000 UTC m=+33.152525470 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.422013 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.422106 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.422099052 +0000 UTC m=+33.152545440 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.422129 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.422133 4167 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: I0216 17:14:35.422159 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:35.422273 master-0 kubenswrapper[4167]: E0216 17:14:35.422179 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.422171164 +0000 UTC m=+33.152617542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422289 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422367 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.422346249 +0000 UTC m=+33.152792707 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422371 4167 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422412 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42240073 +0000 UTC m=+33.152847198 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.422406 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.422462 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422494 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422526 4167 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422560 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.422539784 +0000 UTC m=+33.152986262 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422591 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.422575105 +0000 UTC m=+33.153021523 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422605 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422619 4167 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422632 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422664 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.422655327 +0000 UTC m=+33.153101715 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.422497 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.422700 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.422727 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422774 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422800 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.422792261 +0000 UTC m=+33.153238659 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422823 4167 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422839 4167 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422849 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.422876 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.422868163 +0000 UTC m=+33.153314541 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.422902 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.422933 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.422984 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.423011 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.423057 4167 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.423081 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.423087 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.423078179 +0000 UTC m=+33.153524577 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.423085 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: I0216 17:14:35.423174 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:35.423311 master-0 kubenswrapper[4167]: E0216 17:14:35.423182 4167 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.424317 master-0 kubenswrapper[4167]: E0216 17:14:35.423204 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.423192842 +0000 UTC m=+33.153639220 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:14:35.424407 master-0 kubenswrapper[4167]: I0216 17:14:35.424383 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:35.424450 master-0 kubenswrapper[4167]: E0216 17:14:35.423134 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:14:35.424450 master-0 kubenswrapper[4167]: E0216 17:14:35.424433 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.424450 master-0 kubenswrapper[4167]: E0216 17:14:35.424447 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.424531 master-0 kubenswrapper[4167]: E0216 17:14:35.424483 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.424473436 +0000 UTC m=+33.154919824 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.424531 master-0 kubenswrapper[4167]: E0216 17:14:35.424478 4167 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:35.424531 master-0 kubenswrapper[4167]: E0216 17:14:35.423238 4167 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:35.424531 master-0 kubenswrapper[4167]: E0216 17:14:35.424529 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:51.424522888 +0000 UTC m=+33.154969276 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:14:35.424644 master-0 kubenswrapper[4167]: E0216 17:14:35.423205 4167 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.424644 master-0 kubenswrapper[4167]: I0216 17:14:35.424536 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:35.424644 master-0 kubenswrapper[4167]: E0216 17:14:35.424562 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.424555339 +0000 UTC m=+33.155001727 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.424644 master-0 kubenswrapper[4167]: I0216 17:14:35.424628 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:35.424760 master-0 kubenswrapper[4167]: E0216 17:14:35.424637 4167 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:14:35.424760 master-0 kubenswrapper[4167]: E0216 17:14:35.424681 4167 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.424760 master-0 kubenswrapper[4167]: E0216 17:14:35.424696 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:35.424760 master-0 kubenswrapper[4167]: E0216 17:14:35.424700 4167 projected.go:194] Error preparing data for projected volume 
kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.424760 master-0 kubenswrapper[4167]: E0216 17:14:35.424649 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.424641161 +0000 UTC m=+33.155087559 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:14:35.424895 master-0 kubenswrapper[4167]: I0216 17:14:35.424765 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:35.424895 master-0 kubenswrapper[4167]: I0216 17:14:35.424797 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:35.424895 master-0 kubenswrapper[4167]: I0216 17:14:35.424826 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:35.424895 master-0 kubenswrapper[4167]: I0216 17:14:35.424855 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:35.424895 master-0 kubenswrapper[4167]: I0216 17:14:35.424882 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:35.425052 master-0 kubenswrapper[4167]: I0216 17:14:35.424914 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " 
pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:35.425052 master-0 kubenswrapper[4167]: E0216 17:14:35.424917 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:35.425052 master-0 kubenswrapper[4167]: E0216 17:14:35.424947 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.425052 master-0 kubenswrapper[4167]: E0216 17:14:35.424981 4167 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.425052 master-0 kubenswrapper[4167]: E0216 17:14:35.424988 4167 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:35.425052 master-0 kubenswrapper[4167]: E0216 17:14:35.425007 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:35.425052 master-0 kubenswrapper[4167]: E0216 17:14:35.425011 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.425001971 +0000 UTC m=+33.155448359 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.425052 master-0 kubenswrapper[4167]: E0216 17:14:35.425053 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.425036742 +0000 UTC m=+33.155483160 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:14:35.425275 master-0 kubenswrapper[4167]: E0216 17:14:35.425076 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.425065932 +0000 UTC m=+33.155512340 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:14:35.425275 master-0 kubenswrapper[4167]: E0216 17:14:35.425098 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.425087003 +0000 UTC m=+33.155533411 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.425275 master-0 kubenswrapper[4167]: E0216 17:14:35.425120 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:35.425275 master-0 kubenswrapper[4167]: E0216 17:14:35.425131 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.425275 master-0 kubenswrapper[4167]: E0216 17:14:35.425138 4167 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.425275 master-0 kubenswrapper[4167]: I0216 17:14:35.425146 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:35.425275 master-0 kubenswrapper[4167]: E0216 17:14:35.425122 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.425112564 +0000 UTC m=+33.155558972 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:35.425468 master-0 kubenswrapper[4167]: I0216 17:14:35.425250 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:35.425468 master-0 kubenswrapper[4167]: E0216 17:14:35.425078 4167 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:14:35.425468 master-0 kubenswrapper[4167]: E0216 17:14:35.425433 4167 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.425468 master-0 kubenswrapper[4167]: E0216 17:14:35.425448 4167 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.425578 master-0 kubenswrapper[4167]: E0216 17:14:35.425329 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.425319769 +0000 UTC m=+33.155766147 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.425578 master-0 kubenswrapper[4167]: E0216 17:14:35.425368 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:14:35.425578 master-0 kubenswrapper[4167]: E0216 17:14:35.425548 4167 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.425578 master-0 kubenswrapper[4167]: E0216 17:14:35.425562 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.425707 master-0 kubenswrapper[4167]: I0216 17:14:35.425602 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:35.425799 master-0 kubenswrapper[4167]: I0216 17:14:35.425652 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:35.425839 master-0 kubenswrapper[4167]: E0216 17:14:35.425676 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.425662959 +0000 UTC m=+33.156109337 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.425897 master-0 kubenswrapper[4167]: E0216 17:14:35.425682 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:35.425935 master-0 kubenswrapper[4167]: E0216 17:14:35.425910 4167 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.425935 master-0 kubenswrapper[4167]: E0216 17:14:35.425925 4167 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.426018 master-0 kubenswrapper[4167]: E0216 17:14:35.425751 4167 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:35.426053 master-0 kubenswrapper[4167]: E0216 17:14:35.426031 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.425839063 +0000 UTC m=+33.156285481 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.426110 master-0 kubenswrapper[4167]: I0216 17:14:35.426077 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:35.426176 master-0 kubenswrapper[4167]: E0216 17:14:35.426155 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:35.426212 master-0 kubenswrapper[4167]: I0216 17:14:35.426153 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.426242 master-0 kubenswrapper[4167]: E0216 17:14:35.426213 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426193733 +0000 UTC m=+33.156640231 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:14:35.426242 master-0 kubenswrapper[4167]: E0216 17:14:35.426236 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426227314 +0000 UTC m=+33.156673822 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.426300 master-0 kubenswrapper[4167]: I0216 17:14:35.426269 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.426331 master-0 kubenswrapper[4167]: I0216 17:14:35.426318 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:35.426360 master-0 kubenswrapper[4167]: E0216 17:14:35.426340 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426331127 +0000 UTC m=+33.156777505 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:14:35.426394 master-0 kubenswrapper[4167]: E0216 17:14:35.426358 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:35.426394 master-0 kubenswrapper[4167]: I0216 17:14:35.426376 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:35.426450 master-0 kubenswrapper[4167]: E0216 17:14:35.426393 4167 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:35.426450 master-0 kubenswrapper[4167]: I0216 17:14:35.426418 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:35.426450 master-0 kubenswrapper[4167]: E0216 17:14:35.426433 4167 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426418989 +0000 UTC m=+33.156865377 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:14:35.426536 master-0 kubenswrapper[4167]: E0216 17:14:35.426450 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0 podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42644252 +0000 UTC m=+33.156888898 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:14:35.426571 master-0 kubenswrapper[4167]: E0216 17:14:35.426528 4167 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:35.426571 master-0 kubenswrapper[4167]: E0216 17:14:35.426539 4167 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:35.426632 master-0 kubenswrapper[4167]: E0216 17:14:35.426578 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426566713 +0000 UTC m=+33.157013181 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:14:35.426632 master-0 kubenswrapper[4167]: I0216 17:14:35.426578 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:35.426632 master-0 kubenswrapper[4167]: E0216 17:14:35.426594 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426586974 +0000 UTC m=+33.157033462 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:14:35.426632 master-0 kubenswrapper[4167]: I0216 17:14:35.426618 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:35.426746 master-0 kubenswrapper[4167]: I0216 17:14:35.426652 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:35.426746 master-0 kubenswrapper[4167]: E0216 17:14:35.426660 4167 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:35.426746 master-0 kubenswrapper[4167]: I0216 17:14:35.426684 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:35.426746 master-0 kubenswrapper[4167]: E0216 17:14:35.426696 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426685406 +0000 UTC m=+33.157131784 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:14:35.426746 master-0 kubenswrapper[4167]: I0216 17:14:35.426714 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:35.426746 master-0 kubenswrapper[4167]: I0216 17:14:35.426743 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426754 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426760 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426803 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426789219 +0000 UTC m=+33.157235677 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426810 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426816 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426835 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42681489 +0000 UTC m=+33.157261328 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426821 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426864 4167 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426869 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426851361 +0000 UTC m=+33.157297809 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426875 4167 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.426907 master-0 kubenswrapper[4167]: E0216 17:14:35.426906 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.426896972 +0000 UTC m=+33.157343340 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: I0216 17:14:35.426916 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.426983 4167 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.427014 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427005405 +0000 UTC m=+33.157451883 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.426633 4167 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: I0216 17:14:35.427036 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.427057 4167 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.427062 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427045996 +0000 UTC m=+33.157492464 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.427086 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427080177 +0000 UTC m=+33.157526555 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: I0216 17:14:35.427115 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: I0216 17:14:35.427138 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.427149 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427140769 +0000 UTC m=+33.157587147 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: I0216 17:14:35.427171 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.427189 4167 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: I0216 17:14:35.427196 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.427207 4167 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: I0216 17:14:35.427231 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: E0216 17:14:35.427258 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427244521 +0000 UTC m=+33.157690969 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:14:35.427302 master-0 kubenswrapper[4167]: I0216 17:14:35.427295 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427307 4167 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427334 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427324214 +0000 UTC m=+33.157770722 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427340 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427346 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427378 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427368785 +0000 UTC m=+33.157815253 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427402 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427388275 +0000 UTC m=+33.157834733 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427347 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427444 4167 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427489 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427477438 +0000 UTC m=+33.157923916 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427319 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: I0216 17:14:35.427442 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427516 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427522 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427539 4167 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427540 4167 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not 
registered] Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427553 4167 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: I0216 17:14:35.427561 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427578 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42756923 +0000 UTC m=+33.158015728 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: I0216 17:14:35.427599 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: I0216 17:14:35.427633 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427643 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427674 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427665793 +0000 UTC m=+33.158112301 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427686 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427701 4167 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427701 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427709 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427719 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427709354 +0000 UTC m=+33.158155862 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: I0216 17:14:35.427700 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427739 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427729315 +0000 UTC m=+33.158175813 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: I0216 17:14:35.427781 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427789 4167 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: E0216 17:14:35.427799 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427789396 +0000 UTC m=+33.158235874 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.427787 master-0 kubenswrapper[4167]: I0216 17:14:35.427812 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.427829 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.427843 4167 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.427856 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.427863 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " 
pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.427874 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427864418 +0000 UTC m=+33.158310796 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.427893 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427883019 +0000 UTC m=+33.158329417 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.427914 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.427919 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.427941 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42793263 +0000 UTC m=+33.158379008 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.427981 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.427971811 +0000 UTC m=+33.158418309 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.427984 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428005 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428007 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428015 4167 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428035 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428044 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428033843 +0000 UTC m=+33.158480221 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428084 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428100 4167 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428104 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428098845 +0000 UTC m=+33.158545223 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428135 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428159 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428180 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428183 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428196 4167 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428203 4167 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for 
pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428226 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428219828 +0000 UTC m=+33.158666206 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428234 4167 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428244 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428238378 +0000 UTC m=+33.158684756 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428203 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428250 4167 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428257 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428251189 +0000 UTC m=+33.158697687 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428283 4167 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428335 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428380 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428368252 +0000 UTC m=+33.158814730 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428401 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428393193 +0000 UTC m=+33.158839691 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428413 4167 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428424 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428439 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428432434 +0000 UTC m=+33.158878812 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428461 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428466 4167 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428489 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428495 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428486645 +0000 UTC m=+33.158933123 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428520 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428587 4167 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428589 4167 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428608 4167 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428611 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428618 4167 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428633 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428651 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428640799 +0000 UTC m=+33.159087187 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428597 4167 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428669 4167 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428678 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428688 4167 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428717 4167 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428693 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.42868683 +0000 UTC m=+33.159133208 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428737 4167 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428743 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428766 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428758602 +0000 UTC m=+33.159204990 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428788 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428795 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: I0216 17:14:35.428821 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428836 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428825304 +0000 UTC m=+33.159271802 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:14:35.428794 master-0 kubenswrapper[4167]: E0216 17:14:35.428843 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.428870 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.428877 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428868325 +0000 UTC m=+33.159314793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.428866 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.428894 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428886856 +0000 UTC m=+33.159333364 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.428908 4167 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.428918 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.428924 4167 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.428928 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428920227 +0000 UTC m=+33.159366605 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.428949 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428941297 +0000 UTC m=+33.159387825 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.428953 4167 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.428981 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.428971908 +0000 UTC m=+33.159418406 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429008 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429040 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429070 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429095 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429125 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429153 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429179 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429181 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:14:35.430775 master-0 
kubenswrapper[4167]: E0216 17:14:35.429200 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429200 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429187004 +0000 UTC m=+33.159633502 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429225 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429218555 +0000 UTC m=+33.159664933 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429239 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429234265 +0000 UTC m=+33.159680643 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429248 4167 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429257 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429258 4167 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429276 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429267856 +0000 UTC m=+33.159714244 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429294 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429300 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429314 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429309367 +0000 UTC m=+33.159755745 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429332 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429351 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429354 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429371 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429365699 +0000 UTC m=+33.159812077 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429390 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429405 4167 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429414 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429430 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. 
No retries permitted until 2026-02-16 17:14:51.42942425 +0000 UTC m=+33.159870628 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429442 4167 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429448 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429463 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429457251 +0000 UTC m=+33.159903639 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429478 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429502 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429509 4167 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429521 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429537 4167 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429531173 +0000 UTC m=+33.159977631 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429553 4167 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429575 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429568774 +0000 UTC m=+33.160015262 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429573 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429587 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429580975 +0000 UTC m=+33.160027353 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429607 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429638 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429671 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429614 4167 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429702 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429686008 +0000 UTC m=+33.160132456 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429719 4167 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429632 4167 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429738 4167 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429742 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429736639 +0000 UTC m=+33.160183017 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429774 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429786 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: I0216 17:14:35.429805 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429661 4167 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429692 4167 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429700 4167 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429880 4167 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429634 4167 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429813 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429804751 +0000 UTC m=+33.160251259 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429925 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429914884 +0000 UTC m=+33.160361392 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429939 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429932064 +0000 UTC m=+33.160378572 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429971 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429947025 +0000 UTC m=+33.160393573 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.429987 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.429980435 +0000 UTC m=+33.160426943 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.430095 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls podName:1cd29be8-2b2a-49f7-badd-ff53c686a63d nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.430086518 +0000 UTC m=+33.160533046 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.430114 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.430106639 +0000 UTC m=+33.160553127 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.430130 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.430123349 +0000 UTC m=+33.160569857 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.430145 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.4301376 +0000 UTC m=+33.160584088 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered
Feb 16 17:14:35.430775 master-0 kubenswrapper[4167]: E0216 17:14:35.430161 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:51.43015422 +0000 UTC m=+33.160600708 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered
Feb 16 17:14:35.433162 master-0 kubenswrapper[4167]: I0216 17:14:35.432104 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:14:35.445087 master-0 kubenswrapper[4167]: I0216 17:14:35.445051 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445080 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445088 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445129 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445148 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445253 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445261 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445278 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445299 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445302 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445318 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445330 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445331 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445408 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445415 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445439 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445445 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445466 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:35.445463 master-0 kubenswrapper[4167]: I0216 17:14:35.445477 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445472 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445497 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445508 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445513 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445456 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445524 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445499 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445446 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445585 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445472 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445616 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445620 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445629 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445660 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445512 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445674 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445679 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445691 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445663 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445628 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445733 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445690 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445487 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445569 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445784 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445767 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445802 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445805 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445826 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:14:35.445977 master-0 kubenswrapper[4167]: I0216 17:14:35.445871 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446044 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446054 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446073 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446060 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446091 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446078 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446183 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446200 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446209 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446211 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446318 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:14:35.446747 master-0 kubenswrapper[4167]: I0216 17:14:35.446347 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:14:35.451845 master-0 kubenswrapper[4167]: I0216 17:14:35.451810 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 16 17:14:35.452325 master-0 kubenswrapper[4167]: I0216 17:14:35.452296 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Feb 16 17:14:35.452503 master-0 kubenswrapper[4167]: I0216 17:14:35.452464 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.452503 master-0 kubenswrapper[4167]: I0216 17:14:35.452495 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s"
Feb 16 17:14:35.452817 master-0 kubenswrapper[4167]: I0216 17:14:35.452784 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 17:14:35.452934 master-0 kubenswrapper[4167]: I0216 17:14:35.452908 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 16 17:14:35.453030 master-0 kubenswrapper[4167]: I0216 17:14:35.453000 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 16 17:14:35.453141 master-0 kubenswrapper[4167]: I0216 17:14:35.453112 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.453265 master-0 kubenswrapper[4167]: I0216 17:14:35.453234 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 17:14:35.453265 master-0 kubenswrapper[4167]: I0216 17:14:35.453255 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 16 17:14:35.453330 master-0 kubenswrapper[4167]: I0216 17:14:35.453264 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj"
Feb 16 17:14:35.453475 master-0 kubenswrapper[4167]: I0216 17:14:35.453450 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 17:14:35.453701 master-0 kubenswrapper[4167]: I0216 17:14:35.453669 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 17:14:35.454003 master-0 kubenswrapper[4167]: I0216 17:14:35.453976 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 16 17:14:35.454003 master-0 kubenswrapper[4167]: I0216 17:14:35.453996 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 17:14:35.454062 master-0 kubenswrapper[4167]: I0216 17:14:35.454031 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 17:14:35.454062 master-0 kubenswrapper[4167]: I0216 17:14:35.454058 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 16 17:14:35.454125 master-0 kubenswrapper[4167]: I0216 17:14:35.454092 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 16 17:14:35.454125 master-0 kubenswrapper[4167]: I0216 17:14:35.454118 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 16 17:14:35.454239 master-0 kubenswrapper[4167]: I0216 17:14:35.453987 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 16 17:14:35.454333 master-0 kubenswrapper[4167]: I0216 17:14:35.454304 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 16 17:14:35.454408 master-0 kubenswrapper[4167]: I0216 17:14:35.454390 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 16 17:14:35.454443 master-0 kubenswrapper[4167]: I0216 17:14:35.454426 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 16 17:14:35.456273 master-0 kubenswrapper[4167]: I0216 17:14:35.456247 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.456441 master-0 kubenswrapper[4167]: I0216 17:14:35.456414 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 16 17:14:35.456847 master-0 kubenswrapper[4167]: I0216 17:14:35.456820 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 16 17:14:35.456977 master-0 kubenswrapper[4167]: I0216 17:14:35.456942 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 16 17:14:35.457156 master-0 kubenswrapper[4167]: I0216 17:14:35.457133 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 16 17:14:35.457344 master-0 kubenswrapper[4167]: I0216 17:14:35.457325 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 17:14:35.457455 master-0 kubenswrapper[4167]: I0216 17:14:35.457434 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 17:14:35.457821 master-0 kubenswrapper[4167]: I0216 17:14:35.457793 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 17:14:35.458062 master-0 kubenswrapper[4167]: I0216 17:14:35.458038 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.458258 master-0 kubenswrapper[4167]: I0216 17:14:35.458233 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 16 17:14:35.458304 master-0 kubenswrapper[4167]: I0216 17:14:35.458281 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 17:14:35.458357 master-0 kubenswrapper[4167]: I0216 17:14:35.458342 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 17:14:35.458465 master-0 kubenswrapper[4167]: I0216 17:14:35.458440 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 16 17:14:35.458504 master-0 kubenswrapper[4167]: I0216 17:14:35.458475 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 17:14:35.458504 master-0 kubenswrapper[4167]: I0216 17:14:35.458048 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 17:14:35.458559 master-0 kubenswrapper[4167]: I0216 17:14:35.458283 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 17:14:35.458586 master-0 kubenswrapper[4167]: I0216 17:14:35.458572 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.458633 master-0 kubenswrapper[4167]: I0216 17:14:35.458473 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.458754 master-0 kubenswrapper[4167]: I0216 17:14:35.458735 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.458788 master-0 kubenswrapper[4167]: I0216 17:14:35.458775 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.458817 master-0 kubenswrapper[4167]: I0216 17:14:35.458806 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb"
Feb 16 17:14:35.458881 master-0 kubenswrapper[4167]: I0216 17:14:35.458862 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 16 17:14:35.458915 master-0 kubenswrapper[4167]: I0216 17:14:35.458904 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 16 17:14:35.458944 master-0 kubenswrapper[4167]: I0216 17:14:35.458921 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84"
Feb 16 17:14:35.458944 master-0 kubenswrapper[4167]: I0216 17:14:35.458870 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 17:14:35.459071 master-0 kubenswrapper[4167]: I0216 17:14:35.458982 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 16 17:14:35.462268 master-0 kubenswrapper[4167]: I0216 17:14:35.462244 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 16 17:14:35.462712 master-0 kubenswrapper[4167]: I0216 17:14:35.462679 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 17:14:35.462871 master-0 kubenswrapper[4167]: I0216 17:14:35.462845 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 16 17:14:35.462936 master-0 kubenswrapper[4167]: I0216 17:14:35.462908 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk"
Feb 16 17:14:35.463002 master-0 kubenswrapper[4167]: I0216 17:14:35.462846 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Feb 16 17:14:35.463002 master-0 kubenswrapper[4167]: I0216 17:14:35.462953 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 17:14:35.463590 master-0 kubenswrapper[4167]: I0216 17:14:35.463557 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 17:14:35.463744 master-0 kubenswrapper[4167]: I0216 17:14:35.463714 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 17:14:35.464195 master-0 kubenswrapper[4167]: I0216 17:14:35.464165 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.469087 master-0 kubenswrapper[4167]: I0216 17:14:35.469053 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 17:14:35.475635 master-0 kubenswrapper[4167]: I0216 17:14:35.475568 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 16 17:14:35.481761 master-0 kubenswrapper[4167]: I0216 17:14:35.481718 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:35.481761 master-0 kubenswrapper[4167]: I0216 17:14:35.481739 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 16 17:14:35.481980 master-0 kubenswrapper[4167]: I0216 17:14:35.481932 4167 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:14:35.482137 master-0 kubenswrapper[4167]: I0216 17:14:35.482117 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 17:14:35.482229 master-0 kubenswrapper[4167]: I0216 17:14:35.482197 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 17:14:35.482632 master-0 kubenswrapper[4167]: I0216 17:14:35.482585 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.483984 master-0 kubenswrapper[4167]: I0216 17:14:35.482197 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin"
Feb 16 17:14:35.483984 master-0 kubenswrapper[4167]: I0216 17:14:35.483626 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 16 17:14:35.484687 master-0 kubenswrapper[4167]: I0216 17:14:35.484600 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 16 17:14:35.484873 master-0 kubenswrapper[4167]: I0216 17:14:35.484855 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 16 17:14:35.485924 master-0 kubenswrapper[4167]: I0216 17:14:35.485811 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 16 17:14:35.485924 master-0 kubenswrapper[4167]: I0216 17:14:35.485824 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmzhq"
Feb 16 17:14:35.485924 master-0 kubenswrapper[4167]: I0216 17:14:35.485890 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 17:14:35.487543 master-0 kubenswrapper[4167]: I0216 17:14:35.487521 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 17:14:35.488234 master-0 kubenswrapper[4167]: I0216 17:14:35.487592 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 17:14:35.488234 master-0 kubenswrapper[4167]: I0216 17:14:35.487613 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 16 17:14:35.488234 master-0 kubenswrapper[4167]: I0216 17:14:35.487688 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 16 17:14:35.488234 master-0 kubenswrapper[4167]: I0216 17:14:35.487702 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 16 17:14:35.488234 master-0 kubenswrapper[4167]: I0216 17:14:35.487942 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 16 17:14:35.488234 master-0 kubenswrapper[4167]: I0216 17:14:35.488076 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 16 17:14:35.488234 master-0 kubenswrapper[4167]: I0216 17:14:35.488123 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 16 17:14:35.488234 master-0 kubenswrapper[4167]: I0216 17:14:35.488175 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 16 17:14:35.488875 master-0 kubenswrapper[4167]: I0216 17:14:35.488270 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 16 17:14:35.488875 master-0 kubenswrapper[4167]: I0216 17:14:35.488345 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 17:14:35.488875 master-0 kubenswrapper[4167]: I0216 17:14:35.488680 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.488875 master-0 kubenswrapper[4167]: I0216 17:14:35.488712 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw"
Feb 16 17:14:35.488875 master-0 kubenswrapper[4167]: I0216 17:14:35.488803 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.488875 master-0 kubenswrapper[4167]: I0216 17:14:35.488869 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-lcpkn"
Feb 16 17:14:35.489395 master-0 kubenswrapper[4167]: I0216 17:14:35.488915 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 16 17:14:35.489395 master-0 kubenswrapper[4167]: I0216 17:14:35.489053 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 17:14:35.489395 master-0 kubenswrapper[4167]: I0216 17:14:35.489078 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 17:14:35.489395 master-0 kubenswrapper[4167]: I0216 17:14:35.489145 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 16 17:14:35.489395 master-0 kubenswrapper[4167]: I0216 17:14:35.489293 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 16 17:14:35.489395 master-0 kubenswrapper[4167]: I0216 17:14:35.489307 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.489395 master-0 kubenswrapper[4167]: I0216 17:14:35.489324 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.489395 master-0 kubenswrapper[4167]: I0216 17:14:35.489293 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.490029 master-0 kubenswrapper[4167]: I0216 17:14:35.489440 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.490029 master-0 kubenswrapper[4167]: I0216 17:14:35.489771 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.490937 master-0 kubenswrapper[4167]: I0216 17:14:35.490874 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 17:14:35.492720 master-0 kubenswrapper[4167]: I0216 17:14:35.492439 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 17:14:35.494805 master-0 kubenswrapper[4167]: I0216 17:14:35.493032 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.496052 master-0 kubenswrapper[4167]: I0216 17:14:35.495536 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.496052 master-0 kubenswrapper[4167]: I0216 17:14:35.495557 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 17:14:35.497295 master-0 kubenswrapper[4167]: I0216 17:14:35.496474 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 16 17:14:35.499730 master-0 kubenswrapper[4167]: I0216 17:14:35.499257 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:14:35.507568 master-0 kubenswrapper[4167]: I0216 17:14:35.507531 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 16 17:14:35.532499 master-0 kubenswrapper[4167]: I0216 17:14:35.532474 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:14:35.532686 master-0 kubenswrapper[4167]: I0216 17:14:35.532673 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:14:35.532880 master-0 kubenswrapper[4167]: I0216 17:14:35.532860 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:14:35.533034 master-0 kubenswrapper[4167]: I0216 17:14:35.533017 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:14:35.533168 master-0 kubenswrapper[4167]: I0216 17:14:35.533153 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:14:35.536607 master-0 kubenswrapper[4167]: I0216 17:14:35.536566 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:14:35.539012 master-0 kubenswrapper[4167]: I0216 17:14:35.538983 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:14:35.539143 master-0 kubenswrapper[4167]: I0216 17:14:35.539096 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 16 17:14:35.547284 master-0 kubenswrapper[4167]: I0216 17:14:35.547238 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l"
Feb 16 17:14:35.577064 master-0 kubenswrapper[4167]: I0216 17:14:35.576921 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 16 17:14:35.587988 master-0 kubenswrapper[4167]: I0216 17:14:35.587932 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.607874 master-0 kubenswrapper[4167]: I0216 17:14:35.607836 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rzjlw"
Feb 16 17:14:35.627940 master-0 kubenswrapper[4167]: I0216 17:14:35.627898 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 16 17:14:35.635926 master-0 kubenswrapper[4167]: I0216 17:14:35.635885 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:14:35.636116 master-0 kubenswrapper[4167]: I0216 17:14:35.635997 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:14:35.636116 master-0 kubenswrapper[4167]: I0216 17:14:35.636035 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:35.636348 master-0 kubenswrapper[4167]: I0216 17:14:35.636325 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:14:35.636524 master-0 kubenswrapper[4167]: I0216 17:14:35.636502 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:14:35.639019 master-0 kubenswrapper[4167]: I0216 17:14:35.638983 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:35.639301 master-0 kubenswrapper[4167]: I0216 17:14:35.639269 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:14:35.639369 master-0 kubenswrapper[4167]: I0216 17:14:35.639333 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:14:35.639579 master-0 kubenswrapper[4167]: I0216 17:14:35.639542 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:14:35.640302 master-0 kubenswrapper[4167]: I0216 17:14:35.640278 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:14:35.646926 master-0 kubenswrapper[4167]: I0216 17:14:35.646878 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 16 17:14:35.661341 master-0 kubenswrapper[4167]: I0216 17:14:35.661302 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:14:35.667612 master-0 kubenswrapper[4167]: I0216 17:14:35.667507 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 16 17:14:35.687677 master-0 kubenswrapper[4167]: I0216 17:14:35.687630 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 16 17:14:35.707639 master-0 kubenswrapper[4167]: I0216 17:14:35.707562 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6858s"
Feb 16 17:14:35.727316 master-0 kubenswrapper[4167]: I0216 17:14:35.727282 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.745156 master-0 kubenswrapper[4167]: I0216 17:14:35.740339 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:14:35.745156 master-0 kubenswrapper[4167]: I0216 17:14:35.740730 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:14:35.745397 master-0 kubenswrapper[4167]: I0216 17:14:35.745351 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:14:35.745734 master-0 kubenswrapper[4167]: I0216 17:14:35.745684 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:14:35.748702 master-0 kubenswrapper[4167]: I0216 17:14:35.748656 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 17:14:35.755442 master-0 kubenswrapper[4167]: I0216 17:14:35.755397 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:35.755442 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:35.755442 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:35.755442 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:35.755634 master-0 kubenswrapper[4167]: I0216 17:14:35.755470 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:35.772404 master-0 kubenswrapper[4167]: I0216 17:14:35.770871 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 17:14:35.787306 master-0 kubenswrapper[4167]: I0216 17:14:35.787275 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 17:14:35.811682 master-0 kubenswrapper[4167]: I0216 17:14:35.811523 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 16 17:14:35.827624 master-0 kubenswrapper[4167]: I0216 17:14:35.827575 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 17:14:35.832725 master-0 kubenswrapper[4167]: I0216 17:14:35.832685 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:14:35.842263 master-0 kubenswrapper[4167]: W0216 17:14:35.842216 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dfc09be_2f60_4420_8d3a_6b00b1d3e6fd.slice/crio-d6317aeb0ae338125dc56ee5a1eb68432d21e83121742ed80c3417f5a033acb9 WatchSource:0}: Error finding container d6317aeb0ae338125dc56ee5a1eb68432d21e83121742ed80c3417f5a033acb9: Status 404 returned error can't find the container with id d6317aeb0ae338125dc56ee5a1eb68432d21e83121742ed80c3417f5a033acb9
Feb 16 17:14:35.847378 master-0 kubenswrapper[4167]: I0216 17:14:35.846863 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 17:14:35.848035 master-0 kubenswrapper[4167]: I0216 17:14:35.847802 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:14:35.848035 master-0 kubenswrapper[4167]: I0216 17:14:35.847851 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:14:35.853411 master-0 kubenswrapper[4167]: I0216 17:14:35.853373 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:14:35.873382 master-0 kubenswrapper[4167]: I0216 17:14:35.867478 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Feb 16 17:14:35.889009 master-0 kubenswrapper[4167]: I0216 17:14:35.888273 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl"
Feb 16 17:14:35.908789 master-0 kubenswrapper[4167]: I0216 17:14:35.907450 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 17:14:35.926997 master-0 kubenswrapper[4167]: I0216 17:14:35.926913 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 16 17:14:35.949105 master-0 kubenswrapper[4167]: I0216 17:14:35.948243 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 17:14:35.954722 master-0 kubenswrapper[4167]: I0216 17:14:35.954526 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:14:35.954722 master-0 kubenswrapper[4167]: I0216 17:14:35.954609 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:14:35.959856 master-0 kubenswrapper[4167]: I0216 17:14:35.959824 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:14:35.963082 master-0 kubenswrapper[4167]: I0216 17:14:35.962781 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:14:35.967009 master-0 kubenswrapper[4167]: I0216 17:14:35.966853 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 17:14:35.987646 master-0 kubenswrapper[4167]: I0216 17:14:35.987604 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 17:14:36.007369 master-0 kubenswrapper[4167]: I0216 17:14:36.007215 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 17:14:36.010560 master-0 kubenswrapper[4167]: W0216 17:14:36.010522 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08a90dc5_b0d8_4aad_a002_736492b6c1a9.slice/crio-683ffe06251ed72ef8d61ff252246b14c88a741de000ac4aaaeb3cad2c9bfd7b WatchSource:0}: Error finding container 683ffe06251ed72ef8d61ff252246b14c88a741de000ac4aaaeb3cad2c9bfd7b: Status 404 returned error can't find the container with id 683ffe06251ed72ef8d61ff252246b14c88a741de000ac4aaaeb3cad2c9bfd7b
Feb 16 17:14:36.019323 master-0 kubenswrapper[4167]: I0216 17:14:36.019293 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:14:36.027339 master-0 kubenswrapper[4167]: I0216 17:14:36.027295 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 17:14:36.048104 master-0 kubenswrapper[4167]: I0216 17:14:36.048065 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 17:14:36.058710 master-0 kubenswrapper[4167]: I0216 17:14:36.058658 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:36.058710 master-0 kubenswrapper[4167]: I0216 17:14:36.058713 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:14:36.062536 master-0 kubenswrapper[4167]: I0216 17:14:36.062108 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:36.063257 master-0 kubenswrapper[4167]: I0216 17:14:36.063103 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:14:36.066905 master-0 kubenswrapper[4167]: I0216 17:14:36.066870 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 17:14:36.088244 master-0 kubenswrapper[4167]: I0216 17:14:36.088203 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 17:14:36.112299 master-0 kubenswrapper[4167]: I0216 17:14:36.111846 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 17:14:36.127583 master-0 kubenswrapper[4167]: I0216 17:14:36.127547 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 17:14:36.147334 master-0 kubenswrapper[4167]: I0216 17:14:36.147223 4167 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 17:14:36.162774 master-0 kubenswrapper[4167]: I0216 17:14:36.162725 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:36.162774 master-0 kubenswrapper[4167]: I0216 17:14:36.162778 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:36.162936 master-0 kubenswrapper[4167]: I0216 17:14:36.162886 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:36.166643 master-0 kubenswrapper[4167]: I0216 17:14:36.166610 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 17:14:36.167411 master-0 kubenswrapper[4167]: I0216 17:14:36.167286 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:36.187056 master-0 kubenswrapper[4167]: I0216 17:14:36.187017 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 17:14:36.208380 master-0 kubenswrapper[4167]: I0216 17:14:36.208250 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 17:14:36.228981 master-0 kubenswrapper[4167]: I0216 17:14:36.228934 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 17:14:36.247443 master-0 kubenswrapper[4167]: I0216 17:14:36.247406 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 17:14:36.266160 master-0 kubenswrapper[4167]: I0216 17:14:36.266088 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:14:36.266612 master-0 kubenswrapper[4167]: I0216 17:14:36.266573 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:36.266664 master-0 kubenswrapper[4167]: I0216 17:14:36.266632 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:36.267147 master-0 kubenswrapper[4167]: I0216 17:14:36.266915 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:36.267820 master-0 kubenswrapper[4167]: I0216 17:14:36.267737 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 17:14:36.289360 master-0 kubenswrapper[4167]: I0216 17:14:36.289327 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 17:14:36.307803 master-0 kubenswrapper[4167]: I0216 17:14:36.307446 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 17:14:36.328919 master-0 kubenswrapper[4167]: I0216 17:14:36.328805 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 17:14:36.348301 master-0 kubenswrapper[4167]: I0216 17:14:36.348240 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 17:14:36.368691 master-0 kubenswrapper[4167]: I0216 17:14:36.368653 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 17:14:36.373463 master-0 kubenswrapper[4167]: I0216 17:14:36.373412 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:36.373540 master-0 kubenswrapper[4167]: I0216 17:14:36.373488 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:36.373597 master-0 kubenswrapper[4167]: I0216 17:14:36.373582 4167 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:36.378225 master-0 kubenswrapper[4167]: I0216 17:14:36.378181 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:36.380060 master-0 kubenswrapper[4167]: I0216 17:14:36.380017 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:36.383117 master-0 kubenswrapper[4167]: I0216 17:14:36.382768 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:36.388337 master-0 kubenswrapper[4167]: I0216 17:14:36.388236 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 17:14:36.408494 master-0 kubenswrapper[4167]: I0216 17:14:36.408422 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 17:14:36.419281 master-0 kubenswrapper[4167]: I0216 17:14:36.419247 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:36.428102 master-0 kubenswrapper[4167]: I0216 17:14:36.427977 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 17:14:36.447658 master-0 kubenswrapper[4167]: I0216 17:14:36.447614 4167 scope.go:117] "RemoveContainer" containerID="15f7dbfc42911fc9756e060ae848e6787711a1dbc91f05fce2dd151ae4191fab" Feb 16 17:14:36.449100 master-0 kubenswrapper[4167]: W0216 17:14:36.449051 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3beb7bf_922f_425d_8a19_fd407a7153a8.slice/crio-6a950ea16efc0111b36acc900a8438a1c145cf7c8d9b5d6aa9ca821d93d52b74 WatchSource:0}: Error finding container 6a950ea16efc0111b36acc900a8438a1c145cf7c8d9b5d6aa9ca821d93d52b74: Status 404 returned error can't find the container with id 6a950ea16efc0111b36acc900a8438a1c145cf7c8d9b5d6aa9ca821d93d52b74 Feb 16 17:14:36.451807 master-0 kubenswrapper[4167]: I0216 17:14:36.451776 4167 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 17:14:36.467627 master-0 kubenswrapper[4167]: I0216 17:14:36.467570 4167 request.go:700] Waited for 1.00927597s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&limit=500&resourceVersion=0 Feb 16 17:14:36.477802 master-0 kubenswrapper[4167]: I0216 17:14:36.477749 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 17:14:36.478005 master-0 kubenswrapper[4167]: I0216 17:14:36.477952 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:36.488753 master-0 kubenswrapper[4167]: I0216 17:14:36.488685 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:36.491131 master-0 kubenswrapper[4167]: I0216 17:14:36.491039 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:14:36.493790 master-0 kubenswrapper[4167]: I0216 17:14:36.493741 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 17:14:36.507574 master-0 kubenswrapper[4167]: I0216 17:14:36.507514 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 17:14:36.507723 master-0 kubenswrapper[4167]: I0216 17:14:36.507635 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:14:36.526095 master-0 kubenswrapper[4167]: I0216 17:14:36.526045 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:14:36.527564 master-0 kubenswrapper[4167]: I0216 17:14:36.527532 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 16 17:14:36.533481 master-0 kubenswrapper[4167]: E0216 17:14:36.533433 4167 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:14:36.548030 master-0 kubenswrapper[4167]: I0216 17:14:36.547923 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 16 17:14:36.569198 master-0 kubenswrapper[4167]: I0216 17:14:36.567517 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 16 17:14:36.587019 master-0 kubenswrapper[4167]: I0216 17:14:36.586978 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 16 17:14:36.610452 master-0 kubenswrapper[4167]: I0216 17:14:36.607592 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 16 17:14:36.635235 master-0 kubenswrapper[4167]: I0216 17:14:36.628262 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 16 17:14:36.652777 master-0 kubenswrapper[4167]: I0216 17:14:36.652733 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 17:14:36.664992 master-0 kubenswrapper[4167]: I0216 17:14:36.664934 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-dhhfh" event={"ID":"08a90dc5-b0d8-4aad-a002-736492b6c1a9","Type":"ContainerStarted","Data":"605f4d37dca5a7b7cb782ef97f0d0cf40cf83a6f18a4a48dec6e2d525f6967fe"} Feb 16 17:14:36.667120 master-0 kubenswrapper[4167]: I0216 17:14:36.667065 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-dhhfh" event={"ID":"08a90dc5-b0d8-4aad-a002-736492b6c1a9","Type":"ContainerStarted","Data":"683ffe06251ed72ef8d61ff252246b14c88a741de000ac4aaaeb3cad2c9bfd7b"} Feb 16 17:14:36.667420 master-0 kubenswrapper[4167]: I0216 17:14:36.667384 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:14:36.667566 master-0 kubenswrapper[4167]: I0216 17:14:36.667524 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" event={"ID":"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd","Type":"ContainerStarted","Data":"e663fa585fa9b333671e87d8b98645a4ea3e7b1b7a725b36b379c4e86f1caadd"} Feb 16 17:14:36.667622 master-0 kubenswrapper[4167]: I0216 17:14:36.667570 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" event={"ID":"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd","Type":"ContainerStarted","Data":"d6317aeb0ae338125dc56ee5a1eb68432d21e83121742ed80c3417f5a033acb9"} Feb 16 17:14:36.669310 master-0 kubenswrapper[4167]: I0216 17:14:36.669267 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-3enh2b6fkpcog" Feb 16 17:14:36.672313 master-0 kubenswrapper[4167]: I0216 
17:14:36.672265 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"01462dd0f4166d41beb7fd8c332582d34d495a6f6d735a8086a4f7400ebb6e1a"} Feb 16 17:14:36.672313 master-0 kubenswrapper[4167]: I0216 17:14:36.672305 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"6a950ea16efc0111b36acc900a8438a1c145cf7c8d9b5d6aa9ca821d93d52b74"} Feb 16 17:14:36.675093 master-0 kubenswrapper[4167]: I0216 17:14:36.675059 4167 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-dhhfh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Feb 16 17:14:36.675150 master-0 kubenswrapper[4167]: I0216 17:14:36.675106 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Feb 16 17:14:36.688689 master-0 kubenswrapper[4167]: I0216 17:14:36.686778 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 16 17:14:36.707701 master-0 kubenswrapper[4167]: I0216 17:14:36.707662 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" Feb 16 17:14:36.727346 master-0 kubenswrapper[4167]: I0216 17:14:36.727288 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 16 17:14:36.747123 master-0 kubenswrapper[4167]: I0216 17:14:36.747077 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 16 17:14:36.754507 master-0 kubenswrapper[4167]: W0216 17:14:36.754456 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80d3b238_70c3_4e71_96a1_99405352033f.slice/crio-6e8a064157660a2ed7585c372d0f7a76e1a44291120c4c4baf3c2ea85dc5f3fd WatchSource:0}: Error finding container 6e8a064157660a2ed7585c372d0f7a76e1a44291120c4c4baf3c2ea85dc5f3fd: Status 404 returned error can't find the container with id 6e8a064157660a2ed7585c372d0f7a76e1a44291120c4c4baf3c2ea85dc5f3fd Feb 16 17:14:36.757553 master-0 kubenswrapper[4167]: I0216 17:14:36.757512 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:36.757553 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:36.757553 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:36.757553 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:36.757853 master-0 kubenswrapper[4167]: I0216 17:14:36.757565 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 
16 17:14:36.767280 master-0 kubenswrapper[4167]: W0216 17:14:36.764181 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ff68421_1741_41c1_93d5_5c722dfd295e.slice/crio-ea56dcd91b08af3da71913b620b925ed8d14ff022c56b3dc2c3a185448ee9a9f WatchSource:0}: Error finding container ea56dcd91b08af3da71913b620b925ed8d14ff022c56b3dc2c3a185448ee9a9f: Status 404 returned error can't find the container with id ea56dcd91b08af3da71913b620b925ed8d14ff022c56b3dc2c3a185448ee9a9f Feb 16 17:14:36.767392 master-0 kubenswrapper[4167]: I0216 17:14:36.767357 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m" Feb 16 17:14:36.820327 master-0 kubenswrapper[4167]: I0216 17:14:36.819207 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw" Feb 16 17:14:36.822787 master-0 kubenswrapper[4167]: I0216 17:14:36.822750 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 17:14:36.827232 master-0 kubenswrapper[4167]: I0216 17:14:36.827204 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 17:14:36.849360 master-0 kubenswrapper[4167]: I0216 17:14:36.849310 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4" Feb 16 17:14:36.850109 master-0 kubenswrapper[4167]: E0216 17:14:36.850017 4167 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:14:36.850109 master-0 kubenswrapper[4167]: E0216 17:14:36.850050 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:14:36.850212 master-0 kubenswrapper[4167]: E0216 17:14:36.850121 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:52.850086922 +0000 UTC m=+34.580533300 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:14:36.868075 master-0 kubenswrapper[4167]: I0216 17:14:36.868044 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 17:14:36.887794 master-0 kubenswrapper[4167]: I0216 17:14:36.887503 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 17:14:36.907877 master-0 kubenswrapper[4167]: I0216 17:14:36.907530 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 17:14:36.933404 master-0 kubenswrapper[4167]: I0216 17:14:36.933370 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 17:14:36.949222 master-0 kubenswrapper[4167]: I0216 17:14:36.949162 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 17:14:36.967158 master-0 kubenswrapper[4167]: I0216 17:14:36.967126 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 17:14:36.987406 master-0 kubenswrapper[4167]: I0216 17:14:36.987368 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 17:14:37.007394 master-0 kubenswrapper[4167]: I0216 17:14:37.007323 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 17:14:37.027454 master-0 kubenswrapper[4167]: I0216 17:14:37.027401 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 17:14:37.047155 master-0 kubenswrapper[4167]: I0216 17:14:37.047115 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 17:14:37.069045 master-0 kubenswrapper[4167]: I0216 17:14:37.068995 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 17:14:37.087159 master-0 kubenswrapper[4167]: I0216 17:14:37.087122 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:14:37.107128 master-0 kubenswrapper[4167]: I0216 17:14:37.107073 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 17:14:37.124119 master-0 kubenswrapper[4167]: I0216 17:14:37.123871 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:37.126860 master-0 kubenswrapper[4167]: I0216 17:14:37.126838 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:14:37.134331 
Feb 16 17:14:37.134406 master-0 kubenswrapper[4167]: E0216 17:14:37.134398 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:14:53.134363204 +0000 UTC m=+34.864809622 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.146914 master-0 kubenswrapper[4167]: I0216 17:14:37.146887 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 17:14:37.163689 master-0 kubenswrapper[4167]: E0216 17:14:37.163652 4167 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.163775 master-0 kubenswrapper[4167]: E0216 17:14:37.163692 4167 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.163775 master-0 kubenswrapper[4167]: E0216 17:14:37.163765 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:53.163741969 +0000 UTC m=+34.894188357 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.164091 master-0 kubenswrapper[4167]: E0216 17:14:37.164030 4167 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.167462 master-0 kubenswrapper[4167]: I0216 17:14:37.167416 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 17:14:37.187624 master-0 kubenswrapper[4167]: I0216 17:14:37.187525 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn"
Feb 16 17:14:37.207185 master-0 kubenswrapper[4167]: I0216 17:14:37.207113 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 17:14:37.227485 master-0 kubenswrapper[4167]: I0216 17:14:37.227430 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 17:14:37.246982 master-0 kubenswrapper[4167]: I0216 17:14:37.246923 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8"
Feb 16 17:14:37.267096 master-0 kubenswrapper[4167]: E0216 17:14:37.267030 4167 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.267419 master-0 kubenswrapper[4167]: E0216 17:14:37.267380 4167 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.275758 master-0 kubenswrapper[4167]: I0216 17:14:37.275703 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 17:14:37.287134 master-0 kubenswrapper[4167]: I0216 17:14:37.287090 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 17:14:37.287918 master-0 kubenswrapper[4167]: E0216 17:14:37.287875 4167 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.288021 master-0 kubenswrapper[4167]: E0216 17:14:37.287998 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:53.28794756 +0000 UTC m=+35.018393978 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.307893 master-0 kubenswrapper[4167]: I0216 17:14:37.307828 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 17:14:37.327391 master-0 kubenswrapper[4167]: I0216 17:14:37.327330 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 17:14:37.348147 master-0 kubenswrapper[4167]: I0216 17:14:37.348093 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 17:14:37.367803 master-0 kubenswrapper[4167]: I0216 17:14:37.367741 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-94r9k"
Feb 16 17:14:37.387750 master-0 kubenswrapper[4167]: I0216 17:14:37.387680 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 17:14:37.407082 master-0 kubenswrapper[4167]: I0216 17:14:37.407032 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 16 17:14:37.426916 master-0 kubenswrapper[4167]: I0216 17:14:37.426861 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982"
Feb 16 17:14:37.447218 master-0 kubenswrapper[4167]: I0216 17:14:37.447135 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 16 17:14:37.466952 master-0 kubenswrapper[4167]: I0216 17:14:37.466909 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 17:14:37.485809 master-0 kubenswrapper[4167]: I0216 17:14:37.485757 4167 request.go:700] Waited for 2.025920408s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0
Feb 16 17:14:37.487049 master-0 kubenswrapper[4167]: I0216 17:14:37.487019 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 17:14:37.506539 master-0 kubenswrapper[4167]: I0216 17:14:37.506494 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 17:14:37.526953 master-0 kubenswrapper[4167]: I0216 17:14:37.526855 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 17:14:37.547307 master-0 kubenswrapper[4167]: I0216 17:14:37.547240 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 16 17:14:37.558209 master-0 kubenswrapper[4167]: E0216 17:14:37.558170 4167 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.558286 master-0 kubenswrapper[4167]: E0216 17:14:37.558239 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:53.558220183 +0000 UTC m=+35.288666561 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.567841 master-0 kubenswrapper[4167]: I0216 17:14:37.567803 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 17:14:37.587243 master-0 kubenswrapper[4167]: I0216 17:14:37.587199 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 17:14:37.606687 master-0 kubenswrapper[4167]: I0216 17:14:37.606625 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 17:14:37.627507 master-0 kubenswrapper[4167]: I0216 17:14:37.627453 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 17:14:37.635114 master-0 kubenswrapper[4167]: E0216 17:14:37.635048 4167 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:14:37.635304 master-0 kubenswrapper[4167]: E0216 17:14:37.635156 4167 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:14:53.635125684 +0000 UTC m=+35.365572102 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : failed to sync configmap cache: timed out waiting for the condition
Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:14:37.648946 master-0 kubenswrapper[4167]: I0216 17:14:37.648305 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 17:14:37.667594 master-0 kubenswrapper[4167]: I0216 17:14:37.667551 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 17:14:37.676601 master-0 kubenswrapper[4167]: I0216 17:14:37.676546 4167 generic.go:334] "Generic (PLEG): container finished" podID="cc9a20f4-255a-4312-8f43-174a28c06340" containerID="2001ecf48088c3f22d86ef45355137d145ab3f62e6976a4be6309762841b9c8e" exitCode=0 Feb 16 17:14:37.676808 master-0 kubenswrapper[4167]: I0216 17:14:37.676620 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerDied","Data":"2001ecf48088c3f22d86ef45355137d145ab3f62e6976a4be6309762841b9c8e"} Feb 16 17:14:37.676808 master-0 kubenswrapper[4167]: I0216 17:14:37.676654 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerStarted","Data":"74a116fd16165558774c833b5cda5488fa226dc9a3ce208ffcbd6e199fe185f0"} Feb 16 17:14:37.677749 master-0 kubenswrapper[4167]: I0216 17:14:37.677693 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerStarted","Data":"e0d85abf94603412016468a1a7df04b05545e2fdd348a9efcb6d6e8215a730c2"} Feb 16 17:14:37.677859 master-0 kubenswrapper[4167]: I0216 17:14:37.677766 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerStarted","Data":"6e8a064157660a2ed7585c372d0f7a76e1a44291120c4c4baf3c2ea85dc5f3fd"} Feb 16 17:14:37.678487 master-0 kubenswrapper[4167]: I0216 17:14:37.678462 4167 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:14:37.683419 master-0 kubenswrapper[4167]: I0216 17:14:37.683369 4167 generic.go:334] "Generic (PLEG): container finished" podID="f3beb7bf-922f-425d-8a19-fd407a7153a8" containerID="01462dd0f4166d41beb7fd8c332582d34d495a6f6d735a8086a4f7400ebb6e1a" exitCode=0 Feb 16 17:14:37.683611 master-0 kubenswrapper[4167]: I0216 17:14:37.683460 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerDied","Data":"01462dd0f4166d41beb7fd8c332582d34d495a6f6d735a8086a4f7400ebb6e1a"} Feb 16 17:14:37.687369 master-0 kubenswrapper[4167]: I0216 17:14:37.687322 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 17:14:37.689007 master-0 kubenswrapper[4167]: I0216 17:14:37.688697 4167 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/4.log" Feb 16 17:14:37.690524 master-0 kubenswrapper[4167]: I0216 17:14:37.690479 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"5a005145926df321f6fea8816a53db69398079fcdc302fcb4d3cf33642812538"} Feb 16 17:14:37.691901 master-0 kubenswrapper[4167]: I0216 17:14:37.691860 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" event={"ID":"0ff68421-1741-41c1-93d5-5c722dfd295e","Type":"ContainerStarted","Data":"cb48db8b0c54e896a8fe0f23386c8f40c351681307dcca24daa28ca39af6e204"} Feb 16 17:14:37.691901 master-0 kubenswrapper[4167]: I0216 17:14:37.691897 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" event={"ID":"0ff68421-1741-41c1-93d5-5c722dfd295e","Type":"ContainerStarted","Data":"ea56dcd91b08af3da71913b620b925ed8d14ff022c56b3dc2c3a185448ee9a9f"} Feb 16 17:14:37.692446 master-0 kubenswrapper[4167]: I0216 17:14:37.692413 4167 patch_prober.go:28] interesting pod/downloads-dcd7b7d95-dhhfh container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" start-of-body= Feb 16 17:14:37.692517 master-0 kubenswrapper[4167]: I0216 17:14:37.692453 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.80:8080/\": dial tcp 10.128.0.80:8080: connect: connection refused" Feb 16 17:14:37.755968 master-0 kubenswrapper[4167]: I0216 17:14:37.755843 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:37.755968 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:37.755968 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:37.755968 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:37.755968 master-0 kubenswrapper[4167]: I0216 17:14:37.755897 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:38.702036 master-0 kubenswrapper[4167]: I0216 17:14:38.701646 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"3708a423296c9d3d2e8452d20ea6ffc1b7ab5b9bc13fe83c0bc92cdc1d392021"} Feb 16 17:14:38.704037 master-0 kubenswrapper[4167]: I0216 17:14:38.703474 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerStarted","Data":"0f63465af13e38831d16f01b6a2cab1e201a306d8c14807be366329f70aea8a0"} Feb 16 17:14:38.754056 master-0 
kubenswrapper[4167]: I0216 17:14:38.754000 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:38.754056 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:38.754056 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:38.754056 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:38.754406 master-0 kubenswrapper[4167]: I0216 17:14:38.754065 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:39.370574 master-0 kubenswrapper[4167]: I0216 17:14:39.370398 4167 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Feb 16 17:14:39.754645 master-0 kubenswrapper[4167]: I0216 17:14:39.754553 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:39.754645 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:39.754645 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:39.754645 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:39.754645 master-0 kubenswrapper[4167]: I0216 17:14:39.754633 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:40.711542 master-0 kubenswrapper[4167]: I0216 17:14:40.711478 4167 generic.go:334] "Generic (PLEG): container finished" podID="f3beb7bf-922f-425d-8a19-fd407a7153a8" containerID="3708a423296c9d3d2e8452d20ea6ffc1b7ab5b9bc13fe83c0bc92cdc1d392021" exitCode=0 Feb 16 17:14:40.711542 master-0 kubenswrapper[4167]: I0216 17:14:40.711548 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerDied","Data":"3708a423296c9d3d2e8452d20ea6ffc1b7ab5b9bc13fe83c0bc92cdc1d392021"} Feb 16 17:14:40.713595 master-0 kubenswrapper[4167]: I0216 17:14:40.713546 4167 generic.go:334] "Generic (PLEG): container finished" podID="cc9a20f4-255a-4312-8f43-174a28c06340" containerID="0f63465af13e38831d16f01b6a2cab1e201a306d8c14807be366329f70aea8a0" exitCode=0 Feb 16 17:14:40.713595 master-0 kubenswrapper[4167]: I0216 17:14:40.713590 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerDied","Data":"0f63465af13e38831d16f01b6a2cab1e201a306d8c14807be366329f70aea8a0"} Feb 16 17:14:40.754556 master-0 kubenswrapper[4167]: I0216 17:14:40.754489 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:40.754556 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 
Feb 16 17:14:40.754556 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:40.755222 master-0 kubenswrapper[4167]: I0216 17:14:40.754577 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:41.720246 master-0 kubenswrapper[4167]: I0216 17:14:41.720161 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerStarted","Data":"ad7184f5e454ed28cae744f6dce96256b237252d9accb9b83a29e875cd597f37"}
Feb 16 17:14:41.722897 master-0 kubenswrapper[4167]: I0216 17:14:41.722840 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"55903a364e6bb62b7d7cc10dbb056ba756dd2642598ae93d70c487737c5ebd22"}
Feb 16 17:14:41.755392 master-0 kubenswrapper[4167]: I0216 17:14:41.755275 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:41.755392 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:41.755392 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:41.755392 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:41.755392 master-0 kubenswrapper[4167]: I0216 17:14:41.755381 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:42.754298 master-0 kubenswrapper[4167]: I0216 17:14:42.754199 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:42.754298 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:42.754298 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:42.754298 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:42.754298 master-0 kubenswrapper[4167]: I0216 17:14:42.754263 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:43.754714 master-0 kubenswrapper[4167]: I0216 17:14:43.754649 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:43.754714 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:43.754714 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:43.754714 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:43.754714 master-0 kubenswrapper[4167]: I0216 17:14:43.754712 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:44.754294 master-0 kubenswrapper[4167]: I0216 17:14:44.754218 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:44.754294 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:44.754294 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:44.754294 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:44.754617 master-0 kubenswrapper[4167]: I0216 17:14:44.754311 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:45.755776 master-0 kubenswrapper[4167]: I0216 17:14:45.755657 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:45.755776 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:45.755776 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:45.755776 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:45.755776 master-0 kubenswrapper[4167]: I0216 17:14:45.755754 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:45.857490 master-0 kubenswrapper[4167]: I0216 17:14:45.857409 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:14:46.267080 master-0 kubenswrapper[4167]: I0216 17:14:46.267000 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:14:46.267080 master-0 kubenswrapper[4167]: I0216 17:14:46.267087 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:14:46.354149 master-0 kubenswrapper[4167]: I0216 17:14:46.354084 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:14:46.508102 master-0 kubenswrapper[4167]: I0216 17:14:46.508032 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:46.508325 master-0 kubenswrapper[4167]: I0216 17:14:46.508129 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:46.556765 master-0 kubenswrapper[4167]: I0216 17:14:46.556636 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:46.754903 master-0 kubenswrapper[4167]: I0216 17:14:46.754841 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:46.754903 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:46.754903 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:46.754903 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:46.754903 master-0 kubenswrapper[4167]: I0216 17:14:46.754894 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:46.802500 master-0 kubenswrapper[4167]: I0216 17:14:46.802432 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:14:46.802500 master-0 kubenswrapper[4167]: I0216 17:14:46.802510 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:14:47.271150 master-0 kubenswrapper[4167]: I0216 17:14:47.271107 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:14:47.754149 master-0 kubenswrapper[4167]: I0216 17:14:47.754080 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:47.754149 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:47.754149 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:47.754149 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:47.754149 master-0 kubenswrapper[4167]: I0216 17:14:47.754147 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:48.754744 master-0 kubenswrapper[4167]: I0216 17:14:48.754644 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:48.754744 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:48.754744 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:48.754744 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:48.754744 master-0 kubenswrapper[4167]: I0216 17:14:48.754723 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:49.754144 master-0 kubenswrapper[4167]: I0216 17:14:49.754084 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16
17:14:49.754144 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:49.754144 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:49.754144 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:49.754144 master-0 kubenswrapper[4167]: I0216 17:14:49.754139 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:50.753585 master-0 kubenswrapper[4167]: I0216 17:14:50.753454 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:50.753585 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:50.753585 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:50.753585 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:50.753585 master-0 kubenswrapper[4167]: I0216 17:14:50.753524 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:51.436674 master-0 kubenswrapper[4167]: I0216 17:14:51.436578 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.436674 master-0 kubenswrapper[4167]: I0216 17:14:51.436666 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.437079 master-0 kubenswrapper[4167]: I0216 17:14:51.436710 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:51.437079 master-0 kubenswrapper[4167]: I0216 17:14:51.436897 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.437079 master-0 kubenswrapper[4167]: I0216 17:14:51.436938 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: 
\"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.438367 master-0 kubenswrapper[4167]: I0216 17:14:51.437224 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:51.438367 master-0 kubenswrapper[4167]: I0216 17:14:51.438154 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:51.438367 master-0 kubenswrapper[4167]: I0216 17:14:51.438212 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:51.438367 master-0 kubenswrapper[4167]: I0216 17:14:51.438255 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:51.438367 master-0 kubenswrapper[4167]: I0216 17:14:51.438298 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:51.438367 master-0 kubenswrapper[4167]: I0216 17:14:51.438343 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:51.438367 master-0 kubenswrapper[4167]: I0216 17:14:51.438380 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:51.438920 master-0 kubenswrapper[4167]: I0216 17:14:51.438422 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " 
pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.438920 master-0 kubenswrapper[4167]: I0216 17:14:51.438817 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.438920 master-0 kubenswrapper[4167]: I0216 17:14:51.438898 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:51.439181 master-0 kubenswrapper[4167]: I0216 17:14:51.439006 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.439181 master-0 kubenswrapper[4167]: I0216 17:14:51.439071 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.439181 master-0 kubenswrapper[4167]: I0216 17:14:51.439134 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.439181 master-0 kubenswrapper[4167]: I0216 17:14:51.439179 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:51.439538 master-0 kubenswrapper[4167]: I0216 17:14:51.439223 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:51.439538 master-0 kubenswrapper[4167]: I0216 17:14:51.439286 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.439538 master-0 kubenswrapper[4167]: I0216 17:14:51.439354 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:51.439538 master-0 kubenswrapper[4167]: I0216 17:14:51.439417 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.439538 master-0 kubenswrapper[4167]: I0216 17:14:51.439480 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.439538 master-0 kubenswrapper[4167]: I0216 17:14:51.439536 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:51.440286 master-0 kubenswrapper[4167]: I0216 17:14:51.439594 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:51.440286 master-0 kubenswrapper[4167]: I0216 17:14:51.439652 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.440286 master-0 kubenswrapper[4167]: I0216 17:14:51.439716 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:51.440286 master-0 kubenswrapper[4167]: I0216 17:14:51.439780 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:51.440286 master-0 kubenswrapper[4167]: I0216 17:14:51.439850 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod 
\"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.440286 master-0 kubenswrapper[4167]: I0216 17:14:51.439906 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:51.440286 master-0 kubenswrapper[4167]: I0216 17:14:51.440003 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:51.440286 master-0 kubenswrapper[4167]: I0216 17:14:51.440073 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.442533 master-0 kubenswrapper[4167]: I0216 17:14:51.442496 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.443543 master-0 kubenswrapper[4167]: I0216 17:14:51.443494 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:51.443710 master-0 kubenswrapper[4167]: I0216 17:14:51.443571 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:51.443710 master-0 kubenswrapper[4167]: I0216 17:14:51.443600 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:51.443710 master-0 kubenswrapper[4167]: I0216 17:14:51.443626 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " 
pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:51.443710 master-0 kubenswrapper[4167]: I0216 17:14:51.443640 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:51.443710 master-0 kubenswrapper[4167]: I0216 17:14:51.443652 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.443710 master-0 kubenswrapper[4167]: I0216 17:14:51.443702 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.443739 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.443777 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.443821 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.443852 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.443882 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.443910 4167 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.443935 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.443981 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444009 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444037 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444078 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444104 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444129 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " 
pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444154 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444178 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444188 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444207 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:51.444263 master-0 kubenswrapper[4167]: I0216 17:14:51.444232 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.445700 master-0 kubenswrapper[4167]: I0216 17:14:51.444573 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:51.445700 master-0 kubenswrapper[4167]: I0216 17:14:51.445595 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.445700 master-0 kubenswrapper[4167]: I0216 17:14:51.445595 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.445700 master-0 kubenswrapper[4167]: I0216 17:14:51.445634 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446386 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446447 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446479 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446513 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446543 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446573 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446607 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446630 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" 
(UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446653 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446679 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446703 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446728 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446753 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446777 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446800 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446823 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446849 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446874 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446898 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446901 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.446922 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.447019 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.447053 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.447085 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.447116 4167 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.447144 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.447173 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.447760 master-0 kubenswrapper[4167]: I0216 17:14:51.447197 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:51.450374 master-0 kubenswrapper[4167]: I0216 17:14:51.447984 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:51.450374 master-0 kubenswrapper[4167]: I0216 17:14:51.448650 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:51.450374 master-0 kubenswrapper[4167]: I0216 17:14:51.449526 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.451177 master-0 kubenswrapper[4167]: I0216 17:14:51.450717 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.451475 
master-0 kubenswrapper[4167]: I0216 17:14:51.451257 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:51.451475 master-0 kubenswrapper[4167]: I0216 17:14:51.447212 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.451475 master-0 kubenswrapper[4167]: I0216 17:14:51.451359 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.451475 master-0 kubenswrapper[4167]: I0216 17:14:51.451404 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.451475 master-0 kubenswrapper[4167]: I0216 17:14:51.451441 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:51.451475 master-0 kubenswrapper[4167]: I0216 17:14:51.451449 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:51.452115 master-0 kubenswrapper[4167]: I0216 17:14:51.451489 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:51.452115 master-0 kubenswrapper[4167]: I0216 17:14:51.451552 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:51.452115 master-0 kubenswrapper[4167]: 
I0216 17:14:51.451594 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:51.452115 master-0 kubenswrapper[4167]: I0216 17:14:51.451633 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.452115 master-0 kubenswrapper[4167]: I0216 17:14:51.451692 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.452115 master-0 kubenswrapper[4167]: I0216 17:14:51.451732 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:51.452115 master-0 kubenswrapper[4167]: I0216 17:14:51.451744 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.452115 master-0 kubenswrapper[4167]: I0216 17:14:51.451761 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.452115 master-0 kubenswrapper[4167]: I0216 17:14:51.451769 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.453339 master-0 kubenswrapper[4167]: I0216 17:14:51.453290 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:51.453636 master-0 kubenswrapper[4167]: I0216 17:14:51.453593 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:51.453910 master-0 kubenswrapper[4167]: I0216 17:14:51.453868 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.454200 master-0 kubenswrapper[4167]: I0216 17:14:51.454158 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:51.454482 master-0 kubenswrapper[4167]: I0216 17:14:51.454441 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:51.454678 master-0 kubenswrapper[4167]: I0216 17:14:51.454628 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:51.455145 master-0 kubenswrapper[4167]: I0216 17:14:51.455085 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:51.455541 master-0 kubenswrapper[4167]: I0216 17:14:51.455493 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.455541 master-0 kubenswrapper[4167]: I0216 17:14:51.455541 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455574 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455605 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455635 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455659 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455686 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455711 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455736 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455763 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455789 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:51.455806 master-0 kubenswrapper[4167]: I0216 17:14:51.455820 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.455847 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.455873 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.455900 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.455924 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.455949 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.455993 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456023 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456051 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456078 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456104 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456131 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456156 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456182 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456212 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456240 4167 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456270 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:51.456762 master-0 kubenswrapper[4167]: I0216 17:14:51.456547 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:51.458292 master-0 kubenswrapper[4167]: I0216 17:14:51.456841 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:51.458292 master-0 kubenswrapper[4167]: I0216 17:14:51.457598 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:51.458292 master-0 kubenswrapper[4167]: I0216 17:14:51.458197 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.458968 master-0 kubenswrapper[4167]: I0216 17:14:51.458918 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.459168 master-0 kubenswrapper[4167]: I0216 17:14:51.459130 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:51.459230 master-0 kubenswrapper[4167]: I0216 17:14:51.459204 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.459282 master-0 kubenswrapper[4167]: I0216 17:14:51.459242 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.459282 master-0 kubenswrapper[4167]: I0216 17:14:51.459275 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.459392 master-0 kubenswrapper[4167]: I0216 17:14:51.459336 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.460207 master-0 kubenswrapper[4167]: I0216 17:14:51.459425 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.460382 master-0 kubenswrapper[4167]: I0216 17:14:51.460282 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:51.460382 master-0 kubenswrapper[4167]: I0216 17:14:51.460337 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:51.460382 master-0 kubenswrapper[4167]: I0216 17:14:51.460370 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.460382 master-0 kubenswrapper[4167]: I0216 17:14:51.460404 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: 
\"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.460382 master-0 kubenswrapper[4167]: I0216 17:14:51.460435 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:51.460382 master-0 kubenswrapper[4167]: I0216 17:14:51.460468 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:51.460382 master-0 kubenswrapper[4167]: I0216 17:14:51.460508 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:51.460382 master-0 kubenswrapper[4167]: I0216 17:14:51.460544 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:51.461286 master-0 kubenswrapper[4167]: I0216 17:14:51.460654 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.461286 master-0 kubenswrapper[4167]: I0216 17:14:51.460713 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:51.461286 master-0 kubenswrapper[4167]: I0216 17:14:51.460747 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:51.461286 master-0 kubenswrapper[4167]: I0216 17:14:51.460778 4167 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.461286 master-0 kubenswrapper[4167]: I0216 17:14:51.460809 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:51.461286 master-0 kubenswrapper[4167]: I0216 17:14:51.460834 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:51.461286 master-0 kubenswrapper[4167]: I0216 17:14:51.460862 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.461286 master-0 kubenswrapper[4167]: I0216 17:14:51.460887 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.461286 master-0 kubenswrapper[4167]: I0216 17:14:51.461213 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.463238 master-0 kubenswrapper[4167]: I0216 17:14:51.461646 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.463238 master-0 kubenswrapper[4167]: I0216 17:14:51.462119 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:51.463238 master-0 kubenswrapper[4167]: I0216 17:14:51.462441 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:51.463238 master-0 kubenswrapper[4167]: I0216 17:14:51.462470 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:51.463238 master-0 kubenswrapper[4167]: I0216 17:14:51.462934 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:51.463238 master-0 kubenswrapper[4167]: I0216 17:14:51.463069 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.463542 master-0 kubenswrapper[4167]: I0216 17:14:51.463336 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:51.463542 master-0 kubenswrapper[4167]: I0216 17:14:51.463430 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:51.463635 master-0 kubenswrapper[4167]: I0216 17:14:51.463578 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.464164 master-0 kubenswrapper[4167]: I0216 17:14:51.463683 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.464164 master-0 kubenswrapper[4167]: I0216 17:14:51.463798 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " 
pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:51.464164 master-0 kubenswrapper[4167]: I0216 17:14:51.463900 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.464164 master-0 kubenswrapper[4167]: I0216 17:14:51.464114 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:51.464164 master-0 kubenswrapper[4167]: I0216 17:14:51.464148 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:51.464340 master-0 kubenswrapper[4167]: I0216 17:14:51.464180 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.464340 master-0 kubenswrapper[4167]: I0216 17:14:51.464196 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:51.464340 master-0 kubenswrapper[4167]: I0216 17:14:51.464277 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:51.464434 master-0 kubenswrapper[4167]: I0216 17:14:51.464356 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:51.464434 master-0 kubenswrapper[4167]: I0216 17:14:51.464394 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.464600 master-0 kubenswrapper[4167]: I0216 17:14:51.464571 4167 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:51.464736 master-0 kubenswrapper[4167]: I0216 17:14:51.464711 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:51.464822 master-0 kubenswrapper[4167]: I0216 17:14:51.464768 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:51.464822 master-0 kubenswrapper[4167]: I0216 17:14:51.464805 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:51.464937 master-0 kubenswrapper[4167]: I0216 17:14:51.464911 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:51.465043 master-0 kubenswrapper[4167]: I0216 17:14:51.464934 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.465043 master-0 kubenswrapper[4167]: I0216 17:14:51.464952 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:51.465043 master-0 kubenswrapper[4167]: I0216 17:14:51.464915 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.465043 master-0 kubenswrapper[4167]: I0216 17:14:51.464943 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:51.465043 master-0 kubenswrapper[4167]: I0216 17:14:51.465019 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:51.465239 master-0 kubenswrapper[4167]: I0216 17:14:51.465095 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:51.465489 master-0 kubenswrapper[4167]: I0216 17:14:51.465458 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:51.465596 master-0 kubenswrapper[4167]: I0216 17:14:51.465567 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.465657 master-0 kubenswrapper[4167]: I0216 17:14:51.465623 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.465768 master-0 kubenswrapper[4167]: I0216 17:14:51.465742 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.466425 master-0 kubenswrapper[4167]: I0216 17:14:51.466398 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.466529 master-0 kubenswrapper[4167]: I0216 17:14:51.466504 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: 
\"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:51.467932 master-0 kubenswrapper[4167]: I0216 17:14:51.467898 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:51.467932 master-0 kubenswrapper[4167]: I0216 17:14:51.467907 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:51.468066 master-0 kubenswrapper[4167]: I0216 17:14:51.468036 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.468126 master-0 kubenswrapper[4167]: I0216 17:14:51.468095 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.468668 master-0 kubenswrapper[4167]: I0216 17:14:51.468637 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:51.468726 master-0 kubenswrapper[4167]: I0216 17:14:51.468661 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.468860 master-0 kubenswrapper[4167]: I0216 17:14:51.468832 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:51.468904 master-0 kubenswrapper[4167]: I0216 17:14:51.468844 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:51.469047 master-0 kubenswrapper[4167]: I0216 17:14:51.469026 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.469167 master-0 kubenswrapper[4167]: I0216 17:14:51.469139 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:51.469307 master-0 kubenswrapper[4167]: I0216 17:14:51.469275 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.469429 master-0 kubenswrapper[4167]: I0216 17:14:51.469406 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:51.469429 master-0 kubenswrapper[4167]: I0216 17:14:51.469401 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.469839 master-0 kubenswrapper[4167]: I0216 17:14:51.469806 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:51.470085 master-0 kubenswrapper[4167]: I0216 17:14:51.470059 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:51.470085 master-0 kubenswrapper[4167]: I0216 17:14:51.470062 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:51.470284 master-0 
kubenswrapper[4167]: I0216 17:14:51.470255 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.470609 master-0 kubenswrapper[4167]: I0216 17:14:51.470576 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:51.470659 master-0 kubenswrapper[4167]: I0216 17:14:51.470627 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.470695 master-0 kubenswrapper[4167]: I0216 17:14:51.470662 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:51.470727 master-0 kubenswrapper[4167]: I0216 17:14:51.470712 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:51.470761 master-0 kubenswrapper[4167]: I0216 17:14:51.470719 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.471018 master-0 kubenswrapper[4167]: I0216 17:14:51.470992 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:51.471271 master-0 kubenswrapper[4167]: I0216 17:14:51.471243 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:51.472328 master-0 kubenswrapper[4167]: I0216 17:14:51.472299 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.472505 master-0 kubenswrapper[4167]: I0216 17:14:51.472470 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:51.472549 master-0 kubenswrapper[4167]: I0216 17:14:51.472489 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.472584 master-0 kubenswrapper[4167]: I0216 17:14:51.472557 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.472673 master-0 kubenswrapper[4167]: I0216 17:14:51.472640 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:51.472925 master-0 kubenswrapper[4167]: I0216 17:14:51.472882 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:51.472925 master-0 kubenswrapper[4167]: I0216 17:14:51.472901 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:51.474012 master-0 kubenswrapper[4167]: I0216 17:14:51.473982 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:51.474462 master-0 kubenswrapper[4167]: I0216 17:14:51.474432 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod 
\"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:51.474751 master-0 kubenswrapper[4167]: I0216 17:14:51.474714 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:51.474852 master-0 kubenswrapper[4167]: I0216 17:14:51.474823 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:51.475040 master-0 kubenswrapper[4167]: I0216 17:14:51.475006 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:51.475284 master-0 kubenswrapper[4167]: I0216 17:14:51.475247 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:51.475703 master-0 kubenswrapper[4167]: I0216 17:14:51.475673 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:51.476037 master-0 kubenswrapper[4167]: I0216 17:14:51.476004 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:51.476571 master-0 kubenswrapper[4167]: I0216 17:14:51.476535 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.476608 master-0 kubenswrapper[4167]: I0216 17:14:51.476585 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " 
pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.476829 master-0 kubenswrapper[4167]: I0216 17:14:51.476789 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:51.477200 master-0 kubenswrapper[4167]: I0216 17:14:51.477179 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.477309 master-0 kubenswrapper[4167]: I0216 17:14:51.477278 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:51.477608 master-0 kubenswrapper[4167]: I0216 17:14:51.477581 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:51.477760 master-0 kubenswrapper[4167]: I0216 17:14:51.477727 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.477921 master-0 kubenswrapper[4167]: I0216 17:14:51.477881 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:51.478247 master-0 kubenswrapper[4167]: I0216 17:14:51.478216 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:51.478282 master-0 kubenswrapper[4167]: I0216 17:14:51.478236 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 
17:14:51.478454 master-0 kubenswrapper[4167]: I0216 17:14:51.478426 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:51.478536 master-0 kubenswrapper[4167]: I0216 17:14:51.478492 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.478577 master-0 kubenswrapper[4167]: I0216 17:14:51.478505 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.478948 master-0 kubenswrapper[4167]: I0216 17:14:51.478916 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:51.479013 master-0 kubenswrapper[4167]: I0216 17:14:51.478989 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:51.479013 master-0 kubenswrapper[4167]: I0216 17:14:51.478997 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:51.479074 master-0 kubenswrapper[4167]: I0216 17:14:51.479013 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:51.479103 master-0 kubenswrapper[4167]: I0216 17:14:51.479072 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:51.479131 master-0 kubenswrapper[4167]: I0216 17:14:51.479112 4167 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:51.479215 master-0 kubenswrapper[4167]: I0216 17:14:51.479184 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.479328 master-0 kubenswrapper[4167]: I0216 17:14:51.479300 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.479328 master-0 kubenswrapper[4167]: I0216 17:14:51.479316 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:51.479398 master-0 kubenswrapper[4167]: I0216 17:14:51.479329 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.479432 master-0 kubenswrapper[4167]: I0216 17:14:51.479391 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.479432 master-0 kubenswrapper[4167]: I0216 17:14:51.479404 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:51.479489 master-0 kubenswrapper[4167]: I0216 17:14:51.479455 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:51.479489 master-0 kubenswrapper[4167]: I0216 17:14:51.479470 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:51.479550 master-0 kubenswrapper[4167]: I0216 17:14:51.479502 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:51.479624 master-0 kubenswrapper[4167]: I0216 17:14:51.479600 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:51.479661 master-0 kubenswrapper[4167]: I0216 17:14:51.479600 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.480053 master-0 kubenswrapper[4167]: I0216 17:14:51.480022 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:51.480128 master-0 kubenswrapper[4167]: I0216 17:14:51.480091 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.481490 master-0 kubenswrapper[4167]: I0216 17:14:51.481449 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:51.482521 master-0 kubenswrapper[4167]: I0216 17:14:51.482491 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:51.488698 master-0 kubenswrapper[4167]: I0216 17:14:51.488658 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:14:51.495275 master-0 kubenswrapper[4167]: I0216 17:14:51.495251 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:51.511167 master-0 kubenswrapper[4167]: I0216 17:14:51.511146 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:14:51.518261 master-0 kubenswrapper[4167]: I0216 17:14:51.518226 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:14:51.528078 master-0 kubenswrapper[4167]: I0216 17:14:51.528034 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:51.528231 master-0 kubenswrapper[4167]: I0216 17:14:51.528195 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:14:51.536287 master-0 kubenswrapper[4167]: I0216 17:14:51.536246 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:14:51.539721 master-0 kubenswrapper[4167]: I0216 17:14:51.539252 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:14:51.564625 master-0 kubenswrapper[4167]: I0216 17:14:51.563933 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:14:51.566355 master-0 kubenswrapper[4167]: I0216 17:14:51.566153 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:14:51.575022 master-0 kubenswrapper[4167]: I0216 17:14:51.574599 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.575558 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.575620 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.575763 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.575774 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.576173 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.576250 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.576273 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.576344 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:51.577157 master-0 
kubenswrapper[4167]: I0216 17:14:51.576618 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.576919 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.576920 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.577067 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:51.577157 master-0 kubenswrapper[4167]: I0216 17:14:51.577157 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:51.577525 master-0 kubenswrapper[4167]: I0216 17:14:51.577267 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:51.577990 master-0 kubenswrapper[4167]: I0216 17:14:51.577928 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:51.578106 master-0 kubenswrapper[4167]: I0216 17:14:51.578048 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.578843 master-0 kubenswrapper[4167]: I0216 17:14:51.578770 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:51.579586 master-0 kubenswrapper[4167]: I0216 17:14:51.579499 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:51.580175 master-0 kubenswrapper[4167]: I0216 17:14:51.580142 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:51.582066 master-0 kubenswrapper[4167]: I0216 17:14:51.582020 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:51.583166 master-0 kubenswrapper[4167]: I0216 17:14:51.583119 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:51.586696 master-0 kubenswrapper[4167]: I0216 17:14:51.586430 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:51.591258 master-0 kubenswrapper[4167]: I0216 17:14:51.591229 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:14:51.603121 master-0 kubenswrapper[4167]: I0216 17:14:51.602037 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:14:51.608670 master-0 kubenswrapper[4167]: I0216 17:14:51.608621 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.610598 master-0 kubenswrapper[4167]: I0216 17:14:51.610562 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:14:51.614850 master-0 kubenswrapper[4167]: I0216 17:14:51.614805 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:14:51.616644 master-0 kubenswrapper[4167]: I0216 17:14:51.616056 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:51.621714 master-0 kubenswrapper[4167]: I0216 17:14:51.621261 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:14:51.652549 master-0 kubenswrapper[4167]: I0216 17:14:51.632496 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:14:51.652549 master-0 kubenswrapper[4167]: I0216 17:14:51.637540 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:14:51.652549 master-0 kubenswrapper[4167]: I0216 17:14:51.642502 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:14:51.659340 master-0 kubenswrapper[4167]: I0216 17:14:51.654950 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:14:51.659340 master-0 kubenswrapper[4167]: I0216 17:14:51.655342 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:14:51.666865 master-0 kubenswrapper[4167]: I0216 17:14:51.662229 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:14:51.666865 master-0 kubenswrapper[4167]: I0216 17:14:51.662578 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:14:51.684239 master-0 kubenswrapper[4167]: I0216 17:14:51.684193 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:14:51.684885 master-0 kubenswrapper[4167]: I0216 17:14:51.684481 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:14:51.714288 master-0 kubenswrapper[4167]: I0216 17:14:51.713876 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:14:51.719668 master-0 kubenswrapper[4167]: I0216 17:14:51.718536 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:14:51.719842 master-0 kubenswrapper[4167]: I0216 17:14:51.719802 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:14:51.721950 master-0 kubenswrapper[4167]: I0216 17:14:51.721688 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:14:51.721950 master-0 kubenswrapper[4167]: I0216 17:14:51.721699 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:14:51.725497 master-0 kubenswrapper[4167]: I0216 17:14:51.722989 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:14:51.725497 master-0 kubenswrapper[4167]: I0216 17:14:51.723252 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:14:51.732440 master-0 kubenswrapper[4167]: I0216 17:14:51.732341 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:14:51.736285 master-0 kubenswrapper[4167]: I0216 17:14:51.736177 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:14:51.742498 master-0 kubenswrapper[4167]: I0216 17:14:51.742442 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:14:51.754912 master-0 kubenswrapper[4167]: I0216 17:14:51.754879 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:14:51.756481 master-0 kubenswrapper[4167]: I0216 17:14:51.755179 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:14:51.761387 master-0 kubenswrapper[4167]: I0216 17:14:51.761348 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:14:51.769346 master-0 kubenswrapper[4167]: I0216 17:14:51.769303 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:14:51.774248 master-0 kubenswrapper[4167]: I0216 17:14:51.774206 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:14:51.774728 master-0 kubenswrapper[4167]: I0216 17:14:51.774213 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:14:51.778208 master-0 kubenswrapper[4167]: I0216 17:14:51.778176 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:51.778208 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:51.778208 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:51.778208 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:51.778366 master-0 kubenswrapper[4167]: I0216 17:14:51.778216 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:51.778366 master-0 kubenswrapper[4167]: I0216 17:14:51.778287 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:14:51.781536 master-0 kubenswrapper[4167]: I0216 17:14:51.781447 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:14:51.792433 master-0 kubenswrapper[4167]: I0216 17:14:51.792386 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:14:51.826336 master-0 kubenswrapper[4167]: I0216 17:14:51.826295 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:14:51.843495 master-0 kubenswrapper[4167]: I0216 17:14:51.843450 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:14:51.876428 master-0 kubenswrapper[4167]: I0216 17:14:51.876329 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:14:51.876761 master-0 kubenswrapper[4167]: I0216 17:14:51.876721 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:14:52.531083 master-0 kubenswrapper[4167]: I0216 17:14:52.530994 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:52.596574 master-0 kubenswrapper[4167]: I0216 17:14:52.596461 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:14:52.734381 master-0 kubenswrapper[4167]: W0216 17:14:52.734122 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e90be63_ff6c_4e9e_8b9e_1ad9cf941845.slice/crio-f107ae316f55b51d061b3a12491109a244744e44c6d6de631a2ab51eebfef6f7 WatchSource:0}: Error finding container f107ae316f55b51d061b3a12491109a244744e44c6d6de631a2ab51eebfef6f7: Status 404 returned error can't find the container with id f107ae316f55b51d061b3a12491109a244744e44c6d6de631a2ab51eebfef6f7 Feb 16 17:14:52.736470 master-0 kubenswrapper[4167]: W0216 17:14:52.734906 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode10d0b0c_4c2a_45b3_8d69_3070d566b97d.slice/crio-f1724e9c2bd1ff4344a07335bce2830943d7b7087ec0c0d09f279f91dba5b5ce WatchSource:0}: Error finding container f1724e9c2bd1ff4344a07335bce2830943d7b7087ec0c0d09f279f91dba5b5ce: Status 404 returned error can't find the container with id f1724e9c2bd1ff4344a07335bce2830943d7b7087ec0c0d09f279f91dba5b5ce Feb 16 17:14:52.783304 master-0 kubenswrapper[4167]: I0216 17:14:52.783121 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerStarted","Data":"f1724e9c2bd1ff4344a07335bce2830943d7b7087ec0c0d09f279f91dba5b5ce"} Feb 16 17:14:52.786385 master-0 kubenswrapper[4167]: I0216 17:14:52.785891 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" event={"ID":"54fba066-0e9e-49f6-8a86-34d5b4b660df","Type":"ContainerStarted","Data":"c00c293318565b07397e1a71e0b4feb257c80615d1e58f58dd410107df3b8c00"} Feb 16 17:14:52.787595 master-0 kubenswrapper[4167]: I0216 17:14:52.787563 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerStarted","Data":"9c81a36ee0003fbfa2ae1aa7c17ae0c583bca4ad7c740fc7152d56ee13b665ae"} Feb 16 17:14:52.788495 master-0 kubenswrapper[4167]: I0216 17:14:52.788472 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerStarted","Data":"1980784f4ac4908996942c52249d3b5f7de60ab3cc766f241cbe276c786f48c2"} Feb 16 17:14:52.791822 master-0 kubenswrapper[4167]: I0216 17:14:52.789765 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"f107ae316f55b51d061b3a12491109a244744e44c6d6de631a2ab51eebfef6f7"} Feb 16 17:14:52.793258 master-0 kubenswrapper[4167]: I0216 17:14:52.793207 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerStarted","Data":"0fa56c92dfe255661c8d85b020cca6b51ea0c56f33aabcca1babf584239ffbba"} Feb 16 17:14:52.831457 master-0 kubenswrapper[4167]: I0216 17:14:52.830877 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:52.831457 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:52.831457 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:52.831457 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:52.831457 master-0 kubenswrapper[4167]: I0216 17:14:52.830941 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:52.899198 master-0 kubenswrapper[4167]: I0216 17:14:52.897878 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:52.913943 master-0 kubenswrapper[4167]: I0216 17:14:52.912712 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:53.128833 master-0 kubenswrapper[4167]: I0216 17:14:53.118849 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:14:53.203842 master-0 kubenswrapper[4167]: I0216 17:14:53.203796 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:53.203952 master-0 kubenswrapper[4167]: I0216 17:14:53.203922 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:53.206734 master-0 kubenswrapper[4167]: I0216 17:14:53.206714 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:53.207659 master-0 kubenswrapper[4167]: I0216 17:14:53.207588 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: 
\"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:53.305103 master-0 kubenswrapper[4167]: I0216 17:14:53.305052 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:53.307886 master-0 kubenswrapper[4167]: I0216 17:14:53.307832 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:53.445994 master-0 kubenswrapper[4167]: W0216 17:14:53.445925 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc303189e_adae_4fe2_8dd7_cc9b80f73e66.slice/crio-4e30214ed70fe1bf336f6dc7a7eae7449ee1c0ed02a90d94ccdd84e67b8e09e7 WatchSource:0}: Error finding container 4e30214ed70fe1bf336f6dc7a7eae7449ee1c0ed02a90d94ccdd84e67b8e09e7: Status 404 returned error can't find the container with id 4e30214ed70fe1bf336f6dc7a7eae7449ee1c0ed02a90d94ccdd84e67b8e09e7 Feb 16 17:14:53.447036 master-0 kubenswrapper[4167]: I0216 17:14:53.446997 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:14:53.484213 master-0 kubenswrapper[4167]: I0216 17:14:53.482762 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:14:53.543943 master-0 kubenswrapper[4167]: I0216 17:14:53.543436 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:14:53.609884 master-0 kubenswrapper[4167]: I0216 17:14:53.609829 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:53.616595 master-0 kubenswrapper[4167]: I0216 17:14:53.616553 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:53.688949 master-0 kubenswrapper[4167]: I0216 17:14:53.688912 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:14:53.711311 master-0 kubenswrapper[4167]: I0216 17:14:53.711258 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:53.716927 master-0 kubenswrapper[4167]: I0216 17:14:53.716880 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:53.753871 master-0 kubenswrapper[4167]: I0216 17:14:53.751522 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:14:53.754728 master-0 kubenswrapper[4167]: I0216 17:14:53.754680 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:53.754728 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:53.754728 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:53.754728 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:53.754843 master-0 kubenswrapper[4167]: I0216 17:14:53.754756 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:53.798538 master-0 kubenswrapper[4167]: I0216 17:14:53.798382 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" event={"ID":"970d4376-f299-412c-a8ee-90aa980c689e","Type":"ContainerStarted","Data":"bcbf3762d19a54b2a685cc881ec42b19efc1bfbfac85ef7ef905079a6b7510c7"} Feb 16 17:14:53.799654 master-0 kubenswrapper[4167]: I0216 17:14:53.799620 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerStarted","Data":"4e30214ed70fe1bf336f6dc7a7eae7449ee1c0ed02a90d94ccdd84e67b8e09e7"} Feb 16 17:14:53.800570 master-0 kubenswrapper[4167]: I0216 17:14:53.800545 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"a4718c8a0a1726172554b1765ae7602cf0120fc3d7890373df285714609c399d"} Feb 16 17:14:53.801804 master-0 kubenswrapper[4167]: I0216 17:14:53.801770 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"9989c01fc82b13e01ba462441db07496e1ba7a4f459fb76d4a48c27e9484ec2b"} Feb 16 17:14:53.802713 master-0 kubenswrapper[4167]: I0216 
17:14:53.802683 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerStarted","Data":"129b7e5ab3ce71838d111eb05c7387a5f825c49c9108ae54cea97069f748e4f3"} Feb 16 17:14:54.217257 master-0 kubenswrapper[4167]: W0216 17:14:54.217213 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5192fa49_d81c_47ce_b2ab_f90996cc0bd5.slice/crio-aa646b4b074bf4f40a556d318f63f6465db70b16d625aae03dc45348b3cf3422 WatchSource:0}: Error finding container aa646b4b074bf4f40a556d318f63f6465db70b16d625aae03dc45348b3cf3422: Status 404 returned error can't find the container with id aa646b4b074bf4f40a556d318f63f6465db70b16d625aae03dc45348b3cf3422 Feb 16 17:14:54.754018 master-0 kubenswrapper[4167]: I0216 17:14:54.753892 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:14:54.754018 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:14:54.754018 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:14:54.754018 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:14:54.754239 master-0 kubenswrapper[4167]: I0216 17:14:54.753955 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:14:54.809474 master-0 kubenswrapper[4167]: I0216 17:14:54.809428 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" event={"ID":"54fba066-0e9e-49f6-8a86-34d5b4b660df","Type":"ContainerStarted","Data":"674059d1546130b2b1cd88434ec07f49b78934c11e7c0706b6bce62bcf537cfe"} Feb 16 17:14:54.810291 master-0 kubenswrapper[4167]: I0216 17:14:54.810253 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:54.812535 master-0 kubenswrapper[4167]: I0216 17:14:54.812491 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" event={"ID":"ba37ef0e-373c-4ccc-b082-668630399765","Type":"ContainerStarted","Data":"2d562ff22342c33ee44d46b526b0bfa7e6a3ebfe2962b9620b9239cd95a74d02"} Feb 16 17:14:54.813478 master-0 kubenswrapper[4167]: I0216 17:14:54.813445 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"a1acea3f2c56902b09933de282084ee0960208e5a82323ac3b71e0b64f08c247"} Feb 16 17:14:54.814486 master-0 kubenswrapper[4167]: I0216 17:14:54.814431 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"18f5c265c432314a2af6b8e33282747e8fabd2c06b1d6d6cafc0d7febc770a77"} Feb 16 17:14:54.815903 master-0 kubenswrapper[4167]: I0216 17:14:54.815863 4167 generic.go:334] "Generic (PLEG): container finished" podID="0393fe12-2533-4c9c-a8e4-a58003c88f36" 
containerID="2869554abec6d39b010f4b882680666b6bbe70b0fe823a9ff469a7c848580032" exitCode=0 Feb 16 17:14:54.817940 master-0 kubenswrapper[4167]: I0216 17:14:54.817898 4167 generic.go:334] "Generic (PLEG): container finished" podID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" containerID="f56650a7cd747679b26e32b6c1845bca37f45bc74f5c310339afb92474238f56" exitCode=0 Feb 16 17:14:54.818113 master-0 kubenswrapper[4167]: I0216 17:14:54.818087 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:14:54.818211 master-0 kubenswrapper[4167]: I0216 17:14:54.818185 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerDied","Data":"2869554abec6d39b010f4b882680666b6bbe70b0fe823a9ff469a7c848580032"} Feb 16 17:14:54.818211 master-0 kubenswrapper[4167]: I0216 17:14:54.818208 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" event={"ID":"5192fa49-d81c-47ce-b2ab-f90996cc0bd5","Type":"ContainerStarted","Data":"aa646b4b074bf4f40a556d318f63f6465db70b16d625aae03dc45348b3cf3422"} Feb 16 17:14:54.818306 master-0 kubenswrapper[4167]: I0216 17:14:54.818218 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerDied","Data":"f56650a7cd747679b26e32b6c1845bca37f45bc74f5c310339afb92474238f56"} Feb 16 17:14:54.818913 master-0 kubenswrapper[4167]: I0216 17:14:54.818884 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" event={"ID":"970d4376-f299-412c-a8ee-90aa980c689e","Type":"ContainerStarted","Data":"a0d06520735ed0b60c5bb08d65bc6702e8532961f3a95e0c641eb0fffcd1225a"} Feb 16 17:14:54.819888 master-0 kubenswrapper[4167]: I0216 17:14:54.819859 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerStarted","Data":"9aa1b15d60f99d2cf55f79f2c375c36924f7525007528026162ba065a295f718"} Feb 16 17:14:54.820698 master-0 kubenswrapper[4167]: I0216 17:14:54.820669 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:14:54.821577 master-0 kubenswrapper[4167]: I0216 17:14:54.821545 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerStarted","Data":"725fcc0c6cb710edb4a6da087267f92efc9dc223a018289fd1f7f613bf9f07d9"} Feb 16 17:14:54.822192 master-0 kubenswrapper[4167]: I0216 17:14:54.822156 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:14:54.823320 master-0 kubenswrapper[4167]: I0216 17:14:54.823280 4167 patch_prober.go:28] interesting pod/packageserver-6d5d8c8c95-kzfjw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.59:5443/healthz\": dial tcp 10.128.0.59:5443: connect: connection refused" start-of-body= Feb 16 17:14:54.823387 master-0 kubenswrapper[4167]: I0216 17:14:54.823343 
4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.59:5443/healthz\": dial tcp 10.128.0.59:5443: connect: connection refused"
Feb 16 17:14:54.823994 master-0 kubenswrapper[4167]: I0216 17:14:54.823946 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerStarted","Data":"7d8809cdf4d92b85ae754a964638d7b2833c5008e5808250e883e4568bdc6480"}
Feb 16 17:14:54.824806 master-0 kubenswrapper[4167]: I0216 17:14:54.824776 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerStarted","Data":"e1c8414590ed2fbc23be76e18fa26d224f487dcbc205c4862bf9e762f7a9b956"}
Feb 16 17:14:54.826603 master-0 kubenswrapper[4167]: I0216 17:14:54.825839 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"2d47112fd42c5255c0ee3f609db46766e89e9137b3690ef0ab34d4341b2caa25"}
Feb 16 17:14:54.826603 master-0 kubenswrapper[4167]: I0216 17:14:54.826521 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerStarted","Data":"3b3fa5e56cb540aa17b565b61f578116e53b8b2c3a52524f40cf80d83f94fac5"}
Feb 16 17:14:55.315422 master-0 kubenswrapper[4167]: W0216 17:14:55.314760 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd020c902_2adb_4919_8dd9_0c2109830580.slice/crio-d229d8350a46007efa31fce29e3e6933bd8222c4a2a14f20af20fbc6424ac37a WatchSource:0}: Error finding container d229d8350a46007efa31fce29e3e6933bd8222c4a2a14f20af20fbc6424ac37a: Status 404 returned error can't find the container with id d229d8350a46007efa31fce29e3e6933bd8222c4a2a14f20af20fbc6424ac37a
Feb 16 17:14:55.353358 master-0 kubenswrapper[4167]: W0216 17:14:55.353315 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod404c402a_705f_4352_b9df_b89562070d9c.slice/crio-3ab50f8adc10204f1bc3114a18a64bfda2ccbef9c6a2fae257583389a8db999c WatchSource:0}: Error finding container 3ab50f8adc10204f1bc3114a18a64bfda2ccbef9c6a2fae257583389a8db999c: Status 404 returned error can't find the container with id 3ab50f8adc10204f1bc3114a18a64bfda2ccbef9c6a2fae257583389a8db999c
Feb 16 17:14:55.530690 master-0 kubenswrapper[4167]: W0216 17:14:55.528925 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62fc29f4_557f_4a75_8b78_6ca425c81b81.slice/crio-3c0f9d70844a85bf1c2a4aaf6d3f62e34050072ca4a47e2d3f08d6945b6534a9 WatchSource:0}: Error finding container 3c0f9d70844a85bf1c2a4aaf6d3f62e34050072ca4a47e2d3f08d6945b6534a9: Status 404 returned error can't find the container with id 3c0f9d70844a85bf1c2a4aaf6d3f62e34050072ca4a47e2d3f08d6945b6534a9
Feb 16 17:14:55.530690 master-0 kubenswrapper[4167]: W0216 17:14:55.529374 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3c7d762_e2fe_49ca_ade5_3982d91ec2a2.slice/crio-1a4317c15c8f6b3b060db5869ec057e60eb2eca4199697c12efc99522ded2fee WatchSource:0}: Error finding container 1a4317c15c8f6b3b060db5869ec057e60eb2eca4199697c12efc99522ded2fee: Status 404 returned error can't find the container with id 1a4317c15c8f6b3b060db5869ec057e60eb2eca4199697c12efc99522ded2fee
Feb 16 17:14:55.767752 master-0 kubenswrapper[4167]: I0216 17:14:55.757147 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:55.767752 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:55.767752 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:55.767752 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:55.767752 master-0 kubenswrapper[4167]: I0216 17:14:55.757217 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:55.909619 master-0 kubenswrapper[4167]: I0216 17:14:55.901562 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" event={"ID":"ba37ef0e-373c-4ccc-b082-668630399765","Type":"ContainerStarted","Data":"665ff877a5eaf5a510df18b8f4147b7c1c014cfa568401ceaf371fba6aeaf030"}
Feb 16 17:14:55.930033 master-0 kubenswrapper[4167]: I0216 17:14:55.925937 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerStarted","Data":"6f2a67f11077d9bc44da6ffe3c1160975d06906e26e1b32fd900e010533a01a0"}
Feb 16 17:14:55.930033 master-0 kubenswrapper[4167]: I0216 17:14:55.925995 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerStarted","Data":"536d2d36b7150f597b82f8a80177337dfdfebd960e570e013680726a6270dab4"}
Feb 16 17:14:55.947011 master-0 kubenswrapper[4167]: I0216 17:14:55.943216 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"1a4317c15c8f6b3b060db5869ec057e60eb2eca4199697c12efc99522ded2fee"}
Feb 16 17:14:55.959207 master-0 kubenswrapper[4167]: I0216 17:14:55.958902 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerStarted","Data":"20506eed7a8e99d360a9e95d0662cfc7063c2fc967979dd92900bbe40e86340e"}
Feb 16 17:14:55.964934 master-0 kubenswrapper[4167]: I0216 17:14:55.960094 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"f96dad44cead28b7fff34367077f4c7ef21164e5c649b3fd03f9c924d8be888c"}
Feb 16 17:14:55.975015 master-0 kubenswrapper[4167]: I0216 17:14:55.965476 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerStarted","Data":"4097cfba3355c152f53ccece30b1d44e7d42f98d755a98f2857b3adfeb1c3613"}
Feb 16 17:14:55.992153 master-0 kubenswrapper[4167]: I0216 17:14:55.982518 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"e413aa866392057ec839c989db6bcca34307e7e6175ba54f4b7e7f66ded4d8a9"}
Feb 16 17:14:55.992153 master-0 kubenswrapper[4167]: I0216 17:14:55.982558 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"26691319da7c19c61a82eeb43479f28634f8d47b0caa6dc65d47a7d51ad3d898"}
Feb 16 17:14:55.992454 master-0 kubenswrapper[4167]: I0216 17:14:55.992387 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" event={"ID":"544c6815-81d7-422a-9e4a-5fcbfabe8da8","Type":"ContainerStarted","Data":"ad670b7e74e7db43600c965245812e09d9e7a8f5a74d69f0bc99195a0453c48d"}
Feb 16 17:14:56.002814 master-0 kubenswrapper[4167]: I0216 17:14:55.995549 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"ab93434b1ef87aa6da0d58184cdc1a651c2aec3cff4b94bae1b33fd12276cd11"}
Feb 16 17:14:56.002814 master-0 kubenswrapper[4167]: I0216 17:14:55.996892 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"2e8c51ee02995cb451e275204a4621ec646796f789060f07f83fc7f116e5121f"}
Feb 16 17:14:56.002814 master-0 kubenswrapper[4167]: I0216 17:14:56.000378 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"a89824ff11437f147c440f2e7e24b738fc75e444ba13f86c2538e0c7f3df2c05"}
Feb 16 17:14:56.007226 master-0 kubenswrapper[4167]: I0216 17:14:56.007154 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"bdec6ae782a07a6c51bda6b8c09b172ce7aca4c7036979f712086a714af424bb"}
Feb 16 17:14:56.013364 master-0 kubenswrapper[4167]: I0216 17:14:56.013264 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"9c28ef50adfad9e997f43b6524d82e92d6c50e17ed611e9085788689a4b21ca3"}
Feb 16 17:14:56.018323 master-0 kubenswrapper[4167]: I0216 17:14:56.017907 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"cf7cff4653ce8907026ca9da3d6ea679c1e097ee7da695c3bdd94c67135a26e5"}
Feb 16 17:14:56.026011 master-0 kubenswrapper[4167]: I0216 17:14:56.023002 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerStarted","Data":"f58db4743cfa3fe716cea570831590166a0ba4bfc223850890b2cfe9bf9ebe43"}
Feb 16 17:14:56.026011 master-0 kubenswrapper[4167]: I0216 17:14:56.024384 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qqvg4" event={"ID":"1363cb7b-62cc-497b-af6f-4d5e0eb7f174","Type":"ContainerStarted","Data":"951ab7317e6374bc4671a514f431186aa733b66bbdc47ceece52ba44484e411e"}
Feb 16 17:14:56.028345 master-0 kubenswrapper[4167]: I0216 17:14:56.026935 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerStarted","Data":"143e751a05a9e5bfa7947f334d644691284e401eead0ce2f8d8b8a7830855c99"}
Feb 16 17:14:56.045310 master-0 kubenswrapper[4167]: I0216 17:14:56.039940 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"0135d939d70cc6c0a0f7a61b3043a49448731eaa91251064c3b28918d59425ed"}
Feb 16 17:14:56.045310 master-0 kubenswrapper[4167]: I0216 17:14:56.041041 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:14:56.045938 master-0 kubenswrapper[4167]: I0216 17:14:56.045910 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"d393ab4bb3c12eb631944f816f5fcb87c8f3adaab141b498fc5f697eb48de076"}
Feb 16 17:14:56.048485 master-0 kubenswrapper[4167]: I0216 17:14:56.048190 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"1afa3843de8c3e0280a235cd18e975683aa5dfa6fc4b94d5b89f743910b1907e"}
Feb 16 17:14:56.055033 master-0 kubenswrapper[4167]: I0216 17:14:56.054472 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"3ab50f8adc10204f1bc3114a18a64bfda2ccbef9c6a2fae257583389a8db999c"}
Feb 16 17:14:56.056352 master-0 kubenswrapper[4167]: I0216 17:14:56.056323 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerStarted","Data":"6472da6ab64f994f4a8fb11c79bfa3c442cf53eb1f9f752ee8800a7a90bb80ee"}
Feb 16 17:14:56.069883 master-0 kubenswrapper[4167]: I0216 17:14:56.069267 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerStarted","Data":"982956272eafe94a62eb5a30fe13b687cc45974ed971f8a6d79e4eebbb8be575"}
Feb 16 17:14:56.070916 master-0 kubenswrapper[4167]: I0216 17:14:56.070763 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerStarted","Data":"7aa5e0fac30bca997dd35d80dd9fdfb6462a1602d99e309848fdc4e17be9c74a"}
Feb 16 17:14:56.081391 master-0 kubenswrapper[4167]: I0216 17:14:56.081345 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerStarted","Data":"295ee5d12ac289b73bdfd4ac21f436f6a4af4cfa8960b61d78e6df2e602e5fc4"}
Feb 16 17:14:56.092007 master-0 kubenswrapper[4167]: I0216 17:14:56.090837 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" event={"ID":"5192fa49-d81c-47ce-b2ab-f90996cc0bd5","Type":"ContainerStarted","Data":"488b352d1dfd257802bcd1296116b56e5206a76528371ecdeaba998886e63d7f"}
Feb 16 17:14:56.118002 master-0 kubenswrapper[4167]: I0216 17:14:56.117946 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerStarted","Data":"c07ea07c8dfc4d02cfd7cdd8fe4653a27764977193409ade2423da788998dfd1"}
Feb 16 17:14:56.118452 master-0 kubenswrapper[4167]: I0216 17:14:56.118419 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:14:56.122892 master-0 kubenswrapper[4167]: I0216 17:14:56.119591 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerStarted","Data":"944b7aa35e94729d3b029df4db8e3a8b1fb40932c879cbdef8bd8f483c276fb1"}
Feb 16 17:14:56.122892 master-0 kubenswrapper[4167]: I0216 17:14:56.121447 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"f96fbaa350cf56c36e28b8de64efc93c80235104b9f1e824c93fd411e2023c13"}
Feb 16 17:14:56.125677 master-0 kubenswrapper[4167]: I0216 17:14:56.125631 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:14:56.127568 master-0 kubenswrapper[4167]: I0216 17:14:56.127538 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"3270c84c42d3acc78e346c7759aadc5498c38f62e1c46a7a0f41df990c346cf6"}
Feb 16 17:14:56.131388 master-0 kubenswrapper[4167]: I0216 17:14:56.131352 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"8142c200a6b49fc3925f8e8e01c16d1d663fb018383ce7fe3fa9041f33cc3eba"}
Feb 16 17:14:56.141789 master-0 kubenswrapper[4167]: I0216 17:14:56.141726 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" event={"ID":"48801344-a48a-493e-aea4-19d998d0b708","Type":"ContainerStarted","Data":"f68893347e5e4e1e54bb7756a6c2242e3fd960993e9c22942b76c129dd0bde64"}
Feb 16 17:14:56.149806 master-0 kubenswrapper[4167]: I0216 17:14:56.149772 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"df4b384c05a47279a7bd5f81c02ec0874242a573642ef0eaba758b62b4ca3755"}
Feb 16 17:14:56.156183 master-0 kubenswrapper[4167]: I0216 17:14:56.156141 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"7da4a1ce1c29887c4db7672e2cfcfd99141cb6365a1469a4e51822b3b406be83"}
Feb 16 17:14:56.157021 master-0 kubenswrapper[4167]: I0216 17:14:56.156975 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:56.158914 master-0 kubenswrapper[4167]: I0216 17:14:56.158655 4167 patch_prober.go:28] interesting pod/marketplace-operator-6cc5b65c6b-s4gp2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused" start-of-body=
Feb 16 17:14:56.158914 master-0 kubenswrapper[4167]: I0216 17:14:56.158697 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.11:8080/healthz\": dial tcp 10.128.0.11:8080: connect: connection refused"
Feb 16 17:14:56.160696 master-0 kubenswrapper[4167]: I0216 17:14:56.159553 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerStarted","Data":"3de406e0692dea4b65de62f500f40e28cfd1bee4a308af59c811719127ee2078"}
Feb 16 17:14:56.161714 master-0 kubenswrapper[4167]: I0216 17:14:56.161655 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"7d708aa4739b4410f4e321c2b51fea9302ab387258e25931b75defaef0c44e9d"}
Feb 16 17:14:56.163661 master-0 kubenswrapper[4167]: I0216 17:14:56.163345 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"88ee30ea5477637e219f7a6d4672d75231c9b5e6df89ab027f0d4e8be1d26c3d"}
Feb 16 17:14:56.216674 master-0 kubenswrapper[4167]: I0216 17:14:56.216631 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"45579996dec5fbb8ca9e640c695bcfc916a457471eff47c2eaa4f92ca972cb1d"}
Feb 16 17:14:56.244401 master-0 kubenswrapper[4167]: I0216 17:14:56.244346 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"8c3703d98c9417570e8f143c11607f4e8cc119e34e9fdad6b21a597f81d5b8cb"}
Feb 16 17:14:56.254814 master-0 kubenswrapper[4167]: I0216 17:14:56.254747 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerStarted","Data":"a502ec3bf0d50b01956ff080bd41345aca124936ab734c2fdd6cbe7f895fa605"}
Feb 16 17:14:56.256893 master-0 kubenswrapper[4167]: I0216 17:14:56.256583 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"b301d742684aa6408d6a66dfedd94987942a3b6ca54a0fc5f7670caa6e9c3d5d"}
Feb 16 17:14:56.259622 master-0 kubenswrapper[4167]: I0216 17:14:56.259562 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerStarted","Data":"d229d8350a46007efa31fce29e3e6933bd8222c4a2a14f20af20fbc6424ac37a"}
Feb 16 17:14:56.269820 master-0 kubenswrapper[4167]: I0216 17:14:56.263961 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerStarted","Data":"32ac3e57ee4804be4f20096b57150f2d07f7a078ebe1be2ec3b36046aced134d"}
Feb 16 17:14:56.273388 master-0 kubenswrapper[4167]: I0216 17:14:56.273326 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" event={"ID":"5a275679-b7b6-4c28-b389-94cd2b014d6c","Type":"ContainerStarted","Data":"880f00e2417fe56bd3f36b01d651c9989de5cc1b814083470d0a73dc2bd55cea"}
Feb 16 17:14:56.276505 master-0 kubenswrapper[4167]: I0216 17:14:56.276141 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"8be46334fb66fe6506d2685575256c509d602c3155b7997ac9e303d0fc33bfa7"}
Feb 16 17:14:56.279641 master-0 kubenswrapper[4167]: I0216 17:14:56.279576 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerStarted","Data":"f1c69f47077a0d88e68fb2df496a5d16ac3edc78b2a771a647c0c21b095468ef"}
Feb 16 17:14:56.280904 master-0 kubenswrapper[4167]: I0216 17:14:56.280865 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"6871209d50e722a2a11630103c9a0e86e18faa7cc8bd17ed36f25d6d4abbc155"}
Feb 16 17:14:56.283081 master-0 kubenswrapper[4167]: I0216 17:14:56.282631 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"3c0f9d70844a85bf1c2a4aaf6d3f62e34050072ca4a47e2d3f08d6945b6534a9"}
Feb 16 17:14:56.283981 master-0 kubenswrapper[4167]: I0216 17:14:56.283832 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:56.286398 master-0 kubenswrapper[4167]: I0216 17:14:56.286191 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:14:56.298574 master-0 kubenswrapper[4167]: I0216 17:14:56.298521 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:14:56.758155 master-0 kubenswrapper[4167]: I0216 17:14:56.757583 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:56.758155 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:56.758155 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:56.758155 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:56.758155 master-0 kubenswrapper[4167]: I0216 17:14:56.757644 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:57.339052 master-0 kubenswrapper[4167]: I0216 17:14:57.339013 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"bf55a464d01636c344c4211041c9e4a6cbe20bd14450c7a3003fe8c18c5fc450"}
Feb 16 17:14:57.343108 master-0 kubenswrapper[4167]: I0216 17:14:57.343074 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"48167f7958f079be9f38df909933908c7490dd9f2008247aca0f3eae6929ea97"}
Feb 16 17:14:57.348626 master-0 kubenswrapper[4167]: I0216 17:14:57.348568 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerStarted","Data":"1b593f959c471380eec535372fc7c882c7401dbf006c1a4484b0dd77ca11bb58"}
Feb 16 17:14:57.365087 master-0 kubenswrapper[4167]: I0216 17:14:57.364192 4167 generic.go:334] "Generic (PLEG): container finished" podID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerID="370d7368e50996d0d7579d5c1c2642c1f8c9eb27ecdaf0811aa1f2d4faf16d1c" exitCode=0
Feb 16 17:14:57.365087 master-0 kubenswrapper[4167]: I0216 17:14:57.364250 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerDied","Data":"370d7368e50996d0d7579d5c1c2642c1f8c9eb27ecdaf0811aa1f2d4faf16d1c"}
Feb 16 17:14:57.373546 master-0 kubenswrapper[4167]: I0216 17:14:57.373508 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerStarted","Data":"4592a4eaa04bafb6bbe94b87216e3a2f3a1314c0e37470ac34dbb8c88f6b8e44"}
Feb 16 17:14:57.383822 master-0 kubenswrapper[4167]: I0216 17:14:57.382176 4167 generic.go:334] "Generic (PLEG): container finished" podID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerID="abc290466d1067605570ce3384a8cdef38ad934ae6a70e6be1423d8d93dc6eb5" exitCode=0
Feb 16 17:14:57.383822 master-0 kubenswrapper[4167]: I0216 17:14:57.382224 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerDied","Data":"abc290466d1067605570ce3384a8cdef38ad934ae6a70e6be1423d8d93dc6eb5"}
Feb 16 17:14:57.387254 master-0 kubenswrapper[4167]: I0216 17:14:57.386307 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"5d9451c96c98af13cb798c2c5e2955ce2d811fd9cc5e1e55ff9ac4c446ae6e7e"}
Feb 16 17:14:57.388576 master-0 kubenswrapper[4167]: I0216 17:14:57.388535 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qqvg4" event={"ID":"1363cb7b-62cc-497b-af6f-4d5e0eb7f174","Type":"ContainerStarted","Data":"4e84b2065d1af8978ad93515e956abd6154a82b2a6c17439456375941f7a255b"}
Feb 16 17:14:57.396715 master-0 kubenswrapper[4167]: I0216 17:14:57.395175 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:14:57.402026 master-0 kubenswrapper[4167]: I0216 17:14:57.401717 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerStarted","Data":"d71c0d414d7ee8db843bc4ce26b9d372fabc4cab276f09113b6084d58b461806"}
Feb 16 17:14:57.438134 master-0 kubenswrapper[4167]: I0216 17:14:57.438047 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"4c1a6b0253eebc598c1d19e2bc8901bc0fc3435f1608b53f029ff531ef5d536b"}
Feb 16 17:14:57.448081 master-0 kubenswrapper[4167]: I0216 17:14:57.447919 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"6b62b01d85b7699e5e556ac71a310845e6b50e4710b03945dccf0e7bc14e56ea"}
Feb 16 17:14:57.453530 master-0 kubenswrapper[4167]: I0216 17:14:57.453502 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"d15b4d1a6a8536eea2aa39cd278ffc178d8cef246140518d56f1faaa885e619a"}
Feb 16 17:14:57.456339 master-0 kubenswrapper[4167]: I0216 17:14:57.456321 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"d72f324a55650c7405e43c567e917b447a66bed241765eff3340e11149168029"}
Feb 16 17:14:57.459191 master-0 kubenswrapper[4167]: I0216 17:14:57.458629 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerStarted","Data":"1f624baceef6090453354470adbca23ad324260cb5a3146d25137c56884ff67e"}
Feb 16 17:14:57.459308 master-0 kubenswrapper[4167]: I0216 17:14:57.459283 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:57.466165 master-0 kubenswrapper[4167]: I0216 17:14:57.466130 4167 patch_prober.go:28] interesting pod/controller-manager-7fc9897cf8-9rjwd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" start-of-body=
Feb 16 17:14:57.466239 master-0 kubenswrapper[4167]: I0216 17:14:57.466179 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused"
Feb 16 17:14:57.467870 master-0 kubenswrapper[4167]: I0216 17:14:57.467832 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerStarted","Data":"81b6be3a9390d67aaa62a013a1ec4e1d4dd124d8a7ebbce77a890ca7b5c2761d"}
Feb 16 17:14:57.469645 master-0 kubenswrapper[4167]: I0216 17:14:57.469621 4167 generic.go:334] "Generic (PLEG): container finished" podID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerID="b7a2c1096a31546fa473f7f1c81f541a000673649cc96f8e6bbe4974e00458f0" exitCode=0
Feb 16 17:14:57.469699 master-0 kubenswrapper[4167]: I0216 17:14:57.469661 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerDied","Data":"b7a2c1096a31546fa473f7f1c81f541a000673649cc96f8e6bbe4974e00458f0"}
Feb 16 17:14:57.504782 master-0 kubenswrapper[4167]: I0216 17:14:57.504370 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"201add77b7fd1e26ee29767f3e4d6ce42612e17d1a1e501190daa73b0bf3ad68"}
Feb 16 17:14:57.540867 master-0 kubenswrapper[4167]: I0216 17:14:57.540819 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerStarted","Data":"43c5a005dd7a25001bb9feb6c2a1407ea809f5d7df385d399f64fc0554e23523"}
Feb 16 17:14:57.545036 master-0 kubenswrapper[4167]: I0216 17:14:57.544733 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" event={"ID":"544c6815-81d7-422a-9e4a-5fcbfabe8da8","Type":"ContainerStarted","Data":"2455980e6d53934b52e5080b8ee3311c8bde147e1418e36ce2112399000fa925"}
Feb 16 17:14:57.545242 master-0 kubenswrapper[4167]: I0216 17:14:57.545148 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:14:57.547458 master-0 kubenswrapper[4167]: I0216 17:14:57.547074 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerStarted","Data":"a5daed247e3ea86dc7a10458a8c0ff11ac31e756522086c2ac98a2f1125b561c"}
Feb 16 17:14:57.550865 master-0 kubenswrapper[4167]: I0216 17:14:57.550544 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" event={"ID":"5a275679-b7b6-4c28-b389-94cd2b014d6c","Type":"ContainerStarted","Data":"61d87b339037e31a6b6263a1f9104ac8668a66669fa4343e3d898a074bb31102"}
Feb 16 17:14:57.554688 master-0 kubenswrapper[4167]: I0216 17:14:57.554530 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"b4eb28c976464930f1c03f92ec479debd9dd58656d0f14a479c1a70e1cff09c4"}
Feb 16 17:14:57.554688 master-0 kubenswrapper[4167]: I0216 17:14:57.554657 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:14:57.556609 master-0 kubenswrapper[4167]: I0216 17:14:57.556050 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerStarted","Data":"ccfd56e5629650dcc52c7297fb4921b657d9eb06f420b97278d5c481fb1e03bf"}
Feb 16 17:14:57.560287 master-0 kubenswrapper[4167]: I0216 17:14:57.560262 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" event={"ID":"48801344-a48a-493e-aea4-19d998d0b708","Type":"ContainerStarted","Data":"2ca6b95433456156a7d26505482fe45c8815f83af6b492554b24c8ab621388ee"}
Feb 16 17:14:57.754723 master-0 kubenswrapper[4167]: I0216 17:14:57.754364 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:57.754723 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:57.754723 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:57.754723 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:57.754723 master-0 kubenswrapper[4167]: I0216 17:14:57.754424 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:58.574847 master-0 kubenswrapper[4167]: I0216 17:14:58.574778 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"60857fac647be6655f4f25895bafd3b477983189710dbd5f1eb5aa32863d98e5"}
Feb 16 17:14:58.582406 master-0 kubenswrapper[4167]: I0216 17:14:58.582331 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"55e08c69eb23144d5aa7fed58f1dae3c3d541d066648d64253f1ea46150d45de"}
Feb 16 17:14:58.584676 master-0 kubenswrapper[4167]: I0216 17:14:58.584617 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:14:58.590814 master-0 kubenswrapper[4167]: I0216 17:14:58.590787 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerStarted","Data":"a5e1bc9705809e56cb5532964f561268e78fe67a532cfa1c9cdadd3fae8399c5"}
Feb 16 17:14:58.597680 master-0 kubenswrapper[4167]: I0216 17:14:58.596472 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"3578e7c097759d128de9a7cdaa82fbdaa554fe1149f249c339609e5acacc377e"}
Feb 16 17:14:58.597680 master-0 kubenswrapper[4167]: I0216 17:14:58.596501 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"a4a81494e8a1f9b6da2ca38e538dbd7b036da6ec2d128354a9233f983cc05bd7"}
Feb 16 17:14:58.627341 master-0 kubenswrapper[4167]: I0216 17:14:58.627286 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerStarted","Data":"ce1fecddf778d4ca8cd64c9cabae410947bc8980b5642363ae0b76afacbfeeea"}
Feb 16 17:14:58.663206 master-0 kubenswrapper[4167]: I0216 17:14:58.662737 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"6dd68bd49fb165551bba65c313f75ef9ca64d5244db2bb6f507206ab912ba745"}
Feb 16 17:14:58.663206 master-0 kubenswrapper[4167]: I0216 17:14:58.662802 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"fb267cad2e212cce7b077187fb13473a6bdbfaef80e56fef3224842aba558c20"}
Feb 16 17:14:58.672611 master-0 kubenswrapper[4167]: I0216 17:14:58.672532 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"dfcf44a82191101ede58522b2f1885bae69752d6fff22ba727eb9abcc1459ac5"}
Feb 16 17:14:58.672611 master-0 kubenswrapper[4167]: I0216 17:14:58.672578 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"90e176cdc4a300f72473c75de5d6adc3210b277899f4e0a21e71ec4156cc6e6b"}
Feb 16 17:14:58.677558 master-0 kubenswrapper[4167]: I0216 17:14:58.677512 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerStarted","Data":"186fd3f0db823e5328c89f464f7fcd6084fe37b5dd8507f965b2e41f003c7c49"}
Feb 16 17:14:58.678365 master-0 kubenswrapper[4167]: I0216 17:14:58.678335 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:14:58.680257 master-0 kubenswrapper[4167]: I0216 17:14:58.680207 4167 patch_prober.go:28] interesting pod/console-operator-7777d5cc66-64vhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.78:8443/readyz\": dial tcp 10.128.0.78:8443: connect: connection refused" start-of-body=
Feb 16 17:14:58.680342 master-0 kubenswrapper[4167]: I0216 17:14:58.680255 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.78:8443/readyz\": dial tcp 10.128.0.78:8443: connect: connection refused"
Feb 16 17:14:58.682231 master-0 kubenswrapper[4167]: I0216 17:14:58.682195 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"7015900215e20506adcf5495345e6bd2ad2664eeb0994f9660d3953cd6ae7d87"}
Feb 16 17:14:58.686917 master-0 kubenswrapper[4167]: I0216 17:14:58.686878 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerStarted","Data":"37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75"}
Feb 16 17:14:58.689185 master-0 kubenswrapper[4167]: I0216 17:14:58.689148 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:14:58.695292 master-0 kubenswrapper[4167]: I0216 17:14:58.695245 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"ad16040839827af79b426b3a5b5b65f211c5b6adf024fe6140adb8c12a4da675"}
Feb 16 17:14:58.705625 master-0 kubenswrapper[4167]: I0216 17:14:58.704845 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"d7731adcb1e4539a65b2c7f3daf888135793fea7b0a6cb8d9092fe38bfc1b95c"}
Feb 16 17:14:58.721338 master-0 kubenswrapper[4167]: I0216 17:14:58.721272 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerStarted","Data":"82f814829818865ca6f170f3bafba2be0d3e2f523200306ddbd0fd3eb4fe4e96"}
Feb 16 17:14:58.724004 master-0 kubenswrapper[4167]: I0216 17:14:58.723273 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"11617d13afe5f8d066713748644151c41066462b4ff447d2322da2af49234639"}
Feb 16 17:14:58.728517 master-0 kubenswrapper[4167]: I0216 17:14:58.727961 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"0c1926d5165564b892069993c2da74e652f04997f279c72560669a0edc4edc9e"}
Feb 16 17:14:58.731287 master-0 kubenswrapper[4167]: I0216 17:14:58.731249 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerStarted","Data":"597c497df5b8691b2fcf10efeeab6512b60a62d7b64ec015e279a0f4c9a23ce3"}
Feb 16 17:14:58.733172 master-0 kubenswrapper[4167]: I0216 17:14:58.733030 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"c0518932ae18e90d52ddc6e706ebc2050e4927706bf959c7d317686ddf39a8c4"}
Feb 16 17:14:58.733278 master-0 kubenswrapper[4167]: I0216 17:14:58.733167 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:14:58.735582 master-0 kubenswrapper[4167]: I0216 17:14:58.735090 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerStarted","Data":"6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256"}
Feb 16 17:14:58.737888 master-0 kubenswrapper[4167]: I0216 17:14:58.737805 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerStarted","Data":"eaa418d7f6a36f58cde0a1f71f41979582dd4e652c978199a1440c9abf983aef"}
Feb 16 17:14:58.740207 master-0 kubenswrapper[4167]: I0216 17:14:58.739601 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"6ecf5325ac595a4e389f176c15d5d6f6e061e84d5c7efa627d5409dfc8280c18"}
Feb 16 17:14:58.741331 master-0 kubenswrapper[4167]: I0216 17:14:58.740807 4167 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2" exitCode=0
Feb 16 17:14:58.741331 master-0 kubenswrapper[4167]: I0216 17:14:58.740842 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2"}
Feb 16 17:14:58.742609 master-0 kubenswrapper[4167]: I0216 17:14:58.742545 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"18eb2ea9f07d5c6d110107b6cae077ae63beca1d3e388a0edf9c5be4e7025b94"}
Feb 16 17:14:58.744162 master-0 kubenswrapper[4167]: I0216 17:14:58.744124 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"d3f14f6eb35d23eb8319deb391de3fca7798f82dbc6d17e8ad6ff98c43a1d058"}
Feb 16 17:14:58.744483 master-0 kubenswrapper[4167]: I0216 17:14:58.744462 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:14:58.745663 master-0 kubenswrapper[4167]: I0216 17:14:58.745633 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"43eb528b3bd9774e59d2bd74bbad7abccd777b07368ee07e8ee390fd02445251"}
Feb 16 17:14:58.745777 master-0 kubenswrapper[4167]: I0216 17:14:58.745757 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:14:58.748776 master-0 kubenswrapper[4167]: I0216 17:14:58.748730 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"b2286bcf587925ee921929470a8ad12ff00ac21d82c0b9d9106c62d62f43b303"}
Feb 16 17:14:58.750349 master-0 kubenswrapper[4167]: I0216 17:14:58.750313 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"b9259dd7380ae4bcf4ca64d09a9941e6cc688a7adfc8b3148ff68348868f4e9b"}
Feb 16 17:14:58.751593 master-0 kubenswrapper[4167]: I0216 17:14:58.751565 4167 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="d25e3b6b2ae168e74121c991cfd98405350d4cb7654458c9ed3a111f2c50e639" exitCode=0
Feb 16 17:14:58.751697 master-0 kubenswrapper[4167]: I0216 17:14:58.751673 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"d25e3b6b2ae168e74121c991cfd98405350d4cb7654458c9ed3a111f2c50e639"}
Feb 16 17:14:58.753413 master-0 kubenswrapper[4167]: I0216 17:14:58.753366 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:58.753413 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:58.753413 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:58.753413 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:58.753611 master-0 kubenswrapper[4167]: I0216 17:14:58.753411 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:58.754482 master-0 kubenswrapper[4167]: I0216 17:14:58.754295 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"0d0daa3f24697660fe022e8196e98f3bdaaa8ca58ffaa8746bbffff1ded535e4"}
Feb 16 17:14:58.761107 master-0 kubenswrapper[4167]: I0216 17:14:58.759250 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:14:58.811473 master-0 kubenswrapper[4167]: I0216 17:14:58.811416 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:14:59.760740 master-0 kubenswrapper[4167]: I0216 17:14:59.755944 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:14:59.760740 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:14:59.760740 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:14:59.760740 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:14:59.760740 master-0 kubenswrapper[4167]: I0216 17:14:59.756067 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:14:59.767558 master-0 kubenswrapper[4167]: I0216 17:14:59.767365 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"014df18f96d4689cdbe5b5e5f610e110c485c44cf2bfa273b7618be6223ab8da"}
Feb 16 17:14:59.785002 master-0 kubenswrapper[4167]: I0216 17:14:59.774823 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b"}
Feb 16 17:14:59.785002 master-0 kubenswrapper[4167]: I0216 17:14:59.774877 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4"}
Feb 16 17:14:59.785002 master-0 kubenswrapper[4167]: I0216 17:14:59.778205 4167 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="3b6d9d34dfa77e34f1823876a1f3db243321a7b08730772ce9a4f69968218e53" exitCode=0
Feb 16 17:14:59.785002 master-0 kubenswrapper[4167]: I0216 17:14:59.778252 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"3b6d9d34dfa77e34f1823876a1f3db243321a7b08730772ce9a4f69968218e53"}
Feb 16 17:14:59.785002 master-0 kubenswrapper[4167]: I0216 17:14:59.780512 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"9ecd4207552566962835ca1a9200f6fb8fc356429e9bc13ddc147855492b1c75"}
Feb 16 17:14:59.785002 master-0 kubenswrapper[4167]: I0216 17:14:59.782210 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerStarted","Data":"33f0fda402cdc03bfc8199763c81a81e3cd4ddfc874d90a32d5b6282e1837584"}
Feb 16 17:14:59.785002 master-0 kubenswrapper[4167]: I0216 17:14:59.783508 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"6c8043a73593eb818d47c831f8cfbf4afec441fd3742943afa12b44d4e57561c"}
Feb 16 17:14:59.790489 master-0 kubenswrapper[4167]: I0216 17:14:59.785367 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"ac4956c462bb8fea92f8db53bd1874b804dea857d75de80ca406b1072139abdb"}
Feb 16 17:14:59.790489 master-0 kubenswrapper[4167]: I0216 17:14:59.788977 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"7f86deeb630ebc6a955ba82a1abc0b13aa6b96c4f0240fac5e1a4c87ada309ab"}
Feb 16 17:14:59.792082 master-0 kubenswrapper[4167]: I0216 17:14:59.791486 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"d35a143a0c6ed1ce7662763f0d9a66b362f68bbc7be23ea6be7a5c81173502df"}
Feb 16 17:14:59.792082 master-0 kubenswrapper[4167]: I0216 17:14:59.791512 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"ee1025e9b4ffeb3f18a693ceb2d5bb7ea83a0ed3a79e2550a16a5a3521c2dd17"}
Feb 16 17:14:59.808417 master-0 kubenswrapper[4167]: I0216 17:14:59.808216 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"dbf513153b490360d777be6ca05f18ad905f65e50e441d5e6e8adfc27b930dc9"}
Feb 16 17:14:59.824649 master-0 kubenswrapper[4167]: I0216 17:14:59.824604 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"f812e7bcfc10722b8073186c03d561876e550c6cdad7f1232e13cf4de32d1296"}
Feb 16 17:14:59.824829 master-0 kubenswrapper[4167]: I0216 17:14:59.824663 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"94b035f260832506870f2b288720f1abb22262dbd77e6d2acf520d043c2d80ce"}
Feb 16 17:14:59.830088 master-0 kubenswrapper[4167]: I0216 17:14:59.830034 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"647f8e57100443092b8c3ff37a546c586b56745048e6f03ef72ab7b78b1506b2"}
Feb 16 17:14:59.835153 master-0 kubenswrapper[4167]: I0216 17:14:59.835104 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:15:00.760047 master-0 kubenswrapper[4167]: I0216 17:15:00.759572 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:15:00.760047 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:15:00.760047 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:15:00.760047 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:15:00.760047 master-0 kubenswrapper[4167]: I0216 17:15:00.759644 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:15:00.844450 master-0 kubenswrapper[4167]: I0216 17:15:00.843715 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerStarted","Data":"4c3708dd5173f57b63fb9db602aa499819921f3c4c1bcfe8a0eea93737da7e08"}
Feb 16 17:15:00.848433 master-0 kubenswrapper[4167]: I0216 17:15:00.848035 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60"}
Feb 16 17:15:00.848433 master-0 kubenswrapper[4167]: I0216 17:15:00.848081 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc"}
Feb 16 17:15:00.848433 master-0 kubenswrapper[4167]: I0216 17:15:00.848094 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87"}
Feb 16 17:15:00.848433 master-0 kubenswrapper[4167]: I0216 17:15:00.848105 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerStarted","Data":"5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a"}
Feb 16 17:15:00.850435 master-0 kubenswrapper[4167]: I0216 17:15:00.850390 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"127a39bb4a42f5db86f2d69dcf6b90ad653286c64c989311c25e1215aca40901"}
Feb 16 17:15:00.852810 master-0 kubenswrapper[4167]: I0216 17:15:00.852179 4167 generic.go:334] "Generic (PLEG): container finished" podID="0393fe12-2533-4c9c-a8e4-a58003c88f36" containerID="33f0fda402cdc03bfc8199763c81a81e3cd4ddfc874d90a32d5b6282e1837584" exitCode=0
Feb 16 17:15:00.852810 master-0 kubenswrapper[4167]: I0216 17:15:00.852215 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerDied","Data":"33f0fda402cdc03bfc8199763c81a81e3cd4ddfc874d90a32d5b6282e1837584"}
Feb 16 17:15:00.860690 master-0 kubenswrapper[4167]: I0216 17:15:00.858654 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"fcda713f163db03b897ab292aff9fcb7b6f07b2cb0cc8de01b750b4174bcb94a"}
Feb 16 17:15:00.860690 master-0 kubenswrapper[4167]: I0216 17:15:00.858698 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"f8e0fde3ac327eefe9afd4d8bd37736cca600b0b5d7bc7662c2dd9d135042d1f"}
Feb 16 17:15:00.860690 master-0 kubenswrapper[4167]: I0216 17:15:00.858714 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"e05dd3f5f3e806cfbd23e9c168d7df8ca93928467a534cded34e4d4897e99cda"}
Feb 16 17:15:00.861758 master-0 kubenswrapper[4167]: I0216 17:15:00.861706 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:15:01.514992 master-0 kubenswrapper[4167]: I0216 17:15:01.514915 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:15:01.539411 master-0 kubenswrapper[4167]: I0216 17:15:01.539351 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:15:01.540256 master-0 kubenswrapper[4167]: I0216 17:15:01.540234 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:15:01.569891 master-0 kubenswrapper[4167]: I0216 17:15:01.569826 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:15:01.592218 master-0 kubenswrapper[4167]: I0216 17:15:01.591182 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:15:01.754560 master-0 kubenswrapper[4167]: I0216 17:15:01.754516 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:15:01.754560 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:15:01.754560 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:15:01.754560 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:15:01.754843 master-0 kubenswrapper[4167]: I0216 17:15:01.754571 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:15:01.867067 master-0 kubenswrapper[4167]: I0216 17:15:01.866656 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerStarted","Data":"066bdd4f20e0a6244f992ccc6dcdaa26b8b14420e27b885c9487a8ceb4e614b3"}
Feb 16 17:15:01.872377 master-0 kubenswrapper[4167]: I0216 17:15:01.872300 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:15:01.877989 master-0 kubenswrapper[4167]: I0216 17:15:01.877325 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:15:01.877989 master-0 kubenswrapper[4167]: I0216 17:15:01.877394 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: I0216 17:15:01.898117 4167 patch_prober.go:28] interesting pod/apiserver-fc4bf7f79-tqnlw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]log ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]etcd ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/max-in-flight-filter ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-startinformers ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: livez check failed
Feb 16 17:15:01.898623 master-0 kubenswrapper[4167]: I0216 17:15:01.898201 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:15:02.597060 master-0 kubenswrapper[4167]: I0216 17:15:02.596982 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:15:02.754875 master-0 kubenswrapper[4167]: I0216 17:15:02.754788 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:15:02.754875 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:15:02.754875 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:15:02.754875 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:15:02.754875 master-0 kubenswrapper[4167]: I0216 17:15:02.754869 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:15:02.874847 master-0 kubenswrapper[4167]: I0216 17:15:02.874731 4167 generic.go:334] "Generic (PLEG): container finished" podID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerID="4c3708dd5173f57b63fb9db602aa499819921f3c4c1bcfe8a0eea93737da7e08" exitCode=0
Feb 16 17:15:02.875336 master-0 kubenswrapper[4167]: I0216 17:15:02.874945 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerDied","Data":"4c3708dd5173f57b63fb9db602aa499819921f3c4c1bcfe8a0eea93737da7e08"}
Feb 16 17:15:03.649111 master-0 kubenswrapper[4167]: I0216 17:15:03.649065 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:15:03.754145 master-0 kubenswrapper[4167]: I0216 17:15:03.754054 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:15:03.754145 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:15:03.754145 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:15:03.754145 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:15:03.754542 master-0 kubenswrapper[4167]: I0216 17:15:03.754142 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:15:03.891335 master-0 kubenswrapper[4167]: I0216 17:15:03.891253 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerStarted","Data":"3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67"}
Feb 16 17:15:04.757188 master-0 kubenswrapper[4167]: I0216 17:15:04.757128 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:15:04.757188 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:15:04.757188 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:15:04.757188 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:15:04.757533 master-0 kubenswrapper[4167]: I0216 17:15:04.757207 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:15:05.754481 master-0 kubenswrapper[4167]: I0216 17:15:05.754382 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:15:05.754481 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:15:05.754481 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:15:05.754481 master-0 kubenswrapper[4167]: healthz check failed
Feb 16 17:15:05.754481 master-0 kubenswrapper[4167]: I0216 17:15:05.754444 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:15:06.761224 master-0 kubenswrapper[4167]: I0216 17:15:06.761069 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:15:06.761224 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld
Feb 16 17:15:06.761224 master-0 kubenswrapper[4167]: [+]process-running ok
Feb 16 17:15:06.761224 master-0 kubenswrapper[4167]: healthz check failed
Feb 16
17:15:06.762288 master-0 kubenswrapper[4167]: I0216 17:15:06.761227 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:15:06.779459 master-0 kubenswrapper[4167]: I0216 17:15:06.779273 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:15:06.884853 master-0 kubenswrapper[4167]: I0216 17:15:06.884796 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:15:06.892278 master-0 kubenswrapper[4167]: I0216 17:15:06.892239 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:15:07.208610 master-0 kubenswrapper[4167]: I0216 17:15:07.208527 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:15:07.240153 master-0 kubenswrapper[4167]: I0216 17:15:07.240103 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:15:07.630030 master-0 kubenswrapper[4167]: I0216 17:15:07.629867 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:15:07.754534 master-0 kubenswrapper[4167]: I0216 17:15:07.754487 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:15:07.754534 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:15:07.754534 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:15:07.754534 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:15:07.754869 master-0 kubenswrapper[4167]: I0216 17:15:07.754541 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:15:08.754136 master-0 kubenswrapper[4167]: I0216 17:15:08.754093 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:15:08.754136 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:15:08.754136 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:15:08.754136 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:15:08.754731 master-0 kubenswrapper[4167]: I0216 17:15:08.754177 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:15:09.755152 master-0 kubenswrapper[4167]: I0216 17:15:09.755015 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:15:09.755152 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:15:09.755152 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:15:09.755152 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:15:09.755152 master-0 kubenswrapper[4167]: I0216 17:15:09.755148 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:15:09.846565 master-0 kubenswrapper[4167]: I0216 17:15:09.846493 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:15:10.755072 master-0 kubenswrapper[4167]: I0216 17:15:10.754983 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:15:10.755072 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:15:10.755072 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:15:10.755072 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:15:10.755691 master-0 kubenswrapper[4167]: I0216 17:15:10.755081 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:15:11.489046 master-0 kubenswrapper[4167]: I0216 17:15:11.488977 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:15:11.489461 master-0 kubenswrapper[4167]: I0216 17:15:11.489421 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:15:11.555292 master-0 kubenswrapper[4167]: I0216 17:15:11.555223 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:15:11.633759 master-0 kubenswrapper[4167]: I0216 17:15:11.633678 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:15:11.633759 master-0 kubenswrapper[4167]: I0216 17:15:11.633762 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:15:11.638044 master-0 kubenswrapper[4167]: I0216 17:15:11.638006 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:15:11.638044 master-0 kubenswrapper[4167]: I0216 17:15:11.638046 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:15:11.711111 master-0 kubenswrapper[4167]: I0216 17:15:11.711062 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:15:11.754424 master-0 kubenswrapper[4167]: I0216 17:15:11.754319 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:15:11.754424 master-0 kubenswrapper[4167]: [-]has-synced failed: reason withheld Feb 16 17:15:11.754424 master-0 kubenswrapper[4167]: [+]process-running ok Feb 16 17:15:11.754424 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:15:11.754424 master-0 kubenswrapper[4167]: I0216 17:15:11.754394 4167 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:15:11.988036 master-0 kubenswrapper[4167]: I0216 17:15:11.987951 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:15:12.006028 master-0 kubenswrapper[4167]: I0216 17:15:12.005929 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:15:12.757116 master-0 kubenswrapper[4167]: I0216 17:15:12.757057 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:15:12.759545 master-0 kubenswrapper[4167]: I0216 17:15:12.759511 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:15:18.474379 master-0 kubenswrapper[4167]: E0216 17:15:18.474293 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550\": container with ID starting with e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550 not found: ID does not exist" containerID="e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550" Feb 16 17:15:18.475332 master-0 kubenswrapper[4167]: I0216 17:15:18.474391 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550" err="rpc error: code = NotFound desc = could not find container \"e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550\": container with ID starting with e1757e5acb838902f4011b0dfc2a220b74f028cc05105e020bfd55c78d223550 not found: ID does not exist" Feb 16 17:15:18.475332 master-0 kubenswrapper[4167]: E0216 17:15:18.475156 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94\": container with ID starting with 3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94 not found: ID does not exist" containerID="3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94" Feb 16 17:15:18.475332 master-0 kubenswrapper[4167]: I0216 17:15:18.475209 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94" err="rpc error: code = NotFound desc = could not find container \"3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94\": container with ID starting with 3bc3b0a3c7a9d33ae08dca9280e63bf069111d5618b0180dcffe8f0d1a422a94 not found: ID does not exist" Feb 16 17:15:18.475808 master-0 kubenswrapper[4167]: E0216 17:15:18.475735 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"fba47eb24f74d8dc16f686568935b61829e866ee98f522e6506264cd0528411b\": container with ID starting with fba47eb24f74d8dc16f686568935b61829e866ee98f522e6506264cd0528411b not found: ID does not exist" containerID="fba47eb24f74d8dc16f686568935b61829e866ee98f522e6506264cd0528411b" Feb 16 17:15:18.475905 master-0 kubenswrapper[4167]: I0216 17:15:18.475808 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="fba47eb24f74d8dc16f686568935b61829e866ee98f522e6506264cd0528411b" err="rpc error: code = NotFound desc = could not find container \"fba47eb24f74d8dc16f686568935b61829e866ee98f522e6506264cd0528411b\": container with ID starting with fba47eb24f74d8dc16f686568935b61829e866ee98f522e6506264cd0528411b not found: ID does not exist" Feb 16 17:15:18.476472 master-0 kubenswrapper[4167]: E0216 17:15:18.476401 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606\": container with ID starting with c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606 not found: ID does not exist" containerID="c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606" Feb 16 17:15:18.476472 master-0 kubenswrapper[4167]: I0216 17:15:18.476457 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606" err="rpc error: code = NotFound desc = could not find container \"c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606\": container with ID starting with c533cb820095a78f39c6f42ffac03ee174e3ee777a839e55594e3c6025a91606 not found: ID does not exist" Feb 16 17:15:18.476927 master-0 kubenswrapper[4167]: E0216 17:15:18.476873 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a\": container with ID starting with 31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a not found: ID does not exist" containerID="31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a" Feb 16 17:15:18.476927 master-0 kubenswrapper[4167]: I0216 17:15:18.476918 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a" err="rpc error: code = NotFound desc = could not find container \"31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a\": container with ID starting with 31bd9a890a5059e0b580fdd2f2ff6c6e7a091febf9ee4623d9457ea21452090a not found: ID does not exist" Feb 16 17:15:18.477454 master-0 kubenswrapper[4167]: E0216 17:15:18.477373 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b\": container with ID starting with e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b not found: ID does not exist" containerID="e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b" Feb 16 17:15:18.477558 master-0 kubenswrapper[4167]: I0216 17:15:18.477414 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b" err="rpc error: code = NotFound desc = could not find container 
\"e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b\": container with ID starting with e2ac9e086f60d8394dafb914a71895490f5e6c94804446cbace8d6c68123a12b not found: ID does not exist" Feb 16 17:15:18.477896 master-0 kubenswrapper[4167]: E0216 17:15:18.477840 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239\": container with ID starting with 9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239 not found: ID does not exist" containerID="9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239" Feb 16 17:15:18.477896 master-0 kubenswrapper[4167]: I0216 17:15:18.477882 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239" err="rpc error: code = NotFound desc = could not find container \"9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239\": container with ID starting with 9a2847bba4fb3a03af2537d3d204da4564b5ff5aff8a875dc47e687e78893239 not found: ID does not exist" Feb 16 17:15:18.480674 master-0 kubenswrapper[4167]: E0216 17:15:18.480621 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01f4f89971ebb359e0eeec52882d07d21354b8e08fc6c1173fde440ef3e5d38b\": container with ID starting with 01f4f89971ebb359e0eeec52882d07d21354b8e08fc6c1173fde440ef3e5d38b not found: ID does not exist" containerID="01f4f89971ebb359e0eeec52882d07d21354b8e08fc6c1173fde440ef3e5d38b" Feb 16 17:15:18.480788 master-0 kubenswrapper[4167]: I0216 17:15:18.480666 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="01f4f89971ebb359e0eeec52882d07d21354b8e08fc6c1173fde440ef3e5d38b" err="rpc error: code = NotFound desc = could not find container \"01f4f89971ebb359e0eeec52882d07d21354b8e08fc6c1173fde440ef3e5d38b\": container with ID starting with 01f4f89971ebb359e0eeec52882d07d21354b8e08fc6c1173fde440ef3e5d38b not found: ID does not exist" Feb 16 17:15:18.481272 master-0 kubenswrapper[4167]: E0216 17:15:18.481207 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd7ca8585e9770f668282d25d327e2a91ec2db08fd3ae45538225dc8a5ab9091\": container with ID starting with cd7ca8585e9770f668282d25d327e2a91ec2db08fd3ae45538225dc8a5ab9091 not found: ID does not exist" containerID="cd7ca8585e9770f668282d25d327e2a91ec2db08fd3ae45538225dc8a5ab9091" Feb 16 17:15:18.481367 master-0 kubenswrapper[4167]: I0216 17:15:18.481278 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="cd7ca8585e9770f668282d25d327e2a91ec2db08fd3ae45538225dc8a5ab9091" err="rpc error: code = NotFound desc = could not find container \"cd7ca8585e9770f668282d25d327e2a91ec2db08fd3ae45538225dc8a5ab9091\": container with ID starting with cd7ca8585e9770f668282d25d327e2a91ec2db08fd3ae45538225dc8a5ab9091 not found: ID does not exist" Feb 16 17:15:18.482053 master-0 kubenswrapper[4167]: E0216 17:15:18.482007 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab4cf257e6e0f29ed254052561254039fa1b8a8f9b4ce54fa741917d9a4c1648\": container with ID starting with ab4cf257e6e0f29ed254052561254039fa1b8a8f9b4ce54fa741917d9a4c1648 not found: ID does not exist" 
containerID="ab4cf257e6e0f29ed254052561254039fa1b8a8f9b4ce54fa741917d9a4c1648" Feb 16 17:15:18.482053 master-0 kubenswrapper[4167]: I0216 17:15:18.482044 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="ab4cf257e6e0f29ed254052561254039fa1b8a8f9b4ce54fa741917d9a4c1648" err="rpc error: code = NotFound desc = could not find container \"ab4cf257e6e0f29ed254052561254039fa1b8a8f9b4ce54fa741917d9a4c1648\": container with ID starting with ab4cf257e6e0f29ed254052561254039fa1b8a8f9b4ce54fa741917d9a4c1648 not found: ID does not exist" Feb 16 17:15:18.482492 master-0 kubenswrapper[4167]: E0216 17:15:18.482440 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85003325a476039d5eb44432fdcf7d2532212a32580f381790355fd21b4f3730\": container with ID starting with 85003325a476039d5eb44432fdcf7d2532212a32580f381790355fd21b4f3730 not found: ID does not exist" containerID="85003325a476039d5eb44432fdcf7d2532212a32580f381790355fd21b4f3730" Feb 16 17:15:18.482584 master-0 kubenswrapper[4167]: I0216 17:15:18.482486 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="85003325a476039d5eb44432fdcf7d2532212a32580f381790355fd21b4f3730" err="rpc error: code = NotFound desc = could not find container \"85003325a476039d5eb44432fdcf7d2532212a32580f381790355fd21b4f3730\": container with ID starting with 85003325a476039d5eb44432fdcf7d2532212a32580f381790355fd21b4f3730 not found: ID does not exist" Feb 16 17:15:18.483028 master-0 kubenswrapper[4167]: E0216 17:15:18.482982 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe91129f000124aca4ffcb46fef9c002b10b433ceec06a4c01ffe2fc33ca2b2a\": container with ID starting with fe91129f000124aca4ffcb46fef9c002b10b433ceec06a4c01ffe2fc33ca2b2a not found: ID does not exist" containerID="fe91129f000124aca4ffcb46fef9c002b10b433ceec06a4c01ffe2fc33ca2b2a" Feb 16 17:15:18.483028 master-0 kubenswrapper[4167]: I0216 17:15:18.483017 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="fe91129f000124aca4ffcb46fef9c002b10b433ceec06a4c01ffe2fc33ca2b2a" err="rpc error: code = NotFound desc = could not find container \"fe91129f000124aca4ffcb46fef9c002b10b433ceec06a4c01ffe2fc33ca2b2a\": container with ID starting with fe91129f000124aca4ffcb46fef9c002b10b433ceec06a4c01ffe2fc33ca2b2a not found: ID does not exist" Feb 16 17:15:18.483635 master-0 kubenswrapper[4167]: E0216 17:15:18.483586 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38ea24f9e1f52bb2d156d60e530f0e5be87fcb1b940f2e51d9ccf98bf655afc7\": container with ID starting with 38ea24f9e1f52bb2d156d60e530f0e5be87fcb1b940f2e51d9ccf98bf655afc7 not found: ID does not exist" containerID="38ea24f9e1f52bb2d156d60e530f0e5be87fcb1b940f2e51d9ccf98bf655afc7" Feb 16 17:15:18.483635 master-0 kubenswrapper[4167]: I0216 17:15:18.483625 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="38ea24f9e1f52bb2d156d60e530f0e5be87fcb1b940f2e51d9ccf98bf655afc7" err="rpc error: code = NotFound desc = could not find container \"38ea24f9e1f52bb2d156d60e530f0e5be87fcb1b940f2e51d9ccf98bf655afc7\": container with ID starting with 38ea24f9e1f52bb2d156d60e530f0e5be87fcb1b940f2e51d9ccf98bf655afc7 not found: ID does not exist" Feb 16 17:15:18.484261 master-0 kubenswrapper[4167]: E0216 17:15:18.484198 4167 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff\": container with ID starting with 3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff not found: ID does not exist" containerID="3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff" Feb 16 17:15:18.484261 master-0 kubenswrapper[4167]: I0216 17:15:18.484254 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff" err="rpc error: code = NotFound desc = could not find container \"3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff\": container with ID starting with 3268564838eb6e6a4f98f7ce91f31bb8894c255d354c86d8cadc7b120d01a6ff not found: ID does not exist" Feb 16 17:15:25.898250 master-0 kubenswrapper[4167]: I0216 17:15:25.898126 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-lf4cb_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/authentication-operator/3.log" Feb 16 17:15:26.302152 master-0 kubenswrapper[4167]: I0216 17:15:26.301686 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-864ddd5f56-pm4rt_f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/router/2.log" Feb 16 17:15:26.893822 master-0 kubenswrapper[4167]: I0216 17:15:26.893733 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-66788cb45c-dp9bc_7390ccc6-dfbe-4f51-960c-7628f49bffb7/fix-audit-permissions/1.log" Feb 16 17:15:27.099419 master-0 kubenswrapper[4167]: I0216 17:15:27.099379 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-66788cb45c-dp9bc_7390ccc6-dfbe-4f51-960c-7628f49bffb7/oauth-apiserver/1.log" Feb 16 17:15:27.494151 master-0 kubenswrapper[4167]: I0216 17:15:27.494066 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-7777d5cc66-64vhv_0517b180-00ee-47fe-a8e7-36a3931b7e72/console-operator/3.log" Feb 16 17:15:28.294221 master-0 kubenswrapper[4167]: I0216 17:15:28.294149 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-dcd7b7d95-dhhfh_08a90dc5-b0d8-4aad-a002-736492b6c1a9/download-server/2.log" Feb 16 17:15:28.892813 master-0 kubenswrapper[4167]: I0216 17:15:28.892669 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-86b8869b79-nhxlp_d9859457-f0d1-4754-a6c5-cf05d5abf447/dns-operator/2.log" Feb 16 17:15:29.096984 master-0 kubenswrapper[4167]: I0216 17:15:29.096874 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-86b8869b79-nhxlp_d9859457-f0d1-4754-a6c5-cf05d5abf447/kube-rbac-proxy/1.log" Feb 16 17:15:29.693354 master-0 kubenswrapper[4167]: I0216 17:15:29.693266 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-qcgxx_2d96ccdc-0b09-437d-bfca-1958af5d9953/dns/1.log" Feb 16 17:15:29.893733 master-0 kubenswrapper[4167]: I0216 17:15:29.893672 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-qcgxx_2d96ccdc-0b09-437d-bfca-1958af5d9953/kube-rbac-proxy/2.log" Feb 16 17:15:30.291529 master-0 kubenswrapper[4167]: I0216 17:15:30.291490 4167 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-dns_node-resolver-vfxj4_a6fe41b0-1a42-4f07-8220-d9aaa50788ad/dns-node-resolver/3.log" Feb 16 17:15:30.692980 master-0 kubenswrapper[4167]: I0216 17:15:30.692935 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-96c8c64b8-zwwnk_5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/cluster-image-registry-operator/1.log" Feb 16 17:15:31.094180 master-0 kubenswrapper[4167]: E0216 17:15:31.094002 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf\": container with ID starting with 2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf not found: ID does not exist" containerID="2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf" Feb 16 17:15:31.496011 master-0 kubenswrapper[4167]: I0216 17:15:31.495947 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k_442600dc-09b2-4fee-9f89-777296b2ee40/kube-controller-manager-operator/3.log" Feb 16 17:15:31.591594 master-0 kubenswrapper[4167]: I0216 17:15:31.591518 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:15:31.639058 master-0 kubenswrapper[4167]: I0216 17:15:31.638925 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:15:31.645092 master-0 kubenswrapper[4167]: I0216 17:15:31.645023 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:15:31.696243 master-0 kubenswrapper[4167]: I0216 17:15:31.693548 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:15:31.718170 master-0 kubenswrapper[4167]: I0216 17:15:31.718126 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/kube-controller-manager/6.log" Feb 16 17:15:32.103291 master-0 kubenswrapper[4167]: I0216 17:15:32.102324 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/kube-controller-manager/7.log" Feb 16 17:15:32.300340 master-0 kubenswrapper[4167]: I0216 17:15:32.300257 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/cluster-policy-controller/2.log" Feb 16 17:15:32.893453 master-0 kubenswrapper[4167]: I0216 17:15:32.893382 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/setup/1.log" Feb 16 17:15:33.094808 master-0 kubenswrapper[4167]: I0216 17:15:33.094688 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/5.log" Feb 16 17:15:33.693919 master-0 kubenswrapper[4167]: I0216 17:15:33.693871 4167 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-686c884b4d-ksx48_c8729b1a-e365-4cf7-8a05-91a9987dabe9/machine-config-controller/2.log" Feb 16 17:15:33.895891 master-0 kubenswrapper[4167]: I0216 17:15:33.895819 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-686c884b4d-ksx48_c8729b1a-e365-4cf7-8a05-91a9987dabe9/kube-rbac-proxy/2.log" Feb 16 17:15:34.498169 master-0 kubenswrapper[4167]: I0216 17:15:34.497758 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-98q6v_648abb6c-9c81-4e5c-b5f1-3b7eb254f743/machine-config-daemon/4.log" Feb 16 17:15:34.693685 master-0 kubenswrapper[4167]: I0216 17:15:34.693622 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-98q6v_648abb6c-9c81-4e5c-b5f1-3b7eb254f743/kube-rbac-proxy/2.log" Feb 16 17:15:35.294285 master-0 kubenswrapper[4167]: I0216 17:15:35.294204 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-84976bb859-rsnqc_f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/machine-config-operator/2.log" Feb 16 17:15:35.495709 master-0 kubenswrapper[4167]: I0216 17:15:35.495640 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-84976bb859-rsnqc_f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/kube-rbac-proxy/1.log" Feb 16 17:15:35.894554 master-0 kubenswrapper[4167]: I0216 17:15:35.894481 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-2ws9r_9c48005e-c4df-4332-87fc-ec028f2c6921/machine-config-server/3.log" Feb 16 17:15:41.330992 master-0 kubenswrapper[4167]: I0216 17:15:41.330923 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 17:15:41.331597 master-0 kubenswrapper[4167]: I0216 17:15:41.331264 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="prometheus" containerID="cri-o://fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4" gracePeriod=600 Feb 16 17:15:41.331597 master-0 kubenswrapper[4167]: I0216 17:15:41.331381 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="thanos-sidecar" containerID="cri-o://5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a" gracePeriod=600 Feb 16 17:15:41.331597 master-0 kubenswrapper[4167]: I0216 17:15:41.331421 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="config-reloader" containerID="cri-o://ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b" gracePeriod=600 Feb 16 17:15:41.331597 master-0 kubenswrapper[4167]: I0216 17:15:41.331400 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy-web" containerID="cri-o://686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87" gracePeriod=600 Feb 16 17:15:41.331597 master-0 kubenswrapper[4167]: I0216 17:15:41.331502 4167 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy" containerID="cri-o://b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc" gracePeriod=600 Feb 16 17:15:41.331809 master-0 kubenswrapper[4167]: I0216 17:15:41.331739 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy-thanos" containerID="cri-o://90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60" gracePeriod=600 Feb 16 17:15:41.355945 master-0 kubenswrapper[4167]: E0216 17:15:41.355892 4167 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cd29be8_2b2a_49f7_badd_ff53c686a63d.slice/crio-5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:15:42.156767 master-0 kubenswrapper[4167]: I0216 17:15:42.156685 4167 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60" exitCode=0 Feb 16 17:15:42.156767 master-0 kubenswrapper[4167]: I0216 17:15:42.156737 4167 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc" exitCode=0 Feb 16 17:15:42.156767 master-0 kubenswrapper[4167]: I0216 17:15:42.156756 4167 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a" exitCode=0 Feb 16 17:15:42.156767 master-0 kubenswrapper[4167]: I0216 17:15:42.156773 4167 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b" exitCode=0 Feb 16 17:15:42.156767 master-0 kubenswrapper[4167]: I0216 17:15:42.156790 4167 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4" exitCode=0 Feb 16 17:15:42.157685 master-0 kubenswrapper[4167]: I0216 17:15:42.156779 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60"} Feb 16 17:15:42.157685 master-0 kubenswrapper[4167]: I0216 17:15:42.156851 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc"} Feb 16 17:15:42.157685 master-0 kubenswrapper[4167]: I0216 17:15:42.156876 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a"} Feb 16 17:15:42.157685 master-0 kubenswrapper[4167]: I0216 17:15:42.156897 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b"} Feb 16 17:15:42.157685 master-0 kubenswrapper[4167]: I0216 17:15:42.156920 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4"} Feb 16 17:15:42.598710 master-0 kubenswrapper[4167]: E0216 17:15:42.598620 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4 is running failed: container process not found" containerID="fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Feb 16 17:15:42.600084 master-0 kubenswrapper[4167]: E0216 17:15:42.599884 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4 is running failed: container process not found" containerID="fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Feb 16 17:15:42.600768 master-0 kubenswrapper[4167]: E0216 17:15:42.600682 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4 is running failed: container process not found" containerID="fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Feb 16 17:15:42.600896 master-0 kubenswrapper[4167]: E0216 17:15:42.600785 4167 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4 is running failed: container process not found" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="prometheus" Feb 16 17:15:42.803065 master-0 kubenswrapper[4167]: I0216 17:15:42.803030 4167 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:15:42.977022 master-0 kubenswrapper[4167]: I0216 17:15:42.976926 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977022 master-0 kubenswrapper[4167]: I0216 17:15:42.977020 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977296 master-0 kubenswrapper[4167]: I0216 17:15:42.977054 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977296 master-0 kubenswrapper[4167]: I0216 17:15:42.977105 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977296 master-0 kubenswrapper[4167]: I0216 17:15:42.977167 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977296 master-0 kubenswrapper[4167]: I0216 17:15:42.977196 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977296 master-0 kubenswrapper[4167]: I0216 17:15:42.977228 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgm4p\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977296 master-0 kubenswrapper[4167]: I0216 17:15:42.977254 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977296 master-0 kubenswrapper[4167]: I0216 17:15:42.977279 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977603 
master-0 kubenswrapper[4167]: I0216 17:15:42.977317 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977603 master-0 kubenswrapper[4167]: I0216 17:15:42.977349 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977603 master-0 kubenswrapper[4167]: I0216 17:15:42.977375 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977603 master-0 kubenswrapper[4167]: I0216 17:15:42.977403 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977603 master-0 kubenswrapper[4167]: I0216 17:15:42.977435 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977603 master-0 kubenswrapper[4167]: I0216 17:15:42.977498 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977603 master-0 kubenswrapper[4167]: I0216 17:15:42.977527 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977603 master-0 kubenswrapper[4167]: I0216 17:15:42.977556 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.977603 master-0 kubenswrapper[4167]: I0216 17:15:42.977584 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db\") pod \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\" (UID: \"1cd29be8-2b2a-49f7-badd-ff53c686a63d\") " Feb 16 17:15:42.979306 master-0 kubenswrapper[4167]: I0216 17:15:42.979256 4167 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:15:42.979391 master-0 kubenswrapper[4167]: I0216 17:15:42.979283 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:15:42.979862 master-0 kubenswrapper[4167]: I0216 17:15:42.979817 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:15:42.980062 master-0 kubenswrapper[4167]: I0216 17:15:42.979988 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:15:42.980496 master-0 kubenswrapper[4167]: I0216 17:15:42.980464 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:15:42.982681 master-0 kubenswrapper[4167]: I0216 17:15:42.982629 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:15:42.982931 master-0 kubenswrapper[4167]: I0216 17:15:42.982904 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out" (OuterVolumeSpecName: "config-out") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:15:42.983215 master-0 kubenswrapper[4167]: I0216 17:15:42.983165 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:15:42.983215 master-0 kubenswrapper[4167]: I0216 17:15:42.983200 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p" (OuterVolumeSpecName: "kube-api-access-lgm4p") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "kube-api-access-lgm4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:15:42.983367 master-0 kubenswrapper[4167]: I0216 17:15:42.983325 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:15:42.983631 master-0 kubenswrapper[4167]: I0216 17:15:42.983592 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:15:42.983778 master-0 kubenswrapper[4167]: I0216 17:15:42.983746 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:15:42.986336 master-0 kubenswrapper[4167]: I0216 17:15:42.986306 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:15:42.986678 master-0 kubenswrapper[4167]: I0216 17:15:42.986646 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:15:42.986761 master-0 kubenswrapper[4167]: I0216 17:15:42.986712 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:15:42.988293 master-0 kubenswrapper[4167]: I0216 17:15:42.988255 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:15:42.988706 master-0 kubenswrapper[4167]: I0216 17:15:42.988616 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config" (OuterVolumeSpecName: "config") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:15:43.042330 master-0 kubenswrapper[4167]: I0216 17:15:43.042279 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config" (OuterVolumeSpecName: "web-config") pod "1cd29be8-2b2a-49f7-badd-ff53c686a63d" (UID: "1cd29be8-2b2a-49f7-badd-ff53c686a63d"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:15:43.079491 master-0 kubenswrapper[4167]: I0216 17:15:43.079431 4167 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079491 master-0 kubenswrapper[4167]: I0216 17:15:43.079467 4167 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079491 master-0 kubenswrapper[4167]: I0216 17:15:43.079480 4167 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgm4p\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-kube-api-access-lgm4p\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079491 master-0 kubenswrapper[4167]: I0216 17:15:43.079495 4167 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079509 4167 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config-out\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079526 4167 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079566 4167 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079579 4167 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-web-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079591 4167 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1cd29be8-2b2a-49f7-badd-ff53c686a63d-tls-assets\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079603 4167 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-grpc-tls\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079615 4167 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079626 4167 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079642 4167 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079655 4167 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/1cd29be8-2b2a-49f7-badd-ff53c686a63d-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079666 4167 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079679 4167 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1cd29be8-2b2a-49f7-badd-ff53c686a63d-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079693 4167 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.079787 master-0 kubenswrapper[4167]: I0216 17:15:43.079705 4167 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1cd29be8-2b2a-49f7-badd-ff53c686a63d-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:15:43.169370 master-0 kubenswrapper[4167]: I0216 17:15:43.169270 4167 generic.go:334] "Generic (PLEG): container finished" podID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerID="686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87" exitCode=0
Feb 16 17:15:43.169370 master-0 kubenswrapper[4167]: I0216 17:15:43.169315 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87"}
Feb 16 17:15:43.169370 master-0 kubenswrapper[4167]: I0216 17:15:43.169360 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"1cd29be8-2b2a-49f7-badd-ff53c686a63d","Type":"ContainerDied","Data":"ab93434b1ef87aa6da0d58184cdc1a651c2aec3cff4b94bae1b33fd12276cd11"}
Feb 16 17:15:43.169370 master-0 kubenswrapper[4167]: I0216 17:15:43.169380 4167 scope.go:117] "RemoveContainer" containerID="90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60"
Feb 16 17:15:43.170004 master-0 kubenswrapper[4167]: I0216 17:15:43.169414 4167 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:15:43.193392 master-0 kubenswrapper[4167]: I0216 17:15:43.193345 4167 scope.go:117] "RemoveContainer" containerID="b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc"
Feb 16 17:15:43.218635 master-0 kubenswrapper[4167]: I0216 17:15:43.218592 4167 scope.go:117] "RemoveContainer" containerID="686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87"
Feb 16 17:15:43.218852 master-0 kubenswrapper[4167]: I0216 17:15:43.218798 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 17:15:43.225015 master-0 kubenswrapper[4167]: I0216 17:15:43.224973 4167 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 17:15:43.233816 master-0 kubenswrapper[4167]: I0216 17:15:43.233771 4167 scope.go:117] "RemoveContainer" containerID="5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a"
Feb 16 17:15:43.254093 master-0 kubenswrapper[4167]: I0216 17:15:43.254008 4167 scope.go:117] "RemoveContainer" containerID="ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b"
Feb 16 17:15:43.274646 master-0 kubenswrapper[4167]: I0216 17:15:43.274585 4167 scope.go:117] "RemoveContainer" containerID="fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4"
Feb 16 17:15:43.288797 master-0 kubenswrapper[4167]: I0216 17:15:43.288758 4167 scope.go:117] "RemoveContainer" containerID="8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2"
Feb 16 17:15:43.306404 master-0 kubenswrapper[4167]: I0216 17:15:43.306365 4167 scope.go:117] "RemoveContainer" containerID="90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60"
Feb 16 17:15:43.306835 master-0 kubenswrapper[4167]: E0216 17:15:43.306796 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60\": container with ID starting with 90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60 not found: ID does not exist" containerID="90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60"
Feb 16 17:15:43.306901 master-0 kubenswrapper[4167]: I0216 17:15:43.306836 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60"} err="failed to get container status \"90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60\": rpc error: code = NotFound desc = could not find container \"90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60\": container with ID starting with 90e54b033aa4260eb52b48645f94866cbe3ba5be21ffcc3d30b71a50cb404c60 not found: ID does not exist"
Feb 16 17:15:43.306901 master-0 kubenswrapper[4167]: I0216 17:15:43.306881 4167 scope.go:117] "RemoveContainer" containerID="b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc"
Feb 16 17:15:43.307233 master-0 kubenswrapper[4167]: E0216 17:15:43.307212 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc\": container with ID starting with b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc not found: ID does not exist" containerID="b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc"
Feb 16 17:15:43.307302 master-0 kubenswrapper[4167]: I0216 17:15:43.307232 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc"} err="failed to get container status \"b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc\": rpc error: code = NotFound desc = could not find container \"b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc\": container with ID starting with b4845c2ffe41f4498d078fd73515d97a8341dbecaec68cf2489590e8b32f1efc not found: ID does not exist"
Feb 16 17:15:43.307302 master-0 kubenswrapper[4167]: I0216 17:15:43.307244 4167 scope.go:117] "RemoveContainer" containerID="686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87"
Feb 16 17:15:43.307744 master-0 kubenswrapper[4167]: E0216 17:15:43.307706 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87\": container with ID starting with 686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87 not found: ID does not exist" containerID="686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87"
Feb 16 17:15:43.307818 master-0 kubenswrapper[4167]: I0216 17:15:43.307737 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87"} err="failed to get container status \"686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87\": rpc error: code = NotFound desc = could not find container \"686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87\": container with ID starting with 686d269cdb80a111664589995a5036e0dd37d6ddf66f17cad1121e2a0bedbe87 not found: ID does not exist"
Feb 16 17:15:43.307818 master-0 kubenswrapper[4167]: I0216 17:15:43.307761 4167 scope.go:117] "RemoveContainer" containerID="5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a"
Feb 16 17:15:43.308066 master-0 kubenswrapper[4167]: E0216 17:15:43.308040 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a\": container with ID starting with 5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a not found: ID does not exist" containerID="5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a"
Feb 16 17:15:43.308134 master-0 kubenswrapper[4167]: I0216 17:15:43.308065 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a"} err="failed to get container status \"5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a\": rpc error: code = NotFound desc = could not find container \"5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a\": container with ID starting with 5b1f2ccc277f2c0a5ad806d40473407892d8606c2eb1ffe4b7f4720ab1dc152a not found: ID does not exist"
Feb 16 17:15:43.308134 master-0 kubenswrapper[4167]: I0216 17:15:43.308078 4167 scope.go:117] "RemoveContainer" containerID="ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b"
Feb 16 17:15:43.308288 master-0 kubenswrapper[4167]: E0216 17:15:43.308263 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b\": container with ID starting with ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b not found: ID does not exist" containerID="ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b"
Feb 16 17:15:43.308358 master-0 kubenswrapper[4167]: I0216 17:15:43.308287 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b"} err="failed to get container status \"ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b\": rpc error: code = NotFound desc = could not find container \"ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b\": container with ID starting with ade15cf4b2aaf2dd26ff870ce6199f1f51d17aa8957040f94d4d5d686ded606b not found: ID does not exist"
Feb 16 17:15:43.308358 master-0 kubenswrapper[4167]: I0216 17:15:43.308302 4167 scope.go:117] "RemoveContainer" containerID="fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4"
Feb 16 17:15:43.308494 master-0 kubenswrapper[4167]: E0216 17:15:43.308470 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4\": container with ID starting with fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4 not found: ID does not exist" containerID="fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4"
Feb 16 17:15:43.308546 master-0 kubenswrapper[4167]: I0216 17:15:43.308492 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4"} err="failed to get container status \"fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4\": rpc error: code = NotFound desc = could not find container \"fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4\": container with ID starting with fe32f7de973862c4f9ccce7873c20224854d156058fa081342655a7ca263b6c4 not found: ID does not exist"
Feb 16 17:15:43.308546 master-0 kubenswrapper[4167]: I0216 17:15:43.308505 4167 scope.go:117] "RemoveContainer" containerID="8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2"
Feb 16 17:15:43.308673 master-0 kubenswrapper[4167]: E0216 17:15:43.308651 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2\": container with ID starting with 8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2 not found: ID does not exist" containerID="8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2"
Feb 16 17:15:43.308729 master-0 kubenswrapper[4167]: I0216 17:15:43.308672 4167 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2"} err="failed to get container status \"8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2\": rpc error: code = NotFound desc = could not find container \"8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2\": container with ID starting with 8d38b3dfba8a4db2b412df3ea70b7fbfe74d4d324981121dd71d5baf29b619f2 not found: ID does not exist"
Feb 16 17:15:44.459719 master-0 kubenswrapper[4167]: I0216 17:15:44.459679 4167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" path="/var/lib/kubelet/pods/1cd29be8-2b2a-49f7-badd-ff53c686a63d/volumes"
Feb 16 17:16:18.518678 master-0 kubenswrapper[4167]: E0216 17:16:18.518571 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde\": container with ID starting with 15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde not found: ID does not exist" containerID="15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde"
Feb 16 17:16:18.518678 master-0 kubenswrapper[4167]: I0216 17:16:18.518660 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde" err="rpc error: code = NotFound desc = could not find container \"15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde\": container with ID starting with 15d0ed427ea76ce837f303235b162f71a48c98f5857055297e1b3bf45627edde not found: ID does not exist"
Feb 16 17:16:18.519763 master-0 kubenswrapper[4167]: E0216 17:16:18.519247 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6648e1eebfd3e88db006d1e478b0438156e9d91120229014683317f4088a677e\": container with ID starting with 6648e1eebfd3e88db006d1e478b0438156e9d91120229014683317f4088a677e not found: ID does not exist" containerID="6648e1eebfd3e88db006d1e478b0438156e9d91120229014683317f4088a677e"
Feb 16 17:16:18.519763 master-0 kubenswrapper[4167]: I0216 17:16:18.519286 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="6648e1eebfd3e88db006d1e478b0438156e9d91120229014683317f4088a677e" err="rpc error: code = NotFound desc = could not find container \"6648e1eebfd3e88db006d1e478b0438156e9d91120229014683317f4088a677e\": container with ID starting with 6648e1eebfd3e88db006d1e478b0438156e9d91120229014683317f4088a677e not found: ID does not exist"
Feb 16 17:16:18.519880 master-0 kubenswrapper[4167]: E0216 17:16:18.519818 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240\": container with ID starting with 88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240 not found: ID does not exist" containerID="88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240"
Feb 16 17:16:18.519880 master-0 kubenswrapper[4167]: I0216 17:16:18.519864 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240" err="rpc error: code = NotFound desc = could not find container \"88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240\": container with ID starting with 88b34e595911c25aba850a546f90a694c0bd49c6505002297eb0ff69b947d240 not found: ID does not exist"
Feb 16 17:16:18.520245 master-0 kubenswrapper[4167]: E0216 17:16:18.520196 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588\": container with ID starting with a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588 not found: ID does not exist" containerID="a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588"
Feb 16 17:16:18.520245 master-0 kubenswrapper[4167]: I0216 17:16:18.520236 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588" err="rpc error: code = NotFound desc = could not find container \"a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588\": container with ID starting with a4d180212b837cef90316f3a72302aee5c06f2305971b7807d93998938653588 not found: ID does not exist"
Feb 16 17:16:18.520603 master-0 kubenswrapper[4167]: E0216 17:16:18.520550 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0\": container with ID starting with 4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0 not found: ID does not exist" containerID="4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0"
Feb 16 17:16:18.520663 master-0 kubenswrapper[4167]: I0216 17:16:18.520599 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0" err="rpc error: code = NotFound desc = could not find container \"4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0\": container with ID starting with 4286c5fe2de6d9fcd50051605f2f4b5c1cba939d016239a3d000fd0f9a25e9f0 not found: ID does not exist"
Feb 16 17:16:18.521391 master-0 kubenswrapper[4167]: E0216 17:16:18.521343 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988\": container with ID starting with d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988 not found: ID does not exist" containerID="d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988"
Feb 16 17:16:18.521462 master-0 kubenswrapper[4167]: I0216 17:16:18.521389 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988" err="rpc error: code = NotFound desc = could not find container \"d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988\": container with ID starting with d7761c8f384f9fd0a76d9ffed7d0a9aec69eee80353e9faf16ba81e81e59c988 not found: ID does not exist"
Feb 16 17:16:18.521790 master-0 kubenswrapper[4167]: E0216 17:16:18.521734 4167 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62\": container with ID starting with 1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62 not found: ID does not exist" containerID="1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62"
Feb 16 17:16:18.521790 master-0 kubenswrapper[4167]: I0216 17:16:18.521780 4167 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62" err="rpc error: code = NotFound desc = could not find container \"1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62\": container with ID starting with 1302e1b555512ed30fe816d84e187d9499595f02c728ef307269b3bb72731f62 not found: ID does not exist"
Feb 16 17:16:47.769647 master-0 kubenswrapper[4167]: I0216 17:16:47.769568 4167 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 17:16:47.770549 master-0 kubenswrapper[4167]: I0216 17:16:47.770289 4167 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.950758 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-599b567ff7-nrcpr"]
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: E0216 17:16:56.951331 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy-web"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.951371 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy-web"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: E0216 17:16:56.951405 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="config-reloader"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.951424 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="config-reloader"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: E0216 17:16:56.951473 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.951496 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: E0216 17:16:56.951519 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.951537 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: E0216 17:16:56.951564 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="thanos-sidecar"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.951581 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="thanos-sidecar"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: E0216 17:16:56.951623 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy-thanos"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.951642 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy-thanos"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: E0216 17:16:56.951674 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="init-config-reloader"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.951692 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="init-config-reloader"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: E0216 17:16:56.951715 4167 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="prometheus"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.951734 4167 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="prometheus"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.952107 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.952140 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy-web"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.952169 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="thanos-sidecar"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.952196 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="config-reloader"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.952221 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="kube-rbac-proxy-thanos"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.952254 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cd29be8-2b2a-49f7-badd-ff53c686a63d" containerName="prometheus"
Feb 16 17:16:56.952617 master-0 kubenswrapper[4167]: I0216 17:16:56.952291 4167 memory_manager.go:354] "RemoveStaleState removing state" podUID="32286c81635de6de1cf7f328273c1a49" containerName="startup-monitor"
Feb 16 17:16:56.955337 master-0 kubenswrapper[4167]: I0216 17:16:56.953443 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:16:56.957526 master-0 kubenswrapper[4167]: I0216 17:16:56.957492 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 17:16:56.958004 master-0 kubenswrapper[4167]: I0216 17:16:56.957983 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 16 17:16:56.958839 master-0 kubenswrapper[4167]: I0216 17:16:56.958814 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-sk6rc"
Feb 16 17:16:56.959204 master-0 kubenswrapper[4167]: I0216 17:16:56.959181 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 16 17:16:56.959621 master-0 kubenswrapper[4167]: I0216 17:16:56.959596 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 16 17:16:56.960006 master-0 kubenswrapper[4167]: I0216 17:16:56.959983 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 16 17:16:56.960593 master-0 kubenswrapper[4167]: I0216 17:16:56.960508 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r"]
Feb 16 17:16:56.961921 master-0 kubenswrapper[4167]: I0216 17:16:56.961882 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r"
Feb 16 17:16:56.964779 master-0 kubenswrapper[4167]: I0216 17:16:56.963304 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"]
Feb 16 17:16:56.964779 master-0 kubenswrapper[4167]: I0216 17:16:56.964291 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"
Feb 16 17:16:56.965384 master-0 kubenswrapper[4167]: I0216 17:16:56.965353 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-4vsn8"
Feb 16 17:16:56.965697 master-0 kubenswrapper[4167]: I0216 17:16:56.965452 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 17:16:56.967300 master-0 kubenswrapper[4167]: I0216 17:16:56.967270 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-wxz7g"
Feb 16 17:16:56.967445 master-0 kubenswrapper[4167]: I0216 17:16:56.967424 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 17:16:56.967589 master-0 kubenswrapper[4167]: I0216 17:16:56.967562 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 17:16:56.968863 master-0 kubenswrapper[4167]: I0216 17:16:56.968829 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 17:16:56.972381 master-0 kubenswrapper[4167]: I0216 17:16:56.972323 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 16 17:16:56.976354 master-0 kubenswrapper[4167]: I0216 17:16:56.976311 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:56.976929 master-0 kubenswrapper[4167]: I0216 17:16:56.976861 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-795746f87c-qdv9c"]
Feb 16 17:16:56.978550 master-0 kubenswrapper[4167]: I0216 17:16:56.977863 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 16 17:16:56.978550 master-0 kubenswrapper[4167]: I0216 17:16:56.978265 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:56.978550 master-0 kubenswrapper[4167]: I0216 17:16:56.978321 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 16 17:16:56.978550 master-0 kubenswrapper[4167]: I0216 17:16:56.978422 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 16 17:16:56.979625 master-0 kubenswrapper[4167]: I0216 17:16:56.978732 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 16 17:16:56.986285 master-0 kubenswrapper[4167]: I0216 17:16:56.984057 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 16 17:16:56.986285 master-0 kubenswrapper[4167]: I0216 17:16:56.984373 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-b4rnj"
Feb 16 17:16:56.986285 master-0 kubenswrapper[4167]: I0216 17:16:56.984596 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 16 17:16:56.988084 master-0 kubenswrapper[4167]: I0216 17:16:56.988020 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 17:16:56.991834 master-0 kubenswrapper[4167]: I0216 17:16:56.991766 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xv2wv"]
Feb 16 17:16:56.992041 master-0 kubenswrapper[4167]: I0216 17:16:56.991996 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:56.994053 master-0 kubenswrapper[4167]: I0216 17:16:56.992876 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xv2wv"
Feb 16 17:16:56.994332 master-0 kubenswrapper[4167]: I0216 17:16:56.994293 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 16 17:16:56.995135 master-0 kubenswrapper[4167]: I0216 17:16:56.995099 4167 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"]
Feb 16 17:16:56.996098 master-0 kubenswrapper[4167]: I0216 17:16:56.996063 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:56.997376 master-0 kubenswrapper[4167]: I0216 17:16:56.997329 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 16 17:16:56.999001 master-0 kubenswrapper[4167]: I0216 17:16:56.998907 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"]
Feb 16 17:16:57.003150 master-0 kubenswrapper[4167]: I0216 17:16:57.003091 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r"]
Feb 16 17:16:57.005558 master-0 kubenswrapper[4167]: I0216 17:16:57.005507 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 16 17:16:57.005875 master-0 kubenswrapper[4167]: I0216 17:16:57.005833 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 16 17:16:57.006347 master-0 kubenswrapper[4167]: I0216 17:16:57.006299 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 16 17:16:57.006683 master-0 kubenswrapper[4167]: I0216 17:16:57.006637 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 16 17:16:57.007005 master-0 kubenswrapper[4167]: I0216 17:16:57.006919 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-bstss"
Feb 16 17:16:57.007303 master-0 kubenswrapper[4167]: I0216 17:16:57.007253 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-gvwqd"
Feb 16 17:16:57.007730 master-0 kubenswrapper[4167]: I0216 17:16:57.007681 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 16 17:16:57.008327 master-0 kubenswrapper[4167]: I0216 17:16:57.008277 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 16 17:16:57.008740 master-0 kubenswrapper[4167]: I0216 17:16:57.008682 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 16 17:16:57.017076 master-0 kubenswrapper[4167]: I0216 17:16:57.010726 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 16 17:16:57.017076 master-0 kubenswrapper[4167]: I0216 17:16:57.010901 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-599b567ff7-nrcpr"]
Feb 16 17:16:57.017076 master-0 kubenswrapper[4167]: I0216 17:16:57.010989 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Feb 16 17:16:57.017076 master-0 kubenswrapper[4167]: I0216 17:16:57.011025 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 16 17:16:57.017579 master-0 kubenswrapper[4167]: I0216 17:16:57.017238 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb"
Feb 16 17:16:57.017834 master-0 kubenswrapper[4167]: I0216 17:16:57.017802 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 16 17:16:57.020101 master-0 kubenswrapper[4167]: I0216 17:16:57.020050 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Feb 16 17:16:57.023758 master-0 kubenswrapper[4167]: I0216 17:16:57.023712 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 17:16:57.027668 master-0 kubenswrapper[4167]: I0216 17:16:57.023790 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 17:16:57.027668 master-0 kubenswrapper[4167]: I0216 17:16:57.023927 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 16 17:16:57.027668 master-0 kubenswrapper[4167]: I0216 17:16:57.023730 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 17:16:57.027668 master-0 kubenswrapper[4167]: I0216 17:16:57.024067 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 17:16:57.027668 master-0 kubenswrapper[4167]: I0216 17:16:57.023943 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nkhdh"
Feb 16 17:16:57.027668 master-0 kubenswrapper[4167]: I0216 17:16:57.023798 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 17:16:57.028315 master-0 kubenswrapper[4167]: I0216 17:16:57.027743 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 17:16:57.028315 master-0 kubenswrapper[4167]: I0216 17:16:57.027753 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 17:16:57.028315 master-0 kubenswrapper[4167]: I0216 17:16:57.027858 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 17:16:57.028315 master-0 kubenswrapper[4167]: I0216 17:16:57.028064 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 17:16:57.028315 master-0 kubenswrapper[4167]: I0216 17:16:57.028066 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 17:16:57.028721 master-0 kubenswrapper[4167]: I0216 17:16:57.028370 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 16 17:16:57.032213 master-0 kubenswrapper[4167]: I0216 17:16:57.032160 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-795746f87c-qdv9c"]
Feb 16 17:16:57.038414 master-0 kubenswrapper[4167]: I0216 17:16:57.038351 4167 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 17:16:57.042788 master-0 kubenswrapper[4167]: I0216 17:16:57.042733 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"]
Feb 16 17:16:57.047721 master-0 kubenswrapper[4167]: I0216 17:16:57.047669 4167 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 17:16:57.063036 master-0 kubenswrapper[4167]: I0216 17:16:57.063001 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Feb 16 17:16:57.081221 master-0 kubenswrapper[4167]: I0216 17:16:57.081169 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.081419 master-0 kubenswrapper[4167]: I0216 17:16:57.081245 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"
Feb 16 17:16:57.081419 master-0 kubenswrapper[4167]: I0216 17:16:57.081286 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.081419 master-0 kubenswrapper[4167]: I0216 17:16:57.081321 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-service-ca\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:57.081419 master-0 kubenswrapper[4167]: I0216 17:16:57.081370 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:16:57.081419 master-0 kubenswrapper[4167]: I0216 17:16:57.081403 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.081630 master-0 kubenswrapper[4167]: I0216 17:16:57.081438 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.081630 master-0 kubenswrapper[4167]: I0216 17:16:57.081476 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.081738 master-0 kubenswrapper[4167]: I0216 17:16:57.081536 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktgm7\" (UniqueName: \"kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv"
Feb 16 17:16:57.081797 master-0 kubenswrapper[4167]: I0216 17:16:57.081758 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.081846 master-0 kubenswrapper[4167]: I0216 17:16:57.081797 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.081846 master-0 kubenswrapper[4167]: I0216 17:16:57.081837 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.081925 master-0 kubenswrapper[4167]: I0216 17:16:57.081864 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.081925 master-0 kubenswrapper[4167]: I0216 17:16:57.081896 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.082036 master-0 kubenswrapper[4167]: I0216 17:16:57.081931 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-oauth-serving-cert\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:57.082036 master-0 kubenswrapper[4167]: I0216 17:16:57.081993 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.082125 master-0 kubenswrapper[4167]: I0216 17:16:57.082076 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.082243 master-0 kubenswrapper[4167]: I0216 17:16:57.082216 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.082300 master-0 kubenswrapper[4167]: I0216 17:16:57.082262 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67l5\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.082300 master-0 kubenswrapper[4167]: I0216 17:16:57.082294 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.082386 master-0 kubenswrapper[4167]: I0216 17:16:57.082335 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.082386 master-0 kubenswrapper[4167]: I0216 17:16:57.082364 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.082470 master-0 kubenswrapper[4167]: I0216 17:16:57.082404 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.082733 master-0 kubenswrapper[4167]: I0216 17:16:57.082704 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.082823 master-0 kubenswrapper[4167]: I0216 17:16:57.082798 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.082877 master-0 kubenswrapper[4167]: I0216 17:16:57.082867 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.082990 master-0 kubenswrapper[4167]: I0216 17:16:57.082943 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.083059 master-0 kubenswrapper[4167]: I0216 17:16:57.083017 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.083106 master-0 kubenswrapper[4167]: I0216 17:16:57.083089 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:16:57.083151 master-0 kubenswrapper[4167]: I0216 17:16:57.083123 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-console-config\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:57.083242 master-0 kubenswrapper[4167]: I0216 17:16:57.083214 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:16:57.083291 master-0 kubenswrapper[4167]: I0216 17:16:57.083259 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.083340 master-0 kubenswrapper[4167]: I0216 17:16:57.083305 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.083380 master-0 kubenswrapper[4167]: I0216 17:16:57.083340 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.083380 master-0 kubenswrapper[4167]: I0216 17:16:57.083368 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.083461 master-0 kubenswrapper[4167]: I0216 17:16:57.083403 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:16:57.083461 master-0 kubenswrapper[4167]: I0216 17:16:57.083435 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.083542 master-0 kubenswrapper[4167]: I0216 17:16:57.083470 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.083542 master-0 kubenswrapper[4167]: I0216 17:16:57.083493 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.083542 master-0 kubenswrapper[4167]: I0216 17:16:57.083534 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.083656 master-0 kubenswrapper[4167]: I0216 17:16:57.083562 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:16:57.083656 master-0 kubenswrapper[4167]: I0216 17:16:57.083592 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.083656 master-0 kubenswrapper[4167]: I0216 17:16:57.083617 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.083656 master-0 kubenswrapper[4167]: I0216 17:16:57.083644 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.083823 master-0 kubenswrapper[4167]: I0216 17:16:57.083675 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4dt\" (UniqueName: \"kubernetes.io/projected/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-kube-api-access-pv4dt\") pod \"collect-profiles-29521035-zdh6r\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r"
Feb 16 17:16:57.083823 master-0 kubenswrapper[4167]: I0216 17:16:57.083706 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-trusted-ca-bundle\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:57.083823 master-0 kubenswrapper[4167]: I0216 17:16:57.083740 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.083823 master-0 kubenswrapper[4167]: I0216 17:16:57.083764 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2mkf\" (UniqueName: \"kubernetes.io/projected/4573747f-1dff-4f51-9a56-b413287082b6-kube-api-access-k2mkf\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:57.083823 master-0 kubenswrapper[4167]:
I0216 17:16:57.083797 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.083823 master-0 kubenswrapper[4167]: I0216 17:16:57.083825 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:16:57.084091 master-0 kubenswrapper[4167]: I0216 17:16:57.083859 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-config-volume\") pod \"collect-profiles-29521035-zdh6r\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:16:57.084091 master-0 kubenswrapper[4167]: I0216 17:16:57.083893 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-oauth-config\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.084091 master-0 kubenswrapper[4167]: I0216 17:16:57.083927 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/810a2275-fae5-45df-a3b8-92860451d33b-host\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:16:57.084091 master-0 kubenswrapper[4167]: I0216 17:16:57.083995 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.084091 master-0 kubenswrapper[4167]: I0216 17:16:57.084027 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpjv7\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.084091 master-0 kubenswrapper[4167]: I0216 17:16:57.084056 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.084091 master-0 kubenswrapper[4167]: I0216 17:16:57.084081 
4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.084355 master-0 kubenswrapper[4167]: I0216 17:16:57.084114 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-secret-volume\") pod \"collect-profiles-29521035-zdh6r\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:16:57.084355 master-0 kubenswrapper[4167]: I0216 17:16:57.084143 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.084355 master-0 kubenswrapper[4167]: I0216 17:16:57.084170 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.084355 master-0 kubenswrapper[4167]: I0216 17:16:57.084197 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.084355 master-0 kubenswrapper[4167]: I0216 17:16:57.084228 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:16:57.084355 master-0 kubenswrapper[4167]: I0216 17:16:57.084263 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.084355 master-0 kubenswrapper[4167]: I0216 17:16:57.084293 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-serving-cert\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.084355 master-0 kubenswrapper[4167]: I0216 17:16:57.084319 4167 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.185621 master-0 kubenswrapper[4167]: I0216 17:16:57.185583 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv4dt\" (UniqueName: \"kubernetes.io/projected/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-kube-api-access-pv4dt\") pod \"collect-profiles-29521035-zdh6r\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:16:57.185877 master-0 kubenswrapper[4167]: I0216 17:16:57.185863 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-trusted-ca-bundle\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.186013 master-0 kubenswrapper[4167]: I0216 17:16:57.185993 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2mkf\" (UniqueName: \"kubernetes.io/projected/4573747f-1dff-4f51-9a56-b413287082b6-kube-api-access-k2mkf\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.186146 master-0 kubenswrapper[4167]: I0216 17:16:57.186126 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.186248 master-0 kubenswrapper[4167]: I0216 17:16:57.186233 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.186330 master-0 kubenswrapper[4167]: I0216 17:16:57.186316 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:16:57.186422 master-0 kubenswrapper[4167]: I0216 17:16:57.186409 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-config-volume\") pod \"collect-profiles-29521035-zdh6r\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:16:57.186636 master-0 kubenswrapper[4167]: I0216 17:16:57.186616 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-oauth-config\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.186751 master-0 kubenswrapper[4167]: I0216 17:16:57.186732 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/810a2275-fae5-45df-a3b8-92860451d33b-host\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:16:57.186857 master-0 kubenswrapper[4167]: I0216 17:16:57.186839 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.186988 master-0 kubenswrapper[4167]: I0216 17:16:57.186952 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjv7\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.187087 master-0 kubenswrapper[4167]: I0216 17:16:57.187058 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/810a2275-fae5-45df-a3b8-92860451d33b-host\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:16:57.187191 master-0 kubenswrapper[4167]: I0216 17:16:57.187170 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.187304 master-0 kubenswrapper[4167]: I0216 17:16:57.187285 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.187412 master-0 kubenswrapper[4167]: I0216 17:16:57.187395 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-secret-volume\") pod \"collect-profiles-29521035-zdh6r\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:16:57.187518 master-0 kubenswrapper[4167]: I0216 17:16:57.187500 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.187628 master-0 kubenswrapper[4167]: I0216 
17:16:57.187610 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.187739 master-0 kubenswrapper[4167]: I0216 17:16:57.187722 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.187846 master-0 kubenswrapper[4167]: I0216 17:16:57.187828 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:16:57.188002 master-0 kubenswrapper[4167]: I0216 17:16:57.187235 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-trusted-ca-bundle\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.188078 master-0 kubenswrapper[4167]: I0216 17:16:57.187467 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-config-volume\") pod \"collect-profiles-29521035-zdh6r\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:16:57.188078 master-0 kubenswrapper[4167]: I0216 17:16:57.187951 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.188174 master-0 kubenswrapper[4167]: I0216 17:16:57.188124 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-serving-cert\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.188174 master-0 kubenswrapper[4167]: I0216 17:16:57.188166 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.188267 master-0 kubenswrapper[4167]: I0216 17:16:57.188198 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.188267 master-0 kubenswrapper[4167]: I0216 17:16:57.188232 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:16:57.188267 master-0 kubenswrapper[4167]: I0216 17:16:57.188257 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.188445 master-0 kubenswrapper[4167]: I0216 17:16:57.188292 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-service-ca\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.188506 master-0 kubenswrapper[4167]: I0216 17:16:57.188479 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.188555 master-0 kubenswrapper[4167]: I0216 17:16:57.188511 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.188555 master-0 kubenswrapper[4167]: I0216 17:16:57.188540 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.188555 master-0 kubenswrapper[4167]: I0216 17:16:57.188519 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:16:57.188690 master-0 kubenswrapper[4167]: I0216 17:16:57.188573 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.189519 
master-0 kubenswrapper[4167]: I0216 17:16:57.189487 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-service-ca\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.189598 master-0 kubenswrapper[4167]: I0216 17:16:57.189525 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktgm7\" (UniqueName: \"kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:16:57.189598 master-0 kubenswrapper[4167]: I0216 17:16:57.189548 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.189598 master-0 kubenswrapper[4167]: I0216 17:16:57.189569 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.189598 master-0 kubenswrapper[4167]: I0216 17:16:57.189587 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.189736 master-0 kubenswrapper[4167]: I0216 17:16:57.189607 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.189736 master-0 kubenswrapper[4167]: I0216 17:16:57.189624 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.189736 master-0 kubenswrapper[4167]: I0216 17:16:57.189646 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-oauth-serving-cert\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.189736 master-0 kubenswrapper[4167]: I0216 17:16:57.189667 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: 
\"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.189736 master-0 kubenswrapper[4167]: I0216 17:16:57.189682 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.189736 master-0 kubenswrapper[4167]: I0216 17:16:57.189710 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.189736 master-0 kubenswrapper[4167]: I0216 17:16:57.189728 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67l5\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189744 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189752 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189766 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189784 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189802 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 
17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189823 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189843 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189859 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189874 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189891 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189905 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-console-config\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.189935 master-0 kubenswrapper[4167]: I0216 17:16:57.189923 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.189941 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.189973 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: 
\"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.189997 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190013 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190034 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190052 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190071 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190089 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190104 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190128 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 
17:16:57.190144 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190160 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190177 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.190303 master-0 kubenswrapper[4167]: I0216 17:16:57.190191 4167 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.198471 master-0 kubenswrapper[4167]: I0216 17:16:57.196651 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.198471 master-0 kubenswrapper[4167]: I0216 17:16:57.196685 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-oauth-serving-cert\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.198471 master-0 kubenswrapper[4167]: I0216 17:16:57.197718 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.198471 master-0 kubenswrapper[4167]: I0216 17:16:57.198080 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.198471 master-0 kubenswrapper[4167]: I0216 17:16:57.198318 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.198471 
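Each operationExecutor.MountVolume started entry (reconciler_common.go:218) is later paired with a MountVolume.SetUp succeeded entry from operation_generator.go:637 for the same UniqueName; in this excerpt the pairs land within the same second (for example, secret-alertmanager-kube-rbac-proxy-web starts at journal time 17:16:57.186146 and succeeds at 17:16:57.212571, roughly 26 ms later). A hedged sketch that pairs the two events on UniqueName and prints per-volume setup latency; journal.log and all names are again assumptions, and timestamps are taken from the journal prefix rather than the inner klog header.

import re
from datetime import datetime

# Illustrative: pair "MountVolume started" with "MountVolume.SetUp succeeded"
# per UniqueName and report the delta between their journal timestamps.
PAT = re.compile(
    r'^\w+ \d+ (?P<ts>[\d:.]+) .*?'
    r'(?P<ev>operationExecutor\.MountVolume started|MountVolume\.SetUp succeeded)'
    r'.*?UniqueName: \\"(?P<uid>[^"\\]+)\\"'
)

started = {}  # UniqueName -> time of first "started" entry
with open("journal.log", encoding="utf-8") as fh:  # assumed file name
    for line in fh:
        m = PAT.search(line)
        if not m:
            continue
        t = datetime.strptime(m.group("ts"), "%H:%M:%S.%f")
        if m.group("ev").endswith("started"):
            started.setdefault(m.group("uid"), t)
        elif m.group("uid") in started:
            ms = (t - started[m.group("uid")]).total_seconds() * 1000
            print(f"{ms:8.1f} ms  {m.group('uid')}")

This deliberately ignores the VerifyControllerAttachedVolume entries, which fire before mounting begins and would otherwise skew the latency figures.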
Feb 16 17:16:57.199136 master-0 kubenswrapper[4167]: I0216 17:16:57.198531 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"
Feb 16 17:16:57.199136 master-0 kubenswrapper[4167]: I0216 17:16:57.198647 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.199136 master-0 kubenswrapper[4167]: I0216 17:16:57.198868 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.199136 master-0 kubenswrapper[4167]: I0216 17:16:57.198879 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.200313 master-0 kubenswrapper[4167]: I0216 17:16:57.200287 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.201347 master-0 kubenswrapper[4167]: I0216 17:16:57.200607 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.201347 master-0 kubenswrapper[4167]: I0216 17:16:57.201039 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.201347 master-0 kubenswrapper[4167]: I0216 17:16:57.201116 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.201552 master-0 kubenswrapper[4167]: I0216 17:16:57.201517 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-console-config\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:57.201552 master-0 kubenswrapper[4167]: I0216 17:16:57.201538 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.201871 master-0 kubenswrapper[4167]: I0216 17:16:57.201841 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.202508 master-0 kubenswrapper[4167]: I0216 17:16:57.202482 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.203862 master-0 kubenswrapper[4167]: I0216 17:16:57.203791 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2mkf\" (UniqueName: \"kubernetes.io/projected/4573747f-1dff-4f51-9a56-b413287082b6-kube-api-access-k2mkf\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:57.208809 master-0 kubenswrapper[4167]: I0216 17:16:57.204204 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-secret-volume\") pod \"collect-profiles-29521035-zdh6r\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r"
Feb 16 17:16:57.208809 master-0 kubenswrapper[4167]: I0216 17:16:57.204582 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.208809 master-0 kubenswrapper[4167]: I0216 17:16:57.204609 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.208809 master-0 kubenswrapper[4167]: I0216 17:16:57.206761 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjv7\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.208809 master-0 kubenswrapper[4167]: I0216 17:16:57.207627 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.208809 master-0 kubenswrapper[4167]: I0216 17:16:57.208042 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.208809 master-0 kubenswrapper[4167]: I0216 17:16:57.208735 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.208809 master-0 kubenswrapper[4167]: I0216 17:16:57.208759 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.209459 master-0 kubenswrapper[4167]: I0216 17:16:57.209424 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"
Feb 16 17:16:57.210221 master-0 kubenswrapper[4167]: I0216 17:16:57.210109 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.210221 master-0 kubenswrapper[4167]: I0216 17:16:57.201250 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:16:57.210860 master-0 kubenswrapper[4167]: I0216 17:16:57.210828 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv4dt\" (UniqueName: \"kubernetes.io/projected/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-kube-api-access-pv4dt\") pod \"collect-profiles-29521035-zdh6r\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r"
Feb 16 17:16:57.211910 master-0 kubenswrapper[4167]: I0216 17:16:57.211888 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.212120 master-0 kubenswrapper[4167]: I0216 17:16:57.212085 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.212340 master-0 kubenswrapper[4167]: I0216 17:16:57.212307 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:16:57.212514 master-0 kubenswrapper[4167]: I0216 17:16:57.212479 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-oauth-config\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:57.212571 master-0 kubenswrapper[4167]: I0216 17:16:57.212499 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.213046 master-0 kubenswrapper[4167]: I0216 17:16:57.213012 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.215097 master-0 kubenswrapper[4167]: I0216 17:16:57.214150 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:16:57.215502 master-0 kubenswrapper[4167]: I0216 17:16:57.215460 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.215502 master-0 kubenswrapper[4167]: I0216 17:16:57.215471 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.215663 master-0 kubenswrapper[4167]: I0216 17:16:57.215632 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.215705 master-0 kubenswrapper[4167]: I0216 17:16:57.215655 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.215774 master-0 kubenswrapper[4167]: I0216 17:16:57.215634 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.215932 master-0 kubenswrapper[4167]: I0216 17:16:57.215900 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.216004 master-0 kubenswrapper[4167]: I0216 17:16:57.215906 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.216083 master-0 kubenswrapper[4167]: I0216 17:16:57.216053 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.216123 master-0 kubenswrapper[4167]: I0216 17:16:57.216069 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-serving-cert\") pod \"console-795746f87c-qdv9c\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " pod="openshift-console/console-795746f87c-qdv9c"
Feb 16 17:16:57.216635 master-0 kubenswrapper[4167]: I0216 17:16:57.216557 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.216635 master-0 kubenswrapper[4167]: I0216 17:16:57.216604 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktgm7\" (UniqueName: \"kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv"
Feb 16 17:16:57.218001 master-0 kubenswrapper[4167]: I0216 17:16:57.217949 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.219874 master-0 kubenswrapper[4167]: I0216 17:16:57.219834 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67l5\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:16:57.220316 master-0 kubenswrapper[4167]: I0216 17:16:57.220276 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.221042 master-0 kubenswrapper[4167]: I0216 17:16:57.221014 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.221272 master-0 kubenswrapper[4167]: I0216 17:16:57.221243 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.221327 master-0 kubenswrapper[4167]: I0216 17:16:57.221242 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:16:57.221327 master-0 kubenswrapper[4167]: I0216 17:16:57.221311 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:16:57.221549 master-0 kubenswrapper[4167]: I0216 17:16:57.221512 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.221589 master-0 kubenswrapper[4167]: I0216 17:16:57.221574 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.221926 master-0 kubenswrapper[4167]: I0216 17:16:57.221901 4167 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.336404 master-0 kubenswrapper[4167]: I0216 17:16:57.336333 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:16:57.363174 master-0 kubenswrapper[4167]: I0216 17:16:57.363055 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:16:57.395888 master-0 kubenswrapper[4167]: I0216 17:16:57.395804 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:16:57.396953 master-0 kubenswrapper[4167]: I0216 17:16:57.396876 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:16:57.409022 master-0 kubenswrapper[4167]: I0216 17:16:57.408909 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:16:57.424882 master-0 kubenswrapper[4167]: I0216 17:16:57.424783 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:16:57.441620 master-0 kubenswrapper[4167]: I0216 17:16:57.439811 4167 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:16:57.455721 master-0 kubenswrapper[4167]: I0216 17:16:57.455657 4167 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:57.714874 master-0 kubenswrapper[4167]: I0216 17:16:57.714695 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xv2wv" event={"ID":"810a2275-fae5-45df-a3b8-92860451d33b","Type":"ContainerStarted","Data":"8743a2f4042ce8d84323ed7d399bb616de117a3694624b640b157644b331c583"} Feb 16 17:16:57.789491 master-0 kubenswrapper[4167]: I0216 17:16:57.789428 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-599b567ff7-nrcpr"] Feb 16 17:16:57.855698 master-0 kubenswrapper[4167]: I0216 17:16:57.855661 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r"] Feb 16 17:16:57.861935 master-0 kubenswrapper[4167]: W0216 17:16:57.861430 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf18a5c41_3a62_4c14_88f5_cc9c09e81d38.slice/crio-9b150810561a1b1d762e670c7245a991462b21408054b1821c6e1fe59c38db76 WatchSource:0}: Error finding container 9b150810561a1b1d762e670c7245a991462b21408054b1821c6e1fe59c38db76: Status 404 returned error can't find the container with id 9b150810561a1b1d762e670c7245a991462b21408054b1821c6e1fe59c38db76 Feb 16 17:16:57.917259 master-0 kubenswrapper[4167]: I0216 17:16:57.917209 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"] Feb 16 17:16:57.919518 master-0 kubenswrapper[4167]: W0216 17:16:57.919484 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f44170a_3c1c_4944_b971_251f75a51fc3.slice/crio-1087eea3bee59ada0adba203356aacaf98b0d3fdf0eb3e0b1d804b7e16cc0375 WatchSource:0}: Error finding container 1087eea3bee59ada0adba203356aacaf98b0d3fdf0eb3e0b1d804b7e16cc0375: Status 404 returned error can't find the container with id 1087eea3bee59ada0adba203356aacaf98b0d3fdf0eb3e0b1d804b7e16cc0375 Feb 16 17:16:57.930834 master-0 kubenswrapper[4167]: I0216 17:16:57.930642 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Feb 16 17:16:57.933174 master-0 kubenswrapper[4167]: W0216 17:16:57.933135 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e8e6dab_fb6e_4cf0_a9d1_56f9955d106e.slice/crio-db3bcbf9ee795908db3db1f3c71c8add5c1ec863b4870d1995c0ec89d904ef0c WatchSource:0}: Error finding container db3bcbf9ee795908db3db1f3c71c8add5c1ec863b4870d1995c0ec89d904ef0c: Status 404 returned error can't find the container with id db3bcbf9ee795908db3db1f3c71c8add5c1ec863b4870d1995c0ec89d904ef0c Feb 16 17:16:58.014042 master-0 kubenswrapper[4167]: I0216 17:16:58.013947 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-795746f87c-qdv9c"] Feb 16 17:16:58.022445 master-0 kubenswrapper[4167]: I0216 17:16:58.022381 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"] Feb 16 17:16:58.023728 master-0 kubenswrapper[4167]: W0216 17:16:58.023654 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4573747f_1dff_4f51_9a56_b413287082b6.slice/crio-848eff5ed91bfa76e968db088e340c9b4164d792c9863a524acf7c488ac3d27f WatchSource:0}: 
Error finding container 848eff5ed91bfa76e968db088e340c9b4164d792c9863a524acf7c488ac3d27f: Status 404 returned error can't find the container with id 848eff5ed91bfa76e968db088e340c9b4164d792c9863a524acf7c488ac3d27f Feb 16 17:16:58.033014 master-0 kubenswrapper[4167]: I0216 17:16:58.031876 4167 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Feb 16 17:16:58.034163 master-0 kubenswrapper[4167]: W0216 17:16:58.034128 4167 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2be9d55c_a4ec_48cd_93d2_0a1dced745a8.slice/crio-9186fd6ca57d985dc2587aaf1ec5043b8f101a5ed64e5194c35010959b3d06e2 WatchSource:0}: Error finding container 9186fd6ca57d985dc2587aaf1ec5043b8f101a5ed64e5194c35010959b3d06e2: Status 404 returned error can't find the container with id 9186fd6ca57d985dc2587aaf1ec5043b8f101a5ed64e5194c35010959b3d06e2 Feb 16 17:16:58.722039 master-0 kubenswrapper[4167]: I0216 17:16:58.721977 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-795746f87c-qdv9c" event={"ID":"4573747f-1dff-4f51-9a56-b413287082b6","Type":"ContainerStarted","Data":"848eff5ed91bfa76e968db088e340c9b4164d792c9863a524acf7c488ac3d27f"} Feb 16 17:16:58.723355 master-0 kubenswrapper[4167]: I0216 17:16:58.723284 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" event={"ID":"6f44170a-3c1c-4944-b971-251f75a51fc3","Type":"ContainerStarted","Data":"1087eea3bee59ada0adba203356aacaf98b0d3fdf0eb3e0b1d804b7e16cc0375"} Feb 16 17:16:58.725023 master-0 kubenswrapper[4167]: I0216 17:16:58.724952 4167 generic.go:334] "Generic (PLEG): container finished" podID="f18a5c41-3a62-4c14-88f5-cc9c09e81d38" containerID="7f30fc4dc43f5bff7934dfed48168650156eb7932764e6f71956e1dd3d5d703e" exitCode=0 Feb 16 17:16:58.725103 master-0 kubenswrapper[4167]: I0216 17:16:58.725065 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" event={"ID":"f18a5c41-3a62-4c14-88f5-cc9c09e81d38","Type":"ContainerDied","Data":"7f30fc4dc43f5bff7934dfed48168650156eb7932764e6f71956e1dd3d5d703e"} Feb 16 17:16:58.725103 master-0 kubenswrapper[4167]: I0216 17:16:58.725094 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" event={"ID":"f18a5c41-3a62-4c14-88f5-cc9c09e81d38","Type":"ContainerStarted","Data":"9b150810561a1b1d762e670c7245a991462b21408054b1821c6e1fe59c38db76"} Feb 16 17:16:58.727479 master-0 kubenswrapper[4167]: I0216 17:16:58.727211 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-599b567ff7-nrcpr" event={"ID":"ed3d89d0-bc00-482e-a656-7fdf4646ab0a","Type":"ContainerStarted","Data":"63f1000e23f5c4634b08bd4e160eb14b031256f09d28dabc6f1ac14217138a82"} Feb 16 17:16:58.732363 master-0 kubenswrapper[4167]: I0216 17:16:58.729002 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" event={"ID":"2be9d55c-a4ec-48cd-93d2-0a1dced745a8","Type":"ContainerStarted","Data":"6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555"} Feb 16 17:16:58.732363 master-0 kubenswrapper[4167]: I0216 17:16:58.729029 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" 
event={"ID":"2be9d55c-a4ec-48cd-93d2-0a1dced745a8","Type":"ContainerStarted","Data":"9186fd6ca57d985dc2587aaf1ec5043b8f101a5ed64e5194c35010959b3d06e2"} Feb 16 17:16:58.732363 master-0 kubenswrapper[4167]: I0216 17:16:58.729834 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:58.732363 master-0 kubenswrapper[4167]: I0216 17:16:58.731193 4167 generic.go:334] "Generic (PLEG): container finished" podID="b04ee64e-5e83-499c-812d-749b2b6824c6" containerID="fb63eae65b26a9b12bcacd6a041effebaac95c368f4b97fef40a113f3deafdf9" exitCode=0 Feb 16 17:16:58.732363 master-0 kubenswrapper[4167]: I0216 17:16:58.731247 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerDied","Data":"fb63eae65b26a9b12bcacd6a041effebaac95c368f4b97fef40a113f3deafdf9"} Feb 16 17:16:58.732363 master-0 kubenswrapper[4167]: I0216 17:16:58.731277 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"7d546d320ea4aa6158df39f46ee422484141fc09b60f9b17000f2384f0afeaeb"} Feb 16 17:16:58.732806 master-0 kubenswrapper[4167]: I0216 17:16:58.732659 4167 generic.go:334] "Generic (PLEG): container finished" podID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" containerID="d750e8be5ac9bbca8a5a7dc0b2439f40baac802e0e58b5c8967e841fce5c4ba3" exitCode=0 Feb 16 17:16:58.732806 master-0 kubenswrapper[4167]: I0216 17:16:58.732686 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerDied","Data":"d750e8be5ac9bbca8a5a7dc0b2439f40baac802e0e58b5c8967e841fce5c4ba3"} Feb 16 17:16:58.732806 master-0 kubenswrapper[4167]: I0216 17:16:58.732705 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"db3bcbf9ee795908db3db1f3c71c8add5c1ec863b4870d1995c0ec89d904ef0c"} Feb 16 17:16:58.765934 master-0 kubenswrapper[4167]: I0216 17:16:58.764596 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podStartSLOduration=511.764573348 podStartE2EDuration="8m31.764573348s" podCreationTimestamp="2026-02-16 17:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:16:58.763713945 +0000 UTC m=+160.494160343" watchObservedRunningTime="2026-02-16 17:16:58.764573348 +0000 UTC m=+160.495019736" Feb 16 17:16:59.077986 master-0 kubenswrapper[4167]: I0216 17:16:59.075644 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:16:59.741648 master-0 kubenswrapper[4167]: I0216 17:16:59.741597 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"06deede4924397e939bebb44e359f2380b34ce255772a313f28d157026b919c3"} Feb 16 17:16:59.741648 master-0 kubenswrapper[4167]: I0216 17:16:59.741648 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"0b28d07efb0b6dc4fd3ea7fb6263df1be0a3f8789786db5c410d8b1eaffadfd8"} Feb 16 17:16:59.745174 master-0 kubenswrapper[4167]: I0216 17:16:59.745114 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"724eabf6366889ad27f53e251522e6e8aa6a1854883743a613e9de16af72831d"} Feb 16 17:16:59.745300 master-0 kubenswrapper[4167]: I0216 17:16:59.745186 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"b84a614b1ec3a6d2390121273be6a1676c50cbc27aabf14a1e15474dc3929160"} Feb 16 17:16:59.745300 master-0 kubenswrapper[4167]: I0216 17:16:59.745212 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"54030a7ff4b10a85d9e72a01a23417e3237ec0ba05096416a9dd2e258691a5a3"} Feb 16 17:17:00.343875 master-0 kubenswrapper[4167]: I0216 17:17:00.343816 4167 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:17:00.439890 master-0 kubenswrapper[4167]: I0216 17:17:00.439838 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv4dt\" (UniqueName: \"kubernetes.io/projected/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-kube-api-access-pv4dt\") pod \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " Feb 16 17:17:00.442983 master-0 kubenswrapper[4167]: I0216 17:17:00.442910 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-kube-api-access-pv4dt" (OuterVolumeSpecName: "kube-api-access-pv4dt") pod "f18a5c41-3a62-4c14-88f5-cc9c09e81d38" (UID: "f18a5c41-3a62-4c14-88f5-cc9c09e81d38"). InnerVolumeSpecName "kube-api-access-pv4dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:17:00.540947 master-0 kubenswrapper[4167]: I0216 17:17:00.540773 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-secret-volume\") pod \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " Feb 16 17:17:00.541446 master-0 kubenswrapper[4167]: I0216 17:17:00.541399 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-config-volume\") pod \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\" (UID: \"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\") " Feb 16 17:17:00.541824 master-0 kubenswrapper[4167]: I0216 17:17:00.541793 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-config-volume" (OuterVolumeSpecName: "config-volume") pod "f18a5c41-3a62-4c14-88f5-cc9c09e81d38" (UID: "f18a5c41-3a62-4c14-88f5-cc9c09e81d38"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:17:00.542377 master-0 kubenswrapper[4167]: I0216 17:17:00.542348 4167 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:00.542377 master-0 kubenswrapper[4167]: I0216 17:17:00.542371 4167 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv4dt\" (UniqueName: \"kubernetes.io/projected/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-kube-api-access-pv4dt\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:00.544292 master-0 kubenswrapper[4167]: I0216 17:17:00.544176 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f18a5c41-3a62-4c14-88f5-cc9c09e81d38" (UID: "f18a5c41-3a62-4c14-88f5-cc9c09e81d38"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:17:00.644182 master-0 kubenswrapper[4167]: I0216 17:17:00.643371 4167 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f18a5c41-3a62-4c14-88f5-cc9c09e81d38-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:00.752163 master-0 kubenswrapper[4167]: I0216 17:17:00.752094 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"aaf2848fb6af1c4ef6ce19e563ad85aaca897482b8b4f1ff056e322d68a43a8a"} Feb 16 17:17:00.753459 master-0 kubenswrapper[4167]: I0216 17:17:00.753336 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" event={"ID":"f18a5c41-3a62-4c14-88f5-cc9c09e81d38","Type":"ContainerDied","Data":"9b150810561a1b1d762e670c7245a991462b21408054b1821c6e1fe59c38db76"} Feb 16 17:17:00.753459 master-0 kubenswrapper[4167]: I0216 17:17:00.753374 4167 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b150810561a1b1d762e670c7245a991462b21408054b1821c6e1fe59c38db76" Feb 16 17:17:00.753459 master-0 kubenswrapper[4167]: I0216 17:17:00.753381 4167 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:17:01.762900 master-0 kubenswrapper[4167]: I0216 17:17:01.762838 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"63229405ff3ddfaa766c4ee313994817781540fd6f3c8891c90f43028c210fdb"} Feb 16 17:17:02.773446 master-0 kubenswrapper[4167]: I0216 17:17:02.773384 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-599b567ff7-nrcpr" event={"ID":"ed3d89d0-bc00-482e-a656-7fdf4646ab0a","Type":"ContainerStarted","Data":"297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9"} Feb 16 17:17:02.779318 master-0 kubenswrapper[4167]: I0216 17:17:02.779254 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"fe18074fe2d6a625e681a6ea6b856bf18c9f1a1defe26ba044caeba03bdc1424"} Feb 16 17:17:02.779318 master-0 kubenswrapper[4167]: I0216 17:17:02.779316 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"ebd675428e5bb8f7c53e95c7fcb4e73a21dc73f92a194758d2d88737bda48c9a"} Feb 16 17:17:02.779533 master-0 kubenswrapper[4167]: I0216 17:17:02.779333 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"2b7a620daf4a97b25d369c0c6e1aeac19537e878e52d80a0b791512400fe794c"} Feb 16 17:17:02.782371 master-0 kubenswrapper[4167]: I0216 17:17:02.782316 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"718e5d5d624544036baa50256bfb6aab06e61a8de740d2fc9fcd3a13c9f3eeba"} Feb 16 17:17:02.782371 master-0 kubenswrapper[4167]: I0216 17:17:02.782362 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"84478ba80a4930a90177cd524333f9a711b1e761ab3c5b96e6e2d7ba45d9f4f8"} Feb 16 17:17:02.784398 master-0 kubenswrapper[4167]: I0216 17:17:02.784338 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-795746f87c-qdv9c" event={"ID":"4573747f-1dff-4f51-9a56-b413287082b6","Type":"ContainerStarted","Data":"3d8e14c5d25bf9d8e21460e05d113da3dca3f3c9f2a50705a9de49d3ee70177c"} Feb 16 17:17:02.786117 master-0 kubenswrapper[4167]: I0216 17:17:02.786063 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xv2wv" event={"ID":"810a2275-fae5-45df-a3b8-92860451d33b","Type":"ContainerStarted","Data":"31dfdce3749e2857f76e798f1232a3015c2f3bd49b76be69a655614f7ff3d685"} Feb 16 17:17:02.787559 master-0 kubenswrapper[4167]: I0216 17:17:02.787526 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" event={"ID":"6f44170a-3c1c-4944-b971-251f75a51fc3","Type":"ContainerStarted","Data":"e12557cfa82b8f5d79f2dbafc88336b0f40bdc282a9e080ed322a28aa7a19d9d"} Feb 16 17:17:02.806152 master-0 kubenswrapper[4167]: I0216 17:17:02.806073 4167 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-console/console-599b567ff7-nrcpr" podStartSLOduration=504.021175675 podStartE2EDuration="8m27.806055763s" podCreationTimestamp="2026-02-16 17:08:35 +0000 UTC" firstStartedPulling="2026-02-16 17:16:57.793648062 +0000 UTC m=+159.524094440" lastFinishedPulling="2026-02-16 17:17:01.57852815 +0000 UTC m=+163.308974528" observedRunningTime="2026-02-16 17:17:02.802564589 +0000 UTC m=+164.533010967" watchObservedRunningTime="2026-02-16 17:17:02.806055763 +0000 UTC m=+164.536502141" Feb 16 17:17:02.826424 master-0 kubenswrapper[4167]: I0216 17:17:02.826341 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xv2wv" podStartSLOduration=512.865042378 podStartE2EDuration="8m36.826324352s" podCreationTimestamp="2026-02-16 17:08:26 +0000 UTC" firstStartedPulling="2026-02-16 17:16:57.513556069 +0000 UTC m=+159.244002467" lastFinishedPulling="2026-02-16 17:17:01.474838053 +0000 UTC m=+163.205284441" observedRunningTime="2026-02-16 17:17:02.823739492 +0000 UTC m=+164.554185870" watchObservedRunningTime="2026-02-16 17:17:02.826324352 +0000 UTC m=+164.556770730" Feb 16 17:17:02.889610 master-0 kubenswrapper[4167]: I0216 17:17:02.889519 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=79.889498752 podStartE2EDuration="1m19.889498752s" podCreationTimestamp="2026-02-16 17:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:17:02.888102725 +0000 UTC m=+164.618549113" watchObservedRunningTime="2026-02-16 17:17:02.889498752 +0000 UTC m=+164.619945130" Feb 16 17:17:02.892885 master-0 kubenswrapper[4167]: I0216 17:17:02.892826 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podStartSLOduration=508.344195279 podStartE2EDuration="8m31.892814062s" podCreationTimestamp="2026-02-16 17:08:31 +0000 UTC" firstStartedPulling="2026-02-16 17:16:57.926257281 +0000 UTC m=+159.656703659" lastFinishedPulling="2026-02-16 17:17:01.474876054 +0000 UTC m=+163.205322442" observedRunningTime="2026-02-16 17:17:02.844578966 +0000 UTC m=+164.575025364" watchObservedRunningTime="2026-02-16 17:17:02.892814062 +0000 UTC m=+164.623260430" Feb 16 17:17:02.926880 master-0 kubenswrapper[4167]: I0216 17:17:02.926779 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=149.926751821 podStartE2EDuration="2m29.926751821s" podCreationTimestamp="2026-02-16 17:14:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:17:02.91931953 +0000 UTC m=+164.649765938" watchObservedRunningTime="2026-02-16 17:17:02.926751821 +0000 UTC m=+164.657198229" Feb 16 17:17:07.337147 master-0 kubenswrapper[4167]: I0216 17:17:07.337081 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:17:07.337884 master-0 kubenswrapper[4167]: I0216 17:17:07.337864 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:17:07.344084 master-0 kubenswrapper[4167]: I0216 17:17:07.344039 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:17:07.382601 master-0 kubenswrapper[4167]: I0216 17:17:07.382538 4167 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-795746f87c-qdv9c" podStartSLOduration=510.808239528 podStartE2EDuration="8m34.382519475s" podCreationTimestamp="2026-02-16 17:08:33 +0000 UTC" firstStartedPulling="2026-02-16 17:16:58.025710034 +0000 UTC m=+159.756156402" lastFinishedPulling="2026-02-16 17:17:01.599989971 +0000 UTC m=+163.330436349" observedRunningTime="2026-02-16 17:17:02.955845889 +0000 UTC m=+164.686292317" watchObservedRunningTime="2026-02-16 17:17:07.382519475 +0000 UTC m=+169.112965853" Feb 16 17:17:07.409401 master-0 kubenswrapper[4167]: I0216 17:17:07.409357 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:17:07.409498 master-0 kubenswrapper[4167]: I0216 17:17:07.409419 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:17:07.414234 master-0 kubenswrapper[4167]: I0216 17:17:07.414189 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:17:07.425863 master-0 kubenswrapper[4167]: I0216 17:17:07.425817 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:17:07.831053 master-0 kubenswrapper[4167]: I0216 17:17:07.830951 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:17:07.831309 master-0 kubenswrapper[4167]: I0216 17:17:07.831213 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:17:07.995386 master-0 kubenswrapper[4167]: I0216 17:17:07.995318 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-795746f87c-qdv9c"] Feb 16 17:17:17.770254 master-0 kubenswrapper[4167]: I0216 17:17:17.770122 4167 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:17:17.770254 master-0 kubenswrapper[4167]: I0216 17:17:17.770242 4167 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:17:34.888026 master-0 kubenswrapper[4167]: I0216 17:17:34.887822 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-795746f87c-qdv9c" podUID="4573747f-1dff-4f51-9a56-b413287082b6" containerName="console" containerID="cri-o://3d8e14c5d25bf9d8e21460e05d113da3dca3f3c9f2a50705a9de49d3ee70177c" gracePeriod=15 Feb 16 17:17:35.020287 master-0 kubenswrapper[4167]: I0216 17:17:35.020222 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-795746f87c-qdv9c_4573747f-1dff-4f51-9a56-b413287082b6/console/0.log" Feb 16 17:17:35.020475 master-0 kubenswrapper[4167]: I0216 17:17:35.020294 4167 generic.go:334] "Generic (PLEG): container 
finished" podID="4573747f-1dff-4f51-9a56-b413287082b6" containerID="3d8e14c5d25bf9d8e21460e05d113da3dca3f3c9f2a50705a9de49d3ee70177c" exitCode=2 Feb 16 17:17:35.020475 master-0 kubenswrapper[4167]: I0216 17:17:35.020330 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-795746f87c-qdv9c" event={"ID":"4573747f-1dff-4f51-9a56-b413287082b6","Type":"ContainerDied","Data":"3d8e14c5d25bf9d8e21460e05d113da3dca3f3c9f2a50705a9de49d3ee70177c"} Feb 16 17:17:35.366728 master-0 kubenswrapper[4167]: I0216 17:17:35.366657 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-795746f87c-qdv9c_4573747f-1dff-4f51-9a56-b413287082b6/console/0.log" Feb 16 17:17:35.366941 master-0 kubenswrapper[4167]: I0216 17:17:35.366775 4167 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:17:35.485550 master-0 kubenswrapper[4167]: I0216 17:17:35.485488 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2mkf\" (UniqueName: \"kubernetes.io/projected/4573747f-1dff-4f51-9a56-b413287082b6-kube-api-access-k2mkf\") pod \"4573747f-1dff-4f51-9a56-b413287082b6\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " Feb 16 17:17:35.485923 master-0 kubenswrapper[4167]: I0216 17:17:35.485581 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-trusted-ca-bundle\") pod \"4573747f-1dff-4f51-9a56-b413287082b6\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " Feb 16 17:17:35.485923 master-0 kubenswrapper[4167]: I0216 17:17:35.485738 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-oauth-config\") pod \"4573747f-1dff-4f51-9a56-b413287082b6\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " Feb 16 17:17:35.485923 master-0 kubenswrapper[4167]: I0216 17:17:35.485843 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-console-config\") pod \"4573747f-1dff-4f51-9a56-b413287082b6\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " Feb 16 17:17:35.485923 master-0 kubenswrapper[4167]: I0216 17:17:35.485894 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-oauth-serving-cert\") pod \"4573747f-1dff-4f51-9a56-b413287082b6\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " Feb 16 17:17:35.486343 master-0 kubenswrapper[4167]: I0216 17:17:35.485997 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-service-ca\") pod \"4573747f-1dff-4f51-9a56-b413287082b6\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " Feb 16 17:17:35.486343 master-0 kubenswrapper[4167]: I0216 17:17:35.486037 4167 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-serving-cert\") pod \"4573747f-1dff-4f51-9a56-b413287082b6\" (UID: \"4573747f-1dff-4f51-9a56-b413287082b6\") " Feb 16 17:17:35.490613 master-0 
kubenswrapper[4167]: I0216 17:17:35.490545 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4573747f-1dff-4f51-9a56-b413287082b6" (UID: "4573747f-1dff-4f51-9a56-b413287082b6"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:17:35.490776 master-0 kubenswrapper[4167]: I0216 17:17:35.490665 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4573747f-1dff-4f51-9a56-b413287082b6" (UID: "4573747f-1dff-4f51-9a56-b413287082b6"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:17:35.491095 master-0 kubenswrapper[4167]: I0216 17:17:35.491025 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-console-config" (OuterVolumeSpecName: "console-config") pod "4573747f-1dff-4f51-9a56-b413287082b6" (UID: "4573747f-1dff-4f51-9a56-b413287082b6"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:17:35.491673 master-0 kubenswrapper[4167]: I0216 17:17:35.491602 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4573747f-1dff-4f51-9a56-b413287082b6" (UID: "4573747f-1dff-4f51-9a56-b413287082b6"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:17:35.491673 master-0 kubenswrapper[4167]: I0216 17:17:35.491666 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-service-ca" (OuterVolumeSpecName: "service-ca") pod "4573747f-1dff-4f51-9a56-b413287082b6" (UID: "4573747f-1dff-4f51-9a56-b413287082b6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:17:35.491824 master-0 kubenswrapper[4167]: I0216 17:17:35.491735 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4573747f-1dff-4f51-9a56-b413287082b6" (UID: "4573747f-1dff-4f51-9a56-b413287082b6"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:17:35.495306 master-0 kubenswrapper[4167]: I0216 17:17:35.495244 4167 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4573747f-1dff-4f51-9a56-b413287082b6-kube-api-access-k2mkf" (OuterVolumeSpecName: "kube-api-access-k2mkf") pod "4573747f-1dff-4f51-9a56-b413287082b6" (UID: "4573747f-1dff-4f51-9a56-b413287082b6"). InnerVolumeSpecName "kube-api-access-k2mkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:17:35.588265 master-0 kubenswrapper[4167]: I0216 17:17:35.588041 4167 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:35.588265 master-0 kubenswrapper[4167]: I0216 17:17:35.588093 4167 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:35.588265 master-0 kubenswrapper[4167]: I0216 17:17:35.588110 4167 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2mkf\" (UniqueName: \"kubernetes.io/projected/4573747f-1dff-4f51-9a56-b413287082b6-kube-api-access-k2mkf\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:35.588265 master-0 kubenswrapper[4167]: I0216 17:17:35.588122 4167 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:35.588265 master-0 kubenswrapper[4167]: I0216 17:17:35.588132 4167 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4573747f-1dff-4f51-9a56-b413287082b6-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:35.588265 master-0 kubenswrapper[4167]: I0216 17:17:35.588143 4167 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:35.588265 master-0 kubenswrapper[4167]: I0216 17:17:35.588156 4167 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4573747f-1dff-4f51-9a56-b413287082b6-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:17:36.029857 master-0 kubenswrapper[4167]: I0216 17:17:36.029772 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-795746f87c-qdv9c_4573747f-1dff-4f51-9a56-b413287082b6/console/0.log" Feb 16 17:17:36.029857 master-0 kubenswrapper[4167]: I0216 17:17:36.029851 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-795746f87c-qdv9c" event={"ID":"4573747f-1dff-4f51-9a56-b413287082b6","Type":"ContainerDied","Data":"848eff5ed91bfa76e968db088e340c9b4164d792c9863a524acf7c488ac3d27f"} Feb 16 17:17:36.030571 master-0 kubenswrapper[4167]: I0216 17:17:36.029895 4167 scope.go:117] "RemoveContainer" containerID="3d8e14c5d25bf9d8e21460e05d113da3dca3f3c9f2a50705a9de49d3ee70177c" Feb 16 17:17:36.030571 master-0 kubenswrapper[4167]: I0216 17:17:36.030074 4167 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-795746f87c-qdv9c" Feb 16 17:17:36.092880 master-0 kubenswrapper[4167]: I0216 17:17:36.092577 4167 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-795746f87c-qdv9c"] Feb 16 17:17:36.100766 master-0 kubenswrapper[4167]: I0216 17:17:36.100670 4167 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-795746f87c-qdv9c"] Feb 16 17:17:36.460468 master-0 kubenswrapper[4167]: I0216 17:17:36.460366 4167 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4573747f-1dff-4f51-9a56-b413287082b6" path="/var/lib/kubelet/pods/4573747f-1dff-4f51-9a56-b413287082b6/volumes" Feb 16 17:17:47.769878 master-0 kubenswrapper[4167]: I0216 17:17:47.769820 4167 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:17:47.770547 master-0 kubenswrapper[4167]: I0216 17:17:47.769893 4167 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:17:47.770547 master-0 kubenswrapper[4167]: I0216 17:17:47.769945 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:17:47.770633 master-0 kubenswrapper[4167]: I0216 17:17:47.770611 4167 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68f33191bbdd9baf1095b1769d81979c4da11a7d920a2c849ce21980a92a7ecd"} pod="openshift-machine-config-operator/machine-config-daemon-98q6v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:17:47.770708 master-0 kubenswrapper[4167]: I0216 17:17:47.770679 4167 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" containerID="cri-o://68f33191bbdd9baf1095b1769d81979c4da11a7d920a2c849ce21980a92a7ecd" gracePeriod=600 Feb 16 17:17:48.128347 master-0 kubenswrapper[4167]: I0216 17:17:48.128251 4167 generic.go:334] "Generic (PLEG): container finished" podID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerID="68f33191bbdd9baf1095b1769d81979c4da11a7d920a2c849ce21980a92a7ecd" exitCode=0 Feb 16 17:17:48.128347 master-0 kubenswrapper[4167]: I0216 17:17:48.128320 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerDied","Data":"68f33191bbdd9baf1095b1769d81979c4da11a7d920a2c849ce21980a92a7ecd"} Feb 16 17:17:48.128615 master-0 kubenswrapper[4167]: I0216 17:17:48.128368 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"a7283ea35678c0dc94d8e5f0d4d0c3ed9937a2df41677781fb1944f77ea6f01e"} Feb 16 17:17:57.425575 master-0 kubenswrapper[4167]: 
I0216 17:17:57.425509 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:17:57.465940 master-0 kubenswrapper[4167]: I0216 17:17:57.465879 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:17:58.253919 master-0 kubenswrapper[4167]: I0216 17:17:58.253847 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:18:40.338797 master-0 kubenswrapper[4167]: I0216 17:18:40.338758 4167 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Liveness probe status=failure output="Get \"https://192.168.32.10:9980/healthz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body=
Feb 16 17:18:40.339461 master-0 kubenswrapper[4167]: I0216 17:18:40.338838 4167 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-master-0" podUID="7adecad495595c43c57c30abd350e987" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/healthz\": dial tcp 192.168.32.10:9980: connect: connection refused"
Feb 16 17:18:40.567660 master-0 kubenswrapper[4167]: I0216 17:18:40.567621 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-599b567ff7-nrcpr_ed3d89d0-bc00-482e-a656-7fdf4646ab0a/console/0.log"
Feb 16 17:18:40.567771 master-0 kubenswrapper[4167]: I0216 17:18:40.567669 4167 generic.go:334] "Generic (PLEG): container finished" podID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerID="297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9" exitCode=2
Feb 16 17:18:40.567771 master-0 kubenswrapper[4167]: I0216 17:18:40.567725 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-599b567ff7-nrcpr" event={"ID":"ed3d89d0-bc00-482e-a656-7fdf4646ab0a","Type":"ContainerDied","Data":"297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9"}
Feb 16 17:18:40.568372 master-0 kubenswrapper[4167]: I0216 17:18:40.568344 4167 scope.go:117] "RemoveContainer" containerID="297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9"
Feb 16 17:18:40.569515 master-0 kubenswrapper[4167]: I0216 17:18:40.569487 4167 generic.go:334] "Generic (PLEG): container finished" podID="6b3e071c-1c62-489b-91c1-aef0d197f40b" containerID="4c1a6b0253eebc598c1d19e2bc8901bc0fc3435f1608b53f029ff531ef5d536b" exitCode=0
Feb 16 17:18:40.569564 master-0 kubenswrapper[4167]: I0216 17:18:40.569521 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerDied","Data":"4c1a6b0253eebc598c1d19e2bc8901bc0fc3435f1608b53f029ff531ef5d536b"}
Feb 16 17:18:40.570026 master-0 kubenswrapper[4167]: I0216 17:18:40.570000 4167 scope.go:117] "RemoveContainer" containerID="4c1a6b0253eebc598c1d19e2bc8901bc0fc3435f1608b53f029ff531ef5d536b"
Feb 16 17:18:40.571465 master-0 kubenswrapper[4167]: I0216 17:18:40.571444 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-92rqx_404c402a-705f-4352-b9df-b89562070d9c/machine-api-operator/2.log"
Feb 16 17:18:40.571981 master-0 kubenswrapper[4167]: I0216 17:18:40.571938 4167 generic.go:334] "Generic (PLEG): container finished" podID="404c402a-705f-4352-b9df-b89562070d9c" containerID="11617d13afe5f8d066713748644151c41066462b4ff447d2322da2af49234639" exitCode=2
Feb 16 17:18:40.572046 master-0 kubenswrapper[4167]: I0216 17:18:40.571991 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerDied","Data":"11617d13afe5f8d066713748644151c41066462b4ff447d2322da2af49234639"}
Feb 16 17:18:40.572288 master-0 kubenswrapper[4167]: I0216 17:18:40.572253 4167 scope.go:117] "RemoveContainer" containerID="b4eb28c976464930f1c03f92ec479debd9dd58656d0f14a479c1a70e1cff09c4"
Feb 16 17:18:40.572288 master-0 kubenswrapper[4167]: I0216 17:18:40.572269 4167 scope.go:117] "RemoveContainer" containerID="11617d13afe5f8d066713748644151c41066462b4ff447d2322da2af49234639"
Feb 16 17:18:40.573417 master-0 kubenswrapper[4167]: I0216 17:18:40.573387 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-555857f695-nlrnr_54fba066-0e9e-49f6-8a86-34d5b4b660df/monitoring-plugin/2.log"
Feb 16 17:18:40.573417 master-0 kubenswrapper[4167]: I0216 17:18:40.573413 4167 generic.go:334] "Generic (PLEG): container finished" podID="54fba066-0e9e-49f6-8a86-34d5b4b660df" containerID="674059d1546130b2b1cd88434ec07f49b78934c11e7c0706b6bce62bcf537cfe" exitCode=2
Feb 16 17:18:40.573514 master-0 kubenswrapper[4167]: I0216 17:18:40.573445 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" event={"ID":"54fba066-0e9e-49f6-8a86-34d5b4b660df","Type":"ContainerDied","Data":"674059d1546130b2b1cd88434ec07f49b78934c11e7c0706b6bce62bcf537cfe"}
Feb 16 17:18:40.573684 master-0 kubenswrapper[4167]: I0216 17:18:40.573658 4167 scope.go:117] "RemoveContainer" containerID="674059d1546130b2b1cd88434ec07f49b78934c11e7c0706b6bce62bcf537cfe"
Feb 16 17:18:40.575040 master-0 kubenswrapper[4167]: I0216 17:18:40.575015 4167 generic.go:334] "Generic (PLEG): container finished" podID="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" containerID="5be425293550e3c23f77af0361bf1f6e5f1b68ae077b612972bafc5cd8d78142" exitCode=0
Feb 16 17:18:40.575040 master-0 kubenswrapper[4167]: I0216 17:18:40.575034 4167 generic.go:334] "Generic (PLEG): container finished" podID="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" containerID="7f8330c3bb76d22d354e24a41e8d51cbaaa63368eaca8d8e23a100303a48a87c" exitCode=0
Feb 16 17:18:40.575040 master-0 kubenswrapper[4167]: I0216 17:18:40.575043 4167 generic.go:334] "Generic (PLEG): container finished" podID="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" containerID="e44b7278b2810a6e423fa7a03723078be8b30f08c40fc76aacee28ced6ab10ee" exitCode=0
Feb 16 17:18:40.575171 master-0 kubenswrapper[4167]: I0216 17:18:40.575070 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerDied","Data":"5be425293550e3c23f77af0361bf1f6e5f1b68ae077b612972bafc5cd8d78142"}
Feb 16 17:18:40.575171 master-0 kubenswrapper[4167]: I0216 17:18:40.575084 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerDied","Data":"7f8330c3bb76d22d354e24a41e8d51cbaaa63368eaca8d8e23a100303a48a87c"}
Feb 16 17:18:40.575171 master-0 kubenswrapper[4167]: I0216 17:18:40.575094 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerDied","Data":"e44b7278b2810a6e423fa7a03723078be8b30f08c40fc76aacee28ced6ab10ee"}
Feb 16 17:18:40.575320 master-0 kubenswrapper[4167]: I0216 17:18:40.575299 4167 scope.go:117] "RemoveContainer" containerID="e44b7278b2810a6e423fa7a03723078be8b30f08c40fc76aacee28ced6ab10ee"
Feb 16 17:18:40.575320 master-0 kubenswrapper[4167]: I0216 17:18:40.575320 4167 scope.go:117] "RemoveContainer" containerID="7f8330c3bb76d22d354e24a41e8d51cbaaa63368eaca8d8e23a100303a48a87c"
Feb 16 17:18:40.575401 master-0 kubenswrapper[4167]: I0216 17:18:40.575330 4167 scope.go:117] "RemoveContainer" containerID="5be425293550e3c23f77af0361bf1f6e5f1b68ae077b612972bafc5cd8d78142"
Feb 16 17:18:40.576722 master-0 kubenswrapper[4167]: I0216 17:18:40.576691 4167 generic.go:334] "Generic (PLEG): container finished" podID="d9859457-f0d1-4754-a6c5-cf05d5abf447" containerID="6c8043a73593eb818d47c831f8cfbf4afec441fd3742943afa12b44d4e57561c" exitCode=0
Feb 16 17:18:40.576722 master-0 kubenswrapper[4167]: I0216 17:18:40.576709 4167 generic.go:334] "Generic (PLEG): container finished" podID="d9859457-f0d1-4754-a6c5-cf05d5abf447" containerID="18eb2ea9f07d5c6d110107b6cae077ae63beca1d3e388a0edf9c5be4e7025b94" exitCode=0
Feb 16 17:18:40.576812 master-0 kubenswrapper[4167]: I0216 17:18:40.576744 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerDied","Data":"6c8043a73593eb818d47c831f8cfbf4afec441fd3742943afa12b44d4e57561c"}
Feb 16 17:18:40.576812 master-0 kubenswrapper[4167]: I0216 17:18:40.576761 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerDied","Data":"18eb2ea9f07d5c6d110107b6cae077ae63beca1d3e388a0edf9c5be4e7025b94"}
Feb 16 17:18:40.577024 master-0 kubenswrapper[4167]: I0216 17:18:40.577001 4167 scope.go:117] "RemoveContainer" containerID="18eb2ea9f07d5c6d110107b6cae077ae63beca1d3e388a0edf9c5be4e7025b94"
Feb 16 17:18:40.577024 master-0 kubenswrapper[4167]: I0216 17:18:40.577022 4167 scope.go:117] "RemoveContainer" containerID="6c8043a73593eb818d47c831f8cfbf4afec441fd3742943afa12b44d4e57561c"
Feb 16 17:18:40.578497 master-0 kubenswrapper[4167]: I0216 17:18:40.578461 4167 generic.go:334] "Generic (PLEG): container finished" podID="2d96ccdc-0b09-437d-bfca-1958af5d9953" containerID="dbf513153b490360d777be6ca05f18ad905f65e50e441d5e6e8adfc27b930dc9" exitCode=0
Feb 16 17:18:40.578570 master-0 kubenswrapper[4167]: I0216 17:18:40.578528 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerDied","Data":"dbf513153b490360d777be6ca05f18ad905f65e50e441d5e6e8adfc27b930dc9"}
Feb 16 17:18:40.579082 master-0 kubenswrapper[4167]: I0216 17:18:40.579056 4167 scope.go:117] "RemoveContainer" containerID="dbf513153b490360d777be6ca05f18ad905f65e50e441d5e6e8adfc27b930dc9"
Feb 16 17:18:40.579603 master-0 kubenswrapper[4167]: I0216 17:18:40.579579 4167 generic.go:334] "Generic (PLEG): container finished" podID="810a2275-fae5-45df-a3b8-92860451d33b" containerID="31dfdce3749e2857f76e798f1232a3015c2f3bd49b76be69a655614f7ff3d685" exitCode=0
Feb 16 17:18:40.579662 master-0 kubenswrapper[4167]: I0216 17:18:40.579618 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xv2wv" event={"ID":"810a2275-fae5-45df-a3b8-92860451d33b","Type":"ContainerDied","Data":"31dfdce3749e2857f76e798f1232a3015c2f3bd49b76be69a655614f7ff3d685"}
Feb 16 17:18:40.580203 master-0 kubenswrapper[4167]: I0216 17:18:40.580159 4167 scope.go:117] "RemoveContainer" containerID="31dfdce3749e2857f76e798f1232a3015c2f3bd49b76be69a655614f7ff3d685"
Feb 16 17:18:40.580493 master-0 kubenswrapper[4167]: I0216 17:18:40.580464 4167 generic.go:334] "Generic (PLEG): container finished" podID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" containerID="4592a4eaa04bafb6bbe94b87216e3a2f3a1314c0e37470ac34dbb8c88f6b8e44" exitCode=0
Feb 16 17:18:40.580548 master-0 kubenswrapper[4167]: I0216 17:18:40.580504 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerDied","Data":"4592a4eaa04bafb6bbe94b87216e3a2f3a1314c0e37470ac34dbb8c88f6b8e44"}
Feb 16 17:18:40.580752 master-0 kubenswrapper[4167]: I0216 17:18:40.580724 4167 scope.go:117] "RemoveContainer" containerID="4592a4eaa04bafb6bbe94b87216e3a2f3a1314c0e37470ac34dbb8c88f6b8e44"
Feb 16 17:18:40.582985 master-0 kubenswrapper[4167]: I0216 17:18:40.582930 4167 generic.go:334] "Generic (PLEG): container finished" podID="b04ee64e-5e83-499c-812d-749b2b6824c6" containerID="fe18074fe2d6a625e681a6ea6b856bf18c9f1a1defe26ba044caeba03bdc1424" exitCode=0
Feb 16 17:18:40.582985 master-0 kubenswrapper[4167]: I0216 17:18:40.582968 4167 generic.go:334] "Generic (PLEG): container finished" podID="b04ee64e-5e83-499c-812d-749b2b6824c6" containerID="2b7a620daf4a97b25d369c0c6e1aeac19537e878e52d80a0b791512400fe794c" exitCode=0
Feb 16 17:18:40.582985 master-0 kubenswrapper[4167]: I0216 17:18:40.582980 4167 generic.go:334] "Generic (PLEG): container finished" podID="b04ee64e-5e83-499c-812d-749b2b6824c6" containerID="0b28d07efb0b6dc4fd3ea7fb6263df1be0a3f8789786db5c410d8b1eaffadfd8" exitCode=0
Feb 16 17:18:40.583097 master-0 kubenswrapper[4167]: I0216 17:18:40.582986 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerDied","Data":"fe18074fe2d6a625e681a6ea6b856bf18c9f1a1defe26ba044caeba03bdc1424"}
Feb 16 17:18:40.583097 master-0 kubenswrapper[4167]: I0216 17:18:40.583019 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerDied","Data":"2b7a620daf4a97b25d369c0c6e1aeac19537e878e52d80a0b791512400fe794c"}
Feb 16 17:18:40.583097 master-0 kubenswrapper[4167]: I0216 17:18:40.583042 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerDied","Data":"0b28d07efb0b6dc4fd3ea7fb6263df1be0a3f8789786db5c410d8b1eaffadfd8"}
Feb 16 17:18:40.583475 master-0 kubenswrapper[4167]: I0216 17:18:40.583449 4167 scope.go:117] "RemoveContainer" containerID="0b28d07efb0b6dc4fd3ea7fb6263df1be0a3f8789786db5c410d8b1eaffadfd8"
Feb 16 17:18:40.583475 master-0 kubenswrapper[4167]: I0216 17:18:40.583473 4167 scope.go:117] "RemoveContainer" containerID="06deede4924397e939bebb44e359f2380b34ce255772a313f28d157026b919c3"
Feb 16 17:18:40.583542 master-0 kubenswrapper[4167]: I0216 17:18:40.583483 4167 scope.go:117] "RemoveContainer" containerID="aaf2848fb6af1c4ef6ce19e563ad85aaca897482b8b4f1ff056e322d68a43a8a"
Feb 16 17:18:40.583542 master-0 kubenswrapper[4167]: I0216 17:18:40.583495 4167 scope.go:117] "RemoveContainer" containerID="2b7a620daf4a97b25d369c0c6e1aeac19537e878e52d80a0b791512400fe794c"
Feb 16 17:18:40.583542 master-0 kubenswrapper[4167]: I0216 17:18:40.583506 4167 scope.go:117] "RemoveContainer" containerID="ebd675428e5bb8f7c53e95c7fcb4e73a21dc73f92a194758d2d88737bda48c9a"
Feb 16 17:18:40.583542 master-0 kubenswrapper[4167]: I0216 17:18:40.583515 4167 scope.go:117] "RemoveContainer" containerID="fe18074fe2d6a625e681a6ea6b856bf18c9f1a1defe26ba044caeba03bdc1424"
Feb 16 17:18:40.585654 master-0 kubenswrapper[4167]: I0216 17:18:40.585624 4167 generic.go:334] "Generic (PLEG): container finished" podID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" containerID="718e5d5d624544036baa50256bfb6aab06e61a8de740d2fc9fcd3a13c9f3eeba" exitCode=0
Feb 16 17:18:40.585701 master-0 kubenswrapper[4167]: I0216 17:18:40.585680 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerDied","Data":"718e5d5d624544036baa50256bfb6aab06e61a8de740d2fc9fcd3a13c9f3eeba"}
Feb 16 17:18:40.586069 master-0 kubenswrapper[4167]: I0216 17:18:40.586040 4167 scope.go:117] "RemoveContainer" containerID="54030a7ff4b10a85d9e72a01a23417e3237ec0ba05096416a9dd2e258691a5a3"
Feb 16 17:18:40.586069 master-0 kubenswrapper[4167]: I0216 17:18:40.586066 4167 scope.go:117] "RemoveContainer" containerID="b84a614b1ec3a6d2390121273be6a1676c50cbc27aabf14a1e15474dc3929160"
Feb 16 17:18:40.586148 master-0 kubenswrapper[4167]: I0216 17:18:40.586077 4167 scope.go:117] "RemoveContainer" containerID="724eabf6366889ad27f53e251522e6e8aa6a1854883743a613e9de16af72831d"
Feb 16 17:18:40.586148 master-0 kubenswrapper[4167]: I0216 17:18:40.586087 4167 scope.go:117] "RemoveContainer" containerID="63229405ff3ddfaa766c4ee313994817781540fd6f3c8891c90f43028c210fdb"
Feb 16 17:18:40.586148 master-0 kubenswrapper[4167]: I0216 17:18:40.586097 4167 scope.go:117] "RemoveContainer" containerID="84478ba80a4930a90177cd524333f9a711b1e761ab3c5b96e6e2d7ba45d9f4f8"
Feb 16 17:18:40.586148 master-0 kubenswrapper[4167]: I0216 17:18:40.586108 4167 scope.go:117] "RemoveContainer" containerID="718e5d5d624544036baa50256bfb6aab06e61a8de740d2fc9fcd3a13c9f3eeba"
Feb 16 17:18:40.586840 master-0 kubenswrapper[4167]: I0216 17:18:40.586811 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-546cc7d765-94nfl_ae20b683-dac8-419e-808a-ddcdb3c564e1/openshift-state-metrics/1.log"
Feb 16 17:18:40.587464 master-0 kubenswrapper[4167]: I0216 17:18:40.587421 4167 generic.go:334] "Generic (PLEG): container finished" podID="ae20b683-dac8-419e-808a-ddcdb3c564e1" containerID="7f86deeb630ebc6a955ba82a1abc0b13aa6b96c4f0240fac5e1a4c87ada309ab" exitCode=2
Feb 16 17:18:40.587464 master-0 kubenswrapper[4167]: I0216 17:18:40.587454 4167 generic.go:334] "Generic (PLEG): container finished" podID="ae20b683-dac8-419e-808a-ddcdb3c564e1" containerID="b2286bcf587925ee921929470a8ad12ff00ac21d82c0b9d9106c62d62f43b303" exitCode=0
Feb 16 17:18:40.587558 master-0 kubenswrapper[4167]: I0216 17:18:40.587470 4167 generic.go:334] "Generic (PLEG): container finished" podID="ae20b683-dac8-419e-808a-ddcdb3c564e1" containerID="48167f7958f079be9f38df909933908c7490dd9f2008247aca0f3eae6929ea97" exitCode=0
Feb 16 17:18:40.587558 master-0 kubenswrapper[4167]: I0216 17:18:40.587509 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerDied","Data":"7f86deeb630ebc6a955ba82a1abc0b13aa6b96c4f0240fac5e1a4c87ada309ab"}
Feb 16 17:18:40.587558 master-0 kubenswrapper[4167]: I0216 17:18:40.587547 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerDied","Data":"b2286bcf587925ee921929470a8ad12ff00ac21d82c0b9d9106c62d62f43b303"}
Feb 16 17:18:40.587643 master-0 kubenswrapper[4167]: I0216 17:18:40.587565 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerDied","Data":"48167f7958f079be9f38df909933908c7490dd9f2008247aca0f3eae6929ea97"}
Feb 16 17:18:40.587930 master-0 kubenswrapper[4167]: I0216 17:18:40.587900 4167 scope.go:117] "RemoveContainer" containerID="48167f7958f079be9f38df909933908c7490dd9f2008247aca0f3eae6929ea97"
Feb 16 17:18:40.588006 master-0 kubenswrapper[4167]: I0216 17:18:40.587929 4167 scope.go:117] "RemoveContainer" containerID="b2286bcf587925ee921929470a8ad12ff00ac21d82c0b9d9106c62d62f43b303"
Feb 16 17:18:40.588006 master-0 kubenswrapper[4167]: I0216 17:18:40.587947 4167 scope.go:117] "RemoveContainer" containerID="7f86deeb630ebc6a955ba82a1abc0b13aa6b96c4f0240fac5e1a4c87ada309ab"
Feb 16 17:18:40.588412 master-0 kubenswrapper[4167]: I0216 17:18:40.588374 4167 generic.go:334] "Generic (PLEG): container finished" podID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" containerID="2455980e6d53934b52e5080b8ee3311c8bde147e1418e36ce2112399000fa925" exitCode=0
Feb 16 17:18:40.588412 master-0 kubenswrapper[4167]: I0216 17:18:40.588407 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" event={"ID":"544c6815-81d7-422a-9e4a-5fcbfabe8da8","Type":"ContainerDied","Data":"2455980e6d53934b52e5080b8ee3311c8bde147e1418e36ce2112399000fa925"}
Feb 16 17:18:40.588641 master-0 kubenswrapper[4167]: I0216 17:18:40.588616 4167 scope.go:117] "RemoveContainer" containerID="2455980e6d53934b52e5080b8ee3311c8bde147e1418e36ce2112399000fa925"
Feb 16 17:18:40.590223 master-0 kubenswrapper[4167]: I0216 17:18:40.590197 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-rjdlk_ab5760f1-b2e0-4138-9383-e4827154ac50/kube-multus-additional-cni-plugins/2.log"
Feb 16 17:18:40.591644 master-0 kubenswrapper[4167]: I0216 17:18:40.591616 4167 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="dac15d7eada948ad47cd0f531b47c3157a1f1c84f8dd6641421d210893300fa3" exitCode=143
Feb 16 17:18:40.591693 master-0 kubenswrapper[4167]: I0216 17:18:40.591655 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"dac15d7eada948ad47cd0f531b47c3157a1f1c84f8dd6641421d210893300fa3"}
Feb 16 17:18:40.591913 master-0 kubenswrapper[4167]: I0216 17:18:40.591888 4167 scope.go:117] "RemoveContainer" containerID="dac15d7eada948ad47cd0f531b47c3157a1f1c84f8dd6641421d210893300fa3"
Feb 16 17:18:40.593593 master-0 kubenswrapper[4167]: I0216 17:18:40.593549 4167 generic.go:334] "Generic (PLEG): container finished" podID="78be97a3-18d1-4962-804f-372974dc8ccc" containerID="37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75" exitCode=0
Feb 16 17:18:40.593629 master-0 kubenswrapper[4167]: I0216 17:18:40.593617 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerDied","Data":"37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75"}
Feb 16 17:18:40.593881 master-0 kubenswrapper[4167]: I0216 17:18:40.593855 4167 scope.go:117] "RemoveContainer" containerID="37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75"
Feb 16 17:18:40.595838 master-0 kubenswrapper[4167]: I0216 17:18:40.595814 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-server-2ws9r_9c48005e-c4df-4332-87fc-ec028f2c6921/machine-config-server/3.log"
Feb 16 17:18:40.595878 master-0 kubenswrapper[4167]: I0216 17:18:40.595843 4167 generic.go:334] "Generic (PLEG): container finished" podID="9c48005e-c4df-4332-87fc-ec028f2c6921" containerID="bdc020491a9803f4c1dec5b73c04013b94c7a0d002b33673a1496e7041e90131" exitCode=2
Feb 16 17:18:40.595912 master-0 kubenswrapper[4167]: I0216 17:18:40.595885 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2ws9r" event={"ID":"9c48005e-c4df-4332-87fc-ec028f2c6921","Type":"ContainerDied","Data":"bdc020491a9803f4c1dec5b73c04013b94c7a0d002b33673a1496e7041e90131"}
Feb 16 17:18:40.596163 master-0 kubenswrapper[4167]: I0216 17:18:40.596122 4167 scope.go:117] "RemoveContainer" containerID="bdc020491a9803f4c1dec5b73c04013b94c7a0d002b33673a1496e7041e90131"
Feb 16 17:18:40.598047 master-0 kubenswrapper[4167]: I0216 17:18:40.598006 4167 generic.go:334] "Generic (PLEG): container finished" podID="ad805251-19d0-4d2f-b741-7d11158f1f03" containerID="b9259dd7380ae4bcf4ca64d09a9941e6cc688a7adfc8b3148ff68348868f4e9b" exitCode=0
Feb 16 17:18:40.598047 master-0 kubenswrapper[4167]: I0216 17:18:40.598037 4167 generic.go:334] "Generic (PLEG): container finished" podID="ad805251-19d0-4d2f-b741-7d11158f1f03" containerID="45579996dec5fbb8ca9e640c695bcfc916a457471eff47c2eaa4f92ca972cb1d" exitCode=0
Feb 16 17:18:40.598134 master-0 kubenswrapper[4167]: I0216 17:18:40.598057 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerDied","Data":"b9259dd7380ae4bcf4ca64d09a9941e6cc688a7adfc8b3148ff68348868f4e9b"}
Feb 16 17:18:40.598134 master-0 kubenswrapper[4167]: I0216 17:18:40.598077 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerDied","Data":"45579996dec5fbb8ca9e640c695bcfc916a457471eff47c2eaa4f92ca972cb1d"}
Feb 16 17:18:40.598334 master-0 kubenswrapper[4167]: I0216 17:18:40.598308 4167 scope.go:117] "RemoveContainer" containerID="45579996dec5fbb8ca9e640c695bcfc916a457471eff47c2eaa4f92ca972cb1d"
Feb 16 17:18:40.598334 master-0 kubenswrapper[4167]: I0216 17:18:40.598331 4167 scope.go:117] "RemoveContainer" containerID="b9259dd7380ae4bcf4ca64d09a9941e6cc688a7adfc8b3148ff68348868f4e9b"
Feb 16 17:18:40.599441 master-0 kubenswrapper[4167]: I0216 17:18:40.599408 4167 generic.go:334] "Generic (PLEG): container finished" podID="ee84198d-6357-4429-a90c-455c3850a788" containerID="7015900215e20506adcf5495345e6bd2ad2664eeb0994f9660d3953cd6ae7d87" exitCode=0
Feb 16 17:18:40.599441 master-0 kubenswrapper[4167]: I0216 17:18:40.599432 4167 generic.go:334] "Generic (PLEG): container finished" podID="ee84198d-6357-4429-a90c-455c3850a788" containerID="201add77b7fd1e26ee29767f3e4d6ce42612e17d1a1e501190daa73b0bf3ad68" exitCode=0
Feb 16 17:18:40.599523 master-0 kubenswrapper[4167]: I0216 17:18:40.599470 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerDied","Data":"7015900215e20506adcf5495345e6bd2ad2664eeb0994f9660d3953cd6ae7d87"}
Feb 16 17:18:40.599523 master-0 kubenswrapper[4167]: I0216 17:18:40.599485 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerDied","Data":"201add77b7fd1e26ee29767f3e4d6ce42612e17d1a1e501190daa73b0bf3ad68"}
Feb 16 17:18:40.599767 master-0 kubenswrapper[4167]: I0216 17:18:40.599743 4167 scope.go:117] "RemoveContainer" containerID="201add77b7fd1e26ee29767f3e4d6ce42612e17d1a1e501190daa73b0bf3ad68"
Feb 16 17:18:40.599767 master-0 kubenswrapper[4167]: I0216 17:18:40.599767 4167 scope.go:117] "RemoveContainer" containerID="7015900215e20506adcf5495345e6bd2ad2664eeb0994f9660d3953cd6ae7d87"
Feb 16 17:18:40.600795 master-0 kubenswrapper[4167]: I0216 17:18:40.600758 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-8256c_a94f9b8e-b020-4aab-8373-6c056ec07464/node-exporter/2.log"
Feb 16 17:18:40.601053 master-0 kubenswrapper[4167]: I0216 17:18:40.601025 4167 generic.go:334] "Generic (PLEG): container finished" podID="a94f9b8e-b020-4aab-8373-6c056ec07464" containerID="d48ed75d01835cb567fe69a84180dd706bf0eb3a5743df83cbcb5e81501a9b38" exitCode=143
Feb 16 17:18:40.601182 master-0 kubenswrapper[4167]: I0216 17:18:40.601060 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerDied","Data":"d48ed75d01835cb567fe69a84180dd706bf0eb3a5743df83cbcb5e81501a9b38"}
Feb 16 17:18:40.601398 master-0 kubenswrapper[4167]: I0216 17:18:40.601369 4167 scope.go:117] "RemoveContainer" containerID="d48ed75d01835cb567fe69a84180dd706bf0eb3a5743df83cbcb5e81501a9b38"
Feb 16 17:18:40.601398 master-0 kubenswrapper[4167]: I0216 17:18:40.601390 4167 scope.go:117] "RemoveContainer" containerID="9f85722a140cc9cccf5e480580bd91d292bda981ade94e3a57f3d684826a3098"
Feb 16 17:18:40.603584 master-0 kubenswrapper[4167]: I0216 17:18:40.603553 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-6bbd87b65b-mt2mz_06067627-6ccf-4cc8-bd20-dabdd776bb46/telemeter-client/2.log"
Feb 16 17:18:40.603584 master-0 kubenswrapper[4167]: I0216 17:18:40.603583 4167 generic.go:334] "Generic (PLEG): container finished" podID="06067627-6ccf-4cc8-bd20-dabdd776bb46" containerID="d35a143a0c6ed1ce7662763f0d9a66b362f68bbc7be23ea6be7a5c81173502df" exitCode=0
Feb 16 17:18:40.603684 master-0 kubenswrapper[4167]: I0216 17:18:40.603593 4167 generic.go:334] "Generic (PLEG): container finished" podID="06067627-6ccf-4cc8-bd20-dabdd776bb46" containerID="ee1025e9b4ffeb3f18a693ceb2d5bb7ea83a0ed3a79e2550a16a5a3521c2dd17" exitCode=0
Feb 16 17:18:40.603684 master-0 kubenswrapper[4167]: I0216 17:18:40.603601 4167 generic.go:334] "Generic (PLEG): container finished" podID="06067627-6ccf-4cc8-bd20-dabdd776bb46" containerID="ad16040839827af79b426b3a5b5b65f211c5b6adf024fe6140adb8c12a4da675" exitCode=2
Feb 16 17:18:40.603684 master-0 kubenswrapper[4167]: I0216 17:18:40.603634 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerDied","Data":"d35a143a0c6ed1ce7662763f0d9a66b362f68bbc7be23ea6be7a5c81173502df"}
Feb 16 17:18:40.603684 master-0 kubenswrapper[4167]: I0216 17:18:40.603648 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerDied","Data":"ee1025e9b4ffeb3f18a693ceb2d5bb7ea83a0ed3a79e2550a16a5a3521c2dd17"}
Feb 16 17:18:40.603684 master-0 kubenswrapper[4167]: I0216 17:18:40.603657 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerDied","Data":"ad16040839827af79b426b3a5b5b65f211c5b6adf024fe6140adb8c12a4da675"}
Feb 16 17:18:40.604071 master-0 kubenswrapper[4167]: I0216 17:18:40.604037 4167 scope.go:117] "RemoveContainer" containerID="ad16040839827af79b426b3a5b5b65f211c5b6adf024fe6140adb8c12a4da675"
Feb 16 17:18:40.604071 master-0 kubenswrapper[4167]: I0216 17:18:40.604064 4167 scope.go:117] "RemoveContainer" containerID="ee1025e9b4ffeb3f18a693ceb2d5bb7ea83a0ed3a79e2550a16a5a3521c2dd17"
Feb 16 17:18:40.604200 master-0 kubenswrapper[4167]: I0216 17:18:40.604076 4167 scope.go:117] "RemoveContainer" containerID="d35a143a0c6ed1ce7662763f0d9a66b362f68bbc7be23ea6be7a5c81173502df"
Feb 16 17:18:40.605225 master-0 kubenswrapper[4167]: I0216 17:18:40.605193 4167 generic.go:334] "Generic (PLEG): container finished" podID="442600dc-09b2-4fee-9f89-777296b2ee40" containerID="eaa418d7f6a36f58cde0a1f71f41979582dd4e652c978199a1440c9abf983aef" exitCode=0
Feb 16 17:18:40.605289 master-0 kubenswrapper[4167]: I0216 17:18:40.605245 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerDied","Data":"eaa418d7f6a36f58cde0a1f71f41979582dd4e652c978199a1440c9abf983aef"}
Feb 16 17:18:40.606130 master-0 kubenswrapper[4167]: I0216 17:18:40.606100 4167 scope.go:117] "RemoveContainer" containerID="eaa418d7f6a36f58cde0a1f71f41979582dd4e652c978199a1440c9abf983aef"
Feb 16 17:18:40.607338 master-0 kubenswrapper[4167]: I0216 17:18:40.607301 4167 patch_prober.go:28] interesting pod/etcd-master-0 container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused" start-of-body=
Feb 16 17:18:40.607401 master-0 kubenswrapper[4167]: I0216 17:18:40.607338 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-master-0" podUID="7adecad495595c43c57c30abd350e987" containerName="etcd" probeResult="failure" output="Get \"https://192.168.32.10:9980/readyz\": dial tcp 192.168.32.10:9980: connect: connection refused"
Feb 16 17:18:40.608661 master-0 kubenswrapper[4167]: I0216 17:18:40.608615 4167 generic.go:334] "Generic (PLEG): container finished" podID="0393fe12-2533-4c9c-a8e4-a58003c88f36" containerID="066bdd4f20e0a6244f992ccc6dcdaa26b8b14420e27b885c9487a8ceb4e614b3" exitCode=0
Feb 16 17:18:40.608731 master-0 kubenswrapper[4167]: I0216 17:18:40.608681 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerDied","Data":"066bdd4f20e0a6244f992ccc6dcdaa26b8b14420e27b885c9487a8ceb4e614b3"}
Feb 16 17:18:40.609295 master-0 kubenswrapper[4167]: I0216 17:18:40.609264 4167 scope.go:117] "RemoveContainer" containerID="066bdd4f20e0a6244f992ccc6dcdaa26b8b14420e27b885c9487a8ceb4e614b3"
Feb 16 17:18:40.611314 master-0 kubenswrapper[4167]: I0216 17:18:40.611231 4167 generic.go:334] "Generic (PLEG): container finished" podID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerID="d3f14f6eb35d23eb8319deb391de3fca7798f82dbc6d17e8ad6ff98c43a1d058" exitCode=0
Feb 16 17:18:40.611360 master-0 kubenswrapper[4167]: I0216 17:18:40.611309 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerDied","Data":"d3f14f6eb35d23eb8319deb391de3fca7798f82dbc6d17e8ad6ff98c43a1d058"}
Feb 16 17:18:40.612083 master-0 kubenswrapper[4167]: I0216 17:18:40.612041 4167 scope.go:117] "RemoveContainer" containerID="d3f14f6eb35d23eb8319deb391de3fca7798f82dbc6d17e8ad6ff98c43a1d058"
Feb 16 17:18:40.617665 master-0 kubenswrapper[4167]: I0216 17:18:40.617632 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-cb4f7b4cf-6qrw5_c2511146-1d04-4ecd-a28e-79662ef7b9d3/insights-operator/4.log"
Feb 16 17:18:40.617727 master-0 kubenswrapper[4167]: I0216 17:18:40.617673 4167 generic.go:334] "Generic (PLEG): container finished" podID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" containerID="d71c0d414d7ee8db843bc4ce26b9d372fabc4cab276f09113b6084d58b461806" exitCode=2
Feb 16 17:18:40.617824 master-0 kubenswrapper[4167]: I0216 17:18:40.617733 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerDied","Data":"d71c0d414d7ee8db843bc4ce26b9d372fabc4cab276f09113b6084d58b461806"}
Feb 16 17:18:40.618096 master-0 kubenswrapper[4167]: I0216 17:18:40.618063 4167 scope.go:117] "RemoveContainer" containerID="d71c0d414d7ee8db843bc4ce26b9d372fabc4cab276f09113b6084d58b461806"
Feb 16 17:18:40.619361 master-0 kubenswrapper[4167]: I0216 17:18:40.619327 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-84976bb859-rsnqc_f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/machine-config-operator/2.log"
Feb 16 17:18:40.619361 master-0 kubenswrapper[4167]: I0216 17:18:40.619361 4167 generic.go:334] "Generic (PLEG): container finished" podID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" containerID="3578e7c097759d128de9a7cdaa82fbdaa554fe1149f249c339609e5acacc377e" exitCode=0
Feb 16 17:18:40.619493 master-0 kubenswrapper[4167]: I0216 17:18:40.619376 4167 generic.go:334] "Generic (PLEG): container finished" podID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" containerID="a4a81494e8a1f9b6da2ca38e538dbd7b036da6ec2d128354a9233f983cc05bd7" exitCode=2
Feb 16 17:18:40.619493 master-0 kubenswrapper[4167]: I0216 17:18:40.619425 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerDied","Data":"3578e7c097759d128de9a7cdaa82fbdaa554fe1149f249c339609e5acacc377e"}
Feb 16 17:18:40.619493 master-0 kubenswrapper[4167]: I0216 17:18:40.619445 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerDied","Data":"a4a81494e8a1f9b6da2ca38e538dbd7b036da6ec2d128354a9233f983cc05bd7"}
Feb 16 17:18:40.619763 master-0 kubenswrapper[4167]: I0216 17:18:40.619731 4167 scope.go:117] "RemoveContainer" containerID="a4a81494e8a1f9b6da2ca38e538dbd7b036da6ec2d128354a9233f983cc05bd7"
Feb 16 17:18:40.619763 master-0 kubenswrapper[4167]: I0216 17:18:40.619759 4167 scope.go:117] "RemoveContainer" containerID="3578e7c097759d128de9a7cdaa82fbdaa554fe1149f249c339609e5acacc377e"
Feb 16 17:18:40.621252 master-0 kubenswrapper[4167]: I0216 17:18:40.621222 4167 generic.go:334] "Generic (PLEG): container finished" podID="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" containerID="69978caa7c6df44b4132b190aa96ee90fadd8b7bb9d61222abed2f640a709a92" exitCode=0
Feb 16 17:18:40.621307 master-0 kubenswrapper[4167]: I0216 17:18:40.621281 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" event={"ID":"b6ad958f-25e4-40cb-89ec-5da9cb6395c7","Type":"ContainerDied","Data":"69978caa7c6df44b4132b190aa96ee90fadd8b7bb9d61222abed2f640a709a92"}
Feb 16 17:18:40.621553 master-0 kubenswrapper[4167]: I0216 17:18:40.621525 4167 scope.go:117] "RemoveContainer" containerID="69978caa7c6df44b4132b190aa96ee90fadd8b7bb9d61222abed2f640a709a92"
Feb 16 17:18:40.623086 master-0 kubenswrapper[4167]: I0216 17:18:40.623057 4167 generic.go:334] "Generic (PLEG): container finished" podID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" containerID="605f4d37dca5a7b7cb782ef97f0d0cf40cf83a6f18a4a48dec6e2d525f6967fe" exitCode=0
Feb 16 17:18:40.623163 master-0 kubenswrapper[4167]: I0216 17:18:40.623107 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-dhhfh" event={"ID":"08a90dc5-b0d8-4aad-a002-736492b6c1a9","Type":"ContainerDied","Data":"605f4d37dca5a7b7cb782ef97f0d0cf40cf83a6f18a4a48dec6e2d525f6967fe"}
Feb 16 17:18:40.623390 master-0 kubenswrapper[4167]: I0216 17:18:40.623365 4167 scope.go:117] "RemoveContainer" containerID="605f4d37dca5a7b7cb782ef97f0d0cf40cf83a6f18a4a48dec6e2d525f6967fe"
Feb 16 17:18:40.624471 master-0 kubenswrapper[4167]: I0216 17:18:40.624441 4167 generic.go:334] "Generic (PLEG): container finished" podID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" containerID="82f814829818865ca6f170f3bafba2be0d3e2f523200306ddbd0fd3eb4fe4e96" exitCode=0
Feb 16 17:18:40.624510 master-0 kubenswrapper[4167]: I0216 17:18:40.624490 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerDied","Data":"82f814829818865ca6f170f3bafba2be0d3e2f523200306ddbd0fd3eb4fe4e96"}
Feb 16 17:18:40.624741 master-0 kubenswrapper[4167]: I0216 17:18:40.624721 4167 scope.go:117] "RemoveContainer" containerID="82f814829818865ca6f170f3bafba2be0d3e2f523200306ddbd0fd3eb4fe4e96"
Feb 16 17:18:40.626404 master-0 kubenswrapper[4167]: I0216 17:18:40.626379 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/2.log"
Feb 16 17:18:40.626875 master-0 kubenswrapper[4167]: I0216 17:18:40.626847 4167 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93" exitCode=0
Feb 16 17:18:40.626875 master-0 kubenswrapper[4167]: I0216 17:18:40.626865 4167 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56" exitCode=2
Feb 16 17:18:40.626981 master-0 kubenswrapper[4167]: I0216 17:18:40.626900 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93"}
Feb 16 17:18:40.626981 master-0 kubenswrapper[4167]: I0216 17:18:40.626918 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56"}
Feb 16 17:18:40.627218 master-0 kubenswrapper[4167]: I0216 17:18:40.627188 4167 scope.go:117] "RemoveContainer" containerID="01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56"
Feb 16 17:18:40.627218 master-0 kubenswrapper[4167]: I0216 17:18:40.627207 4167 scope.go:117] "RemoveContainer" containerID="35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93"
Feb 16 17:18:40.629006 master-0 kubenswrapper[4167]: I0216 17:18:40.628978 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerDied","Data":"0c1926d5165564b892069993c2da74e652f04997f279c72560669a0edc4edc9e"}
Feb 16 17:18:40.629060 master-0 kubenswrapper[4167]: I0216 17:18:40.628975 4167 generic.go:334] "Generic (PLEG): container finished" podID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" containerID="0c1926d5165564b892069993c2da74e652f04997f279c72560669a0edc4edc9e" exitCode=0
Feb 16 17:18:40.629060 master-0 kubenswrapper[4167]: I0216 17:18:40.629028 4167 generic.go:334] "Generic (PLEG): container finished" podID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" containerID="6b62b01d85b7699e5e556ac71a310845e6b50e4710b03945dccf0e7bc14e56ea" exitCode=0
Feb 16 17:18:40.629165 master-0 kubenswrapper[4167]: I0216 17:18:40.629113 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerDied","Data":"6b62b01d85b7699e5e556ac71a310845e6b50e4710b03945dccf0e7bc14e56ea"}
Feb 16 17:18:40.629278 master-0 kubenswrapper[4167]: I0216 17:18:40.629251 4167 scope.go:117] "RemoveContainer" containerID="6b62b01d85b7699e5e556ac71a310845e6b50e4710b03945dccf0e7bc14e56ea"
Feb 16 17:18:40.629278 master-0 kubenswrapper[4167]: I0216 17:18:40.629275 4167 scope.go:117] "RemoveContainer" containerID="0c1926d5165564b892069993c2da74e652f04997f279c72560669a0edc4edc9e"
Feb 16 17:18:40.630525 master-0 kubenswrapper[4167]: I0216 17:18:40.630483 4167 generic.go:334] "Generic (PLEG): container finished" podID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" containerID="43c5a005dd7a25001bb9feb6c2a1407ea809f5d7df385d399f64fc0554e23523" exitCode=0
Feb 16 17:18:40.630593 master-0 kubenswrapper[4167]: I0216 17:18:40.630549 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerDied","Data":"43c5a005dd7a25001bb9feb6c2a1407ea809f5d7df385d399f64fc0554e23523"}
Feb 16 17:18:40.631055 master-0 kubenswrapper[4167]: I0216 17:18:40.631019 4167 scope.go:117] "RemoveContainer" containerID="43c5a005dd7a25001bb9feb6c2a1407ea809f5d7df385d399f64fc0554e23523"
Feb 16 17:18:40.631867 master-0 kubenswrapper[4167]: I0216 17:18:40.631833 4167 generic.go:334] "Generic (PLEG): container finished" podID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" containerID="e1c8414590ed2fbc23be76e18fa26d224f487dcbc205c4862bf9e762f7a9b956" exitCode=0
Feb 16 17:18:40.631926 master-0 kubenswrapper[4167]: I0216 17:18:40.631890 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerDied","Data":"e1c8414590ed2fbc23be76e18fa26d224f487dcbc205c4862bf9e762f7a9b956"}
Feb 16 17:18:40.632287 master-0 kubenswrapper[4167]: I0216 17:18:40.632246 4167 scope.go:117] "RemoveContainer" containerID="e1c8414590ed2fbc23be76e18fa26d224f487dcbc205c4862bf9e762f7a9b956"
Feb 16 17:18:40.633322 master-0 kubenswrapper[4167]: I0216 17:18:40.633292 4167 generic.go:334] "Generic (PLEG): container finished" podID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" containerID="a5daed247e3ea86dc7a10458a8c0ff11ac31e756522086c2ac98a2f1125b561c" exitCode=0
Feb 16 17:18:40.633322 master-0 kubenswrapper[4167]: I0216 17:18:40.633313 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerDied","Data":"a5daed247e3ea86dc7a10458a8c0ff11ac31e756522086c2ac98a2f1125b561c"}
Feb 16 17:18:40.633655 master-0 kubenswrapper[4167]: I0216 17:18:40.633611 4167 scope.go:117] "RemoveContainer" containerID="a5daed247e3ea86dc7a10458a8c0ff11ac31e756522086c2ac98a2f1125b561c"
Feb 16 17:18:40.637634 master-0 kubenswrapper[4167]: I0216 17:18:40.637342 4167 generic.go:334] "Generic (PLEG): container finished" podID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerID="6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256" exitCode=0
Feb 16 17:18:40.637634 master-0 kubenswrapper[4167]: I0216 17:18:40.637427 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerDied","Data":"6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256"}
Feb 16 17:18:40.638000 master-0 kubenswrapper[4167]: I0216 17:18:40.637948 4167 scope.go:117] "RemoveContainer" containerID="6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256"
Feb 16 17:18:40.642266 master-0 kubenswrapper[4167]: I0216 17:18:40.642228 4167 generic.go:334] "Generic (PLEG): container finished" podID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" containerID="7d8809cdf4d92b85ae754a964638d7b2833c5008e5808250e883e4568bdc6480" exitCode=0
Feb 16 17:18:40.642363 master-0 kubenswrapper[4167]: I0216 17:18:40.642299 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerDied","Data":"7d8809cdf4d92b85ae754a964638d7b2833c5008e5808250e883e4568bdc6480"}
Feb 16 17:18:40.642926 master-0 kubenswrapper[4167]: I0216 17:18:40.642853 4167 scope.go:117] "RemoveContainer" containerID="7d8809cdf4d92b85ae754a964638d7b2833c5008e5808250e883e4568bdc6480"
Feb 16 17:18:40.646752 master-0 kubenswrapper[4167]: I0216 17:18:40.646722 4167 generic.go:334] "Generic (PLEG): container finished" podID="d020c902-2adb-4919-8dd9-0c2109830580" containerID="1b593f959c471380eec535372fc7c882c7401dbf006c1a4484b0dd77ca11bb58" exitCode=0
Feb 16 17:18:40.646805 master-0 kubenswrapper[4167]: I0216 17:18:40.646791 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerDied","Data":"1b593f959c471380eec535372fc7c882c7401dbf006c1a4484b0dd77ca11bb58"}
Feb 16 17:18:40.647251 master-0 kubenswrapper[4167]: I0216 17:18:40.647217 4167 scope.go:117] "RemoveContainer" containerID="1b593f959c471380eec535372fc7c882c7401dbf006c1a4484b0dd77ca11bb58"
Feb 16 17:18:40.648656 master-0 kubenswrapper[4167]: I0216 17:18:40.648628 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/etcd-rev/1.log"
Feb 16 17:18:40.649603 master-0 kubenswrapper[4167]: I0216 17:18:40.649564 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/etcd-metrics/1.log"
Feb 16 17:18:40.651473 master-0 kubenswrapper[4167]: I0216 17:18:40.651400 4167 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="aedfff1b9149a45317431b294572c56015ded146f7b3ff9ec7a263bc47a383b3" exitCode=2
Feb 16 17:18:40.651473 master-0 kubenswrapper[4167]: I0216 17:18:40.651422 4167 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="04067ed4187486197b26e1ae13951b566ae8dc6eabd9679686fe0234c3137a4b" exitCode=0
Feb 16 17:18:40.651473 master-0 kubenswrapper[4167]: I0216 17:18:40.651432 4167 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="482b77f255da8dbdc1be0e1707334e04261602c426501a77264ce29439b9c9bd" exitCode=2
Feb 16 17:18:40.651473 master-0 kubenswrapper[4167]: I0216 17:18:40.651443 4167 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="c7588158dd7d7dabaa4f447c2b9bdd6aa5e276f1eca5073f8e69a9eb9b31cfba" exitCode=0
Feb 16 17:18:40.651652 master-0 kubenswrapper[4167]: I0216 17:18:40.651474 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"aedfff1b9149a45317431b294572c56015ded146f7b3ff9ec7a263bc47a383b3"}
Feb 16 17:18:40.651652 master-0 kubenswrapper[4167]: I0216 17:18:40.651519 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"04067ed4187486197b26e1ae13951b566ae8dc6eabd9679686fe0234c3137a4b"}
Feb 16 17:18:40.651652 master-0 kubenswrapper[4167]: I0216 17:18:40.651534 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"482b77f255da8dbdc1be0e1707334e04261602c426501a77264ce29439b9c9bd"}
Feb 16 17:18:40.651652 master-0 kubenswrapper[4167]: I0216 17:18:40.651545 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"c7588158dd7d7dabaa4f447c2b9bdd6aa5e276f1eca5073f8e69a9eb9b31cfba"}
Feb 16 17:18:40.652268 master-0 kubenswrapper[4167]: I0216 17:18:40.652180 4167 scope.go:117] "RemoveContainer" containerID="c7588158dd7d7dabaa4f447c2b9bdd6aa5e276f1eca5073f8e69a9eb9b31cfba"
Feb 16 17:18:40.652746 master-0 kubenswrapper[4167]: I0216 17:18:40.652715 4167 scope.go:117] "RemoveContainer" containerID="482b77f255da8dbdc1be0e1707334e04261602c426501a77264ce29439b9c9bd"
Feb 16 17:18:40.652746 master-0 kubenswrapper[4167]: I0216 17:18:40.652742 4167 scope.go:117] "RemoveContainer" containerID="04067ed4187486197b26e1ae13951b566ae8dc6eabd9679686fe0234c3137a4b"
Feb 16 17:18:40.652839 master-0 kubenswrapper[4167]: I0216 17:18:40.652755 4167 scope.go:117] "RemoveContainer" containerID="aedfff1b9149a45317431b294572c56015ded146f7b3ff9ec7a263bc47a383b3"
Feb 16 17:18:40.669669 master-0 kubenswrapper[4167]: I0216 17:18:40.669620 4167 generic.go:334] "Generic (PLEG): container finished" podID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerID="a7283ea35678c0dc94d8e5f0d4d0c3ed9937a2df41677781fb1944f77ea6f01e" exitCode=0
Feb 16 17:18:40.669669 master-0 kubenswrapper[4167]: I0216 17:18:40.669661 4167 generic.go:334] "Generic (PLEG): container finished" podID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerID="17104543b9da3593ba424383349543390a7b869403edf6ab8bc4ba2652888980" exitCode=0
Feb 16 17:18:40.669835 master-0 kubenswrapper[4167]: I0216 17:18:40.669705 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerDied","Data":"a7283ea35678c0dc94d8e5f0d4d0c3ed9937a2df41677781fb1944f77ea6f01e"}
Feb 16 17:18:40.669835 master-0 kubenswrapper[4167]: I0216 17:18:40.669733 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerDied","Data":"17104543b9da3593ba424383349543390a7b869403edf6ab8bc4ba2652888980"}
Feb 16 17:18:40.670621 master-0 kubenswrapper[4167]: I0216 17:18:40.670584 4167 scope.go:117] "RemoveContainer" containerID="68f33191bbdd9baf1095b1769d81979c4da11a7d920a2c849ce21980a92a7ecd"
Feb 16 17:18:40.670764 master-0 kubenswrapper[4167]: I0216 17:18:40.670731 4167 scope.go:117] "RemoveContainer" containerID="a7283ea35678c0dc94d8e5f0d4d0c3ed9937a2df41677781fb1944f77ea6f01e"
Feb 16 17:18:40.670764 master-0 kubenswrapper[4167]: I0216 17:18:40.670757 4167 scope.go:117] "RemoveContainer" containerID="17104543b9da3593ba424383349543390a7b869403edf6ab8bc4ba2652888980"
Feb 16 17:18:40.677411 master-0 kubenswrapper[4167]: I0216 17:18:40.677371 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-diagnostics_network-check-target-vwvwx_c303189e-adae-4fe2-8dd7-cc9b80f73e66/network-check-target-container/2.log"
Feb 16 17:18:40.677569 master-0 kubenswrapper[4167]: I0216 17:18:40.677416 4167 generic.go:334] "Generic (PLEG): container finished" podID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" containerID="9aa1b15d60f99d2cf55f79f2c375c36924f7525007528026162ba065a295f718" exitCode=2
Feb 16 17:18:40.677569 master-0 kubenswrapper[4167]: I0216 17:18:40.677475 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerDied","Data":"9aa1b15d60f99d2cf55f79f2c375c36924f7525007528026162ba065a295f718"}
Feb 16 17:18:40.678027 master-0 kubenswrapper[4167]: I0216 17:18:40.677938 4167 scope.go:117] "RemoveContainer" containerID="9aa1b15d60f99d2cf55f79f2c375c36924f7525007528026162ba065a295f718"
Feb 16 17:18:40.679782 master-0 kubenswrapper[4167]: I0216 17:18:40.679735 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-check-endpoints/0.log"
Feb 16 17:18:40.680715 master-0 kubenswrapper[4167]: I0216 17:18:40.680679 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-cert-syncer/0.log"
Feb 16 17:18:40.681304 master-0 kubenswrapper[4167]: I0216 17:18:40.681266 4167 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793" exitCode=0
Feb 16 17:18:40.681304 master-0 kubenswrapper[4167]: I0216 17:18:40.681291 4167 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48" exitCode=0
Feb 16 17:18:40.681304 master-0 kubenswrapper[4167]: I0216 17:18:40.681298 4167 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1" exitCode=2
Feb 16 17:18:40.681509 master-0 kubenswrapper[4167]: I0216 17:18:40.681331 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793"}
Feb 16 17:18:40.681509 master-0 kubenswrapper[4167]: I0216 17:18:40.681353 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48"}
Feb 16 17:18:40.681509 master-0 kubenswrapper[4167]: I0216 17:18:40.681364 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1"}
Feb 16 17:18:40.681768 master-0 kubenswrapper[4167]: I0216 17:18:40.681752 4167 scope.go:117] "RemoveContainer" containerID="1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1"
Feb 16 17:18:40.681843 master-0 kubenswrapper[4167]: I0216 17:18:40.681783 4167 scope.go:117] "RemoveContainer" containerID="25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48"
Feb 16 17:18:40.681843 master-0 kubenswrapper[4167]: I0216 17:18:40.681792 4167 scope.go:117] "RemoveContainer" containerID="861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793"
Feb 16 17:18:40.684227 master-0 kubenswrapper[4167]: I0216 17:18:40.684183 4167 generic.go:334] "Generic (PLEG): container finished" podID="737fcc7d-d850-4352-9f17-383c85d5bc28" containerID="8142c200a6b49fc3925f8e8e01c16d1d663fb018383ce7fe3fa9041f33cc3eba" exitCode=0
Feb 16 17:18:40.684450 master-0 kubenswrapper[4167]: I0216 17:18:40.684251 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerDied","Data":"8142c200a6b49fc3925f8e8e01c16d1d663fb018383ce7fe3fa9041f33cc3eba"}
Feb 16 17:18:40.684597 master-0 kubenswrapper[4167]: I0216 17:18:40.684563 4167 scope.go:117] "RemoveContainer" containerID="8142c200a6b49fc3925f8e8e01c16d1d663fb018383ce7fe3fa9041f33cc3eba"
Feb 16 17:18:40.686340 master-0 kubenswrapper[4167]: I0216 17:18:40.686138 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7cc9598d54-8j5rk_55d635cd-1f0d-4086-96f2-9f3524f3f18c/kube-state-metrics/2.log"
Feb 16 17:18:40.686340 master-0 kubenswrapper[4167]: I0216 17:18:40.686332 4167 generic.go:334] "Generic (PLEG): container finished" podID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" containerID="bf55a464d01636c344c4211041c9e4a6cbe20bd14450c7a3003fe8c18c5fc450" exitCode=2
Feb 16 17:18:40.686524 master-0 kubenswrapper[4167]: I0216 17:18:40.686413 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerDied","Data":"bf55a464d01636c344c4211041c9e4a6cbe20bd14450c7a3003fe8c18c5fc450"}
Feb 16 17:18:40.686808 master-0 kubenswrapper[4167]: I0216 17:18:40.686776 4167 scope.go:117] "RemoveContainer" containerID="bf55a464d01636c344c4211041c9e4a6cbe20bd14450c7a3003fe8c18c5fc450"
Feb 16 17:18:40.686808 master-0 kubenswrapper[4167]: I0216 17:18:40.686798 4167 scope.go:117] "RemoveContainer" containerID="0d0daa3f24697660fe022e8196e98f3bdaaa8ca58ffaa8746bbffff1ded535e4"
Feb 16 17:18:40.686808 master-0 kubenswrapper[4167]: I0216 17:18:40.686808 4167 scope.go:117] "RemoveContainer" containerID="014df18f96d4689cdbe5b5e5f610e110c485c44cf2bfa273b7618be6223ab8da"
Feb 16 17:18:40.688729 master-0 kubenswrapper[4167]: I0216 17:18:40.688698 4167 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="9cac97f2a7ed5b660f1fe9defbb77e1823cc7917bedfcd9a1ee2cf3d27a5413c" exitCode=0
Feb 16 17:18:40.688872 master-0 kubenswrapper[4167]: I0216 17:18:40.688750 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"9cac97f2a7ed5b660f1fe9defbb77e1823cc7917bedfcd9a1ee2cf3d27a5413c"}
Feb 16 17:18:40.689038 master-0 kubenswrapper[4167]: I0216 17:18:40.689016 4167 scope.go:117] "RemoveContainer" containerID="9cac97f2a7ed5b660f1fe9defbb77e1823cc7917bedfcd9a1ee2cf3d27a5413c"
Feb 16 17:18:40.698094 master-0 kubenswrapper[4167]: I0216 17:18:40.698047 4167 generic.go:334] "Generic (PLEG): container finished" podID="d1524fc1-d157-435a-8bf8-7e877c45909d" containerID="647f8e57100443092b8c3ff37a546c586b56745048e6f03ef72ab7b78b1506b2" exitCode=0
Feb 16 17:18:40.698094 master-0 kubenswrapper[4167]: I0216 17:18:40.698077 4167 generic.go:334] "Generic (PLEG): container finished" podID="d1524fc1-d157-435a-8bf8-7e877c45909d" containerID="6ecf5325ac595a4e389f176c15d5d6f6e061e84d5c7efa627d5409dfc8280c18" exitCode=0
Feb 16 17:18:40.698293 master-0 kubenswrapper[4167]: I0216 17:18:40.698118 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerDied","Data":"647f8e57100443092b8c3ff37a546c586b56745048e6f03ef72ab7b78b1506b2"}
Feb 16 17:18:40.698293 master-0 kubenswrapper[4167]: I0216 17:18:40.698145 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerDied","Data":"6ecf5325ac595a4e389f176c15d5d6f6e061e84d5c7efa627d5409dfc8280c18"}
Feb 16 17:18:40.698779 master-0 kubenswrapper[4167]: I0216 17:18:40.698743 4167 scope.go:117] "RemoveContainer" containerID="6ecf5325ac595a4e389f176c15d5d6f6e061e84d5c7efa627d5409dfc8280c18"
Feb 16 17:18:40.698779 master-0 kubenswrapper[4167]: I0216 17:18:40.698768 4167 scope.go:117] "RemoveContainer" containerID="647f8e57100443092b8c3ff37a546c586b56745048e6f03ef72ab7b78b1506b2"
Feb 16 17:18:40.701084 master-0 kubenswrapper[4167]: I0216 17:18:40.700854 4167 generic.go:334] "Generic (PLEG): container finished" podID="5a275679-b7b6-4c28-b389-94cd2b014d6c" containerID="61d87b339037e31a6b6263a1f9104ac8668a66669fa4343e3d898a074bb31102" exitCode=0
Feb 16 17:18:40.701084 master-0 kubenswrapper[4167]: I0216 17:18:40.700918 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" event={"ID":"5a275679-b7b6-4c28-b389-94cd2b014d6c","Type":"ContainerDied","Data":"61d87b339037e31a6b6263a1f9104ac8668a66669fa4343e3d898a074bb31102"}
Feb 16 17:18:40.701265 master-0 kubenswrapper[4167]: I0216 17:18:40.701188 4167 scope.go:117] "RemoveContainer" containerID="61d87b339037e31a6b6263a1f9104ac8668a66669fa4343e3d898a074bb31102"
Feb 16 17:18:40.704147 master-0 kubenswrapper[4167]: I0216 17:18:40.703334 4167 generic.go:334] "Generic (PLEG): container finished" podID="0517b180-00ee-47fe-a8e7-36a3931b7e72" containerID="186fd3f0db823e5328c89f464f7fcd6084fe37b5dd8507f965b2e41f003c7c49" exitCode=0
Feb 16 17:18:40.704147 master-0 kubenswrapper[4167]: I0216 17:18:40.703405 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerDied","Data":"186fd3f0db823e5328c89f464f7fcd6084fe37b5dd8507f965b2e41f003c7c49"}
Feb 16 17:18:40.705369 master-0 kubenswrapper[4167]: I0216 17:18:40.704688 4167 scope.go:117] "RemoveContainer" containerID="186fd3f0db823e5328c89f464f7fcd6084fe37b5dd8507f965b2e41f003c7c49"
Feb 16 17:18:40.708020 master-0 kubenswrapper[4167]: I0216 17:18:40.706507 4167 generic.go:334] "Generic (PLEG): container finished" podID="970d4376-f299-412c-a8ee-90aa980c689e" containerID="a0d06520735ed0b60c5bb08d65bc6702e8532961f3a95e0c641eb0fffcd1225a" exitCode=0
Feb 16 17:18:40.708020 master-0 kubenswrapper[4167]: I0216 17:18:40.706579 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" event={"ID":"970d4376-f299-412c-a8ee-90aa980c689e","Type":"ContainerDied","Data":"a0d06520735ed0b60c5bb08d65bc6702e8532961f3a95e0c641eb0fffcd1225a"}
Feb 16 17:18:40.708020 master-0 kubenswrapper[4167]: I0216 17:18:40.707101 4167 scope.go:117] "RemoveContainer" containerID="a0d06520735ed0b60c5bb08d65bc6702e8532961f3a95e0c641eb0fffcd1225a"
Feb 16 17:18:40.711748 master-0 kubenswrapper[4167]: I0216 17:18:40.711696 4167 generic.go:334] "Generic (PLEG): container finished" podID="48801344-a48a-493e-aea4-19d998d0b708" containerID="2ca6b95433456156a7d26505482fe45c8815f83af6b492554b24c8ab621388ee" exitCode=0
Feb 16 17:18:40.711860 master-0 kubenswrapper[4167]: I0216 17:18:40.711782 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" event={"ID":"48801344-a48a-493e-aea4-19d998d0b708","Type":"ContainerDied","Data":"2ca6b95433456156a7d26505482fe45c8815f83af6b492554b24c8ab621388ee"}
Feb 16 17:18:40.712359 master-0 kubenswrapper[4167]: I0216 17:18:40.712324 4167 scope.go:117] "RemoveContainer" containerID="2ca6b95433456156a7d26505482fe45c8815f83af6b492554b24c8ab621388ee"
Feb 16 17:18:40.715050 master-0 kubenswrapper[4167]: I0216 17:18:40.714941 4167 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123" exitCode=0
Feb 16 17:18:40.715050 master-0 kubenswrapper[4167]: I0216 17:18:40.715042 4167 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4" exitCode=0
Feb 16 17:18:40.715167 master-0 kubenswrapper[4167]: I0216 17:18:40.715104 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123"}
Feb 16 17:18:40.715167 master-0 kubenswrapper[4167]: I0216 17:18:40.715148 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4"}
Feb 16 17:18:40.715542 master-0 kubenswrapper[4167]: I0216 17:18:40.715506 4167 scope.go:117] "RemoveContainer" containerID="62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123"
Feb 16 17:18:40.715542 master-0 kubenswrapper[4167]: I0216 17:18:40.715528 4167 scope.go:117] "RemoveContainer" containerID="aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4"
Feb 16 17:18:40.717298 master-0 kubenswrapper[4167]: I0216 17:18:40.717079 4167 generic.go:334] "Generic (PLEG): container finished" podID="9609a4f3-b947-47af-a685-baae26c50fa3" containerID="8be46334fb66fe6506d2685575256c509d602c3155b7997ac9e303d0fc33bfa7" exitCode=0
Feb 16 17:18:40.717298 master-0 kubenswrapper[4167]: I0216 17:18:40.717112 4167 generic.go:334] "Generic (PLEG): container finished" podID="9609a4f3-b947-47af-a685-baae26c50fa3" containerID="2d47112fd42c5255c0ee3f609db46766e89e9137b3690ef0ab34d4341b2caa25" exitCode=0
Feb 16 17:18:40.717298 master-0 kubenswrapper[4167]: I0216 17:18:40.717188 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerDied","Data":"8be46334fb66fe6506d2685575256c509d602c3155b7997ac9e303d0fc33bfa7"}
Feb 16 17:18:40.717298 master-0 kubenswrapper[4167]: I0216 17:18:40.717216 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerDied","Data":"2d47112fd42c5255c0ee3f609db46766e89e9137b3690ef0ab34d4341b2caa25"}
Feb 16 17:18:40.717609 master-0 kubenswrapper[4167]: I0216 17:18:40.717575 4167 scope.go:117] "RemoveContainer" containerID="2d47112fd42c5255c0ee3f609db46766e89e9137b3690ef0ab34d4341b2caa25"
Feb 16 17:18:40.717653 master-0 kubenswrapper[4167]: I0216 17:18:40.717609 4167 scope.go:117] "RemoveContainer" containerID="8be46334fb66fe6506d2685575256c509d602c3155b7997ac9e303d0fc33bfa7"
Feb 16 17:18:40.720179 master-0 kubenswrapper[4167]: I0216 17:18:40.720150 4167 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="127a39bb4a42f5db86f2d69dcf6b90ad653286c64c989311c25e1215aca40901" exitCode=0
Feb 16 17:18:40.720238 master-0 kubenswrapper[4167]: I0216 17:18:40.720209 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"127a39bb4a42f5db86f2d69dcf6b90ad653286c64c989311c25e1215aca40901"}
Feb 16 17:18:40.721014 master-0 kubenswrapper[4167]: I0216 17:18:40.720942 4167 scope.go:117] "RemoveContainer" containerID="127a39bb4a42f5db86f2d69dcf6b90ad653286c64c989311c25e1215aca40901"
Feb 16 17:18:40.722673 master-0 kubenswrapper[4167]: I0216 17:18:40.722624 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-74b6595c6d-pfzq2_80d3b238-70c3-4e71-96a1-99405352033f/snapshot-controller/2.log"
Feb 16 17:18:40.722746 master-0 kubenswrapper[4167]: I0216 17:18:40.722710 4167 generic.go:334] "Generic (PLEG): container finished" podID="80d3b238-70c3-4e71-96a1-99405352033f" containerID="e0d85abf94603412016468a1a7df04b05545e2fdd348a9efcb6d6e8215a730c2" exitCode=2
Feb 16 17:18:40.722863 master-0 kubenswrapper[4167]: I0216 17:18:40.722830 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerDied","Data":"e0d85abf94603412016468a1a7df04b05545e2fdd348a9efcb6d6e8215a730c2"}
Feb 16 17:18:40.723640 master-0 kubenswrapper[4167]: I0216 17:18:40.723589 4167 scope.go:117] "RemoveContainer" containerID="e0d85abf94603412016468a1a7df04b05545e2fdd348a9efcb6d6e8215a730c2"
Feb 16 17:18:40.725394 master-0 kubenswrapper[4167]: I0216 17:18:40.725362 4167 generic.go:334] "Generic (PLEG): container finished" podID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" containerID="0135d939d70cc6c0a0f7a61b3043a49448731eaa91251064c3b28918d59425ed" exitCode=0
Feb 16 17:18:40.725567 master-0 kubenswrapper[4167]: I0216 17:18:40.725434 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerDied","Data":"0135d939d70cc6c0a0f7a61b3043a49448731eaa91251064c3b28918d59425ed"}
Feb 16 17:18:40.726170 master-0 kubenswrapper[4167]: I0216 17:18:40.726133 4167 scope.go:117] "RemoveContainer" containerID="9989c01fc82b13e01ba462441db07496e1ba7a4f459fb76d4a48c27e9484ec2b"
Feb 16 17:18:40.726220 master-0 kubenswrapper[4167]: I0216 17:18:40.726176 4167 scope.go:117] "RemoveContainer" containerID="0135d939d70cc6c0a0f7a61b3043a49448731eaa91251064c3b28918d59425ed"
Feb 16 17:18:40.730515 master-0 kubenswrapper[4167]: I0216 17:18:40.730472 4167 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="e05dd3f5f3e806cfbd23e9c168d7df8ca93928467a534cded34e4d4897e99cda" exitCode=0
Feb 16 17:18:40.730515 master-0 kubenswrapper[4167]: I0216 17:18:40.730512 4167 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="90e176cdc4a300f72473c75de5d6adc3210b277899f4e0a21e71ec4156cc6e6b" exitCode=0
Feb 16 17:18:40.730630 master-0 kubenswrapper[4167]: I0216 17:18:40.730556 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"e05dd3f5f3e806cfbd23e9c168d7df8ca93928467a534cded34e4d4897e99cda"}
Feb 16 17:18:40.730630 master-0 kubenswrapper[4167]: I0216 17:18:40.730606 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"90e176cdc4a300f72473c75de5d6adc3210b277899f4e0a21e71ec4156cc6e6b"}
Feb 16 17:18:40.731321 master-0 kubenswrapper[4167]: I0216 17:18:40.731295 4167 scope.go:117] "RemoveContainer" containerID="90e176cdc4a300f72473c75de5d6adc3210b277899f4e0a21e71ec4156cc6e6b"
Feb 16 17:18:40.731406 master-0 kubenswrapper[4167]: I0216 17:18:40.731328 4167 scope.go:117] "RemoveContainer" containerID="dfcf44a82191101ede58522b2f1885bae69752d6fff22ba727eb9abcc1459ac5"
Feb 16 17:18:40.731406 master-0 kubenswrapper[4167]: I0216 17:18:40.731341 4167 scope.go:117] "RemoveContainer" containerID="ac4956c462bb8fea92f8db53bd1874b804dea857d75de80ca406b1072139abdb"
Feb 16 17:18:40.731406 master-0 kubenswrapper[4167]: I0216 17:18:40.731354 4167 scope.go:117] "RemoveContainer" containerID="e05dd3f5f3e806cfbd23e9c168d7df8ca93928467a534cded34e4d4897e99cda"
Feb 16 17:18:40.731406 master-0 kubenswrapper[4167]: I0216 17:18:40.731365 4167 scope.go:117] "RemoveContainer" containerID="f8e0fde3ac327eefe9afd4d8bd37736cca600b0b5d7bc7662c2dd9d135042d1f"
Feb 16 17:18:40.731406 master-0 kubenswrapper[4167]: I0216 17:18:40.731375 4167 scope.go:117] "RemoveContainer" containerID="fcda713f163db03b897ab292aff9fcb7b6f07b2cb0cc8de01b750b4174bcb94a"
Feb 16 17:18:40.732659 master-0 kubenswrapper[4167]: I0216 17:18:40.732616 4167 generic.go:334] "Generic (PLEG): container finished" podID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" containerID="6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555" exitCode=0
Feb 16 17:18:40.732727 master-0 kubenswrapper[4167]: I0216 17:18:40.732696 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" event={"ID":"2be9d55c-a4ec-48cd-93d2-0a1dced745a8","Type":"ContainerDied","Data":"6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555"}
Feb 16 17:18:40.733262 master-0 kubenswrapper[4167]: I0216 17:18:40.733226 4167 scope.go:117] "RemoveContainer" containerID="6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555"
Feb 16 17:18:40.735571 master-0 kubenswrapper[4167]: I0216 17:18:40.735520 4167 generic.go:334] "Generic (PLEG): container finished" podID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" containerID="d7731adcb1e4539a65b2c7f3daf888135793fea7b0a6cb8d9092fe38bfc1b95c" exitCode=0
Feb 16 17:18:40.735862 master-0 kubenswrapper[4167]: I0216 17:18:40.735594 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerDied","Data":"d7731adcb1e4539a65b2c7f3daf888135793fea7b0a6cb8d9092fe38bfc1b95c"}
Feb 16 17:18:40.736480 master-0
kubenswrapper[4167]: I0216 17:18:40.736429 4167 scope.go:117] "RemoveContainer" containerID="d7731adcb1e4539a65b2c7f3daf888135793fea7b0a6cb8d9092fe38bfc1b95c" Feb 16 17:18:40.742074 master-0 kubenswrapper[4167]: I0216 17:18:40.741947 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-flr86_9f9bf4ab-5415-4616-aa36-ea387c699ea9/ovn-acl-logging/1.log" Feb 16 17:18:40.743005 master-0 kubenswrapper[4167]: I0216 17:18:40.742878 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-flr86_9f9bf4ab-5415-4616-aa36-ea387c699ea9/ovn-controller/2.log" Feb 16 17:18:40.744369 master-0 kubenswrapper[4167]: I0216 17:18:40.744329 4167 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="89df1b736cd989f6bae1660e2384e5940c7dcd9c34b7c051214046710bb00dab" exitCode=0 Feb 16 17:18:40.744439 master-0 kubenswrapper[4167]: I0216 17:18:40.744419 4167 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="bc93650ebf3cc691f951d0213bee9685e2f6189244bf9df33a231ed89e535a91" exitCode=0 Feb 16 17:18:40.744439 master-0 kubenswrapper[4167]: I0216 17:18:40.744429 4167 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="f43833d221ea9f4a9e4534c7ac99a93808c280574641dc386194df75a19f49c9" exitCode=0 Feb 16 17:18:40.744439 master-0 kubenswrapper[4167]: I0216 17:18:40.744435 4167 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="b7d42dce3d54f9d3e617b618c16e9ef08c99739ea91e000b4b1d99443db8553d" exitCode=0 Feb 16 17:18:40.744551 master-0 kubenswrapper[4167]: I0216 17:18:40.744443 4167 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="cc397b3d5cc6cbc9a903e00354fc794059c1f176e81929eee27fef44e0ed535b" exitCode=0 Feb 16 17:18:40.744551 master-0 kubenswrapper[4167]: I0216 17:18:40.744449 4167 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="783acd5adfcf8bec6e5a632c51786877d81220b6ece8f11093f1631f55f8aab9" exitCode=143 Feb 16 17:18:40.744551 master-0 kubenswrapper[4167]: I0216 17:18:40.744456 4167 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="7011e9e23652e9ae0c244b5fce826a0e0485e2276d5484734e7bac8ba4afe778" exitCode=143 Feb 16 17:18:40.744551 master-0 kubenswrapper[4167]: I0216 17:18:40.744515 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"89df1b736cd989f6bae1660e2384e5940c7dcd9c34b7c051214046710bb00dab"} Feb 16 17:18:40.744551 master-0 kubenswrapper[4167]: I0216 17:18:40.744542 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"bc93650ebf3cc691f951d0213bee9685e2f6189244bf9df33a231ed89e535a91"} Feb 16 17:18:40.744551 master-0 kubenswrapper[4167]: I0216 17:18:40.744553 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"f43833d221ea9f4a9e4534c7ac99a93808c280574641dc386194df75a19f49c9"} Feb 16 17:18:40.744737 master-0 
kubenswrapper[4167]: I0216 17:18:40.744561 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"b7d42dce3d54f9d3e617b618c16e9ef08c99739ea91e000b4b1d99443db8553d"} Feb 16 17:18:40.744737 master-0 kubenswrapper[4167]: I0216 17:18:40.744569 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"cc397b3d5cc6cbc9a903e00354fc794059c1f176e81929eee27fef44e0ed535b"} Feb 16 17:18:40.744737 master-0 kubenswrapper[4167]: I0216 17:18:40.744708 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"783acd5adfcf8bec6e5a632c51786877d81220b6ece8f11093f1631f55f8aab9"} Feb 16 17:18:40.744737 master-0 kubenswrapper[4167]: I0216 17:18:40.744720 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"7011e9e23652e9ae0c244b5fce826a0e0485e2276d5484734e7bac8ba4afe778"} Feb 16 17:18:40.745102 master-0 kubenswrapper[4167]: I0216 17:18:40.745073 4167 scope.go:117] "RemoveContainer" containerID="7011e9e23652e9ae0c244b5fce826a0e0485e2276d5484734e7bac8ba4afe778" Feb 16 17:18:40.745102 master-0 kubenswrapper[4167]: I0216 17:18:40.745099 4167 scope.go:117] "RemoveContainer" containerID="783acd5adfcf8bec6e5a632c51786877d81220b6ece8f11093f1631f55f8aab9" Feb 16 17:18:40.745190 master-0 kubenswrapper[4167]: I0216 17:18:40.745109 4167 scope.go:117] "RemoveContainer" containerID="cc397b3d5cc6cbc9a903e00354fc794059c1f176e81929eee27fef44e0ed535b" Feb 16 17:18:40.745190 master-0 kubenswrapper[4167]: I0216 17:18:40.745120 4167 scope.go:117] "RemoveContainer" containerID="b7d42dce3d54f9d3e617b618c16e9ef08c99739ea91e000b4b1d99443db8553d" Feb 16 17:18:40.745190 master-0 kubenswrapper[4167]: I0216 17:18:40.745129 4167 scope.go:117] "RemoveContainer" containerID="7c3093aff1f8ebccbe96292ac184022bc15d8eeb25fdec354abddcd0eccfb95a" Feb 16 17:18:40.745190 master-0 kubenswrapper[4167]: I0216 17:18:40.745138 4167 scope.go:117] "RemoveContainer" containerID="f43833d221ea9f4a9e4534c7ac99a93808c280574641dc386194df75a19f49c9" Feb 16 17:18:40.745190 master-0 kubenswrapper[4167]: I0216 17:18:40.745146 4167 scope.go:117] "RemoveContainer" containerID="bc93650ebf3cc691f951d0213bee9685e2f6189244bf9df33a231ed89e535a91" Feb 16 17:18:40.745190 master-0 kubenswrapper[4167]: I0216 17:18:40.745155 4167 scope.go:117] "RemoveContainer" containerID="89df1b736cd989f6bae1660e2384e5940c7dcd9c34b7c051214046710bb00dab" Feb 16 17:18:40.747007 master-0 kubenswrapper[4167]: I0216 17:18:40.746663 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/4.log" Feb 16 17:18:40.747062 master-0 kubenswrapper[4167]: I0216 17:18:40.747029 4167 generic.go:334] "Generic (PLEG): container finished" podID="702322ac-7610-4568-9a68-b6acbd1f0c12" containerID="1a8885af29cb94472b23ade7b86cbcb90ba289e387a2780efabf272cfe37dbff" exitCode=0 Feb 16 17:18:40.749000 master-0 kubenswrapper[4167]: I0216 17:18:40.747091 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerDied","Data":"1a8885af29cb94472b23ade7b86cbcb90ba289e387a2780efabf272cfe37dbff"} Feb 16 17:18:40.749000 master-0 kubenswrapper[4167]: I0216 17:18:40.747559 4167 scope.go:117] "RemoveContainer" containerID="1a8885af29cb94472b23ade7b86cbcb90ba289e387a2780efabf272cfe37dbff" Feb 16 17:18:40.749000 master-0 kubenswrapper[4167]: I0216 17:18:40.748814 4167 generic.go:334] "Generic (PLEG): container finished" podID="54f29618-42c2-4270-9af7-7d82852d7cec" containerID="d72f324a55650c7405e43c567e917b447a66bed241765eff3340e11149168029" exitCode=0 Feb 16 17:18:40.749000 master-0 kubenswrapper[4167]: I0216 17:18:40.748863 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerDied","Data":"d72f324a55650c7405e43c567e917b447a66bed241765eff3340e11149168029"} Feb 16 17:18:40.749185 master-0 kubenswrapper[4167]: I0216 17:18:40.749174 4167 scope.go:117] "RemoveContainer" containerID="d72f324a55650c7405e43c567e917b447a66bed241765eff3340e11149168029" Feb 16 17:18:40.749218 master-0 kubenswrapper[4167]: I0216 17:18:40.749192 4167 scope.go:117] "RemoveContainer" containerID="55e08c69eb23144d5aa7fed58f1dae3c3d541d066648d64253f1ea46150d45de" Feb 16 17:18:40.752661 master-0 kubenswrapper[4167]: I0216 17:18:40.752635 4167 generic.go:334] "Generic (PLEG): container finished" podID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerID="f812e7bcfc10722b8073186c03d561876e550c6cdad7f1232e13cf4de32d1296" exitCode=0 Feb 16 17:18:40.752742 master-0 kubenswrapper[4167]: I0216 17:18:40.752705 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerDied","Data":"f812e7bcfc10722b8073186c03d561876e550c6cdad7f1232e13cf4de32d1296"} Feb 16 17:18:40.753176 master-0 kubenswrapper[4167]: I0216 17:18:40.753152 4167 scope.go:117] "RemoveContainer" containerID="f812e7bcfc10722b8073186c03d561876e550c6cdad7f1232e13cf4de32d1296" Feb 16 17:18:40.757536 master-0 kubenswrapper[4167]: I0216 17:18:40.757508 4167 generic.go:334] "Generic (PLEG): container finished" podID="18e9a9d3-9b18-4c19-9558-f33c68101922" containerID="43eb528b3bd9774e59d2bd74bbad7abccd777b07368ee07e8ee390fd02445251" exitCode=0 Feb 16 17:18:40.757536 master-0 kubenswrapper[4167]: I0216 17:18:40.757531 4167 generic.go:334] "Generic (PLEG): container finished" podID="18e9a9d3-9b18-4c19-9558-f33c68101922" containerID="e413aa866392057ec839c989db6bcca34307e7e6175ba54f4b7e7f66ded4d8a9" exitCode=0 Feb 16 17:18:40.757842 master-0 kubenswrapper[4167]: I0216 17:18:40.757587 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerDied","Data":"43eb528b3bd9774e59d2bd74bbad7abccd777b07368ee07e8ee390fd02445251"} Feb 16 17:18:40.757842 master-0 kubenswrapper[4167]: I0216 17:18:40.757635 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerDied","Data":"e413aa866392057ec839c989db6bcca34307e7e6175ba54f4b7e7f66ded4d8a9"} Feb 16 17:18:40.758239 master-0 kubenswrapper[4167]: I0216 
17:18:40.758220 4167 scope.go:117] "RemoveContainer" containerID="e413aa866392057ec839c989db6bcca34307e7e6175ba54f4b7e7f66ded4d8a9" Feb 16 17:18:40.758292 master-0 kubenswrapper[4167]: I0216 17:18:40.758246 4167 scope.go:117] "RemoveContainer" containerID="43eb528b3bd9774e59d2bd74bbad7abccd777b07368ee07e8ee390fd02445251" Feb 16 17:18:40.759076 master-0 kubenswrapper[4167]: I0216 17:18:40.759054 4167 generic.go:334] "Generic (PLEG): container finished" podID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" containerID="488b352d1dfd257802bcd1296116b56e5206a76528371ecdeaba998886e63d7f" exitCode=0 Feb 16 17:18:40.759121 master-0 kubenswrapper[4167]: I0216 17:18:40.759104 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" event={"ID":"5192fa49-d81c-47ce-b2ab-f90996cc0bd5","Type":"ContainerDied","Data":"488b352d1dfd257802bcd1296116b56e5206a76528371ecdeaba998886e63d7f"} Feb 16 17:18:40.759342 master-0 kubenswrapper[4167]: I0216 17:18:40.759321 4167 scope.go:117] "RemoveContainer" containerID="488b352d1dfd257802bcd1296116b56e5206a76528371ecdeaba998886e63d7f" Feb 16 17:18:40.761026 master-0 kubenswrapper[4167]: I0216 17:18:40.761003 4167 generic.go:334] "Generic (PLEG): container finished" podID="ab80e0fb-09dd-4c93-b235-1487024105d2" containerID="9a063b7eebbf7665801f6c436290f4be7bb969d600b58e4ec089553b0db33f2c" exitCode=0 Feb 16 17:18:40.761081 master-0 kubenswrapper[4167]: I0216 17:18:40.761050 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerDied","Data":"9a063b7eebbf7665801f6c436290f4be7bb969d600b58e4ec089553b0db33f2c"} Feb 16 17:18:40.761292 master-0 kubenswrapper[4167]: I0216 17:18:40.761269 4167 scope.go:117] "RemoveContainer" containerID="9a063b7eebbf7665801f6c436290f4be7bb969d600b58e4ec089553b0db33f2c" Feb 16 17:18:40.762268 master-0 kubenswrapper[4167]: I0216 17:18:40.762250 4167 generic.go:334] "Generic (PLEG): container finished" podID="4549ea98-7379-49e1-8452-5efb643137ca" containerID="2be53eae191398924971d0097300b007201955ba0357c275cdcb7ace2ef35cd4" exitCode=0 Feb 16 17:18:40.762327 master-0 kubenswrapper[4167]: I0216 17:18:40.762310 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerDied","Data":"2be53eae191398924971d0097300b007201955ba0357c275cdcb7ace2ef35cd4"} Feb 16 17:18:40.762737 master-0 kubenswrapper[4167]: I0216 17:18:40.762719 4167 scope.go:117] "RemoveContainer" containerID="2be53eae191398924971d0097300b007201955ba0357c275cdcb7ace2ef35cd4" Feb 16 17:18:40.763785 master-0 kubenswrapper[4167]: I0216 17:18:40.763759 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-qqvg4_1363cb7b-62cc-497b-af6f-4d5e0eb7f174/serve-healthcheck-canary/2.log" Feb 16 17:18:40.763847 master-0 kubenswrapper[4167]: I0216 17:18:40.763794 4167 generic.go:334] "Generic (PLEG): container finished" podID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" containerID="4e84b2065d1af8978ad93515e956abd6154a82b2a6c17439456375941f7a255b" exitCode=2 Feb 16 17:18:40.763892 master-0 kubenswrapper[4167]: I0216 17:18:40.763843 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qqvg4" 
event={"ID":"1363cb7b-62cc-497b-af6f-4d5e0eb7f174","Type":"ContainerDied","Data":"4e84b2065d1af8978ad93515e956abd6154a82b2a6c17439456375941f7a255b"} Feb 16 17:18:40.764101 master-0 kubenswrapper[4167]: I0216 17:18:40.764082 4167 scope.go:117] "RemoveContainer" containerID="4e84b2065d1af8978ad93515e956abd6154a82b2a6c17439456375941f7a255b" Feb 16 17:18:40.766218 master-0 kubenswrapper[4167]: I0216 17:18:40.766198 4167 generic.go:334] "Generic (PLEG): container finished" podID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" containerID="6dd68bd49fb165551bba65c313f75ef9ca64d5244db2bb6f507206ab912ba745" exitCode=0 Feb 16 17:18:40.766268 master-0 kubenswrapper[4167]: I0216 17:18:40.766219 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerDied","Data":"6dd68bd49fb165551bba65c313f75ef9ca64d5244db2bb6f507206ab912ba745"} Feb 16 17:18:40.766664 master-0 kubenswrapper[4167]: I0216 17:18:40.766637 4167 scope.go:117] "RemoveContainer" containerID="6dd68bd49fb165551bba65c313f75ef9ca64d5244db2bb6f507206ab912ba745" Feb 16 17:18:41.145995 master-0 kubenswrapper[4167]: I0216 17:18:41.145934 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:18:41.271123 master-0 kubenswrapper[4167]: I0216 17:18:41.269268 4167 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:18:41.271123 master-0 kubenswrapper[4167]: [+]has-synced ok Feb 16 17:18:41.271123 master-0 kubenswrapper[4167]: [-]process-running failed: reason withheld Feb 16 17:18:41.271123 master-0 kubenswrapper[4167]: healthz check failed Feb 16 17:18:41.271123 master-0 kubenswrapper[4167]: I0216 17:18:41.269355 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:18:41.488781 master-0 kubenswrapper[4167]: I0216 17:18:41.488733 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:18:41.489181 master-0 kubenswrapper[4167]: I0216 17:18:41.489112 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:18:41.489181 master-0 kubenswrapper[4167]: I0216 17:18:41.489162 4167 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:18:41.496294 master-0 kubenswrapper[4167]: I0216 17:18:41.496253 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:18:41.511364 master-0 kubenswrapper[4167]: I0216 17:18:41.511323 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:18:41.529320 master-0 kubenswrapper[4167]: I0216 17:18:41.529214 4167 patch_prober.go:28] interesting pod/packageserver-6d5d8c8c95-kzfjw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.128.0.59:5443/healthz\": dial tcp 10.128.0.59:5443: connect: connection refused" start-of-body= Feb 16 17:18:41.529320 master-0 kubenswrapper[4167]: I0216 17:18:41.529273 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.59:5443/healthz\": dial tcp 10.128.0.59:5443: connect: connection refused" Feb 16 17:18:41.529320 master-0 kubenswrapper[4167]: I0216 17:18:41.529214 4167 patch_prober.go:28] interesting pod/packageserver-6d5d8c8c95-kzfjw container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.59:5443/healthz\": dial tcp 10.128.0.59:5443: connect: connection refused" start-of-body= Feb 16 17:18:41.529613 master-0 kubenswrapper[4167]: I0216 17:18:41.529356 4167 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.59:5443/healthz\": dial tcp 10.128.0.59:5443: connect: connection refused" Feb 16 17:18:41.536577 master-0 kubenswrapper[4167]: I0216 17:18:41.536536 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:18:41.536645 master-0 kubenswrapper[4167]: I0216 17:18:41.536582 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: I0216 17:18:41.543865 4167 patch_prober.go:28] interesting pod/apiserver-66788cb45c-dp9bc container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]log ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]etcd excluded: ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]etcd-readiness excluded: ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]informer-sync ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]poststarthook/max-in-flight-filter ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-StartUserInformer ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-StartOAuthInformer ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: [-]shutdown failed: reason withheld Feb 16 17:18:41.544122 master-0 kubenswrapper[4167]: readyz check failed Feb 16 17:18:41.544665 master-0 kubenswrapper[4167]: I0216 17:18:41.544131 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" containerName="oauth-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 16 17:18:41.567109 master-0 kubenswrapper[4167]: I0216 17:18:41.567054 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:18:41.586948 master-0 kubenswrapper[4167]: I0216 17:18:41.586900 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:18:41.611509 master-0 kubenswrapper[4167]: I0216 17:18:41.611463 4167 patch_prober.go:28] interesting pod/olm-operator-6b56bd877c-p7k2k container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 17:18:41.611611 master-0 kubenswrapper[4167]: I0216 17:18:41.611526 4167 patch_prober.go:28] interesting pod/olm-operator-6b56bd877c-p7k2k container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" start-of-body= Feb 16 17:18:41.611611 master-0 kubenswrapper[4167]: I0216 17:18:41.611526 4167 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 17:18:41.611611 master-0 kubenswrapper[4167]: I0216 17:18:41.611550 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" containerName="olm-operator" probeResult="failure" output="Get \"https://10.128.0.54:8443/healthz\": dial tcp 10.128.0.54:8443: connect: connection refused" Feb 16 17:18:41.638882 master-0 kubenswrapper[4167]: E0216 17:18:41.638814 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67 is running failed: container process not found" containerID="3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 17:18:41.638882 master-0 kubenswrapper[4167]: E0216 17:18:41.638832 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67 is running failed: container process not found" containerID="3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 17:18:41.639232 master-0 kubenswrapper[4167]: E0216 17:18:41.639189 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67 is running failed: container process not found" containerID="3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 17:18:41.639232 master-0 kubenswrapper[4167]: E0216 17:18:41.639201 4167 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67 is running failed: container process not found" containerID="3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 17:18:41.639600 master-0 kubenswrapper[4167]: E0216 17:18:41.639567 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67 is running failed: container process not found" containerID="3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 17:18:41.639636 master-0 kubenswrapper[4167]: E0216 17:18:41.639599 4167 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67 is running failed: container process not found" probeType="Liveness" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerName="registry-server" Feb 16 17:18:41.639674 master-0 kubenswrapper[4167]: E0216 17:18:41.639624 4167 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67 is running failed: container process not found" containerID="3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67" cmd=["grpc_health_probe","-addr=:50051"] Feb 16 17:18:41.639708 master-0 kubenswrapper[4167]: E0216 17:18:41.639668 4167 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerName="registry-server" Feb 16 17:18:41.685395 master-0 kubenswrapper[4167]: I0216 17:18:41.685317 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:18:41.685395 master-0 kubenswrapper[4167]: I0216 17:18:41.685399 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:18:41.685665 master-0 kubenswrapper[4167]: I0216 17:18:41.685419 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:18:41.721863 master-0 kubenswrapper[4167]: I0216 17:18:41.721821 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:18:41.721934 master-0 kubenswrapper[4167]: I0216 17:18:41.721870 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:18:41.761744 master-0 kubenswrapper[4167]: I0216 17:18:41.761668 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:18:41.775978 master-0 kubenswrapper[4167]: I0216 17:18:41.775893 4167 generic.go:334] "Generic (PLEG): container finished" podID="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" containerID="9bb121c20d32f58b029c9f769238c1b163f3d4ad3991ada34d435c50d5d6208a" exitCode=0 Feb 16 17:18:41.776193 master-0 kubenswrapper[4167]: I0216 17:18:41.776031 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vfxj4" event={"ID":"a6fe41b0-1a42-4f07-8220-d9aaa50788ad","Type":"ContainerDied","Data":"9bb121c20d32f58b029c9f769238c1b163f3d4ad3991ada34d435c50d5d6208a"} Feb 16 17:18:41.776814 master-0 kubenswrapper[4167]: I0216 17:18:41.776768 4167 scope.go:117] "RemoveContainer" containerID="9bb121c20d32f58b029c9f769238c1b163f3d4ad3991ada34d435c50d5d6208a" Feb 16 17:18:41.778670 master-0 kubenswrapper[4167]: I0216 17:18:41.778625 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-8256c_a94f9b8e-b020-4aab-8373-6c056ec07464/node-exporter/2.log" Feb 16 17:18:41.779231 master-0 kubenswrapper[4167]: I0216 17:18:41.779185 4167 generic.go:334] "Generic (PLEG): container finished" podID="a94f9b8e-b020-4aab-8373-6c056ec07464" containerID="9f85722a140cc9cccf5e480580bd91d292bda981ade94e3a57f3d684826a3098" exitCode=0 Feb 16 17:18:41.779403 master-0 kubenswrapper[4167]: I0216 17:18:41.779248 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerDied","Data":"9f85722a140cc9cccf5e480580bd91d292bda981ade94e3a57f3d684826a3098"} Feb 16 17:18:41.782246 master-0 kubenswrapper[4167]: I0216 17:18:41.782204 4167 generic.go:334] "Generic (PLEG): container finished" podID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerID="3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67" exitCode=0 Feb 16 17:18:41.782306 master-0 kubenswrapper[4167]: I0216 17:18:41.782240 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerDied","Data":"3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67"} Feb 16 17:18:41.783342 master-0 kubenswrapper[4167]: I0216 17:18:41.783281 4167 scope.go:117] "RemoveContainer" containerID="3b6993fdc857646cd604d6a7a6db3a349ad6e4b4d2872c081fdb4682f43d5b67" Feb 16 17:18:41.784686 master-0 kubenswrapper[4167]: I0216 17:18:41.784645 4167 generic.go:334] "Generic (PLEG): container finished" podID="e73ee493-de15-44c2-bd51-e12fcbb27a15" containerID="725fcc0c6cb710edb4a6da087267f92efc9dc223a018289fd1f7f613bf9f07d9" exitCode=0 Feb 16 17:18:41.784752 master-0 kubenswrapper[4167]: I0216 17:18:41.784690 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerDied","Data":"725fcc0c6cb710edb4a6da087267f92efc9dc223a018289fd1f7f613bf9f07d9"} Feb 16 17:18:41.785453 master-0 kubenswrapper[4167]: I0216 17:18:41.785409 4167 scope.go:117] "RemoveContainer" containerID="725fcc0c6cb710edb4a6da087267f92efc9dc223a018289fd1f7f613bf9f07d9" Feb 16 17:18:41.787436 master-0 kubenswrapper[4167]: I0216 17:18:41.787396 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-bd7dd5c46-92rqx_404c402a-705f-4352-b9df-b89562070d9c/machine-api-operator/2.log" Feb 16 
17:18:41.787930 master-0 kubenswrapper[4167]: I0216 17:18:41.787893 4167 generic.go:334] "Generic (PLEG): container finished" podID="404c402a-705f-4352-b9df-b89562070d9c" containerID="b4eb28c976464930f1c03f92ec479debd9dd58656d0f14a479c1a70e1cff09c4" exitCode=0 Feb 16 17:18:41.788006 master-0 kubenswrapper[4167]: I0216 17:18:41.787981 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerDied","Data":"b4eb28c976464930f1c03f92ec479debd9dd58656d0f14a479c1a70e1cff09c4"} Feb 16 17:18:41.790565 master-0 kubenswrapper[4167]: I0216 17:18:41.790527 4167 generic.go:334] "Generic (PLEG): container finished" podID="cc9a20f4-255a-4312-8f43-174a28c06340" containerID="ad7184f5e454ed28cae744f6dce96256b237252d9accb9b83a29e875cd597f37" exitCode=0 Feb 16 17:18:41.790680 master-0 kubenswrapper[4167]: I0216 17:18:41.790625 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerDied","Data":"ad7184f5e454ed28cae744f6dce96256b237252d9accb9b83a29e875cd597f37"} Feb 16 17:18:41.791419 master-0 kubenswrapper[4167]: I0216 17:18:41.791371 4167 scope.go:117] "RemoveContainer" containerID="ad7184f5e454ed28cae744f6dce96256b237252d9accb9b83a29e875cd597f37" Feb 16 17:18:41.792703 master-0 kubenswrapper[4167]: I0216 17:18:41.792666 4167 generic.go:334] "Generic (PLEG): container finished" podID="62220aa5-4065-472c-8a17-c0a58942ab8a" containerID="c07ea07c8dfc4d02cfd7cdd8fe4653a27764977193409ade2423da788998dfd1" exitCode=0 Feb 16 17:18:41.792773 master-0 kubenswrapper[4167]: I0216 17:18:41.792738 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerDied","Data":"c07ea07c8dfc4d02cfd7cdd8fe4653a27764977193409ade2423da788998dfd1"} Feb 16 17:18:41.793099 master-0 kubenswrapper[4167]: I0216 17:18:41.792813 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:18:41.793099 master-0 kubenswrapper[4167]: I0216 17:18:41.792873 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:18:41.793277 master-0 kubenswrapper[4167]: I0216 17:18:41.793238 4167 scope.go:117] "RemoveContainer" containerID="c07ea07c8dfc4d02cfd7cdd8fe4653a27764977193409ade2423da788998dfd1" Feb 16 17:18:41.795044 master-0 kubenswrapper[4167]: I0216 17:18:41.794929 4167 generic.go:334] "Generic (PLEG): container finished" podID="54f29618-42c2-4270-9af7-7d82852d7cec" containerID="55e08c69eb23144d5aa7fed58f1dae3c3d541d066648d64253f1ea46150d45de" exitCode=0 Feb 16 17:18:41.795044 master-0 kubenswrapper[4167]: I0216 17:18:41.795021 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerDied","Data":"55e08c69eb23144d5aa7fed58f1dae3c3d541d066648d64253f1ea46150d45de"} Feb 16 17:18:41.798523 master-0 kubenswrapper[4167]: I0216 17:18:41.798472 4167 generic.go:334] "Generic (PLEG): container finished" podID="2d1636c0-f34d-444c-822d-77f1d203ddc4" containerID="ce1fecddf778d4ca8cd64c9cabae410947bc8980b5642363ae0b76afacbfeeea" exitCode=0 Feb 16 
17:18:41.798523 master-0 kubenswrapper[4167]: I0216 17:18:41.798513 4167 generic.go:334] "Generic (PLEG): container finished" podID="2d1636c0-f34d-444c-822d-77f1d203ddc4" containerID="81b6be3a9390d67aaa62a013a1ec4e1d4dd124d8a7ebbce77a890ca7b5c2761d" exitCode=0 Feb 16 17:18:41.798690 master-0 kubenswrapper[4167]: I0216 17:18:41.798596 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerDied","Data":"ce1fecddf778d4ca8cd64c9cabae410947bc8980b5642363ae0b76afacbfeeea"} Feb 16 17:18:41.799546 master-0 kubenswrapper[4167]: I0216 17:18:41.798751 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerDied","Data":"81b6be3a9390d67aaa62a013a1ec4e1d4dd124d8a7ebbce77a890ca7b5c2761d"} Feb 16 17:18:41.799640 master-0 kubenswrapper[4167]: I0216 17:18:41.799485 4167 scope.go:117] "RemoveContainer" containerID="81b6be3a9390d67aaa62a013a1ec4e1d4dd124d8a7ebbce77a890ca7b5c2761d" Feb 16 17:18:41.799640 master-0 kubenswrapper[4167]: I0216 17:18:41.799594 4167 scope.go:117] "RemoveContainer" containerID="ce1fecddf778d4ca8cd64c9cabae410947bc8980b5642363ae0b76afacbfeeea" Feb 16 17:18:41.800435 master-0 kubenswrapper[4167]: I0216 17:18:41.800397 4167 generic.go:334] "Generic (PLEG): container finished" podID="0ff68421-1741-41c1-93d5-5c722dfd295e" containerID="cb48db8b0c54e896a8fe0f23386c8f40c351681307dcca24daa28ca39af6e204" exitCode=0 Feb 16 17:18:41.800494 master-0 kubenswrapper[4167]: I0216 17:18:41.800440 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" event={"ID":"0ff68421-1741-41c1-93d5-5c722dfd295e","Type":"ContainerDied","Data":"cb48db8b0c54e896a8fe0f23386c8f40c351681307dcca24daa28ca39af6e204"} Feb 16 17:18:41.801229 master-0 kubenswrapper[4167]: I0216 17:18:41.801166 4167 scope.go:117] "RemoveContainer" containerID="cb48db8b0c54e896a8fe0f23386c8f40c351681307dcca24daa28ca39af6e204" Feb 16 17:18:41.807068 master-0 kubenswrapper[4167]: I0216 17:18:41.806863 4167 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="fcda713f163db03b897ab292aff9fcb7b6f07b2cb0cc8de01b750b4174bcb94a" exitCode=0 Feb 16 17:18:41.807068 master-0 kubenswrapper[4167]: I0216 17:18:41.806895 4167 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="f8e0fde3ac327eefe9afd4d8bd37736cca600b0b5d7bc7662c2dd9d135042d1f" exitCode=0 Feb 16 17:18:41.807068 master-0 kubenswrapper[4167]: I0216 17:18:41.806904 4167 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="ac4956c462bb8fea92f8db53bd1874b804dea857d75de80ca406b1072139abdb" exitCode=0 Feb 16 17:18:41.807068 master-0 kubenswrapper[4167]: I0216 17:18:41.806912 4167 generic.go:334] "Generic (PLEG): container finished" podID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" containerID="dfcf44a82191101ede58522b2f1885bae69752d6fff22ba727eb9abcc1459ac5" exitCode=0 Feb 16 17:18:41.807068 master-0 kubenswrapper[4167]: I0216 17:18:41.806904 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"fcda713f163db03b897ab292aff9fcb7b6f07b2cb0cc8de01b750b4174bcb94a"} Feb 
16 17:18:41.807068 master-0 kubenswrapper[4167]: I0216 17:18:41.807014 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"f8e0fde3ac327eefe9afd4d8bd37736cca600b0b5d7bc7662c2dd9d135042d1f"} Feb 16 17:18:41.807068 master-0 kubenswrapper[4167]: I0216 17:18:41.807050 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"ac4956c462bb8fea92f8db53bd1874b804dea857d75de80ca406b1072139abdb"} Feb 16 17:18:41.807068 master-0 kubenswrapper[4167]: I0216 17:18:41.807079 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerDied","Data":"dfcf44a82191101ede58522b2f1885bae69752d6fff22ba727eb9abcc1459ac5"} Feb 16 17:18:41.810559 master-0 kubenswrapper[4167]: I0216 17:18:41.810494 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7cc9598d54-8j5rk_55d635cd-1f0d-4086-96f2-9f3524f3f18c/kube-state-metrics/2.log" Feb 16 17:18:41.810716 master-0 kubenswrapper[4167]: I0216 17:18:41.810575 4167 generic.go:334] "Generic (PLEG): container finished" podID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" containerID="014df18f96d4689cdbe5b5e5f610e110c485c44cf2bfa273b7618be6223ab8da" exitCode=0 Feb 16 17:18:41.810716 master-0 kubenswrapper[4167]: I0216 17:18:41.810604 4167 generic.go:334] "Generic (PLEG): container finished" podID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" containerID="0d0daa3f24697660fe022e8196e98f3bdaaa8ca58ffaa8746bbffff1ded535e4" exitCode=0 Feb 16 17:18:41.810716 master-0 kubenswrapper[4167]: I0216 17:18:41.810689 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerDied","Data":"014df18f96d4689cdbe5b5e5f610e110c485c44cf2bfa273b7618be6223ab8da"} Feb 16 17:18:41.811066 master-0 kubenswrapper[4167]: I0216 17:18:41.810737 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerDied","Data":"0d0daa3f24697660fe022e8196e98f3bdaaa8ca58ffaa8746bbffff1ded535e4"} Feb 16 17:18:41.814173 master-0 kubenswrapper[4167]: I0216 17:18:41.814116 4167 generic.go:334] "Generic (PLEG): container finished" podID="f3beb7bf-922f-425d-8a19-fd407a7153a8" containerID="55903a364e6bb62b7d7cc10dbb056ba756dd2642598ae93d70c487737c5ebd22" exitCode=0 Feb 16 17:18:41.814373 master-0 kubenswrapper[4167]: I0216 17:18:41.814189 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerDied","Data":"55903a364e6bb62b7d7cc10dbb056ba756dd2642598ae93d70c487737c5ebd22"} Feb 16 17:18:41.814765 master-0 kubenswrapper[4167]: I0216 17:18:41.814660 4167 scope.go:117] "RemoveContainer" containerID="55903a364e6bb62b7d7cc10dbb056ba756dd2642598ae93d70c487737c5ebd22" Feb 16 17:18:41.820914 master-0 kubenswrapper[4167]: I0216 17:18:41.820839 4167 generic.go:334] "Generic (PLEG): container finished" podID="b04ee64e-5e83-499c-812d-749b2b6824c6" containerID="ebd675428e5bb8f7c53e95c7fcb4e73a21dc73f92a194758d2d88737bda48c9a" exitCode=0 Feb 
16 17:18:41.820914 master-0 kubenswrapper[4167]: I0216 17:18:41.820878 4167 generic.go:334] "Generic (PLEG): container finished" podID="b04ee64e-5e83-499c-812d-749b2b6824c6" containerID="aaf2848fb6af1c4ef6ce19e563ad85aaca897482b8b4f1ff056e322d68a43a8a" exitCode=0 Feb 16 17:18:41.820914 master-0 kubenswrapper[4167]: I0216 17:18:41.820888 4167 generic.go:334] "Generic (PLEG): container finished" podID="b04ee64e-5e83-499c-812d-749b2b6824c6" containerID="06deede4924397e939bebb44e359f2380b34ce255772a313f28d157026b919c3" exitCode=0 Feb 16 17:18:41.820914 master-0 kubenswrapper[4167]: I0216 17:18:41.820882 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerDied","Data":"ebd675428e5bb8f7c53e95c7fcb4e73a21dc73f92a194758d2d88737bda48c9a"} Feb 16 17:18:41.821420 master-0 kubenswrapper[4167]: I0216 17:18:41.820938 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerDied","Data":"aaf2848fb6af1c4ef6ce19e563ad85aaca897482b8b4f1ff056e322d68a43a8a"} Feb 16 17:18:41.821420 master-0 kubenswrapper[4167]: I0216 17:18:41.821000 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerDied","Data":"06deede4924397e939bebb44e359f2380b34ce255772a313f28d157026b919c3"} Feb 16 17:18:41.827204 master-0 kubenswrapper[4167]: I0216 17:18:41.824360 4167 generic.go:334] "Generic (PLEG): container finished" podID="29402454-a920-471e-895e-764235d16eb4" containerID="ccfd56e5629650dcc52c7297fb4921b657d9eb06f420b97278d5c481fb1e03bf" exitCode=0 Feb 16 17:18:41.827204 master-0 kubenswrapper[4167]: I0216 17:18:41.824425 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerDied","Data":"ccfd56e5629650dcc52c7297fb4921b657d9eb06f420b97278d5c481fb1e03bf"} Feb 16 17:18:41.827204 master-0 kubenswrapper[4167]: I0216 17:18:41.824916 4167 scope.go:117] "RemoveContainer" containerID="ccfd56e5629650dcc52c7297fb4921b657d9eb06f420b97278d5c481fb1e03bf" Feb 16 17:18:41.831602 master-0 kubenswrapper[4167]: I0216 17:18:41.831506 4167 generic.go:334] "Generic (PLEG): container finished" podID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" containerID="9989c01fc82b13e01ba462441db07496e1ba7a4f459fb76d4a48c27e9484ec2b" exitCode=0 Feb 16 17:18:41.831742 master-0 kubenswrapper[4167]: I0216 17:18:41.831604 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerDied","Data":"9989c01fc82b13e01ba462441db07496e1ba7a4f459fb76d4a48c27e9484ec2b"} Feb 16 17:18:41.845363 master-0 kubenswrapper[4167]: I0216 17:18:41.845312 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-flr86_9f9bf4ab-5415-4616-aa36-ea387c699ea9/ovn-acl-logging/1.log" Feb 16 17:18:41.846200 master-0 kubenswrapper[4167]: I0216 17:18:41.846133 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-flr86_9f9bf4ab-5415-4616-aa36-ea387c699ea9/ovn-controller/2.log" Feb 16 17:18:41.847179 master-0 kubenswrapper[4167]: I0216 17:18:41.847141 4167 generic.go:334] "Generic (PLEG): container finished" 
podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="7c3093aff1f8ebccbe96292ac184022bc15d8eeb25fdec354abddcd0eccfb95a" exitCode=0 Feb 16 17:18:41.847299 master-0 kubenswrapper[4167]: I0216 17:18:41.847227 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"7c3093aff1f8ebccbe96292ac184022bc15d8eeb25fdec354abddcd0eccfb95a"} Feb 16 17:18:41.849202 master-0 kubenswrapper[4167]: I0216 17:18:41.849151 4167 generic.go:334] "Generic (PLEG): container finished" podID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" containerID="e663fa585fa9b333671e87d8b98645a4ea3e7b1b7a725b36b379c4e86f1caadd" exitCode=0 Feb 16 17:18:41.849354 master-0 kubenswrapper[4167]: I0216 17:18:41.849221 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" event={"ID":"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd","Type":"ContainerDied","Data":"e663fa585fa9b333671e87d8b98645a4ea3e7b1b7a725b36b379c4e86f1caadd"} Feb 16 17:18:41.849649 master-0 kubenswrapper[4167]: I0216 17:18:41.849613 4167 scope.go:117] "RemoveContainer" containerID="e663fa585fa9b333671e87d8b98645a4ea3e7b1b7a725b36b379c4e86f1caadd" Feb 16 17:18:41.855771 master-0 kubenswrapper[4167]: I0216 17:18:41.855727 4167 generic.go:334] "Generic (PLEG): container finished" podID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" containerID="84478ba80a4930a90177cd524333f9a711b1e761ab3c5b96e6e2d7ba45d9f4f8" exitCode=0 Feb 16 17:18:41.855771 master-0 kubenswrapper[4167]: I0216 17:18:41.855754 4167 generic.go:334] "Generic (PLEG): container finished" podID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" containerID="63229405ff3ddfaa766c4ee313994817781540fd6f3c8891c90f43028c210fdb" exitCode=0 Feb 16 17:18:41.855771 master-0 kubenswrapper[4167]: I0216 17:18:41.855767 4167 generic.go:334] "Generic (PLEG): container finished" podID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" containerID="724eabf6366889ad27f53e251522e6e8aa6a1854883743a613e9de16af72831d" exitCode=0 Feb 16 17:18:41.855771 master-0 kubenswrapper[4167]: I0216 17:18:41.855760 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerDied","Data":"84478ba80a4930a90177cd524333f9a711b1e761ab3c5b96e6e2d7ba45d9f4f8"} Feb 16 17:18:41.856285 master-0 kubenswrapper[4167]: I0216 17:18:41.855809 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerDied","Data":"63229405ff3ddfaa766c4ee313994817781540fd6f3c8891c90f43028c210fdb"} Feb 16 17:18:41.856285 master-0 kubenswrapper[4167]: I0216 17:18:41.855826 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerDied","Data":"724eabf6366889ad27f53e251522e6e8aa6a1854883743a613e9de16af72831d"} Feb 16 17:18:41.856285 master-0 kubenswrapper[4167]: I0216 17:18:41.855839 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerDied","Data":"b84a614b1ec3a6d2390121273be6a1676c50cbc27aabf14a1e15474dc3929160"} Feb 16 17:18:41.856285 master-0 kubenswrapper[4167]: I0216 17:18:41.855780 4167 generic.go:334] "Generic (PLEG): container finished" 
podID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" containerID="b84a614b1ec3a6d2390121273be6a1676c50cbc27aabf14a1e15474dc3929160" exitCode=0 Feb 16 17:18:41.856285 master-0 kubenswrapper[4167]: I0216 17:18:41.855862 4167 generic.go:334] "Generic (PLEG): container finished" podID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" containerID="54030a7ff4b10a85d9e72a01a23417e3237ec0ba05096416a9dd2e258691a5a3" exitCode=0 Feb 16 17:18:41.856285 master-0 kubenswrapper[4167]: I0216 17:18:41.855907 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerDied","Data":"54030a7ff4b10a85d9e72a01a23417e3237ec0ba05096416a9dd2e258691a5a3"} Feb 16 17:18:41.857682 master-0 kubenswrapper[4167]: I0216 17:18:41.857624 4167 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_iptables-alerter-czzz2_b3fa6ac1-781f-446c-b6b4-18bdb7723c23/iptables-alerter/3.log" Feb 16 17:18:41.857682 master-0 kubenswrapper[4167]: I0216 17:18:41.857667 4167 generic.go:334] "Generic (PLEG): container finished" podID="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" containerID="f20084f5737352054f19e5471f4cd7b6012bf7a4a364d64611d452b2501d140a" exitCode=143 Feb 16 17:18:41.857904 master-0 kubenswrapper[4167]: I0216 17:18:41.857714 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-czzz2" event={"ID":"b3fa6ac1-781f-446c-b6b4-18bdb7723c23","Type":"ContainerDied","Data":"f20084f5737352054f19e5471f4cd7b6012bf7a4a364d64611d452b2501d140a"} Feb 16 17:18:41.858184 master-0 kubenswrapper[4167]: I0216 17:18:41.858154 4167 scope.go:117] "RemoveContainer" containerID="f20084f5737352054f19e5471f4cd7b6012bf7a4a364d64611d452b2501d140a" Feb 16 17:18:41.860561 master-0 kubenswrapper[4167]: I0216 17:18:41.860521 4167 generic.go:334] "Generic (PLEG): container finished" podID="6f44170a-3c1c-4944-b971-251f75a51fc3" containerID="e12557cfa82b8f5d79f2dbafc88336b0f40bdc282a9e080ed322a28aa7a19d9d" exitCode=0 Feb 16 17:18:41.860561 master-0 kubenswrapper[4167]: I0216 17:18:41.860553 4167 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" event={"ID":"6f44170a-3c1c-4944-b971-251f75a51fc3","Type":"ContainerDied","Data":"e12557cfa82b8f5d79f2dbafc88336b0f40bdc282a9e080ed322a28aa7a19d9d"} Feb 16 17:18:41.861042 master-0 kubenswrapper[4167]: I0216 17:18:41.861019 4167 scope.go:117] "RemoveContainer" containerID="e12557cfa82b8f5d79f2dbafc88336b0f40bdc282a9e080ed322a28aa7a19d9d" Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: I0216 17:18:41.890392 4167 patch_prober.go:28] interesting pod/apiserver-fc4bf7f79-tqnlw container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]log ok Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]etcd excluded: ok Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]etcd-readiness excluded: ok Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]informer-sync ok Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/max-in-flight-filter ok Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: 
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: I0216 17:18:41.890392 4167 patch_prober.go:28] interesting pod/apiserver-fc4bf7f79-tqnlw container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]log ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]etcd excluded: ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]etcd-readiness excluded: ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]informer-sync ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/max-in-flight-filter ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/project.openshift.io-projectcache ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-startinformers ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-restmapperupdater ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: [-]shutdown failed: reason withheld
Feb 16 17:18:41.890445 master-0 kubenswrapper[4167]: readyz check failed
Feb 16 17:18:41.891534 master-0 kubenswrapper[4167]: I0216 17:18:41.890460 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:18:42.425457 master-0 kubenswrapper[4167]: I0216 17:18:42.425345 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:18:42.425457 master-0 kubenswrapper[4167]: I0216 17:18:42.425440 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:18:42.517206 master-0 kubenswrapper[4167]: I0216 17:18:42.517146 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: I0216 17:18:42.618753 4167 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]log ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]api-openshift-apiserver-available ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]api-openshift-oauth-apiserver-available ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]informer-sync ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/priority-and-fairness-filter ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-apiextensions-informers ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-apiextensions-controllers ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/crd-informer-synced ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-system-namespaces-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/rbac/bootstrap-roles ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/bootstrap-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/apiservice-registration-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/apiservice-discovery-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]autoregister-completion ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/apiservice-openapi-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: [-]shutdown failed: reason withheld
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: readyz check failed
Feb 16 17:18:42.618828 master-0 kubenswrapper[4167]: I0216 17:18:42.618820 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:18:42.643937 master-0 kubenswrapper[4167]: I0216 17:18:42.643853 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
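Both HTTP 500s above come from the apiservers' aggregated /readyz endpoint: every named check passes except [-]shutdown, which is the expected readiness flip while the control plane drains ahead of the kubelet stop recorded below. The same per-check breakdown can be fetched out of band; a minimal client-go sketch, assuming an admin kubeconfig at an illustrative path:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; any kubeconfig with access to /readyz works.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /readyz?verbose=true returns the same [+]/[-] check list
        // that patch_prober.go echoes into the journal above.
        body, err := cs.Discovery().RESTClient().
            Get().
            AbsPath("/readyz").
            Param("verbose", "true").
            DoRaw(context.TODO())
        if err != nil {
            fmt.Printf("readyz failed (expected while shutting down): %v\n", err)
        }
        fmt.Println(string(body))
    }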
Feb 16 17:18:42.644112 master-0 kubenswrapper[4167]: I0216 17:18:42.644032 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:18:42.844543 master-0 kubenswrapper[4167]: I0216 17:18:42.844477 4167 patch_prober.go:28] interesting pod/dns-default-qcgxx container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.128.0.32:8181/ready\": dial tcp 10.128.0.32:8181: connect: connection refused" start-of-body=
Feb 16 17:18:42.844636 master-0 kubenswrapper[4167]: I0216 17:18:42.844574 4167 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" containerName="dns" probeResult="failure" output="Get \"http://10.128.0.32:8181/ready\": dial tcp 10.128.0.32:8181: connect: connection refused"
Feb 16 17:18:43.311814 master-0 kubenswrapper[4167]: I0216 17:18:43.311715 4167 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused" start-of-body=
Feb 16 17:18:43.312066 master-0 kubenswrapper[4167]: I0216 17:18:43.311807 4167 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": dial tcp 192.168.32.10:10259: connect: connection refused"
Feb 16 17:18:43.484069 master-0 kubenswrapper[4167]: I0216 17:18:43.483923 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:18:43.484069 master-0 kubenswrapper[4167]: I0216 17:18:43.484059 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:18:43.510255 master-0 kubenswrapper[4167]: I0216 17:18:43.510173 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:18:44.055154 master-0 kubenswrapper[4167]: I0216 17:18:44.055040 4167 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:18:44.772086 master-0 kubenswrapper[4167]: I0216 17:18:44.771933 4167 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
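The dns-default and kube-scheduler failures above are plain "connection refused": the probed processes are already gone, and the kubelet records the raw dial error. For orientation, the readiness probe behind "Get http://10.128.0.32:8181/ready" corresponds to a pod-spec probe of roughly this shape, sketched with the published core/v1 types (period and threshold values are illustrative assumptions, not read from the openshift-dns manifest):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // HTTP GET against the pod IP, port 8181, path /ready; the kubelet
        // resolves Port against the container and dials the pod IP directly.
        readiness := corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/ready",
                    Port: intstr.FromInt32(8181),
                },
            },
            PeriodSeconds:    10, // assumed for illustration
            FailureThreshold: 3,  // assumed for illustration
        }
        fmt.Printf("%+v\n", readiness)
    }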
Feb 16 17:18:45.076097 master-0 kubenswrapper[4167]: I0216 17:18:45.076040 4167 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 17:18:45.076604 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Feb 16 17:18:45.117991 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Feb 16 17:18:45.118569 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Feb 16 17:18:45.123640 master-0 systemd[1]: kubelet.service: Consumed 55.043s CPU time.
-- Boot bff30cf771da4e66994013ec1ab42f05 --
Feb 16 17:23:31.976329 master-0 systemd[1]: Starting Kubernetes Kubelet...
Feb 16 17:23:32.607212 master-0 kubenswrapper[3178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 17:23:32.607212 master-0 kubenswrapper[3178]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 16 17:23:32.607212 master-0 kubenswrapper[3178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: I0216 17:23:32.608839 3178 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
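Each deprecation warning above points at the same remedy: move the flag into the file named by --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump below). A sketch of the equivalent KubeletConfiguration stanzas built from the published kubelet config API, with field values copied from the flags in this log; the real file on disk will carry many more settings, and this is an approximation of the mapping, not the cluster's actual config:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // Config-file equivalents of the deprecated flags logged above.
        cfg := kubeletv1beta1.KubeletConfiguration{
            TypeMeta: metav1.TypeMeta{
                APIVersion: "kubelet.config.k8s.io/v1beta1",
                Kind:       "KubeletConfiguration",
            },
            ContainerRuntimeEndpoint: "/var/run/crio/crio.sock",
            VolumePluginDir:          "/etc/kubernetes/kubelet-plugins/volume/exec",
            RegisterWithTaints: []corev1.Taint{
                {Key: "node-role.kubernetes.io/master", Effect: corev1.TaintEffectNoSchedule},
            },
            SystemReserved: map[string]string{
                "cpu": "500m", "ephemeral-storage": "1Gi", "memory": "1Gi",
            },
        }
        out, err := yaml.Marshal(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }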
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612160 3178 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612171 3178 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612175 3178 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612180 3178 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612184 3178 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612188 3178 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612192 3178 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612196 3178 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612200 3178 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612204 3178 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612208 3178 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:23:32.619021 master-0 kubenswrapper[3178]: W0216 17:23:32.612212 3178 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612216 3178 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612220 3178 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612223 3178 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612227 3178 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612230 3178 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612234 3178 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612259 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612264 3178 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612268 3178 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612271 3178 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612277 3178 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612283 3178 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612286 3178 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612290 3178 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612293 3178 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612297 3178 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612300 3178 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612305 3178 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:23:32.620237 master-0 kubenswrapper[3178]: W0216 17:23:32.612309 3178 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612313 3178 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612317 3178 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612321 3178 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612324 3178 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612327 3178 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612331 3178 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612334 3178 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612338 3178 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612341 3178 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612345 3178 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612348 3178 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612351 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612355 3178 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612358 3178 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612362 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612365 3178 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612369 3178 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612374 3178 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612378 3178 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:23:32.621094 master-0 kubenswrapper[3178]: W0216 17:23:32.612381 3178 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612385 3178 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612388 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612392 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612400 3178 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612404 3178 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612408 3178 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612411 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612415 3178 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612418 3178 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612422 3178 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612425 3178 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612430 3178 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612435 3178 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612439 3178 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612443 3178 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612447 3178 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612450 3178 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612454 3178 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:23:32.623668 master-0 kubenswrapper[3178]: W0216 17:23:32.612458 3178 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: W0216 17:23:32.612461 3178 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: W0216 17:23:32.612466 3178 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
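The wall of "unrecognized feature gate" warnings is expected here: the node is handed the cluster-wide gate list, and gates that exist only at the OpenShift layer are unknown to the kubelet's own registry, so this patched feature_gate.go logs a warning and continues (the W lines, followed by normal startup). The "Setting GA/deprecated feature gate" lines are the complementary case: names the kubelet does know but that no longer need setting. A small sketch of the distinction using k8s.io/component-base/featuregate, with made-up gate names:

    package main

    import (
        "fmt"

        "k8s.io/component-base/featuregate"
    )

    func main() {
        gates := featuregate.NewFeatureGate()
        // Register one known gate; any other name is "unrecognized".
        if err := gates.Add(map[featuregate.Feature]featuregate.FeatureSpec{
            "ExampleKnownGate": {Default: false, PreRelease: featuregate.Alpha},
        }); err != nil {
            panic(err)
        }
        // Upstream, an unknown name makes SetFromMap return an error;
        // the kubenswrapper build above only warns instead.
        err := gates.SetFromMap(map[string]bool{
            "ExampleKnownGate":  true,
            "SomeOpenShiftGate": true, // hypothetical unknown gate
        })
        fmt.Println("SetFromMap:", err)
        fmt.Println("ExampleKnownGate enabled:", gates.Enabled("ExampleKnownGate"))
    }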
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612545 3178 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612553 3178 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612562 3178 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612568 3178 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612574 3178 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612578 3178 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612583 3178 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612588 3178 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612592 3178 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612597 3178 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612602 3178 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612606 3178 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612611 3178 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612615 3178 flags.go:64] FLAG: --cgroup-root=""
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612619 3178 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612624 3178 flags.go:64] FLAG: --client-ca-file=""
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612628 3178 flags.go:64] FLAG: --cloud-config=""
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612632 3178 flags.go:64] FLAG: --cloud-provider=""
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612642 3178 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612652 3178 flags.go:64] FLAG: --cluster-domain=""
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612656 3178 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 17:23:32.624666 master-0 kubenswrapper[3178]: I0216 17:23:32.612661 3178 flags.go:64] FLAG: --config-dir=""
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612664 3178 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612669 3178 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612675 3178 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612679 3178 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612683 3178 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612687 3178 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612691 3178 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612695 3178 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612699 3178 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612704 3178 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612708 3178 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612713 3178 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612717 3178 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612721 3178 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612725 3178 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612729 3178 flags.go:64] FLAG: --enable-server="true"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612733 3178 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612744 3178 flags.go:64] FLAG: --event-burst="100"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612749 3178 flags.go:64] FLAG: --event-qps="50"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612753 3178 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612757 3178 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612762 3178 flags.go:64] FLAG: --eviction-hard=""
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612767 3178 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 17:23:32.626534 master-0 kubenswrapper[3178]: I0216 17:23:32.612771 3178 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612775 3178 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612779 3178 flags.go:64] FLAG: --eviction-soft=""
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612783 3178 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612787 3178 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612791 3178 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612796 3178 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612800 3178 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612804 3178 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612814 3178 flags.go:64] FLAG: --feature-gates=""
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612821 3178 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612827 3178 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612832 3178 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612837 3178 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612842 3178 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612846 3178 flags.go:64] FLAG: --help="false"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612853 3178 flags.go:64] FLAG: --hostname-override=""
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612857 3178 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612861 3178 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612865 3178 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612868 3178 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612873 3178 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612877 3178 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612882 3178 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612886 3178 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 17:23:32.627506 master-0 kubenswrapper[3178]: I0216 17:23:32.612890 3178 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612894 3178 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612898 3178 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612902 3178 flags.go:64] FLAG: --kube-reserved=""
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612906 3178 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612910 3178 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612914 3178 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612918 3178 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612922 3178 flags.go:64] FLAG: --lock-file=""
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612926 3178 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612930 3178 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612934 3178 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612940 3178 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612945 3178 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612949 3178 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612953 3178 flags.go:64] FLAG: --logging-format="text"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612956 3178 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612961 3178 flags.go:64] FLAG: --make-iptables-util-chains="true"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612965 3178 flags.go:64] FLAG: --manifest-url=""
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612969 3178 flags.go:64] FLAG: --manifest-url-header=""
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612981 3178 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612986 3178 flags.go:64] FLAG: --max-open-files="1000000"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612991 3178 flags.go:64] FLAG: --max-pods="110"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.612996 3178 flags.go:64] FLAG: --maximum-dead-containers="-1"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.613000 3178 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Feb 16 17:23:32.628541 master-0 kubenswrapper[3178]: I0216 17:23:32.613004 3178 flags.go:64] FLAG: --memory-manager-policy="None"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613008 3178 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613012 3178 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613016 3178 flags.go:64] FLAG: --node-ip="192.168.32.10"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613020 3178 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613030 3178 flags.go:64] FLAG: --node-status-max-images="50"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613034 3178 flags.go:64] FLAG: --node-status-update-frequency="10s"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613039 3178 flags.go:64] FLAG: --oom-score-adj="-999"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613043 3178 flags.go:64] FLAG: --pod-cidr=""
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613047 3178 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613052 3178 flags.go:64] FLAG: --pod-manifest-path=""
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613056 3178 flags.go:64] FLAG: --pod-max-pids="-1"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613061 3178 flags.go:64] FLAG: --pods-per-core="0"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613065 3178 flags.go:64] FLAG: --port="10250"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613069 3178 flags.go:64] FLAG: --protect-kernel-defaults="false"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613073 3178 flags.go:64] FLAG: --provider-id=""
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613077 3178 flags.go:64] FLAG: --qos-reserved=""
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613081 3178 flags.go:64] FLAG: --read-only-port="10255"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613085 3178 flags.go:64] FLAG: --register-node="true"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613089 3178 flags.go:64] FLAG: --register-schedulable="true"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613093 3178 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613102 3178 flags.go:64] FLAG: --registry-burst="10"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613106 3178 flags.go:64] FLAG: --registry-qps="5"
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613110 3178 flags.go:64] FLAG: --reserved-cpus=""
Feb 16 17:23:32.629537 master-0 kubenswrapper[3178]: I0216 17:23:32.613114 3178 flags.go:64] FLAG: --reserved-memory=""
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613119 3178 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613123 3178 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613128 3178 flags.go:64] FLAG: --rotate-certificates="false"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613132 3178 flags.go:64] FLAG: --rotate-server-certificates="false"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613136 3178 flags.go:64] FLAG: --runonce="false"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613141 3178 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613152 3178 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613157 3178 flags.go:64] FLAG: --seccomp-default="false"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613163 3178 flags.go:64] FLAG: --serialize-image-pulls="true"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613168 3178 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613173 3178 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613178 3178 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613184 3178 flags.go:64] FLAG: --storage-driver-password="root"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613190 3178 flags.go:64] FLAG: --storage-driver-secure="false"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613195 3178 flags.go:64] FLAG: --storage-driver-table="stats"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613200 3178 flags.go:64] FLAG: --storage-driver-user="root"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613206 3178 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613210 3178 flags.go:64] FLAG: --sync-frequency="1m0s"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613215 3178 flags.go:64] FLAG: --system-cgroups=""
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613219 3178 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613229 3178 flags.go:64] FLAG: --system-reserved-cgroup=""
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613233 3178 flags.go:64] FLAG: --tls-cert-file=""
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613238 3178 flags.go:64] FLAG: --tls-cipher-suites="[]"
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613267 3178 flags.go:64] FLAG: --tls-min-version=""
Feb 16 17:23:32.632915 master-0 kubenswrapper[3178]: I0216 17:23:32.613277 3178 flags.go:64] FLAG: --tls-private-key-file=""
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: I0216 17:23:32.613285 3178 flags.go:64] FLAG: --topology-manager-policy="none"
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: I0216 17:23:32.613291 3178 flags.go:64] FLAG: --topology-manager-policy-options=""
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: I0216 17:23:32.613296 3178 flags.go:64] FLAG: --topology-manager-scope="container"
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: I0216 17:23:32.613300 3178 flags.go:64] FLAG: --v="2"
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: I0216 17:23:32.613307 3178 flags.go:64] FLAG: --version="false"
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: I0216 17:23:32.613312 3178 flags.go:64] FLAG: --vmodule=""
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: I0216 17:23:32.613317 3178 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: I0216 17:23:32.613322 3178 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
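The FLAG dump above is the kubelet echoing every parsed command-line flag at startup, which is what makes the effective invocation recoverable from the journal alone (for example --config=/etc/kubernetes/kubelet.conf and --node-ip=192.168.32.10). The same pattern appears to be available to any component via component-base's flag helper; a sketch under that assumption, with an illustrative flag set:

    package main

    import (
        "flag"

        "github.com/spf13/pflag"
        cliflag "k8s.io/component-base/cli/flag"
        "k8s.io/klog/v2"
    )

    func main() {
        klog.InitFlags(nil)
        // PrintFlags logs at verbosity 1; mirror the kubelet's --v=2.
        _ = flag.Set("v", "2")
        flag.Parse()

        fs := pflag.NewFlagSet("demo", pflag.ExitOnError)
        fs.String("node-ip", "192.168.32.10", "node IP")
        fs.String("config", "/etc/kubernetes/kubelet.conf", "config file")
        _ = fs.Parse(nil)

        // Emits one FLAG: --name="value" line per registered flag,
        // matching the format seen in the journal above.
        cliflag.PrintFlags(fs)
        klog.Flush()
    }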
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613464 3178 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613470 3178 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613474 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613478 3178 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613483 3178 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613487 3178 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613490 3178 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613494 3178 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613498 3178 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613506 3178 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613510 3178 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613514 3178 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613518 3178 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:23:32.634907 master-0 kubenswrapper[3178]: W0216 17:23:32.613521 3178 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613525 3178 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613528 3178 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613531 3178 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613535 3178 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613539 3178 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613542 3178 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613546 3178 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613549 3178 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613553 3178 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613557 3178 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613562 3178 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613565 3178 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613569 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613573 3178 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613577 3178 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613581 3178 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613585 3178 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613588 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:23:32.637314 master-0 kubenswrapper[3178]: W0216 17:23:32.613592 3178 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613595 3178 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613599 3178 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613603 3178 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613607 3178 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613611 3178 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613614 3178 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613618 3178 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613621 3178 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613625 3178 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613629 3178 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613633 3178 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613639 3178 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613648 3178 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613652 3178 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613656 3178 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613661 3178 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613665 3178 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613669 3178 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:23:32.640744 master-0 kubenswrapper[3178]: W0216 17:23:32.613672 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613676 3178 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613680 3178 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613683 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613687 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613691 3178 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613695 3178 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613699 3178 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613703 3178 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613707 3178 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613711 3178 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613714 3178 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613717 3178 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613721 3178 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613725 3178 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613728 3178 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613732 3178 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613737 3178 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613741 3178 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613744 3178 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:23:32.641627 master-0 kubenswrapper[3178]: W0216 17:23:32.613748 3178 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: I0216 17:23:32.613759 3178 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: I0216 17:23:32.626325 3178 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: I0216 17:23:32.626353 3178 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626418 3178 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626425 3178 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626434 3178 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626441 3178 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626445 3178 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626450 3178 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626453 3178 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626457 3178 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626461 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626465 3178 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:23:32.642499 master-0 kubenswrapper[3178]: W0216 17:23:32.626469 3178 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626472 3178 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626476 3178 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626481 3178 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626485 3178 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626489 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626493 3178 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626498 3178 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626501 3178 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626505 3178 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626509 3178 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626513 3178 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626517 3178 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626521 3178 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626524 3178 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626530 3178 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626534 3178 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626541 3178 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626546 3178 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626551 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:23:32.643021 master-0 kubenswrapper[3178]: W0216 17:23:32.626556 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626560 3178 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626564 3178 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626568 3178 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626574 3178 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626578 3178 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626582 3178 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626585 3178 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:23:32.643788 
master-0 kubenswrapper[3178]: W0216 17:23:32.626589 3178 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626593 3178 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626596 3178 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626600 3178 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626603 3178 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626607 3178 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626610 3178 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626628 3178 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626632 3178 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626637 3178 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626641 3178 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626645 3178 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:23:32.643788 master-0 kubenswrapper[3178]: W0216 17:23:32.626648 3178 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626652 3178 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626656 3178 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626659 3178 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626663 3178 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626666 3178 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626670 3178 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626675 3178 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626679 3178 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626684 3178 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
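Before that lookup step, the requested gates arrive as Name=bool pairs (here via the kubelet's config file; the deprecated --feature-gates flag uses the same comma-separated form). A hedged sketch of that parsing, using an assumed parseGates helper rather than featuregate's own flag handling:

```go
// Parse a comma-separated Name=bool gate specification of the kind that
// produces the "feature gates: {map[...]}" state dumps in this log.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseGates(spec string) (map[string]bool, error) {
	gates := map[string]bool{}
	for _, pair := range strings.Split(spec, ",") {
		name, raw, ok := strings.Cut(strings.TrimSpace(pair), "=")
		if !ok {
			return nil, fmt.Errorf("missing '=' in %q", pair)
		}
		val, err := strconv.ParseBool(raw)
		if err != nil {
			return nil, fmt.Errorf("invalid bool for gate %s: %v", name, err)
		}
		gates[name] = val
	}
	return gates, nil
}

func main() {
	gates, err := parseGates("CloudDualStackNodeIPs=true,KMSv1=true,NodeSwap=false")
	if err != nil {
		panic(err)
	}
	fmt.Printf("feature gates: %v\n", gates)
}
```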
Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626689 3178 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626693 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626698 3178 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626703 3178 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626707 3178 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626711 3178 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626718 3178 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626722 3178 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626726 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:23:32.645478 master-0 kubenswrapper[3178]: W0216 17:23:32.626730 3178 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626734 3178 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626738 3178 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: I0216 17:23:32.626744 3178 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626876 3178 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626883 3178 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626887 3178 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626892 3178 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626896 3178 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626900 3178 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626904 3178 feature_gate.go:330] unrecognized feature 
gate: ClusterAPIInstall Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626908 3178 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626912 3178 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626915 3178 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626920 3178 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:23:32.646294 master-0 kubenswrapper[3178]: W0216 17:23:32.626925 3178 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626929 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626933 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626937 3178 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626941 3178 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626946 3178 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626950 3178 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626953 3178 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626957 3178 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626961 3178 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626964 3178 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626968 3178 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626971 3178 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626975 3178 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626979 3178 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626984 3178 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626988 3178 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626991 3178 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626995 3178 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 
17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.626998 3178 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:23:32.646924 master-0 kubenswrapper[3178]: W0216 17:23:32.627002 3178 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627005 3178 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627010 3178 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627014 3178 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627019 3178 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627023 3178 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627027 3178 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627031 3178 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627034 3178 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627038 3178 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627042 3178 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627046 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627050 3178 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627053 3178 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627057 3178 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627061 3178 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627065 3178 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627068 3178 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627072 3178 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627075 3178 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:23:32.647791 master-0 kubenswrapper[3178]: W0216 17:23:32.627080 3178 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
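When debugging gate state after the fact, the effective settings can be recovered from the "feature gates: {map[...]}" dump lines repeated through this section. A throwaway troubleshooting helper (an assumption for illustration, not part of the kubelet or any OpenShift tooling) that extracts the map from such a line:

```go
// Extract gate states from a logged "feature gates: {map[Name:bool ...]}" dump.
package main

import (
	"fmt"
	"strings"
)

func gatesFromLogLine(line string) map[string]bool {
	start := strings.Index(line, "{map[")
	end := strings.LastIndex(line, "]}")
	if start < 0 || end <= start {
		return nil // not a gate dump line
	}
	out := map[string]bool{}
	// Fields are space-separated "Name:true" / "Name:false" pairs.
	for _, field := range strings.Fields(line[start+len("{map[") : end]) {
		if name, val, ok := strings.Cut(field, ":"); ok {
			out[name] = val == "true"
		}
	}
	return out
}

func main() {
	line := `feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]}`
	fmt.Println(gatesFromLogLine(line))
	// map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false]
}
```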
Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627084 3178 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627088 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627092 3178 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627095 3178 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627100 3178 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627105 3178 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627110 3178 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627114 3178 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627117 3178 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627121 3178 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627124 3178 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627128 3178 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627132 3178 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627135 3178 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627139 3178 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627143 3178 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627147 3178 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627150 3178 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:23:32.648723 master-0 kubenswrapper[3178]: W0216 17:23:32.627154 3178 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:23:32.649457 master-0 kubenswrapper[3178]: W0216 17:23:32.627159 3178 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:23:32.649457 master-0 kubenswrapper[3178]: I0216 17:23:32.627164 3178 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true 
TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:23:32.649457 master-0 kubenswrapper[3178]: I0216 17:23:32.628230 3178 server.go:940] "Client rotation is on, will bootstrap in background" Feb 16 17:23:32.649457 master-0 kubenswrapper[3178]: I0216 17:23:32.634130 3178 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Feb 16 17:23:32.649457 master-0 kubenswrapper[3178]: I0216 17:23:32.634207 3178 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 16 17:23:32.649457 master-0 kubenswrapper[3178]: I0216 17:23:32.636161 3178 server.go:997] "Starting client certificate rotation" Feb 16 17:23:32.649457 master-0 kubenswrapper[3178]: I0216 17:23:32.636174 3178 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Feb 16 17:23:32.649457 master-0 kubenswrapper[3178]: I0216 17:23:32.637259 3178 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 11:28:00.903514822 +0000 UTC Feb 16 17:23:32.649457 master-0 kubenswrapper[3178]: I0216 17:23:32.637304 3178 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h4m28.266212817s for next certificate rotation Feb 16 17:23:32.667445 master-0 kubenswrapper[3178]: I0216 17:23:32.667398 3178 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:23:32.669205 master-0 kubenswrapper[3178]: I0216 17:23:32.669145 3178 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:23:32.686519 master-0 kubenswrapper[3178]: I0216 17:23:32.686469 3178 log.go:25] "Validated CRI v1 runtime API" Feb 16 17:23:32.752647 master-0 kubenswrapper[3178]: I0216 17:23:32.752567 3178 log.go:25] "Validated CRI v1 image API" Feb 16 17:23:32.755042 master-0 kubenswrapper[3178]: I0216 17:23:32.754786 3178 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 16 17:23:32.761004 master-0 kubenswrapper[3178]: I0216 17:23:32.760944 3178 fs.go:135] Filesystem UUIDs: map[35a0b0cc-84b1-4374-a18a-0f49ad7a8333:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Feb 16 17:23:32.761004 master-0 kubenswrapper[3178]: I0216 17:23:32.760980 3178 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Feb 16 17:23:32.789522 master-0 kubenswrapper[3178]: I0216 17:23:32.788951 3178 manager.go:217] Machine: {Timestamp:2026-02-16 17:23:32.785541301 +0000 UTC m=+0.598233665 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514141184 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f 
BootID:bff30cf7-71da-4e66-9940-13ec1ab42f05 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257070592 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257070592 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:52:47:03:db:66:8a Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:b9:e2 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:4a:2e:ce Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:46:8e:c9:b8:46:79 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514141184 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 17:23:32.789522 master-0 kubenswrapper[3178]: I0216 17:23:32.789504 3178 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
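The Machine dump above (MemoryCapacity:50514141184) combines with the node config logged just below (SystemReserved "memory":"1Gi", hard eviction memory.available of 100Mi, KubeReserved:null) to determine node-allocatable memory. A worked sketch of the standard Kubernetes node-allocatable formula, allocatable = capacity - kubeReserved - systemReserved - evictionHard, memory figures only; this mirrors the documented computation, not the kubelet's exact code path:

```go
// Node-allocatable memory arithmetic using the values from this log.
package main

import "fmt"

func main() {
	const (
		capacity       = 50514141184 // MemoryCapacity from the Machine dump
		kubeReserved   = 0           // "KubeReserved":null in the node config
		systemReserved = 1 << 30     // SystemReserved "memory":"1Gi"
		evictionHard   = 100 << 20   // memory.available hard threshold "100Mi"
	)
	allocatable := capacity - kubeReserved - systemReserved - evictionHard
	fmt.Printf("allocatable memory: %d bytes (%.2f GiB)\n",
		allocatable, float64(allocatable)/(1<<30))
	// allocatable memory: 49335541760 bytes (45.95 GiB)
}
```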
Feb 16 17:23:32.789808 master-0 kubenswrapper[3178]: I0216 17:23:32.789769 3178 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 17:23:32.790260 master-0 kubenswrapper[3178]: I0216 17:23:32.790208 3178 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 17:23:32.790627 master-0 kubenswrapper[3178]: I0216 17:23:32.790572 3178 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 17:23:32.790944 master-0 kubenswrapper[3178]: I0216 17:23:32.790619 3178 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 17:23:32.791008 master-0 kubenswrapper[3178]: I0216 17:23:32.790969 3178 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 17:23:32.791008 master-0 kubenswrapper[3178]: I0216 17:23:32.790991 3178 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 17:23:32.791461 master-0 kubenswrapper[3178]: I0216 17:23:32.791414 3178 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:23:32.791823 master-0 kubenswrapper[3178]: I0216 17:23:32.791788 3178 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:23:32.793038 master-0 kubenswrapper[3178]: I0216 17:23:32.793000 3178 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:23:32.793180 master-0 kubenswrapper[3178]: I0216 17:23:32.793152 3178 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 17:23:32.796311 master-0 kubenswrapper[3178]: I0216 17:23:32.796277 3178 kubelet.go:418] "Attempting to sync node with API server" Feb 16 
17:23:32.796387 master-0 kubenswrapper[3178]: I0216 17:23:32.796314 3178 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 17:23:32.796387 master-0 kubenswrapper[3178]: I0216 17:23:32.796341 3178 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 17:23:32.796387 master-0 kubenswrapper[3178]: I0216 17:23:32.796361 3178 kubelet.go:324] "Adding apiserver pod source" Feb 16 17:23:32.796387 master-0 kubenswrapper[3178]: I0216 17:23:32.796379 3178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 17:23:32.801660 master-0 kubenswrapper[3178]: I0216 17:23:32.801599 3178 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 17:23:32.802941 master-0 kubenswrapper[3178]: I0216 17:23:32.802906 3178 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 17:23:32.806331 master-0 kubenswrapper[3178]: W0216 17:23:32.805945 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:23:32.806331 master-0 kubenswrapper[3178]: W0216 17:23:32.806004 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:23:32.806331 master-0 kubenswrapper[3178]: E0216 17:23:32.806124 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:23:32.806331 master-0 kubenswrapper[3178]: E0216 17:23:32.806115 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:23:32.806621 master-0 kubenswrapper[3178]: I0216 17:23:32.806562 3178 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 17:23:32.806861 master-0 kubenswrapper[3178]: I0216 17:23:32.806836 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.806866 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.806877 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.806886 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.806895 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 
17:23:32.806904 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.806913 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.806923 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.806936 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.806957 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.806985 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 17:23:32.807466 master-0 kubenswrapper[3178]: I0216 17:23:32.807001 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 17:23:32.808153 master-0 kubenswrapper[3178]: I0216 17:23:32.807988 3178 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 17:23:32.808582 master-0 kubenswrapper[3178]: I0216 17:23:32.808434 3178 server.go:1280] "Started kubelet" Feb 16 17:23:32.808810 master-0 kubenswrapper[3178]: I0216 17:23:32.808759 3178 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:23:32.809566 master-0 kubenswrapper[3178]: I0216 17:23:32.809431 3178 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 17:23:32.809566 master-0 kubenswrapper[3178]: I0216 17:23:32.809439 3178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 17:23:32.809566 master-0 kubenswrapper[3178]: I0216 17:23:32.809539 3178 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 16 17:23:32.810005 master-0 kubenswrapper[3178]: I0216 17:23:32.809972 3178 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 17:23:32.810125 master-0 systemd[1]: Started Kubernetes Kubelet. 
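The certificate rotation deadlines logged around here land 70–90% of the way through each certificate's lifetime: assuming 24h certificates, the client certificate's 11:28:00 deadline above sits at roughly 78% of its lifetime and the serving certificate's 13:07:36 deadline just below at roughly 84%. A sketch of that jittered deadline choice, mirroring the behavior of client-go's certificate manager (treat the exact jitter constants and the 24h lifetime as assumptions):

```go
// Pick a rotation deadline at a jittered 70-90% point of the cert lifetime.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Assumed issuance time; the log only shows the 16:50:49 expiry.
	notBefore := time.Date(2026, 2, 16, 16, 50, 49, 0, time.UTC)
	notAfter := time.Date(2026, 2, 17, 16, 50, 49, 0, time.UTC)
	d := rotationDeadline(notBefore, notAfter)
	fmt.Printf("rotation deadline %s, waiting %s\n", d, time.Until(d))
}
```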
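The repeated "connection refused" reflector failures against https://api-int.sno.openstack.lab:6443 are the expected pattern while the API server on this single node is still coming up: each failed LIST is logged as an "Unhandled Error" and retried with backoff until the endpoint answers. A simplified retry loop under that assumption (a stand-in TCP dial, not client-go's reflector):

```go
// Retry a LIST-like call with exponential backoff while the API endpoint
// refuses connections, as the kubelet's informers do during startup.
package main

import (
	"fmt"
	"net"
	"time"
)

// listOnce stands in for the reflector's LIST request; a refused TCP dial is
// what surfaces in the log as "dial tcp ...:6443: connect: connection refused".
func listOnce(endpoint string) error {
	conn, err := net.DialTimeout("tcp", endpoint, time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	const endpoint = "api-int.sno.openstack.lab:6443"
	backoff := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		if err := listOnce(endpoint); err != nil {
			fmt.Printf("failed to list *v1.Node: %v (attempt %d, retrying in %s)\n",
				err, attempt, backoff)
			time.Sleep(backoff)
			backoff *= 2 // real reflectors cap and jitter this
			continue
		}
		fmt.Println("list succeeded; starting watch")
		return
	}
}
```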
Feb 16 17:23:32.811909 master-0 kubenswrapper[3178]: I0216 17:23:32.811878 3178 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 17:23:32.811909 master-0 kubenswrapper[3178]: I0216 17:23:32.811913 3178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 17:23:32.812187 master-0 kubenswrapper[3178]: I0216 17:23:32.812021 3178 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 13:07:36.337685197 +0000 UTC Feb 16 17:23:32.812187 master-0 kubenswrapper[3178]: I0216 17:23:32.812184 3178 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h44m3.525505659s for next certificate rotation Feb 16 17:23:32.812353 master-0 kubenswrapper[3178]: E0216 17:23:32.812190 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 17:23:32.813595 master-0 kubenswrapper[3178]: I0216 17:23:32.813573 3178 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 16 17:23:32.814231 master-0 kubenswrapper[3178]: I0216 17:23:32.813515 3178 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 17:23:32.814231 master-0 kubenswrapper[3178]: I0216 17:23:32.814219 3178 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 17:23:32.819681 master-0 kubenswrapper[3178]: I0216 17:23:32.819625 3178 server.go:449] "Adding debug handlers to kubelet server" Feb 16 17:23:32.819863 master-0 kubenswrapper[3178]: E0216 17:23:32.813946 3178 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.1894c9f63860057a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:23:32.808402298 +0000 UTC m=+0.621094602,LastTimestamp:2026-02-16 17:23:32.808402298 +0000 UTC m=+0.621094602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 17:23:32.821584 master-0 kubenswrapper[3178]: I0216 17:23:32.821532 3178 factory.go:55] Registering systemd factory Feb 16 17:23:32.821584 master-0 kubenswrapper[3178]: E0216 17:23:32.821532 3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 16 17:23:32.821584 master-0 kubenswrapper[3178]: I0216 17:23:32.821562 3178 factory.go:221] Registration of the systemd container factory successfully Feb 16 17:23:32.822029 master-0 kubenswrapper[3178]: W0216 17:23:32.821944 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:23:32.822116 master-0 kubenswrapper[3178]: E0216 17:23:32.822031 3178 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:23:32.822445 master-0 kubenswrapper[3178]: I0216 17:23:32.822396 3178 factory.go:153] Registering CRI-O factory Feb 16 17:23:32.822508 master-0 kubenswrapper[3178]: I0216 17:23:32.822449 3178 factory.go:221] Registration of the crio container factory successfully Feb 16 17:23:32.822610 master-0 kubenswrapper[3178]: I0216 17:23:32.822550 3178 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 17:23:32.822610 master-0 kubenswrapper[3178]: I0216 17:23:32.822600 3178 factory.go:103] Registering Raw factory Feb 16 17:23:32.822699 master-0 kubenswrapper[3178]: I0216 17:23:32.822627 3178 manager.go:1196] Started watching for new ooms in manager Feb 16 17:23:32.828892 master-0 kubenswrapper[3178]: I0216 17:23:32.828142 3178 manager.go:319] Starting recovery of all containers Feb 16 17:23:32.846950 master-0 kubenswrapper[3178]: I0216 17:23:32.846901 3178 manager.go:324] Recovery completed Feb 16 17:23:32.858488 master-0 kubenswrapper[3178]: I0216 17:23:32.858467 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:32.861591 master-0 kubenswrapper[3178]: I0216 17:23:32.861453 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:32.861591 master-0 kubenswrapper[3178]: I0216 17:23:32.861528 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:32.861591 master-0 kubenswrapper[3178]: I0216 17:23:32.861549 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:32.863445 master-0 kubenswrapper[3178]: I0216 17:23:32.863399 3178 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 17:23:32.863445 master-0 kubenswrapper[3178]: I0216 17:23:32.863442 3178 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 17:23:32.863581 master-0 kubenswrapper[3178]: I0216 17:23:32.863483 3178 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:23:32.867725 master-0 kubenswrapper[3178]: I0216 17:23:32.867683 3178 policy_none.go:49] "None policy: Start" Feb 16 17:23:32.868728 master-0 kubenswrapper[3178]: I0216 17:23:32.868680 3178 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 17:23:32.868728 master-0 kubenswrapper[3178]: I0216 17:23:32.868722 3178 state_mem.go:35] "Initializing new in-memory state store" Feb 16 17:23:32.913158 master-0 kubenswrapper[3178]: E0216 17:23:32.913102 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 17:23:32.934772 master-0 kubenswrapper[3178]: I0216 17:23:32.934702 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config" seLinuxMountContext="" Feb 16 17:23:32.934772 master-0 kubenswrapper[3178]: I0216 17:23:32.934763 3178 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw" seLinuxMountContext="" Feb 16 17:23:32.934772 master-0 kubenswrapper[3178]: I0216 17:23:32.934775 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934785 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934794 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934803 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934811 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934821 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934833 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934841 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934851 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934867 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" 
volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934876 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934886 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934895 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934904 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934917 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80d3b238-70c3-4e71-96a1-99405352033f" volumeName="kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934927 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934937 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934948 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934960 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934970 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934979 3178 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.020710 master-0 kubenswrapper[3178]: I0216 17:23:32.934989 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.934998 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935024 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935037 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935048 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935060 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935069 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f44170a-3c1c-4944-b971-251f75a51fc3" volumeName="kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935078 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935091 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935102 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth" 
seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935111 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935122 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935130 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935140 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935148 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935157 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935167 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935177 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935186 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935195 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" volumeName="kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935210 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" 
volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric" seLinuxMountContext="" Feb 16 17:23:33.022544 master-0 kubenswrapper[3178]: I0216 17:23:32.935219 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935228 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935238 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935261 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935273 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935284 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935292 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935303 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935318 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08a90dc5-b0d8-4aad-a002-736492b6c1a9" volumeName="kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935327 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 
17:23:32.935337 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935348 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935358 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935366 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935375 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935384 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935395 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935406 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935416 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935425 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935434 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" 
volumeName="kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca" seLinuxMountContext="" Feb 16 17:23:33.023387 master-0 kubenswrapper[3178]: I0216 17:23:32.935443 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935452 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935461 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935471 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935480 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935488 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935497 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935507 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc29f4-557f-4a75-8b78-6ca425c81b81" volumeName="kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935516 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935526 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="810a2275-fae5-45df-a3b8-92860451d33b" volumeName="kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935535 3178 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935546 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935557 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935598 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935608 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935617 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935626 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935635 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935648 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935656 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935666 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" 
volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:23:33.025053 master-0 kubenswrapper[3178]: I0216 17:23:32.935676 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935687 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935698 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935708 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935720 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935729 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935739 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935749 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935764 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935773 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935782 3178 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935791 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935800 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935809 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935820 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935831 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935840 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935850 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935863 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-metrics-client-ca" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935876 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images" seLinuxMountContext="" Feb 16 17:23:33.025923 master-0 kubenswrapper[3178]: I0216 17:23:32.935889 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg" seLinuxMountContext="" Feb 16 17:23:33.025923 
master-0 kubenswrapper[3178]: I0216 17:23:32.935900 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.935913 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.935927 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.935939 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.935949 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.935964 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.935975 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.935984 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.935992 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936003 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936012 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="54fba066-0e9e-49f6-8a86-34d5b4b660df" volumeName="kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936020 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936028 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936037 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936046 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936056 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936067 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936076 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936084 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936105 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 kubenswrapper[3178]: I0216 17:23:32.936116 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out" seLinuxMountContext="" Feb 16 17:23:33.026811 master-0 
kubenswrapper[3178]: I0216 17:23:32.936127 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936136 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936145 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936153 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936162 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936171 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936181 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936190 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936200 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936214 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936223 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" 
volumeName="kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936232 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c303189e-adae-4fe2-8dd7-cc9b80f73e66" volumeName="kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936241 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936269 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936278 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936288 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936301 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936310 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f44170a-3c1c-4944-b971-251f75a51fc3" volumeName="kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936328 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936337 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936348 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls" seLinuxMountContext="" Feb 16 17:23:33.027715 master-0 kubenswrapper[3178]: I0216 17:23:32.936357 3178 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936365 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936374 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936390 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936401 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936410 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936421 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936437 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936445 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936454 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936462 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" 
volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936472 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936482 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936490 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936506 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936514 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936523 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936533 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936542 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936551 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936563 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca" seLinuxMountContext="" Feb 16 17:23:33.029854 master-0 kubenswrapper[3178]: I0216 17:23:32.936574 
3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936582 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936592 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936602 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936610 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936618 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936628 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936637 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936646 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936655 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936663 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" 
volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936672 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936680 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936690 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936699 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936709 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936720 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936729 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="970d4376-f299-412c-a8ee-90aa980c689e" volumeName="kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936739 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936752 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936763 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv" seLinuxMountContext="" Feb 16 17:23:33.030853 master-0 kubenswrapper[3178]: I0216 17:23:32.936776 3178 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936789 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936800 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936826 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936834 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936843 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936852 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936860 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936869 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936878 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936889 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy" seLinuxMountContext="" Feb 16 
17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936899 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936908 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936918 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936926 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936935 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.936944 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.937020 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.937043 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.937072 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.937082 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy" seLinuxMountContext="" Feb 16 17:23:33.032744 master-0 kubenswrapper[3178]: I0216 17:23:32.937101 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" 
volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937110 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937119 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937129 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937145 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937160 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937175 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ff68421-1741-41c1-93d5-5c722dfd295e" volumeName="kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937189 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937204 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937302 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937312 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 
kubenswrapper[3178]: I0216 17:23:32.937322 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937333 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937345 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937357 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937368 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937379 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937392 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937402 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937412 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937421 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937437 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" 
volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.033784 master-0 kubenswrapper[3178]: I0216 17:23:32.937447 3178 manager.go:334] "Starting Device Plugin manager" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937494 3178 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937521 3178 server.go:79] "Starting device plugin registration server" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937451 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937641 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937653 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937662 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937671 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937685 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937695 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937704 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937712 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert" 
seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937721 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937731 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937740 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937749 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937758 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937768 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937788 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937798 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937808 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937821 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config" seLinuxMountContext="" Feb 16 17:23:33.034718 master-0 kubenswrapper[3178]: I0216 17:23:32.937832 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937844 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937855 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937879 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937893 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937906 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937922 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937937 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937958 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937977 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.937989 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 
17:23:32.938000 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938015 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938025 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938035 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938046 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938056 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938066 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938077 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938096 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938106 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2" seLinuxMountContext="" Feb 16 17:23:33.035617 master-0 kubenswrapper[3178]: I0216 17:23:32.938115 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca" 
seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938125 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="810a2275-fae5-45df-a3b8-92860451d33b" volumeName="kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938129 3178 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938135 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938145 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938155 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938166 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938176 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938185 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938196 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938207 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938237 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 
17:23:32.938262 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938272 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938281 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938290 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938299 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938309 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938320 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938143 3178 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938331 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938341 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0" seLinuxMountContext="" Feb 16 17:23:33.036465 master-0 kubenswrapper[3178]: I0216 17:23:32.938351 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 
17:23:32.938362 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938383 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938394 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938404 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938415 3178 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938423 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938449 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938461 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938471 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938487 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938514 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938526 3178 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938536 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="544c6815-81d7-422a-9e4a-5fcbfabe8da8" volumeName="kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938550 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938559 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938572 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938583 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938596 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938608 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938622 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938636 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config" seLinuxMountContext="" Feb 16 17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938651 3178 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls" seLinuxMountContext="" Feb 16 
17:23:33.037809 master-0 kubenswrapper[3178]: I0216 17:23:32.938664 3178 reconstruct.go:97] "Volume reconstruction finished"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: I0216 17:23:32.938684 3178 reconciler.go:26] "Reconciler: start to sync state"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: I0216 17:23:32.938578 3178 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: I0216 17:23:32.938769 3178 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: E0216 17:23:32.945690 3178 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: I0216 17:23:32.955812 3178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: I0216 17:23:32.957497 3178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: I0216 17:23:32.957547 3178 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: I0216 17:23:32.957574 3178 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: E0216 17:23:32.957618 3178 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: W0216 17:23:32.960441 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: E0216 17:23:32.960495 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:23:33.038806 master-0 kubenswrapper[3178]: E0216 17:23:33.024330 3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
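The two reflector entries above come from client-go's informer machinery inside the kubelet: every watched resource type is kept in sync by a reflector that first lists and then watches it, so while the API server behind api-int.sno.openstack.lab:6443 is still down, each List fails with connection refused and is retried with backoff. The lease error is the same symptom in another component: the kubelet's lease controller cannot reach coordination.k8s.io yet, so it retries on a 400ms interval. A minimal sketch of the same list/watch pattern, assuming client-go and a hypothetical out-of-cluster kubeconfig path (the kubelet itself uses its own credentials):

    package main

    import (
        "fmt"
        "time"

        nodev1 "k8s.io/api/node/v1"
        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path, for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Each informer is backed by a reflector doing List+Watch; an unreachable
        // API server surfaces exactly as the "failed to list *v1.RuntimeClass" above.
        factory := informers.NewSharedInformerFactory(client, 30*time.Second)
        informer := factory.Node().V1().RuntimeClasses().Informer()
        informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                fmt.Println("runtimeclass added:", obj.(*nodev1.RuntimeClass).Name)
            },
        })

        stop := make(chan struct{})
        factory.Start(stop)                              // reflector starts, retrying with backoff
        cache.WaitForCacheSync(stop, informer.HasSynced) // blocks until the first List succeeds
        <-stop
    }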
node="master-0" Feb 16 17:23:33.041455 master-0 kubenswrapper[3178]: E0216 17:23:33.041413 3178 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 17:23:33.059144 master-0 kubenswrapper[3178]: I0216 17:23:33.058740 3178 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:23:33.059144 master-0 kubenswrapper[3178]: I0216 17:23:33.058901 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.060627 master-0 kubenswrapper[3178]: I0216 17:23:33.060575 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.060627 master-0 kubenswrapper[3178]: I0216 17:23:33.060624 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.060788 master-0 kubenswrapper[3178]: I0216 17:23:33.060637 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.060788 master-0 kubenswrapper[3178]: I0216 17:23:33.060784 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.061063 master-0 kubenswrapper[3178]: I0216 17:23:33.061020 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:23:33.061063 master-0 kubenswrapper[3178]: I0216 17:23:33.061063 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.061695 master-0 kubenswrapper[3178]: I0216 17:23:33.061659 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.061695 master-0 kubenswrapper[3178]: I0216 17:23:33.061689 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.061849 master-0 kubenswrapper[3178]: I0216 17:23:33.061700 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.061849 master-0 kubenswrapper[3178]: I0216 17:23:33.061781 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.061849 master-0 kubenswrapper[3178]: I0216 17:23:33.061804 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.061849 master-0 kubenswrapper[3178]: I0216 17:23:33.061813 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.061849 master-0 kubenswrapper[3178]: I0216 17:23:33.061818 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.062549 master-0 kubenswrapper[3178]: I0216 17:23:33.062202 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:23:33.062549 master-0 kubenswrapper[3178]: I0216 17:23:33.062237 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.062549 master-0 kubenswrapper[3178]: I0216 17:23:33.062365 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.062549 master-0 kubenswrapper[3178]: I0216 17:23:33.062384 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.062549 master-0 kubenswrapper[3178]: I0216 17:23:33.062395 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.062549 master-0 kubenswrapper[3178]: I0216 17:23:33.062477 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.063126 master-0 kubenswrapper[3178]: I0216 17:23:33.062648 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:23:33.063126 master-0 kubenswrapper[3178]: I0216 17:23:33.062697 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.063126 master-0 kubenswrapper[3178]: I0216 17:23:33.062738 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.063126 master-0 kubenswrapper[3178]: I0216 17:23:33.062756 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.063126 master-0 kubenswrapper[3178]: I0216 17:23:33.062766 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.063126 master-0 kubenswrapper[3178]: I0216 17:23:33.062950 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.063126 master-0 kubenswrapper[3178]: I0216 17:23:33.062965 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.063126 master-0 kubenswrapper[3178]: I0216 17:23:33.062978 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.063860 master-0 kubenswrapper[3178]: I0216 17:23:33.063221 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.063860 master-0 kubenswrapper[3178]: I0216 17:23:33.063468 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Feb 16 17:23:33.063860 master-0 kubenswrapper[3178]: I0216 17:23:33.063510 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.063860 master-0 kubenswrapper[3178]: I0216 17:23:33.063648 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.063860 master-0 kubenswrapper[3178]: I0216 17:23:33.063680 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.063860 master-0 kubenswrapper[3178]: I0216 17:23:33.063692 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.064644 master-0 kubenswrapper[3178]: I0216 17:23:33.064164 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.064644 master-0 kubenswrapper[3178]: I0216 17:23:33.064203 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.064644 master-0 kubenswrapper[3178]: I0216 17:23:33.064216 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.064644 master-0 kubenswrapper[3178]: I0216 17:23:33.064203 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.064644 master-0 kubenswrapper[3178]: I0216 17:23:33.064269 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.064644 master-0 kubenswrapper[3178]: I0216 17:23:33.064283 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.064644 master-0 kubenswrapper[3178]: I0216 17:23:33.064487 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:23:33.064644 master-0 kubenswrapper[3178]: I0216 17:23:33.064511 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:33.065198 master-0 kubenswrapper[3178]: I0216 17:23:33.065158 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:33.065198 master-0 kubenswrapper[3178]: I0216 17:23:33.065197 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:33.065348 master-0 kubenswrapper[3178]: I0216 17:23:33.065211 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:33.142311 master-0 kubenswrapper[3178]: I0216 17:23:33.142206 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:23:33.142311 master-0 kubenswrapper[3178]: I0216 17:23:33.142310 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:23:33.142644 master-0 kubenswrapper[3178]: I0216 17:23:33.142348 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:23:33.142644 master-0 kubenswrapper[3178]: I0216 17:23:33.142377 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:23:33.142644 master-0 kubenswrapper[3178]: I0216 17:23:33.142408 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:23:33.142644 master-0 kubenswrapper[3178]: I0216 17:23:33.142434 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:23:33.142644 master-0 kubenswrapper[3178]: I0216 17:23:33.142461 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") 
pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:23:33.142644 master-0 kubenswrapper[3178]: I0216 17:23:33.142595 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:23:33.143061 master-0 kubenswrapper[3178]: I0216 17:23:33.142684 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:23:33.143061 master-0 kubenswrapper[3178]: I0216 17:23:33.142763 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:23:33.143061 master-0 kubenswrapper[3178]: I0216 17:23:33.142833 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:23:33.143061 master-0 kubenswrapper[3178]: I0216 17:23:33.142886 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:23:33.143061 master-0 kubenswrapper[3178]: I0216 17:23:33.142948 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:23:33.143061 master-0 kubenswrapper[3178]: I0216 17:23:33.142987 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:23:33.143642 master-0 kubenswrapper[3178]: I0216 17:23:33.143094 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:23:33.143642 master-0 kubenswrapper[3178]: I0216 
Feb 16 17:23:33.143642 master-0 kubenswrapper[3178]: I0216 17:23:33.143230 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.143642 master-0 kubenswrapper[3178]: I0216 17:23:33.143333 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.242434 master-0 kubenswrapper[3178]: I0216 17:23:33.242231 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:33.245741 master-0 kubenswrapper[3178]: I0216 17:23:33.245645 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.245741 master-0 kubenswrapper[3178]: I0216 17:23:33.245716 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.245741 master-0 kubenswrapper[3178]: I0216 17:23:33.245745 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245776 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245801 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245823 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245843 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245853 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245867 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245896 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245920 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245961 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.245981 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:23:33.246039 master-0 kubenswrapper[3178]: I0216 17:23:33.246054 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246020 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246092 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246106 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246136 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246150 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246169 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246189 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246200 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246228 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246277 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246292 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246308 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246329 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246339 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246364 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246450 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246479 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:23:33.247079 master-0 kubenswrapper[3178]: I0216 17:23:33.246482 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:33.249052 master-0 kubenswrapper[3178]: I0216 17:23:33.246528 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.249052 master-0 kubenswrapper[3178]: I0216 17:23:33.246510 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:23:33.249052 master-0 kubenswrapper[3178]: I0216 17:23:33.246686 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.249052 master-0 kubenswrapper[3178]: I0216 17:23:33.246725 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.249052 master-0 kubenswrapper[3178]: I0216 17:23:33.248046 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:33.249052 master-0 kubenswrapper[3178]: I0216 17:23:33.248081 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:33.249052 master-0 kubenswrapper[3178]: I0216 17:23:33.248092 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:33.249052 master-0 kubenswrapper[3178]: I0216 17:23:33.248115 3178 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:23:33.249052 master-0 kubenswrapper[3178]: E0216 17:23:33.248942 3178 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 17:23:33.392823 master-0 kubenswrapper[3178]: I0216 17:23:33.392749 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:33.405542 master-0 kubenswrapper[3178]: I0216 17:23:33.405475 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:23:33.420756 master-0 kubenswrapper[3178]: W0216 17:23:33.420686 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8fa563c7331931f00ce0006e522f0f1.slice/crio-60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82 WatchSource:0}: Error finding container 60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82: Status 404 returned error can't find the container with id 60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82
Feb 16 17:23:33.421120 master-0 kubenswrapper[3178]: W0216 17:23:33.421081 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80420f2e7c3cdda71f7d0d6ccbe6f9f3.slice/crio-06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7 WatchSource:0}: Error finding container 06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7: Status 404 returned error can't find the container with id 06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7
Feb 16 17:23:33.425670 master-0 kubenswrapper[3178]: E0216 17:23:33.425623 3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Feb 16 17:23:33.429386 master-0 kubenswrapper[3178]: I0216 17:23:33.429331 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Feb 16 17:23:33.442540 master-0 kubenswrapper[3178]: I0216 17:23:33.442468 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:33.446718 master-0 kubenswrapper[3178]: W0216 17:23:33.446676 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3322fd3717f4aec0d8f54ec7862c07e.slice/crio-7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1 WatchSource:0}: Error finding container 7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1: Status 404 returned error can't find the container with id 7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1
Feb 16 17:23:33.454672 master-0 kubenswrapper[3178]: I0216 17:23:33.454621 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:33.460727 master-0 kubenswrapper[3178]: W0216 17:23:33.460663 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7adecad495595c43c57c30abd350e987.slice/crio-cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb WatchSource:0}: Error finding container cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb: Status 404 returned error can't find the container with id cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb
Feb 16 17:23:33.649521 master-0 kubenswrapper[3178]: I0216 17:23:33.649041 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:33.649521 master-0 kubenswrapper[3178]: W0216 17:23:33.649463 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:23:33.651294 master-0 kubenswrapper[3178]: E0216 17:23:33.649549 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:23:33.651294 master-0 kubenswrapper[3178]: I0216 17:23:33.650437 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:33.651294 master-0 kubenswrapper[3178]: I0216 17:23:33.650471 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:33.651294 master-0 kubenswrapper[3178]: I0216 17:23:33.650482 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:33.651294 master-0 kubenswrapper[3178]: I0216 17:23:33.650511 3178 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:23:33.651858 master-0 kubenswrapper[3178]: E0216 17:23:33.651814 3178 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 17:23:33.771950 master-0 kubenswrapper[3178]: W0216 17:23:33.771858 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Feb 16 17:23:33.771950 master-0 kubenswrapper[3178]: E0216 17:23:33.771927 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Feb 16 17:23:33.810646 master-0 kubenswrapper[3178]: I0216 17:23:33.810585 3178 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
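
Everything that talks to the API server fails the same way while it is down: the client-go reflectors behind the kubelet's informers (CSIDriver and Service here, Node and RuntimeClass shortly after) log their list/watch failure and retry on their own backoff, and csi_plugin.go keeps polling for its CSINode object. The cAdvisor "Failed to process watch event ... 404" warnings are a different, typically benign race: the cgroup shows up before the runtime can report the container, and the watch event is simply dropped. A sketch of the waiting all of these loops are doing, polling the API server's /readyz endpoint; certificate verification is skipped because this is a one-off bootstrap diagnostic, not production code:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < 120; i++ { // give up after ~4 minutes
    		resp, err := client.Get("https://api-int.sno.openstack.lab:6443/readyz")
    		if err == nil {
    			fmt.Println("API server answered:", resp.Status)
    			resp.Body.Close()
    			return
    		}
    		fmt.Println("still waiting:", err)
    		time.Sleep(2 * time.Second)
    	}
    }
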
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:23:33.961511 master-0 kubenswrapper[3178]: I0216 17:23:33.961349 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb"} Feb 16 17:23:33.963017 master-0 kubenswrapper[3178]: I0216 17:23:33.962948 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1"} Feb 16 17:23:33.963681 master-0 kubenswrapper[3178]: I0216 17:23:33.963636 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82"} Feb 16 17:23:33.964652 master-0 kubenswrapper[3178]: I0216 17:23:33.964610 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7"} Feb 16 17:23:33.965733 master-0 kubenswrapper[3178]: I0216 17:23:33.965697 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"36d50210d5c52db4b7e6fdca90b019b559fb61ad6d363fa02b488e76691be827"} Feb 16 17:23:34.227877 master-0 kubenswrapper[3178]: E0216 17:23:34.227763 3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 16 17:23:34.327151 master-0 kubenswrapper[3178]: W0216 17:23:34.327035 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:23:34.327321 master-0 kubenswrapper[3178]: E0216 17:23:34.327159 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:23:34.340174 master-0 kubenswrapper[3178]: W0216 17:23:34.340057 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:23:34.340399 master-0 kubenswrapper[3178]: E0216 17:23:34.340178 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Feb 16 17:23:34.453204 master-0 kubenswrapper[3178]: I0216 17:23:34.452982 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:34.455149 master-0 kubenswrapper[3178]: I0216 17:23:34.455081 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:34.455149 master-0 kubenswrapper[3178]: I0216 17:23:34.455136 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:34.456716 master-0 kubenswrapper[3178]: I0216 17:23:34.455158 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:34.456716 master-0 kubenswrapper[3178]: I0216 17:23:34.455206 3178 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:23:34.458633 master-0 kubenswrapper[3178]: E0216 17:23:34.458374 3178 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Feb 16 17:23:34.810688 master-0 kubenswrapper[3178]: I0216 17:23:34.810515 3178 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:23:34.970448 master-0 kubenswrapper[3178]: I0216 17:23:34.970334 3178 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="2f9e036184f8cd2fd14e3ee4e8e0984726c748a2f48514f7099254370b0935ca" exitCode=0 Feb 16 17:23:34.970448 master-0 kubenswrapper[3178]: I0216 17:23:34.970444 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"2f9e036184f8cd2fd14e3ee4e8e0984726c748a2f48514f7099254370b0935ca"} Feb 16 17:23:34.970826 master-0 kubenswrapper[3178]: I0216 17:23:34.970545 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:34.971906 master-0 kubenswrapper[3178]: I0216 17:23:34.971850 3178 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="024918b99b0960332808509aca9a4a206a98049b3cbbd79cb59ca43d40614ee8" exitCode=0 Feb 16 17:23:34.972199 master-0 kubenswrapper[3178]: I0216 17:23:34.971907 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"024918b99b0960332808509aca9a4a206a98049b3cbbd79cb59ca43d40614ee8"} Feb 16 17:23:34.972199 master-0 kubenswrapper[3178]: I0216 17:23:34.972029 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:34.972549 master-0 kubenswrapper[3178]: I0216 17:23:34.972502 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:34.972549 master-0 kubenswrapper[3178]: I0216 17:23:34.972536 3178 kubelet_node_status.go:724] "Recording event 
message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:34.972549 master-0 kubenswrapper[3178]: I0216 17:23:34.972551 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:34.973045 master-0 kubenswrapper[3178]: I0216 17:23:34.972989 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:34.973045 master-0 kubenswrapper[3178]: I0216 17:23:34.973042 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:34.973190 master-0 kubenswrapper[3178]: I0216 17:23:34.973055 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:34.976497 master-0 kubenswrapper[3178]: I0216 17:23:34.976449 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:34.976597 master-0 kubenswrapper[3178]: I0216 17:23:34.976546 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"8418547cd53261f1b77929899a0ab7c7d55cf1c91b349c65456cae4040067db4"} Feb 16 17:23:34.977160 master-0 kubenswrapper[3178]: I0216 17:23:34.976223 3178 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="8418547cd53261f1b77929899a0ab7c7d55cf1c91b349c65456cae4040067db4" exitCode=0 Feb 16 17:23:34.981452 master-0 kubenswrapper[3178]: I0216 17:23:34.981362 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:34.981452 master-0 kubenswrapper[3178]: I0216 17:23:34.981457 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:34.981734 master-0 kubenswrapper[3178]: I0216 17:23:34.981482 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:34.983957 master-0 kubenswrapper[3178]: I0216 17:23:34.983880 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838"} Feb 16 17:23:34.984107 master-0 kubenswrapper[3178]: I0216 17:23:34.983954 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"464edba8d57c32478bbe224a3095ce2fb80668145c9a9d6b24b771d9e8330a78"} Feb 16 17:23:34.984107 master-0 kubenswrapper[3178]: I0216 17:23:34.984096 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:34.990440 master-0 kubenswrapper[3178]: I0216 17:23:34.990380 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:34.990598 master-0 kubenswrapper[3178]: I0216 17:23:34.990447 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:34.990598 master-0 kubenswrapper[3178]: I0216 17:23:34.990469 3178 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:34.993220 master-0 kubenswrapper[3178]: I0216 17:23:34.993153 3178 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576" exitCode=0 Feb 16 17:23:34.993220 master-0 kubenswrapper[3178]: I0216 17:23:34.993215 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576"} Feb 16 17:23:34.993534 master-0 kubenswrapper[3178]: I0216 17:23:34.993451 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:34.994534 master-0 kubenswrapper[3178]: I0216 17:23:34.994485 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:34.994647 master-0 kubenswrapper[3178]: I0216 17:23:34.994538 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:34.994647 master-0 kubenswrapper[3178]: I0216 17:23:34.994563 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:34.998781 master-0 kubenswrapper[3178]: I0216 17:23:34.998721 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:35.006393 master-0 kubenswrapper[3178]: I0216 17:23:35.006352 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:35.006518 master-0 kubenswrapper[3178]: I0216 17:23:35.006399 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:35.006518 master-0 kubenswrapper[3178]: I0216 17:23:35.006415 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:35.810161 master-0 kubenswrapper[3178]: I0216 17:23:35.810075 3178 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Feb 16 17:23:35.828697 master-0 kubenswrapper[3178]: E0216 17:23:35.828641 3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 17:23:35.838634 master-0 kubenswrapper[3178]: I0216 17:23:35.838590 3178 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:23:35.997424 master-0 kubenswrapper[3178]: I0216 17:23:35.997369 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"df0e3c6ca3dd8af42c8e9ac9cdc311a5a319df0fa4ca786ea177a90a6aefea49"} Feb 16 17:23:35.997424 master-0 kubenswrapper[3178]: I0216 17:23:35.997415 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"208e46a3c641476f9960cdd4e77a82fbdec0a87c2f2f91e56dfe5eb0ed0268f8"} Feb 16 17:23:35.997424 master-0 kubenswrapper[3178]: I0216 17:23:35.997428 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"94ea5b6007080bd428cd7b6fe066cc75a9a70841a978304207907aa746d9ac27"} Feb 16 17:23:35.997746 master-0 kubenswrapper[3178]: I0216 17:23:35.997516 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:35.998275 master-0 kubenswrapper[3178]: I0216 17:23:35.998218 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:35.998275 master-0 kubenswrapper[3178]: I0216 17:23:35.998262 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:35.998275 master-0 kubenswrapper[3178]: I0216 17:23:35.998273 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:35.999963 master-0 kubenswrapper[3178]: I0216 17:23:35.999925 3178 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="464edba8d57c32478bbe224a3095ce2fb80668145c9a9d6b24b771d9e8330a78" exitCode=1 Feb 16 17:23:36.000054 master-0 kubenswrapper[3178]: I0216 17:23:35.999979 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"464edba8d57c32478bbe224a3095ce2fb80668145c9a9d6b24b771d9e8330a78"} Feb 16 17:23:36.000116 master-0 kubenswrapper[3178]: I0216 17:23:36.000065 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:36.000789 master-0 kubenswrapper[3178]: I0216 17:23:36.000755 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:36.000789 master-0 kubenswrapper[3178]: I0216 17:23:36.000779 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:36.000920 master-0 kubenswrapper[3178]: I0216 17:23:36.000793 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:36.001118 master-0 kubenswrapper[3178]: I0216 17:23:36.001086 3178 scope.go:117] "RemoveContainer" containerID="464edba8d57c32478bbe224a3095ce2fb80668145c9a9d6b24b771d9e8330a78" Feb 16 17:23:36.004390 master-0 kubenswrapper[3178]: I0216 17:23:36.004338 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d"} Feb 16 17:23:36.004504 master-0 kubenswrapper[3178]: I0216 17:23:36.004394 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7"} Feb 16 17:23:36.004504 master-0 kubenswrapper[3178]: 
Feb 16 17:23:36.004504 master-0 kubenswrapper[3178]: I0216 17:23:36.004420 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134"}
Feb 16 17:23:36.005800 master-0 kubenswrapper[3178]: I0216 17:23:36.005763 3178 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="b1d0578227c4edafa8bba585414b028b5fae4c055f5a0b9d56187660cf9393ff" exitCode=0
Feb 16 17:23:36.005849 master-0 kubenswrapper[3178]: I0216 17:23:36.005809 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"b1d0578227c4edafa8bba585414b028b5fae4c055f5a0b9d56187660cf9393ff"}
Feb 16 17:23:36.005939 master-0 kubenswrapper[3178]: I0216 17:23:36.005912 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:36.010932 master-0 kubenswrapper[3178]: I0216 17:23:36.006611 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:36.010932 master-0 kubenswrapper[3178]: I0216 17:23:36.006637 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:36.010932 master-0 kubenswrapper[3178]: I0216 17:23:36.006645 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:36.015549 master-0 kubenswrapper[3178]: I0216 17:23:36.014441 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"5fa8b867f6c7632908fe33e45a5de76207c3a49f016816d7a95a271132f5f9bc"}
Feb 16 17:23:36.015549 master-0 kubenswrapper[3178]: I0216 17:23:36.014509 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:36.022065 master-0 kubenswrapper[3178]: I0216 17:23:36.019969 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:36.022065 master-0 kubenswrapper[3178]: I0216 17:23:36.020014 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:36.022065 master-0 kubenswrapper[3178]: I0216 17:23:36.020025 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:36.060150 master-0 kubenswrapper[3178]: I0216 17:23:36.060064 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:36.061599 master-0 kubenswrapper[3178]: I0216 17:23:36.061549 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:36.061599 master-0 kubenswrapper[3178]: I0216 17:23:36.061590 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:36.061710 master-0 kubenswrapper[3178]: I0216 17:23:36.061604 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:36.061710 master-0 kubenswrapper[3178]: I0216 17:23:36.061634 3178 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:23:36.063125 master-0 kubenswrapper[3178]: E0216 17:23:36.062691 3178 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Feb 16 17:23:37.021909 master-0 kubenswrapper[3178]: I0216 17:23:37.021834 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5"}
Feb 16 17:23:37.022697 master-0 kubenswrapper[3178]: I0216 17:23:37.021947 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:37.023550 master-0 kubenswrapper[3178]: I0216 17:23:37.022919 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:37.023550 master-0 kubenswrapper[3178]: I0216 17:23:37.022959 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:37.023550 master-0 kubenswrapper[3178]: I0216 17:23:37.022969 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:37.025027 master-0 kubenswrapper[3178]: I0216 17:23:37.024993 3178 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="aa4ad8fed1eda81c2562cc6c50ee8eff149a61c6fa1ef5cf233edb4d1184264a" exitCode=0
Feb 16 17:23:37.025109 master-0 kubenswrapper[3178]: I0216 17:23:37.025075 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"aa4ad8fed1eda81c2562cc6c50ee8eff149a61c6fa1ef5cf233edb4d1184264a"}
Feb 16 17:23:37.025325 master-0 kubenswrapper[3178]: I0216 17:23:37.025142 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:37.026211 master-0 kubenswrapper[3178]: I0216 17:23:37.026180 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:37.026282 master-0 kubenswrapper[3178]: I0216 17:23:37.026216 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:37.026282 master-0 kubenswrapper[3178]: I0216 17:23:37.026226 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:37.028138 master-0 kubenswrapper[3178]: I0216 17:23:37.027548 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0"}
Feb 16 17:23:37.028138 master-0 kubenswrapper[3178]: I0216 17:23:37.027615 3178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:23:37.028138 master-0 kubenswrapper[3178]: I0216 17:23:37.027635 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:37.028138 master-0 kubenswrapper[3178]: I0216 17:23:37.027662 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:37.028138 master-0 kubenswrapper[3178]: I0216 17:23:37.027861 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:37.028682 master-0 kubenswrapper[3178]: I0216 17:23:37.028666 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:37.028738 master-0 kubenswrapper[3178]: I0216 17:23:37.028687 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:37.028738 master-0 kubenswrapper[3178]: I0216 17:23:37.028695 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:37.028800 master-0 kubenswrapper[3178]: I0216 17:23:37.028772 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:37.028836 master-0 kubenswrapper[3178]: I0216 17:23:37.028804 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:37.028836 master-0 kubenswrapper[3178]: I0216 17:23:37.028816 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:37.029499 master-0 kubenswrapper[3178]: I0216 17:23:37.029423 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:37.029499 master-0 kubenswrapper[3178]: I0216 17:23:37.029473 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:37.029609 master-0 kubenswrapper[3178]: I0216 17:23:37.029506 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:37.252433 master-0 kubenswrapper[3178]: I0216 17:23:37.252376 3178 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:38.034936 master-0 kubenswrapper[3178]: I0216 17:23:38.034768 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"022e1fddc422dc77252f1d7b260702feb66ffc90c31448ea87e5739cd23f3805"}
Feb 16 17:23:38.034936 master-0 kubenswrapper[3178]: I0216 17:23:38.034854 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"2e99ae292d3b4bbd76a9fa68cff04a8bd972ff36354aa7a07d342bf6c90a37c3"}
Feb 16 17:23:38.034936 master-0 kubenswrapper[3178]: I0216 17:23:38.034875 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"78c4c28ec182d11145fdbcfed4e0587dbd19c642c7d08933143edfffac5518da"}
Feb 16 17:23:38.034936 master-0 kubenswrapper[3178]: I0216 17:23:38.034884 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:38.034936 master-0 kubenswrapper[3178]: I0216 17:23:38.034931 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:38.035698 master-0 kubenswrapper[3178]: I0216 17:23:38.034890 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"1208371bf87d9b91c91faffa32fd11198d3867d9e1e74ab3e3e862ddf72963a2"}
Feb 16 17:23:38.035698 master-0 kubenswrapper[3178]: I0216 17:23:38.035045 3178 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:38.035867 master-0 kubenswrapper[3178]: I0216 17:23:38.035841 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:38.035915 master-0 kubenswrapper[3178]: I0216 17:23:38.035873 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:38.035915 master-0 kubenswrapper[3178]: I0216 17:23:38.035885 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:38.036774 master-0 kubenswrapper[3178]: I0216 17:23:38.036732 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:38.036828 master-0 kubenswrapper[3178]: I0216 17:23:38.036777 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:38.036828 master-0 kubenswrapper[3178]: I0216 17:23:38.036793 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:38.839084 master-0 kubenswrapper[3178]: I0216 17:23:38.838701 3178 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:23:39.039490 master-0 kubenswrapper[3178]: I0216 17:23:39.039430 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:39.040208 master-0 kubenswrapper[3178]: I0216 17:23:39.040146 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:39.040477 master-0 kubenswrapper[3178]: I0216 17:23:39.040367 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"92efa28d5ccdf6a2d1f34efa5ec12c219983a97ee3917a992682d4d798721c42"}
Feb 16 17:23:39.040477 master-0 kubenswrapper[3178]: I0216 17:23:39.040429 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:39.040800 master-0 kubenswrapper[3178]: I0216 17:23:39.040739 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:39.040800 master-0 kubenswrapper[3178]: I0216 17:23:39.040769 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:39.040800 master-0 kubenswrapper[3178]: I0216 17:23:39.040779 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
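
The startup probe failure above is informative: the kubelet reached for https://localhost:10357/healthz (cluster-policy-controller's health port) but got no response before the probe timeout, which is what "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" means. That is consistent with a controller that is running but blocked waiting for the API server. Reproducing the probe by hand is short; the 2s timeout is an assumption standing in for the pod's configured timeoutSeconds, and verification is skipped because the kubelet's own HTTPS probes do not verify certificates either:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second, // stand-in for the probe's timeoutSeconds
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://localhost:10357/healthz")
    	if err != nil {
    		fmt.Println("probe failed:", err) // same shape as the kubelet's output= field
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("probe status:", resp.Status)
    }
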
Feb 16 17:23:39.040800 master-0 kubenswrapper[3178]: I0216 17:23:39.040797 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:39.041066 master-0 kubenswrapper[3178]: I0216 17:23:39.040812 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:39.041066 master-0 kubenswrapper[3178]: I0216 17:23:39.040820 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:39.041066 master-0 kubenswrapper[3178]: I0216 17:23:39.040954 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:39.041066 master-0 kubenswrapper[3178]: I0216 17:23:39.040964 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:39.041066 master-0 kubenswrapper[3178]: I0216 17:23:39.040971 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:39.263223 master-0 kubenswrapper[3178]: I0216 17:23:39.263165 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:39.264380 master-0 kubenswrapper[3178]: I0216 17:23:39.264345 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:39.264380 master-0 kubenswrapper[3178]: I0216 17:23:39.264379 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:39.264538 master-0 kubenswrapper[3178]: I0216 17:23:39.264392 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:39.264538 master-0 kubenswrapper[3178]: I0216 17:23:39.264414 3178 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Feb 16 17:23:40.041920 master-0 kubenswrapper[3178]: I0216 17:23:40.041864 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:40.042737 master-0 kubenswrapper[3178]: I0216 17:23:40.042691 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:40.042831 master-0 kubenswrapper[3178]: I0216 17:23:40.042745 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:40.042831 master-0 kubenswrapper[3178]: I0216 17:23:40.042778 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:40.121356 master-0 kubenswrapper[3178]: I0216 17:23:40.121220 3178 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:40.752108 master-0 kubenswrapper[3178]: I0216 17:23:40.751961 3178 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:40.752414 master-0 kubenswrapper[3178]: I0216 17:23:40.752378 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:40.754025 master-0 kubenswrapper[3178]: I0216 17:23:40.753975 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:40.754081 master-0 kubenswrapper[3178]: I0216 17:23:40.754036 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:40.754081 master-0 kubenswrapper[3178]: I0216 17:23:40.754053 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:40.995651 master-0 kubenswrapper[3178]: I0216 17:23:40.995530 3178 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:23:41.044439 master-0 kubenswrapper[3178]: I0216 17:23:41.044224 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:41.044439 master-0 kubenswrapper[3178]: I0216 17:23:41.044395 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:41.045201 master-0 kubenswrapper[3178]: I0216 17:23:41.045164 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:41.045201 master-0 kubenswrapper[3178]: I0216 17:23:41.045197 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:41.045291 master-0 kubenswrapper[3178]: I0216 17:23:41.045208 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:41.045655 master-0 kubenswrapper[3178]: I0216 17:23:41.045609 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:41.045700 master-0 kubenswrapper[3178]: I0216 17:23:41.045671 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:41.045736 master-0 kubenswrapper[3178]: I0216 17:23:41.045723 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:41.487585 master-0 kubenswrapper[3178]: I0216 17:23:41.487372 3178 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:23:41.487839 master-0 kubenswrapper[3178]: I0216 17:23:41.487734 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:41.489195 master-0 kubenswrapper[3178]: I0216 17:23:41.489146 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:41.489292 master-0 kubenswrapper[3178]: I0216 17:23:41.489227 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:41.489341 master-0 kubenswrapper[3178]: I0216 17:23:41.489289 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:42.326188 master-0 kubenswrapper[3178]: I0216 17:23:42.326056 3178 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:42.327025 master-0 kubenswrapper[3178]: I0216 17:23:42.326340 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:42.328090 master-0 kubenswrapper[3178]: I0216 17:23:42.328029 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:42.328207 master-0 kubenswrapper[3178]: I0216 17:23:42.328163 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:42.328207 master-0 kubenswrapper[3178]: I0216 17:23:42.328185 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:42.946021 master-0 kubenswrapper[3178]: E0216 17:23:42.945775 3178 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Feb 16 17:23:43.127801 master-0 kubenswrapper[3178]: I0216 17:23:43.127682 3178 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:43.128124 master-0 kubenswrapper[3178]: I0216 17:23:43.127945 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:43.129455 master-0 kubenswrapper[3178]: I0216 17:23:43.129395 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:43.129567 master-0 kubenswrapper[3178]: I0216 17:23:43.129462 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:43.129567 master-0 kubenswrapper[3178]: I0216 17:23:43.129503 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:43.769674 master-0 kubenswrapper[3178]: I0216 17:23:43.769507 3178 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:43.770709 master-0 kubenswrapper[3178]: I0216 17:23:43.769751 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:43.771010 master-0 kubenswrapper[3178]: I0216 17:23:43.770949 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:43.771010 master-0 kubenswrapper[3178]: I0216 17:23:43.771008 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:43.771204 master-0 kubenswrapper[3178]: I0216 17:23:43.771031 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:46.676663 master-0 kubenswrapper[3178]: W0216 17:23:46.676475 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Feb 16 17:23:46.676663 master-0 kubenswrapper[3178]: I0216 17:23:46.676659 3178 trace.go:236] Trace[1554652371]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:23:36.674) (total time: 10001ms):
Feb 16 17:23:46.676663 master-0 kubenswrapper[3178]: Trace[1554652371]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:23:46.676)
Feb 16 17:23:46.676663 master-0 kubenswrapper[3178]: Trace[1554652371]: [10.001698219s] [10.001698219s] END
Feb 16 17:23:46.677789 master-0 kubenswrapper[3178]: E0216 17:23:46.676705 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
kubenswrapper[3178]: E0216 17:23:46.676705 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 17:23:46.694711 master-0 kubenswrapper[3178]: W0216 17:23:46.694575 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:23:46.694861 master-0 kubenswrapper[3178]: I0216 17:23:46.694740 3178 trace.go:236] Trace[2124527742]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:23:36.692) (total time: 10002ms): Feb 16 17:23:46.694861 master-0 kubenswrapper[3178]: Trace[2124527742]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (17:23:46.694) Feb 16 17:23:46.694861 master-0 kubenswrapper[3178]: Trace[2124527742]: [10.002385417s] [10.002385417s] END Feb 16 17:23:46.694861 master-0 kubenswrapper[3178]: E0216 17:23:46.694781 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 17:23:46.770186 master-0 kubenswrapper[3178]: I0216 17:23:46.770059 3178 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:46.780943 master-0 kubenswrapper[3178]: W0216 17:23:46.780803 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:23:46.781053 master-0 kubenswrapper[3178]: I0216 17:23:46.780985 3178 trace.go:236] Trace[1981042883]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:23:36.778) (total time: 10001ms): Feb 16 17:23:46.781053 master-0 kubenswrapper[3178]: Trace[1981042883]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:23:46.780) Feb 16 17:23:46.781053 master-0 kubenswrapper[3178]: Trace[1981042883]: [10.001953206s] [10.001953206s] END Feb 16 17:23:46.781234 master-0 kubenswrapper[3178]: E0216 17:23:46.781035 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 17:23:46.811638 master-0 kubenswrapper[3178]: I0216 
17:23:46.811547 3178 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:23:46.982577 master-0 kubenswrapper[3178]: W0216 17:23:46.982360 3178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 17:23:46.982577 master-0 kubenswrapper[3178]: I0216 17:23:46.982444 3178 trace.go:236] Trace[787570604]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 17:23:36.980) (total time: 10001ms): Feb 16 17:23:46.982577 master-0 kubenswrapper[3178]: Trace[787570604]: ---"Objects listed" error:Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (17:23:46.982) Feb 16 17:23:46.982577 master-0 kubenswrapper[3178]: Trace[787570604]: [10.001747281s] [10.001747281s] END Feb 16 17:23:46.982577 master-0 kubenswrapper[3178]: E0216 17:23:46.982465 3178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 17:23:48.069486 master-0 kubenswrapper[3178]: I0216 17:23:48.069204 3178 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0" exitCode=1 Feb 16 17:23:48.069486 master-0 kubenswrapper[3178]: I0216 17:23:48.069337 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0"} Feb 16 17:23:48.069486 master-0 kubenswrapper[3178]: I0216 17:23:48.069427 3178 scope.go:117] "RemoveContainer" containerID="464edba8d57c32478bbe224a3095ce2fb80668145c9a9d6b24b771d9e8330a78" Feb 16 17:23:48.070806 master-0 kubenswrapper[3178]: I0216 17:23:48.069561 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:48.071318 master-0 kubenswrapper[3178]: I0216 17:23:48.071218 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:48.071432 master-0 kubenswrapper[3178]: I0216 17:23:48.071320 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:48.071432 master-0 kubenswrapper[3178]: I0216 17:23:48.071349 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:48.072408 master-0 kubenswrapper[3178]: I0216 17:23:48.072112 3178 scope.go:117] "RemoveContainer" containerID="ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0" Feb 16 17:23:48.072731 master-0 kubenswrapper[3178]: E0216 17:23:48.072670 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:23:48.839407 master-0 kubenswrapper[3178]: I0216 17:23:48.839314 3178 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:49.030071 master-0 kubenswrapper[3178]: E0216 17:23:49.029951 3178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Feb 16 17:23:49.266365 master-0 kubenswrapper[3178]: E0216 17:23:49.266231 3178 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="master-0" Feb 16 17:23:49.547833 master-0 kubenswrapper[3178]: I0216 17:23:49.547421 3178 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 17:23:49.547833 master-0 kubenswrapper[3178]: I0216 17:23:49.547493 3178 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 17:23:49.553904 master-0 kubenswrapper[3178]: I0216 17:23:49.553838 3178 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 17:23:49.554099 master-0 kubenswrapper[3178]: I0216 17:23:49.553942 3178 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 17:23:50.156764 master-0 kubenswrapper[3178]: I0216 17:23:50.156705 3178 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 17:23:50.157039 master-0 kubenswrapper[3178]: I0216 17:23:50.156977 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:50.158410 master-0 kubenswrapper[3178]: I0216 17:23:50.158345 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 
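The 403 and 500 startup-probe failures above come from the kubelet's HTTP prober polling the apiserver's /livez endpoint while bootstrap is still settling. What follows is a minimal, illustrative Go sketch of that kind of poll against the api-int endpoint named in the log; the skip-verify TLS transport, the retry cadence, and the attempt count are assumptions for demonstration, not the prober's actual configuration.

```go
// Illustrative only: poll /livez the way a startup probe would, using the
// api-int endpoint quoted in the log above. InsecureSkipVerify is a
// stand-in for the probe's real TLS setup (assumption).
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 10 * time.Second, // mirrors the 10s timeouts seen in the log
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 5; i++ {
		resp, err := client.Get("https://api-int.sno.openstack.lab:6443/livez")
		if err != nil {
			// e.g. "net/http: TLS handshake timeout" while the apiserver
			// is not yet serving, exactly as the reflectors report above.
			fmt.Println("probe error:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		// An anonymous request can legitimately see 403 until RBAC bootstrap
		// finishes, and 500 while individual poststarthooks still fail.
		fmt.Printf("status=%d body=%s\n", resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```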
Feb 16 17:23:50.158504 master-0 kubenswrapper[3178]: I0216 17:23:50.158434 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:50.158504 master-0 kubenswrapper[3178]: I0216 17:23:50.158459 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:50.175024 master-0 kubenswrapper[3178]: I0216 17:23:50.174968 3178 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: I0216 17:23:51.002558 3178 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]log ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]etcd ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/priority-and-fairness-filter ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-apiextensions-informers ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-apiextensions-controllers ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/crd-informer-synced ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-system-namespaces-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/rbac/bootstrap-roles ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/priority-and-fairness-config-producer ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/bootstrap-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/apiservice-registration-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]autoregister-completion ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/apiservice-openapi-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 16 17:23:51.002620 master-0 kubenswrapper[3178]: livez check failed
Feb 16 17:23:51.006577 master-0 kubenswrapper[3178]: I0216 17:23:51.003349 3178 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:23:51.080030 master-0 kubenswrapper[3178]: I0216 17:23:51.079945 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:51.081203 master-0 kubenswrapper[3178]: I0216 17:23:51.081150 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:51.081203 master-0 kubenswrapper[3178]: I0216 17:23:51.081204 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:51.081410 master-0 kubenswrapper[3178]: I0216 17:23:51.081218 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:52.524962 master-0 kubenswrapper[3178]: I0216 17:23:52.524890 3178 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Feb 16 17:23:52.525592 master-0 kubenswrapper[3178]: I0216 17:23:52.525080 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 17:23:52.526210 master-0 kubenswrapper[3178]: I0216 17:23:52.526149 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Feb 16 17:23:52.526288 master-0 kubenswrapper[3178]: I0216 17:23:52.526213 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Feb 16 17:23:52.526288 master-0 kubenswrapper[3178]: I0216 17:23:52.526224 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Feb 16 17:23:52.526706 master-0 kubenswrapper[3178]: I0216 17:23:52.526677 3178 scope.go:117] "RemoveContainer" containerID="ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0"
err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:23:52.946233 master-0 kubenswrapper[3178]: E0216 17:23:52.946122 3178 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Feb 16 17:23:53.769562 master-0 kubenswrapper[3178]: I0216 17:23:53.769494 3178 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:23:53.770011 master-0 kubenswrapper[3178]: I0216 17:23:53.769707 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:53.771021 master-0 kubenswrapper[3178]: I0216 17:23:53.770988 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:53.771021 master-0 kubenswrapper[3178]: I0216 17:23:53.771019 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:53.771138 master-0 kubenswrapper[3178]: I0216 17:23:53.771030 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:53.771507 master-0 kubenswrapper[3178]: I0216 17:23:53.771470 3178 scope.go:117] "RemoveContainer" containerID="ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0" Feb 16 17:23:53.771684 master-0 kubenswrapper[3178]: E0216 17:23:53.771654 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" Feb 16 17:23:55.424210 master-0 kubenswrapper[3178]: I0216 17:23:55.424106 3178 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:23:55.667410 master-0 kubenswrapper[3178]: I0216 17:23:55.667314 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:55.668872 master-0 kubenswrapper[3178]: I0216 17:23:55.668818 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:55.669007 master-0 kubenswrapper[3178]: I0216 17:23:55.668885 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:55.669007 master-0 kubenswrapper[3178]: I0216 17:23:55.668904 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:55.669007 master-0 kubenswrapper[3178]: I0216 17:23:55.668943 3178 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:23:55.688763 master-0 kubenswrapper[3178]: I0216 17:23:55.688568 3178 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 16 17:23:55.688994 master-0 kubenswrapper[3178]: I0216 17:23:55.688892 3178 kubelet_node_status.go:79] "Successfully 
registered node" node="master-0" Feb 16 17:23:55.688994 master-0 kubenswrapper[3178]: E0216 17:23:55.688919 3178 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Feb 16 17:23:55.692120 master-0 kubenswrapper[3178]: I0216 17:23:55.692075 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:23:55.692281 master-0 kubenswrapper[3178]: I0216 17:23:55.692120 3178 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:23:55Z","lastTransitionTime":"2026-02-16T17:23:55Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 16 17:23:55.708011 master-0 kubenswrapper[3178]: E0216 17:23:55.707947 3178 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bff30cf7-71da-4e66-9940-13ec1ab42f05\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:55.712701 master-0 kubenswrapper[3178]: I0216 17:23:55.712650 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:23:55.712823 master-0 kubenswrapper[3178]: I0216 17:23:55.712704 3178 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:23:55Z","lastTransitionTime":"2026-02-16T17:23:55Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]"} Feb 16 17:23:55.721822 master-0 kubenswrapper[3178]: E0216 17:23:55.721746 3178 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bff30cf7-71da-4e66-9940-13ec1ab42f05\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:55.725554 master-0 kubenswrapper[3178]: I0216 17:23:55.725501 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:23:55.725651 master-0 kubenswrapper[3178]: I0216 17:23:55.725551 3178 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:23:55Z","lastTransitionTime":"2026-02-16T17:23:55Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 16 17:23:55.735289 master-0 kubenswrapper[3178]: E0216 17:23:55.735216 3178 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bff30cf7-71da-4e66-9940-13ec1ab42f05\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:55.741605 master-0 kubenswrapper[3178]: I0216 17:23:55.741569 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:23:55.741709 master-0 kubenswrapper[3178]: I0216 17:23:55.741608 3178 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:23:55Z","lastTransitionTime":"2026-02-16T17:23:55Z","reason":"KubeletNotReady","message":"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?, CSINode is not yet initialized]"} Feb 16 17:23:55.748740 master-0 kubenswrapper[3178]: E0216 17:23:55.748656 3178 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:55Z\\\",\\\"message\\\":\\\"[container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?, CSINode is not yet initialized]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bff30cf7-71da-4e66-9940-13ec1ab42f05\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:55.748740 master-0 kubenswrapper[3178]: E0216 17:23:55.748727 3178 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:23:55.748906 master-0 kubenswrapper[3178]: E0216 17:23:55.748760 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 17:23:55.849062 master-0 kubenswrapper[3178]: E0216 17:23:55.848989 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 17:23:55.949983 master-0 kubenswrapper[3178]: E0216 17:23:55.949795 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 17:23:56.002393 master-0 kubenswrapper[3178]: I0216 17:23:56.002327 3178 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:23:56.002605 master-0 kubenswrapper[3178]: I0216 17:23:56.002493 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:56.003644 master-0 kubenswrapper[3178]: I0216 17:23:56.003552 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:56.003644 master-0 kubenswrapper[3178]: I0216 17:23:56.003613 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:56.003644 master-0 kubenswrapper[3178]: I0216 17:23:56.003634 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:56.008883 master-0 kubenswrapper[3178]: I0216 17:23:56.008852 3178 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:23:56.050512 master-0 kubenswrapper[3178]: E0216 17:23:56.050441 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Feb 16 17:23:56.094693 master-0 kubenswrapper[3178]: I0216 17:23:56.094625 3178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:23:56.094693 master-0 kubenswrapper[3178]: I0216 17:23:56.094698 3178 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:23:56.095863 master-0 kubenswrapper[3178]: I0216 17:23:56.095823 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:23:56.095922 master-0 kubenswrapper[3178]: I0216 17:23:56.095873 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:23:56.095922 master-0 kubenswrapper[3178]: I0216 17:23:56.095890 3178 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:23:56.151003 master-0 kubenswrapper[3178]: E0216 17:23:56.150924 
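The four "failed to patch status" retries above all die on the same dependency: the node.network-node-identity.openshift.io webhook points at a backend on 127.0.0.1:9743 that is not serving yet, so the kubelet exhausts its retries and logs "update node status exceeds retry count". Below is a hedged client-go sketch of the same kind of strategic-merge patch against the Node status subresource; the kubeconfig path and the minimal condition payload are assumptions for illustration, and the real kubelet builds its patch from a diff of the full NodeStatus.

```go
// Illustrative only: a minimal client-go sketch of the status patch the
// kubelet retries above (kubelet_node_status.go:585). The kubeconfig path
// and the condition payload are assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Strategic-merge patch against the "status" subresource, a simplified
	// version of the patch body quoted verbatim in the log entries above.
	patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"KubeletNotReady"}]}}`)
	_, err = cs.CoreV1().Nodes().Patch(context.TODO(), "master-0",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		// A webhook intercepting node updates surfaces here exactly as in
		// the log: "Internal error occurred: failed calling webhook ...".
		fmt.Println("patch failed:", err)
	}
}
```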
Feb 16 17:23:56.151003 master-0 kubenswrapper[3178]: E0216 17:23:56.150924 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:23:56.252061 master-0 kubenswrapper[3178]: E0216 17:23:56.251897 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:23:56.353098 master-0 kubenswrapper[3178]: E0216 17:23:56.352947 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:23:56.453654 master-0 kubenswrapper[3178]: E0216 17:23:56.453566 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:23:56.554476 master-0 kubenswrapper[3178]: E0216 17:23:56.554317 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:23:56.655145 master-0 kubenswrapper[3178]: E0216 17:23:56.655063 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:23:56.756277 master-0 kubenswrapper[3178]: E0216 17:23:56.756188 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:23:56.857123 master-0 kubenswrapper[3178]: E0216 17:23:56.856978 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:23:56.952270 master-0 kubenswrapper[3178]: I0216 17:23:56.952193 3178 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 16 17:23:56.957526 master-0 kubenswrapper[3178]: E0216 17:23:56.957476 3178 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Feb 16 17:23:56.983173 master-0 kubenswrapper[3178]: I0216 17:23:56.982991 3178 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 17:23:57.528410 master-0 kubenswrapper[3178]: I0216 17:23:57.528346 3178 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 17:23:57.812103 master-0 kubenswrapper[3178]: I0216 17:23:57.811875 3178 apiserver.go:52] "Watching apiserver"
Feb 16 17:23:57.844611 master-0 kubenswrapper[3178]: I0216 17:23:57.844537 3178 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
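At this point the informer caches have populated and the kubelet starts syncing the full pod set, but the NodeNotReady condition above and the flood of "Error syncing pod" entries below all reduce to one missing artifact: no CNI configuration file in /etc/kubernetes/cni/net.d/ until the network plugin (OVN-Kubernetes here) writes one. The following stdlib Go sketch illustrates the kind of directory check the runtime's CNI manager performs; the accepted file extensions are an assumption, and the real check lives inside the container runtime, not the kubelet.

```go
// Illustrative only: a stdlib sketch of the readiness check implied by the
// "no CNI configuration file in /etc/kubernetes/cni/net.d/" errors in this
// log. The extension list is an assumption.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		os.Exit(1)
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // common CNI config extensions
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		// Until the network provider writes a config, the node stays
		// NotReady and non-host-network pods cannot get a sandbox.
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
	}
}
```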
pods=["openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx","openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48","openshift-marketplace/redhat-marketplace-4kd66","openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz","openshift-service-ca/service-ca-676cd8b9b5-cp9rb","openshift-kube-scheduler/installer-4-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-console-operator/console-operator-7777d5cc66-64vhv","openshift-console/console-599b567ff7-nrcpr","openshift-console/downloads-dcd7b7d95-dhhfh","openshift-dns/node-resolver-vfxj4","openshift-kube-apiserver/installer-3-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k","openshift-monitoring/monitoring-plugin-555857f695-nlrnr","openshift-authentication/oauth-openshift-64f85b8fc9-n9msn","openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd","openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w","openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2","openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf","openshift-authentication-operator/authentication-operator-755d954778-lf4cb","openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9","openshift-etcd/etcd-master-0","openshift-etcd/installer-2-master-0","openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6","openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn","openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6","openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd","openshift-marketplace/certified-operators-z69zq","openshift-monitoring/metrics-server-745bd8d89b-qr4zh","openshift-monitoring/prometheus-operator-7485d645b8-zxxwd","openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc","assisted-installer/assisted-installer-controller-thhq2","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz","openshift-machine-config-operator/machine-config-server-2ws9r","openshift-ovn-kubernetes/ovnkube-node-flr86","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp","openshift-image-registry/node-ca-xv2wv","openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx","openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw","openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg","openshift-network-node-identity/network-node-identity-hhcpr","openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv","openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2","openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb","openshift-dns/dns-default-qcgxx","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv","openshift-marketplace/community-operators-7w4km","openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b","openshift-ingress-canary/ingress-canary-qqvg4","openshift-insights/insights-operator-cb4f7b4cf-6qrw5","openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h","openshift-network-diagnostics/network-check-target-vwvwx","openshift-apiserver/apiserver-fc4bf7f79-tqnlw","openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq","openshift-cluster-machine-approver/machin
e-approver-8569dd85ff-4vxmz","openshift-ingress/router-default-864ddd5f56-pm4rt","openshift-kube-apiserver/installer-1-master-0","openshift-monitoring/alertmanager-main-0","openshift-cluster-node-tuning-operator/tuned-l5kbz","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf","openshift-marketplace/redhat-operators-lnzfx","openshift-network-operator/iptables-alerter-czzz2","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k","openshift-monitoring/prometheus-k8s-0","openshift-multus/multus-6r7wj","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9","openshift-kube-apiserver/kube-apiserver-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-machine-config-operator/machine-config-daemon-98q6v","openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc","openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5","openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl","openshift-multus/network-metrics-daemon-279g6","openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr","openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8","openshift-dns-operator/dns-operator-86b8869b79-nhxlp","openshift-etcd/installer-2-retry-1-master-0","openshift-kube-apiserver/installer-4-master-0","openshift-kube-controller-manager/installer-2-master-0","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6","openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d","openshift-multus/multus-admission-controller-6d678b8d67-5n9cl","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9","openshift-network-operator/network-operator-6fcf4c966-6bmf9","openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm","openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk","openshift-multus/multus-additional-cni-plugins-rjdlk","openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r","openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8","openshift-monitoring/node-exporter-8256c","openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"] Feb 16 17:23:57.848591 master-0 kubenswrapper[3178]: I0216 17:23:57.848563 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 17:23:57.848940 master-0 kubenswrapper[3178]: I0216 17:23:57.848880 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:23:57.849022 master-0 kubenswrapper[3178]: I0216 17:23:57.848932 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:23:57.849022 master-0 kubenswrapper[3178]: I0216 17:23:57.848966 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:23:57.849022 master-0 kubenswrapper[3178]: I0216 17:23:57.848905 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:23:57.849179 master-0 kubenswrapper[3178]: E0216 17:23:57.849061 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:23:57.849179 master-0 kubenswrapper[3178]: E0216 17:23:57.849060 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:23:57.849179 master-0 kubenswrapper[3178]: E0216 17:23:57.849128 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:23:57.850376 master-0 kubenswrapper[3178]: I0216 17:23:57.849211 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:23:57.850376 master-0 kubenswrapper[3178]: I0216 17:23:57.849222 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:23:57.850376 master-0 kubenswrapper[3178]: E0216 17:23:57.849335 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:23:57.850376 master-0 kubenswrapper[3178]: E0216 17:23:57.850019 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:23:57.850577 master-0 kubenswrapper[3178]: I0216 17:23:57.850490 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:23:57.850633 master-0 kubenswrapper[3178]: E0216 17:23:57.850601 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:23:57.850927 master-0 kubenswrapper[3178]: I0216 17:23:57.850876 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:23:57.851027 master-0 kubenswrapper[3178]: I0216 17:23:57.850978 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:23:57.851156 master-0 kubenswrapper[3178]: E0216 17:23:57.851101 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:23:57.851156 master-0 kubenswrapper[3178]: E0216 17:23:57.850986 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:23:57.851442 master-0 kubenswrapper[3178]: I0216 17:23:57.851420 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:23:57.851521 master-0 kubenswrapper[3178]: E0216 17:23:57.851482 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:23:57.851666 master-0 kubenswrapper[3178]: I0216 17:23:57.851565 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:23:57.851721 master-0 kubenswrapper[3178]: E0216 17:23:57.851676 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:23:57.852199 master-0 kubenswrapper[3178]: I0216 17:23:57.852156 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:23:57.852292 master-0 kubenswrapper[3178]: E0216 17:23:57.852208 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:23:57.852361 master-0 kubenswrapper[3178]: I0216 17:23:57.852301 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 17:23:57.852599 master-0 kubenswrapper[3178]: I0216 17:23:57.852420 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: I0216 17:23:57.852923 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: E0216 17:23:57.853042 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: I0216 17:23:57.853065 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: I0216 17:23:57.853132 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: I0216 17:23:57.853082 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: E0216 17:23:57.853222 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: E0216 17:23:57.853312 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: I0216 17:23:57.853228 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: E0216 17:23:57.853174 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: I0216 17:23:57.853455 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: E0216 17:23:57.853488 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:23:57.853837 master-0 kubenswrapper[3178]: I0216 17:23:57.853828 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:23:57.854813 master-0 kubenswrapper[3178]: E0216 17:23:57.853919 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:23:57.854813 master-0 kubenswrapper[3178]: I0216 17:23:57.854168 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:23:57.854813 master-0 kubenswrapper[3178]: E0216 17:23:57.854230 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:23:57.854813 master-0 kubenswrapper[3178]: I0216 17:23:57.854357 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:23:57.854813 master-0 kubenswrapper[3178]: E0216 17:23:57.854427 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:23:57.854813 master-0 kubenswrapper[3178]: I0216 17:23:57.854788 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:57.855488 master-0 kubenswrapper[3178]: I0216 17:23:57.855336 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6r7wj" Feb 16 17:23:57.855488 master-0 kubenswrapper[3178]: I0216 17:23:57.855437 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:23:57.855488 master-0 kubenswrapper[3178]: I0216 17:23:57.855436 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:23:57.855683 master-0 kubenswrapper[3178]: E0216 17:23:57.855608 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:23:57.855683 master-0 kubenswrapper[3178]: I0216 17:23:57.855642 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:23:57.855807 master-0 kubenswrapper[3178]: E0216 17:23:57.855704 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:23:57.856553 master-0 kubenswrapper[3178]: I0216 17:23:57.856368 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:23:57.856553 master-0 kubenswrapper[3178]: E0216 17:23:57.856454 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:23:57.856553 master-0 kubenswrapper[3178]: I0216 17:23:57.856385 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:23:57.857385 master-0 kubenswrapper[3178]: I0216 17:23:57.857007 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:57.857385 master-0 kubenswrapper[3178]: I0216 17:23:57.857323 3178 scope.go:117] "RemoveContainer" containerID="ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0" Feb 16 17:23:57.858476 master-0 kubenswrapper[3178]: I0216 17:23:57.857686 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:23:57.858476 master-0 kubenswrapper[3178]: I0216 17:23:57.857326 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:23:57.858476 master-0 kubenswrapper[3178]: I0216 17:23:57.857987 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" Feb 16 17:23:57.859222 master-0 kubenswrapper[3178]: I0216 17:23:57.858726 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:23:57.859222 master-0 kubenswrapper[3178]: E0216 17:23:57.858818 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:23:57.859222 master-0 kubenswrapper[3178]: I0216 17:23:57.858907 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:23:57.859222 master-0 kubenswrapper[3178]: I0216 17:23:57.858967 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 17:23:57.859707 master-0 kubenswrapper[3178]: I0216 17:23:57.859379 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:23:57.859707 master-0 kubenswrapper[3178]: I0216 17:23:57.859646 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:23:57.859707 master-0 kubenswrapper[3178]: E0216 17:23:57.859690 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:23:57.860116 master-0 kubenswrapper[3178]: I0216 17:23:57.859703 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 17:23:57.860822 master-0 kubenswrapper[3178]: I0216 17:23:57.860801 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 17:23:57.860992 master-0 kubenswrapper[3178]: I0216 17:23:57.860892 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 17:23:57.860992 master-0 kubenswrapper[3178]: I0216 17:23:57.860978 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 17:23:57.861206 master-0 kubenswrapper[3178]: I0216 17:23:57.861046 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:23:57.861206 master-0 kubenswrapper[3178]: I0216 17:23:57.861172 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 17:23:57.861206 master-0 kubenswrapper[3178]: I0216 17:23:57.861186 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 17:23:57.861620 master-0 kubenswrapper[3178]: I0216 17:23:57.861318 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 17:23:57.861620 master-0 kubenswrapper[3178]: I0216 17:23:57.861410 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:23:57.861620 master-0 kubenswrapper[3178]: I0216 17:23:57.861407 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:23:57.861620 master-0 kubenswrapper[3178]: I0216 17:23:57.861442 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:23:57.861620 master-0 kubenswrapper[3178]: I0216 17:23:57.861460 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:23:57.861620 master-0 kubenswrapper[3178]: I0216 17:23:57.861596 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 17:23:57.862217 master-0 kubenswrapper[3178]: I0216 17:23:57.861645 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:23:57.862217 master-0 kubenswrapper[3178]: E0216 17:23:57.861695 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:23:57.862217 master-0 kubenswrapper[3178]: I0216 17:23:57.861893 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:57.862217 master-0 kubenswrapper[3178]: I0216 17:23:57.862161 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:23:57.864101 master-0 kubenswrapper[3178]: I0216 17:23:57.863083 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:23:57.864101 master-0 kubenswrapper[3178]: E0216 17:23:57.863127 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:23:57.864101 master-0 kubenswrapper[3178]: I0216 17:23:57.863277 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:23:57.864101 master-0 kubenswrapper[3178]: I0216 17:23:57.863993 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:57.864101 master-0 kubenswrapper[3178]: E0216 17:23:57.864057 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:23:57.865487 master-0 kubenswrapper[3178]: I0216 17:23:57.864941 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:23:57.865487 master-0 kubenswrapper[3178]: I0216 17:23:57.865112 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 17:23:57.865487 master-0 kubenswrapper[3178]: I0216 17:23:57.865472 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:57.866037 master-0 kubenswrapper[3178]: I0216 17:23:57.865563 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 17:23:57.866037 master-0 kubenswrapper[3178]: I0216 17:23:57.865578 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 17:23:57.866037 master-0 kubenswrapper[3178]: E0216 17:23:57.865663 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:23:57.866037 master-0 kubenswrapper[3178]: I0216 17:23:57.865835 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:23:57.866037 master-0 kubenswrapper[3178]: I0216 17:23:57.865876 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:23:57.867180 master-0 kubenswrapper[3178]: I0216 17:23:57.866580 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:23:57.867180 master-0 kubenswrapper[3178]: E0216 17:23:57.866667 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:23:57.868181 master-0 kubenswrapper[3178]: I0216 17:23:57.868115 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:57.868356 master-0 kubenswrapper[3178]: E0216 17:23:57.868195 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:23:57.868547 master-0 kubenswrapper[3178]: I0216 17:23:57.868481 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:23:57.868770 master-0 kubenswrapper[3178]: E0216 17:23:57.868661 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:23:57.868859 master-0 kubenswrapper[3178]: I0216 17:23:57.868700 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 17:23:57.868859 master-0 kubenswrapper[3178]: I0216 17:23:57.868814 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:23:57.868859 master-0 kubenswrapper[3178]: I0216 17:23:57.868831 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 17:23:57.869076 master-0 kubenswrapper[3178]: I0216 17:23:57.868906 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 17:23:57.869076 master-0 kubenswrapper[3178]: I0216 17:23:57.868738 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 17:23:57.869419 master-0 kubenswrapper[3178]: I0216 17:23:57.869366 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:23:57.869519 master-0 kubenswrapper[3178]: I0216 17:23:57.869478 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:23:57.869593 master-0 kubenswrapper[3178]: I0216 17:23:57.869538 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:23:57.870189 master-0 kubenswrapper[3178]: E0216 17:23:57.869697 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:23:57.870189 master-0 kubenswrapper[3178]: I0216 17:23:57.869723 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:23:57.870189 master-0 kubenswrapper[3178]: I0216 17:23:57.869776 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 17:23:57.870189 master-0 kubenswrapper[3178]: I0216 17:23:57.869386 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 17:23:57.870189 master-0 kubenswrapper[3178]: E0216 17:23:57.869809 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:23:57.870189 master-0 kubenswrapper[3178]: I0216 17:23:57.869841 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:23:57.870189 master-0 kubenswrapper[3178]: I0216 17:23:57.869623 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 17:23:57.870189 master-0 kubenswrapper[3178]: E0216 17:23:57.870086 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:23:57.876302 master-0 kubenswrapper[3178]: I0216 17:23:57.872897 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 17:23:57.876302 master-0 kubenswrapper[3178]: I0216 17:23:57.875316 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:23:57.876302 master-0 kubenswrapper[3178]: E0216 17:23:57.875408 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:23:57.876302 master-0 kubenswrapper[3178]: I0216 17:23:57.875938 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:23:57.876302 master-0 kubenswrapper[3178]: E0216 17:23:57.876063 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:23:57.876302 master-0 kubenswrapper[3178]: I0216 17:23:57.876292 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:23:57.876598 master-0 kubenswrapper[3178]: I0216 17:23:57.876336 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:23:57.876598 master-0 kubenswrapper[3178]: I0216 17:23:57.876491 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:23:57.876661 master-0 kubenswrapper[3178]: E0216 17:23:57.876577 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:23:57.876700 master-0 kubenswrapper[3178]: I0216 17:23:57.876240 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:23:57.876856 master-0 kubenswrapper[3178]: E0216 17:23:57.876784 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:23:57.879823 master-0 kubenswrapper[3178]: I0216 17:23:57.879616 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:23:57.879823 master-0 kubenswrapper[3178]: E0216 17:23:57.879683 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:23:57.880225 master-0 kubenswrapper[3178]: I0216 17:23:57.880158 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:23:57.880399 master-0 kubenswrapper[3178]: E0216 17:23:57.880338 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:23:57.880607 master-0 kubenswrapper[3178]: I0216 17:23:57.880562 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:23:57.880690 master-0 kubenswrapper[3178]: E0216 17:23:57.880642 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:23:57.880918 master-0 kubenswrapper[3178]: I0216 17:23:57.880873 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:23:57.881000 master-0 kubenswrapper[3178]: E0216 17:23:57.880969 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:23:57.881471 master-0 kubenswrapper[3178]: I0216 17:23:57.881426 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:23:57.881621 master-0 kubenswrapper[3178]: E0216 17:23:57.881568 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:23:57.881762 master-0 kubenswrapper[3178]: I0216 17:23:57.881730 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:23:57.881824 master-0 kubenswrapper[3178]: E0216 17:23:57.881801 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:23:57.882091 master-0 kubenswrapper[3178]: I0216 17:23:57.882062 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:23:57.882161 master-0 kubenswrapper[3178]: I0216 17:23:57.882118 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:23:57.882306 master-0 kubenswrapper[3178]: E0216 17:23:57.882126 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:23:57.883166 master-0 kubenswrapper[3178]: I0216 17:23:57.883087 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:23:57.883398 master-0 kubenswrapper[3178]: E0216 17:23:57.883206 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:23:57.883398 master-0 kubenswrapper[3178]: I0216 17:23:57.883330 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:23:57.883398 master-0 kubenswrapper[3178]: E0216 17:23:57.883380 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:23:57.884151 master-0 kubenswrapper[3178]: I0216 17:23:57.884055 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:23:57.884493 master-0 kubenswrapper[3178]: I0216 17:23:57.884404 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:23:57.884493 master-0 kubenswrapper[3178]: E0216 17:23:57.884470 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:23:57.884811 master-0 kubenswrapper[3178]: I0216 17:23:57.884757 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:23:57.885696 master-0 kubenswrapper[3178]: I0216 17:23:57.885312 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:23:57.885696 master-0 kubenswrapper[3178]: E0216 17:23:57.885383 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:23:57.887701 master-0 kubenswrapper[3178]: I0216 17:23:57.886664 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:23:57.887701 master-0 kubenswrapper[3178]: E0216 17:23:57.886724 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:23:57.887864 master-0 kubenswrapper[3178]: I0216 17:23:57.887751 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: I0216 17:23:57.889287 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: I0216 17:23:57.889674 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: I0216 17:23:57.889709 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: E0216 17:23:57.889722 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: E0216 17:23:57.889778 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: I0216 17:23:57.889858 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: I0216 17:23:57.889979 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: I0216 17:23:57.889726 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: I0216 17:23:57.890115 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: E0216 17:23:57.890170 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:23:57.890409 master-0 kubenswrapper[3178]: I0216 17:23:57.890302 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.890482 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.890535 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.890548 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.890584 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.890637 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.890628 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.890664 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.890776 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.890818 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: E0216 17:23:57.890723 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.891081 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.891314 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.891166 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.891392 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: E0216 17:23:57.891573 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: E0216 17:23:57.891618 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.892118 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.892131 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:23:57.892430 master-0 kubenswrapper[3178]: I0216 17:23:57.892275 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:23:57.893735 master-0 kubenswrapper[3178]: I0216 17:23:57.892741 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:23:57.893735 master-0 kubenswrapper[3178]: E0216 17:23:57.892802 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:23:57.893735 master-0 kubenswrapper[3178]: I0216 17:23:57.892904 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:23:57.893735 master-0 kubenswrapper[3178]: E0216 17:23:57.892953 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:23:57.893735 master-0 kubenswrapper[3178]: I0216 17:23:57.892976 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:23:57.893735 master-0 kubenswrapper[3178]: E0216 17:23:57.893023 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:23:57.893735 master-0 kubenswrapper[3178]: I0216 17:23:57.893411 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 17:23:57.895479 master-0 kubenswrapper[3178]: I0216 17:23:57.893832 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 17:23:57.895479 master-0 kubenswrapper[3178]: I0216 17:23:57.893955 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:23:57.895479 master-0 kubenswrapper[3178]: I0216 17:23:57.894047 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 17:23:57.895479 master-0 kubenswrapper[3178]: I0216 17:23:57.894222 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:23:57.895640 master-0 kubenswrapper[3178]: I0216 17:23:57.894238 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:23:57.895640 master-0 kubenswrapper[3178]: E0216 17:23:57.894272 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:23:57.895640 master-0 kubenswrapper[3178]: I0216 17:23:57.895589 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 17:23:57.895640 master-0 kubenswrapper[3178]: I0216 17:23:57.895616 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 17:23:57.895753 master-0 kubenswrapper[3178]: I0216 17:23:57.895658 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:23:57.895753 master-0 kubenswrapper[3178]: I0216 17:23:57.895351 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:23:57.895753 master-0 kubenswrapper[3178]: I0216 17:23:57.895713 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 17:23:57.895753 master-0 kubenswrapper[3178]: I0216 17:23:57.895416 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 17:23:57.896714 master-0 kubenswrapper[3178]: I0216 17:23:57.895476 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:23:57.896714 master-0 kubenswrapper[3178]: I0216 17:23:57.895807 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:57.896714 master-0 kubenswrapper[3178]: I0216 17:23:57.896041 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:23:57.896714 master-0 kubenswrapper[3178]: E0216 17:23:57.896072 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:23:57.896714 master-0 kubenswrapper[3178]: E0216 17:23:57.896318 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:23:57.896714 master-0 kubenswrapper[3178]: I0216 17:23:57.896484 3178 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:23:57.897166 master-0 kubenswrapper[3178]: I0216 17:23:57.897126 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:23:57.897498 master-0 kubenswrapper[3178]: E0216 17:23:57.897457 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:23:57.897841 master-0 kubenswrapper[3178]: I0216 17:23:57.897810 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:23:57.897947 master-0 kubenswrapper[3178]: E0216 17:23:57.897916 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:23:57.898027 master-0 kubenswrapper[3178]: I0216 17:23:57.897998 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:57.898072 master-0 kubenswrapper[3178]: I0216 17:23:57.898045 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-gvwqd" Feb 16 17:23:57.898072 master-0 kubenswrapper[3178]: E0216 17:23:57.898054 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:23:57.898135 master-0 kubenswrapper[3178]: I0216 17:23:57.898114 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:23:57.898186 master-0 kubenswrapper[3178]: I0216 17:23:57.898171 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 17:23:57.898315 master-0 kubenswrapper[3178]: I0216 17:23:57.898293 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 17:23:57.898864 master-0 kubenswrapper[3178]: E0216 17:23:57.898565 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:23:57.898864 master-0 kubenswrapper[3178]: I0216 17:23:57.898295 3178 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 17:23:57.899232 master-0 kubenswrapper[3178]: I0216 17:23:57.899201 3178 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 17:23:57.900538 master-0 kubenswrapper[3178]: I0216 17:23:57.900486 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b2561b-933b-4c58-a63a-7a8c671d0ae9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"marketplace-trusted-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"marketplace-operator-metrics\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kx9vc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-6cc5b65c6b-s4gp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:57.910263 master-0 kubenswrapper[3178]: I0216 17:23:57.910155 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29402454-a920-471e-895e-764235d16eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-5dc4688546-pl7r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:57.922560 master-0 kubenswrapper[3178]: I0216 17:23:57.922477 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-node-tuning-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"node-tuning-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/apiserver.local.config/certificates\\\",\\\"name\\\":\\\"apiservice-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gq8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-node-tuning-operator\"/\"cluster-node-tuning-operator-ff6c9b66-6j4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:57.923686 master-0 kubenswrapper[3178]: I0216 17:23:57.923639 3178 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 17:23:57.934288 master-0 kubenswrapper[3178]: I0216 17:23:57.933654 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-monitoring-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-monitoring-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0100b616991bd8bca68d583e902283aa4cc0d388046437d5d68407190e3fb041\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-monitoring-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"cluster-monitoring-operator-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cluster-monitoring-operator/telemetry\\\",\\\"name\\\":\\\"telemetry-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j7w67\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"cluster-monitoring-operator-756d64c8c4-ln4wm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:57.945785 master-0 kubenswrapper[3178]: E0216 17:23:57.945696 3178 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:23:57.947485 master-0 kubenswrapper[3178]: I0216 17:23:57.946105 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dptnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-5f5f84757d-ktmm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:57.957869 master-0 kubenswrapper[3178]: I0216 17:23:57.957795 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab80e0fb-09dd-4c93-b235-1487024105d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fkwxl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-bb7ffbb8d-lzgs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:57.970220 master-0 kubenswrapper[3178]: I0216 17:23:57.970132 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"image-registry-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5mwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-96c8c64b8-zwwnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:57.979871 master-0 kubenswrapper[3178]: I0216 17:23:57.979809 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d020c902-2adb-4919-8dd9-0c2109830580\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-54984b6678-gp8gv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:57.989412 master-0 kubenswrapper[3178]: I0216 17:23:57.989356 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/service-ca-bundle\\\",\\\"name\\\":\\\"service-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f42cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-755d954778-lf4cb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:57.997705 master-0 kubenswrapper[3178]: I0216 17:23:57.997644 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9609a4f3-b947-47af-a685-baae26c50fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"trusted-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t24jh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"metrics-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t24jh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-c588d8cb4-wjr7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.007225 master-0 kubenswrapper[3178]: I0216 17:23:58.007155 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e69d8c51-e2a6-4f61-9c26-072784f6cf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/available-featuregates\\\",\\\"name\\\":\\\"available-featuregates\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr8t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-7c6bdb986f-v8dr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.017476 master-0 kubenswrapper[3178]: I0216 17:23:58.017421 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4549ea98-7379-49e1-8452-5efb643137ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zt8mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-6fcf4c966-6bmf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.027282 master-0 kubenswrapper[3178]: I0216 17:23:58.027134 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-279g6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad805251-19d0-4d2f-b741-7d11158f1f03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnnc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnnc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-279g6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.036746 master-0 kubenswrapper[3178]: I0216 17:23:58.036674 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e623376-9e14-4341-9dcf-7a7c218b6f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-cd5474998-829l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.046749 master-0 kubenswrapper[3178]: I0216 17:23:58.046669 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"442600dc-09b2-4fee-9f89-777296b2ee40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-78ff47c7c5-txr5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.056421 master-0 kubenswrapper[3178]: I0216 17:23:58.056244 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r7wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43f65f23-4ddd-471a-9cb3-b0945382d83c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r28x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-multus\"/\"multus-6r7wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.067402 master-0 kubenswrapper[3178]: I0216 17:23:58.067213 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"970d4376-f299-412c-a8ee-90aa980c689e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-snapshot-controller-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqstc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"csi-snapshot-controller-operator-7b87b97578-q55rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.077292 master-0 kubenswrapper[3178]: I0216 17:23:58.077196 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d9859457-f0d1-4754-a6c5-cf05d5abf447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1c8b9784a60860a08bd47935f0767b7b7f8f36c5c0adb7623a31b82c01d4c09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4gl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"metrics-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t4gl5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-dns-operator\"/\"dns-operator-86b8869b79-nhxlp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.091071 master-0 kubenswrapper[3178]: I0216 17:23:58.090968 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab5760f1-b2e0-4138-9383-e4827154ac50\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5qxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rjdlk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.100981 master-0 kubenswrapper[3178]: I0216 17:23:58.100909 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc"} Feb 16 17:23:58.102558 master-0 kubenswrapper[3178]: I0216 17:23:58.102500 3178 status_manager.go:875] "Failed to update status for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67aa7027-5cfd-41e1-9f0a-cb3a00bd09ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager 
cluster-policy-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d54bd262ca625a326b01ea2bfd33db10a402c05590e6b710b0959712e1bf30b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"ssl-certs-host\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/secrets\\\",\\\"name\\\":\\\"secrets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/cloud\\\",\\\"name\\\":\\\"etc-kubernetes-cloud\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/config\\\",\\\"name\\\":\\\"config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/log/bootstrap-control-plane\\\",\\\"name\\\":\\\"logs\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T17:23:47Z\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T17:23:36Z\\\"}},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":10,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(80420f2e7c3cdda71f7d0d6ccbe6f9f3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"ssl-certs-host\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/secrets\\\",\\\"name\\\":\\\"secrets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/cloud\\\",\\\"name\\\":\\\"etc-kubernetes-cloud\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/config\\\",\\\"name\\\":\\\"config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/log/bootstrap-control-plane\\\",\\\"name\\\":\\\"logs\\\"}]}],\\\"startTime\\\":\\\"2026-02-16T17:23:33Z\\\"}}\" for pod \"kube-system\"/\"bootstrap-kube-controller-manager-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.112088 master-0 kubenswrapper[3178]: I0216 17:23:58.112030 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"737fcc7d-d850-4352-9f17-383c85d5bc28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dpp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-6d4655d9cf-qhn9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.127542 master-0 kubenswrapper[3178]: I0216 17:23:58.127229 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e51bba5-0ebe-4e55-a588-38b71548c605\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[cluster-olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-olm-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:333e6572029953b4c4676076f0991ee6e5c7d28cbe2887c71b1682f19831d8a1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-olm-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"cluster-olm-operator-serving-cert\\\"},{\\\"mountPath\\\":\\\"/operand-assets\\\",\\\"name\\\":\\\"operand-assets\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2dxw9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-olm-operator\"/\"cluster-olm-operator-55b69c6c48-7chjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.140455 master-0 kubenswrapper[3178]: I0216 17:23:58.140345 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"18e9a9d3-9b18-4c19-9558-f33c68101922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"package-server-manager-serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bbcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bbcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-5c696dbdcd-qrrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.153181 master-0 kubenswrapper[3178]: I0216 17:23:58.153084 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ff68421-1741-41c1-93d5-5c722dfd295e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n6rwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-7d8f4c8c66-qjq9w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.163302 master-0 kubenswrapper[3178]: I0216 17:23:58.163226 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"295dd2cc-4b35-40bc-959c-aa8ad90fc453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:36Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa8b867f6c7632908fe33e45a5de76207c3a49f016816d7a95a271132f5f9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":7,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://024918b99b0960332808509aca9a4a206a98049b3cbbd79cb59ca43d40614ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://024918b99b0960332808509aca9a4a206a98049b3cbbd79cb59ca43d40614ee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"startTime\\\":\\\"2026-02-16T17:23:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.199684 master-0 kubenswrapper[3178]: I0216 17:23:58.199598 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3e071c-1c62-489b-91c1-aef0d197f40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-ca\\\",\\\"name\\\":\\\"etcd-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-service-ca\\\",\\\"name\\\":\\\"etcd-service-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjd5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-67bf55ccdd-cppj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.238292 master-0 kubenswrapper[3178]: I0216 17:23:58.238215 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eaf7edff-0a89-4ac0-b9dd-511e098b5434\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler-operator-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-scheduler-operator-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-7485d55966-sgmpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.278147 master-0 kubenswrapper[3178]: I0216 17:23:58.278075 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78be97a3-18d1-4962-804f-372974dc8ccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [route-controller-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0871b6c16b38a2eda5d1c89fd75079aff0775224307e940557e6fda6ba229f38\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"route-controller-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/client-ca\\\",\\\"name\\\":\\\"client-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wzlnz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-route-controller-manager\"/\"route-controller-manager-dcdb76cc6-5rcvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.319755 master-0 kubenswrapper[3178]: I0216 17:23:58.319537 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-279g6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad805251-19d0-4d2f-b741-7d11158f1f03\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnnc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:80531a0fe966e1cc0582c581951b09a7a4e42037c106748c44859110361b2c1b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bnnc5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-279g6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.360364 master-0 kubenswrapper[3178]: I0216 17:23:58.360277 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c303189e-adae-4fe2-8dd7-cc9b80f73e66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2s8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-vwvwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.404090 master-0 kubenswrapper[3178]: I0216 17:23:58.403988 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-baremetal-operator baremetal-kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-baremetal-operator baremetal-kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"baremetal-kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/baremetal-kube-rbac-proxy\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"cluster-baremetal-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hh2cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7b8fb1f11df51c131f5be8ddfc1b1c95ac13481f58d2dcd5a465a4a8341c0f49\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-baremetal-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/cluster-baremetal-operator/tls\\\",\\\"name\\\":\\\"cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cluster-baremetal-operator/images\\\",\\\"name\\\":\\\"images\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hh2cd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"cluster-baremetal-operator-7bc947fc7d-4j7pn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.438165 master-0 kubenswrapper[3178]: I0216 17:23:58.438045 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54f29618-42c2-4270-9af7-7d82852d7cec\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [manager kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [manager kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4wht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae30b3ab740f21c451d0272bceacb99fa34d22bbf2ea22f1e1e18230a156104b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cache\\\",\\\"name\\\":\\\"cache\\\"},{\\\"mountPath\\\":\\\"/var/ca-certs\\\",\\\"name\\\":\\\"ca-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/containers\\\",\\\"name\\\":\\\"etc-containers\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/docker\\\",\\\"name\\\":\\\"etc-docker\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w4wht\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-controller\"/\"operator-controller-controller-manager-85c9b89969-lj58b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.517901 master-0 kubenswrapper[3178]: I0216 17:23:58.517563 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/prometheus-k8s-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b04ee64e-5e83-499c-812d-749b2b6824c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus config-reloader thanos-sidecar kube-rbac-proxy-web kube-rbac-proxy kube-rbac-proxy-thanos]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus config-reloader thanos-sidecar kube-rbac-proxy-web kube-rbac-proxy 
kube-rbac-proxy-thanos]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"config-reloader\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/prometheus/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/config_out\\\",\\\"name\\\":\\\"config-out\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/rules/prometheus-k8s-rulefiles-0\\\",\\\"name\\\":\\\"prometheus-k8s-rulefiles-0\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpjv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-prometheus-k8s-tls\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"configmap-metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-kube-rbac-proxy\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpjv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-thanos\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-prometheus-k8s-thanos-sidecar-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"configmap-metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpjv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-web\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-prometheus-k8s-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-prometheus-k8s-kube-rbac-proxy-web\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpjv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e4d0e747f55d3f773a63180bc4e4820ee5f17efbd45eb1dac9167fbc7520650e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prometheus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"prometheus-trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/config_out\\\",\\\"name\\\":\\\"config-out\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/certs\\\",\\\"name\\\":\\\"tls-assets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/prometheus\\\",\\\"name\\\":\\\"prometheus-k8s-db\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/prometheus-k8s-tls\\\",\\\"name\\\":\\\"secret-prometheus-k8s-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls\\\",\\\"name\\\":\\\"secret-prometheus-k8s-thanos-sidecar-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/prometheus-k8s-kube-rbac-proxy-web\\\",\\\"name\\\":\\\"secret-prometheus-k8s-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/secrets/metrics-client-certs\\\",\\\"name\\\":\\\"secret-metrics-client-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/configmaps/serving-certs-ca-bundle\\\",\\\"name\\\":\\\"configmap-serving-certs-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/configmaps/kubelet-serving-ca-bundle\\\",\\\"name\\\":\\\"configmap-kubelet-serving-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/configmaps/metrics-client-ca\\\",\\\"name\\\":\\\"configmap-metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/rules/prometheus-k8s-rulefiles-0\\\",\\\"name\\\":\\\"prometheus-k8s-rulefiles-0\\\"},{\\\"mountPath\\\":\\\"/etc/prometheus/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpjv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:86d47b2746de823e60068255722d2c0f1ff9d327b2865071a4f2f1e08b1f4ee9\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"thanos-sidecar\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/grpc\\\",\\\"name\\\":\\\"secret-grpc-tls\\\"},{\\\"mountPath\\\":\\\"/etc/thanos/config\\\",\\\"name\\\":\\\"thanos-prometheus-http-client-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vpjv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"prometheus-k8s-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.540694 master-0 kubenswrapper[3178]: I0216 17:23:58.540606 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"970d4376-f299-412c-a8ee-90aa980c689e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [csi-snapshot-controller-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13d06502c0f0a3c73f69bf8d0743718f7cfc46e71f4a12916517ad7e9bff17e1\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"csi-snapshot-controller-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqstc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"csi-snapshot-controller-operator-7b87b97578-q55rf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.580680 master-0 kubenswrapper[3178]: I0216 17:23:58.580491 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39387549-c636-4bd4-b463-f6a93810f277\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vk7xl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vk7xl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-hhcpr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.617195 master-0 kubenswrapper[3178]: I0216 17:23:58.617072 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7390ccc6-dfbe-4f51-960c-7628f49bffb7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf24751d6b6d66fcfc26aa8e0f94a4248a3edab6dbfe3fe9651a90b6b4d92192\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/audit\\\",\\\"name\\\":\\\"audit-policies\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-serving-ca\\\",\\\"name\\\":\\\"etcd-serving-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/encryption-config\\\",\\\"name\\\":\\\"encryption-config\\\"},{\\\"mountPath\\\":\\\"/var/log/oauth-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v65g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-oauth-apiserver\"/\"apiserver-66788cb45c-dp9bc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.660385 master-0 kubenswrapper[3178]: I0216 17:23:58.660291 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"642e5115-b7f2-4561-bc6b-1a74b6d891c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [control-plane-machine-set-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47c1d88223ffb35bb36a4d2bde736fb3e45f08e204519387e0e52e3e3dc00cfb\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"control-plane-machine-set-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/k8s-webhook-server/serving-certs\\\",\\\"name\\\":\\\"control-plane-machine-set-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dzpnw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"control-plane-machine-set-operator-d8bf84b88-m66tx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.725306 master-0 kubenswrapper[3178]: I0216 17:23:58.724885 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"676e24eb-bc42-4b39-8762-94da3ed718e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-scheduler]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://94ea5b6007080bd428cd7b6fe066cc75a9a70841a978304207907aa746d9ac27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208e46a3c641476f9960cdd4e77a82fbdec0a87c2f2f91e56dfe5eb0ed0268f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df0e3c6ca3dd8af42c8e9ac9cdc311a5a319df0fa4ca786ea177a90a6aefea49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f56d01ef06fe016c748e8c87538dea8e9fcc84856eb116bd3597cc8e042e9f0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8418547cd53261f1b77929899a0ab7c7d55cf1c91b349c65456cae4040067db4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8418547cd53261f1b77929899a0ab7c7d55cf1c91b349c65456cae4040067db4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:23:33Z\\\"}}}],\\\"startTime\\\":\\\"2026-02-16T17:23:
33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.764720 master-0 kubenswrapper[3178]: I0216 17:23:58.764605 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef9cb618-13ca-4088-a3d4-fb78be3f4bff\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:33Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:35Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:23:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"startTime\\\":\\\"2026-02-16T17:23:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.799708 master-0 kubenswrapper[3178]: I0216 17:23:58.799624 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sx92x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68f33191bbdd9baf1095b1769d81979c4da11a7d920a2c849ce21980a92a7ecd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:17:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:14:20Z\\\"}},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sx92x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-98q6v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.838130 master-0 kubenswrapper[3178]: I0216 17:23:58.837963 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"737fcc7d-d850-4352-9f17-383c85d5bc28\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd544a8a6b4d08fe0f4fd076109c09cf181302ab6056ec6b2b89d68a52954c5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5dpp2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver-operator\"/\"openshift-apiserver-operator-6d4655d9cf-qhn9v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.839222 master-0 kubenswrapper[3178]: I0216 17:23:58.839065 3178 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:58.839222 master-0 kubenswrapper[3178]: I0216 17:23:58.839206 3178 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:23:58.883463 master-0 kubenswrapper[3178]: I0216 17:23:58.883292 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"18e9a9d3-9b18-4c19-9558-f33c68101922\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy package-server-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"package-server-manager-serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bbcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"package-server-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6bbcf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"package-server-manager-5c696dbdcd-qrrc6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.922568 master-0 kubenswrapper[3178]: I0216 17:23:58.922459 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [tuned]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [tuned]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"tuned\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/modprobe.d\\\",\\\"name\\\":\\\"etc-modprobe-d\\\"},{\\\"mountPath\\\":\\\"/etc/sysconfig\\\",\\\"name\\\":\\\"etc-sysconfig\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/sysctl.d\\\",\\\"name\\\":\\\"etc-sysctl-d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/sysctl.conf\\\",\\\"name\\\":\\\"etc-sysctl-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd\\\",\\\"name\\\":\\\"etc-systemd\\\"},{\\\"mountPath\\\":\\\"/etc/tuned\\\",\\\"name\\\":\\\"etc-tuned\\\"},{\\\"mountPath\\\":\\\"/run\\\",\\\"name\\\":\\\"run\\\"},{\\\"mountPath\\\":\\\"/sys\\\",\\\"name\\\":\\\"sys\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wn82n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-cluster-node-tuning-operator\"/\"tuned-l5kbz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.958440 master-0 kubenswrapper[3178]: I0216 17:23:58.958379 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:58.958440 master-0 kubenswrapper[3178]: I0216 17:23:58.958467 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:23:58.958867 master-0 kubenswrapper[3178]: I0216 17:23:58.958483 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:23:58.958867 master-0 kubenswrapper[3178]: I0216 17:23:58.958395 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:58.958867 master-0 kubenswrapper[3178]: I0216 17:23:58.958379 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:23:58.959208 master-0 kubenswrapper[3178]: E0216 17:23:58.958877 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:23:58.959208 master-0 kubenswrapper[3178]: I0216 17:23:58.958920 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:23:58.959208 master-0 kubenswrapper[3178]: I0216 17:23:58.958974 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:23:58.959208 master-0 kubenswrapper[3178]: E0216 17:23:58.959092 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:23:58.959208 master-0 kubenswrapper[3178]: I0216 17:23:58.959148 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:23:58.959753 master-0 kubenswrapper[3178]: E0216 17:23:58.959281 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:23:58.959753 master-0 kubenswrapper[3178]: E0216 17:23:58.959467 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:23:58.959753 master-0 kubenswrapper[3178]: E0216 17:23:58.959685 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:23:58.960183 master-0 kubenswrapper[3178]: E0216 17:23:58.960110 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:23:58.960396 master-0 kubenswrapper[3178]: E0216 17:23:58.960343 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:23:58.960565 master-0 kubenswrapper[3178]: E0216 17:23:58.960515 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:23:58.962914 master-0 kubenswrapper[3178]: I0216 17:23:58.962805 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8729b1a-e365-4cf7-8a05-91a9987dabe9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcc-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmj52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hmj52\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-controller-686c884b4d-ksx48\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:58.997808 master-0 kubenswrapper[3178]: I0216 17:23:58.997705 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0517b180-00ee-47fe-a8e7-36a3931b7e72\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [console-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:95e7647e6fda21b94b692f03908e4cd154e3374fca0560229c646fefe2c46730\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"console-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sbrtz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-console-operator\"/\"console-operator-7777d5cc66-64vhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.038289 master-0 kubenswrapper[3178]: I0216 17:23:59.038164 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0d980a9a-2574-41b9-b970-0718cd97c8cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [multus-admission-controller kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook\\\",\\\"name\\\":\\\"webhook-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7l6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbe162375a11ed3810a1081c30dd400f461f2421d5f1e27d8792048bbd216956\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"multus-admission-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook\\\",\\\"name\\\":\\\"webhook-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t7l6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-multus\"/\"multus-admission-controller-6d678b8d67-5n9cl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.081381 master-0 kubenswrapper[3178]: I0216 17:23:59.081271 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-ingress-canary/ingress-canary-qqvg4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [serve-healthcheck-canary]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"serve-healthcheck-canary\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls-cert\\\",\\\"name\\\":\\\"cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6fmhb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-canary\"/\"ingress-canary-qqvg4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.104716 master-0 kubenswrapper[3178]: I0216 17:23:59.104441 3178 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 16 17:23:59.104716 master-0 kubenswrapper[3178]: I0216 17:23:59.104644 3178 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" containerID="cri-o://95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838" gracePeriod=30 Feb 16 17:23:59.123299 master-0 kubenswrapper[3178]: I0216 17:23:59.123147 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba37ef0e-373c-4ccc-b082-668630399765\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [metrics-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [metrics-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c1a0aba9ead3a33353dc8a033699dfa4795f4050516677dad6ed4ac664094692\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"metrics-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-metrics-server-tls\\\"},{\\\"mountPath\\\":\\\"/etc/tls/metrics-client-certs\\\",\\\"name\\\":\\\"secret-metrics-client-certs\\\"},{\\\"mountPath\\\":\\\"/etc/tls/kubelet-serving-ca-bundle\\\",\\\"name\\\":\\\"configmap-kubelet-serving-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/etc/audit\\\",\\\"name\\\":\\\"metrics-server-audit-profiles\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/log/metrics-server\\\",\\\"name\\\":\\\"audit-log\\\"},{\\\"mountPath\\\":\\\"/etc/client-ca-bundle\\\",\\\"name\\\":\\\"client-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57455\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"metrics-server-745bd8d89b-qr4zh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.164731 master-0 kubenswrapper[3178]: I0216 17:23:59.164604 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3e071c-1c62-489b-91c1-aef0d197f40b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [etcd-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6dd9324942b3d09b4b9a768f36b47be4e555d947910ee3d115fc5448c95f7399\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"etcd-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-ca\\\",\\\"name\\\":\\\"etcd-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-service-ca\\\",\\\"name\\\":\\\"etcd-service-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rjd5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-etcd-operator\"/\"etcd-operator-67bf55ccdd-cppj8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.200031 master-0 kubenswrapper[3178]: I0216 17:23:59.199898 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48801344-a48a-493e-aea4-19d998d0b708\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/signing-key\\\",\\\"name\\\":\\\"signing-key\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/signing-cabundle\\\",\\\"name\\\":\\\"signing-cabundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nqfds\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca\"/\"service-ca-676cd8b9b5-cp9rb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.243966 master-0 kubenswrapper[3178]: I0216 17:23:59.243824 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/redhat-operators-lnzfx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"822e1750-652e-4ceb-8fea-b2c1c905b0f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djfsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-operators-lnzfx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.294813 master-0 kubenswrapper[3178]: I0216 17:23:59.294602 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/alertmanager-main-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [alertmanager config-reloader kube-rbac-proxy-web kube-rbac-proxy kube-rbac-proxy-metric prom-label-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [alertmanager config-reloader kube-rbac-proxy-web kube-rbac-proxy kube-rbac-proxy-metric prom-label-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f22dd40cd10354e3512d2065a8dd8c9dcb995ea487c0f661f172c527509123fc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"alertmanager\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/alertmanager/config\\\",\\\"name\\\":\\\"config-volume\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/config_out\\\",\\\"name\\\":\\\"config-out\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/certs\\\",\\\"name\\\":\\\"tls-assets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/alertmanager\\\",\\\"name\\\":\\\"alertmanager-main-db\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-main-tls\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-metric\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-web\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"alertmanager-trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l67l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"config-reloader\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/alertmanager/config\\\",\\\"name\\\":\\\"config-volume\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/config_out\\\",\\\"name\\\":\\\"config-out\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-main-tls\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-metric\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-web\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/alertmanager/web_config/web-config.yaml\\\",\\\"name\\\":\\\"web-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l67l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l67l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-metric\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-metric\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l67l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-web\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"secret-alertmanager-kube-rbac-proxy-web\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"secret-alertmanager-main-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l67l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:abf98e8b78df5cf21c9da051db2827b8c9081cf3ea201bf9017a5d9548dbc73e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prom-label-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l67l5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"alertmanager-main-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.317829 master-0 kubenswrapper[3178]: I0216 17:23:59.317706 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dptnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-5f5f84757d-ktmm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.356298 master-0 kubenswrapper[3178]: I0216 17:23:59.356080 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p2jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/mco/images\\\",\\\"name\\\":\\\"images\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p2jz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-operator-84976bb859-rsnqc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.398279 master-0 kubenswrapper[3178]: I0216 17:23:59.398179 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74b2561b-933b-4c58-a63a-7a8c671d0ae9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [marketplace-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dab7a82d88f90f1ef4ac307b16132d4d573a4fa9080acc3272ca084613bd902a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"marketplace-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"marketplace-trusted-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"marketplace-operator-metrics\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kx9vc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"marketplace-operator-6cc5b65c6b-s4gp2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.432099 master-0 kubenswrapper[3178]: I0216 17:23:59.431991 3178 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.32.10:17697/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 17:23:59.432099 master-0 kubenswrapper[3178]: I0216 17:23:59.432082 3178 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.32.10:17697/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:23:59.439135 master-0 kubenswrapper[3178]: I0216 17:23:59.438995 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29402454-a920-471e-895e-764235d16eb4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [service-ca-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5abe992def861ec075251ae17bbd66fa23bd05bd3701953c0fdcf68a8d161f1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"service-ca-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9bv7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-service-ca-operator\"/\"service-ca-operator-5dc4688546-pl7r5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.483298 master-0 kubenswrapper[3178]: I0216 17:23:59.483194 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-node-tuning-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"node-tuning-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/apiserver.local.config/certificates\\\",\\\"name\\\":\\\"apiservice-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gq8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-node-tuning-operator\"/\"cluster-node-tuning-operator-ff6c9b66-6j4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.518647 master-0 kubenswrapper[3178]: I0216 17:23:59.518537 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e69d8c51-e2a6-4f61-9c26-072784f6cf40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2431778975829348e271dc9bf123c7a24c81a52537a61414cd17c8436436078b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/available-featuregates\\\",\\\"name\\\":\\\"available-featuregates\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xr8t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-7c6bdb986f-v8dr8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.558729 master-0 kubenswrapper[3178]: I0216 17:23:59.558663 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vfxj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m29g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-dns\"/\"node-resolver-vfxj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.591991 master-0 kubenswrapper[3178]: I0216 17:23:59.591935 3178 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:23:59.593024 master-0 kubenswrapper[3178]: I0216 17:23:59.593006 3178 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 17:23:59.680324 master-0 kubenswrapper[3178]: I0216 17:23:59.680203 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0393fe12-2533-4c9c-a8e4-a58003c88f36\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p5rwv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"redhat-marketplace-4kd66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694166 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694204 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694227 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694275 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694506 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694546 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694576 3178 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694593 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694612 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694629 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694644 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694663 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694678 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694694 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694710 3178 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694750 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694768 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694783 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694801 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694817 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694831 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694848 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694864 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694878 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694898 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694944 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694960 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694982 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.694998 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.695014 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.695029 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " 
pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.695044 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.695069 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.695089 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.695106 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.695125 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.695142 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.698768 master-0 kubenswrapper[3178]: I0216 17:23:59.695159 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695175 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695192 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbtld\" (UniqueName: 
\"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695211 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695234 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695270 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695287 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695302 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695322 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695339 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695355 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod 
\"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695372 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695386 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695401 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695417 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695434 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695450 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695468 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695484 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 
17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695503 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695519 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695536 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695552 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695568 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695585 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695600 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695614 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695630 3178 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695647 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695677 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695699 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695724 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695750 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695785 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695802 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695821 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695842 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695841 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.707133 master-0 kubenswrapper[3178]: I0216 17:23:59.695872 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.695898 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.695920 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.695953 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.695977 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.696474 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.711475 master-0 
kubenswrapper[3178]: E0216 17:23:59.696507 3178 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.696560 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.696600 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.196577507 +0000 UTC m=+28.009269861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-oauth-config" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.696623 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.196615088 +0000 UTC m=+28.009307492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.696638 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.696679 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.196659629 +0000 UTC m=+28.009351933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.696692 3178 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.696726 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.196717411 +0000 UTC m=+28.009409795 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.700625 3178 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.700784 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.700818 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.200793629 +0000 UTC m=+28.013485913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.700817 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.700879 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.200868381 +0000 UTC m=+28.013560665 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.700740 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.701170 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.701262 3178 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.701499 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.701519 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.701527 3178 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.701604 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.20158317 +0000 UTC m=+28.014275464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.701687 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.701726 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.201715173 +0000 UTC m=+28.014407467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.701553 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktgm7\" (UniqueName: \"kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.701794 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.702326 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.702705 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.202539485 +0000 UTC m=+28.015231769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.702726 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: I0216 17:23:59.702744 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.702737 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.20272152 +0000 UTC m=+28.015413804 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.703875 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.703893 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.703956 3178 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:23:59.711475 master-0 kubenswrapper[3178]: E0216 17:23:59.704093 3178 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.704107 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.704827 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: I0216 17:23:59.704973 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.705051 3178 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: I0216 17:23:59.705187 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: I0216 17:23:59.705414 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.705520 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object 
"openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: I0216 17:23:59.705506 3178 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.705679 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.705866 3178 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.705936 3178 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.706080 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.706129 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.20610922 +0000 UTC m=+28.018801504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.706591 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.206570512 +0000 UTC m=+28.019262796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707135 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707160 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207142798 +0000 UTC m=+28.019835082 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707218 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207187479 +0000 UTC m=+28.019879763 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707272 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207260741 +0000 UTC m=+28.019953025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707289 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207281721 +0000 UTC m=+28.019974005 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707370 3178 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707373 3178 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707465 3178 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: I0216 17:23:59.707468 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707476 3178 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707500 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207481606 +0000 UTC m=+28.020173890 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707503 3178 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707522 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207513447 +0000 UTC m=+28.020205731 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707540 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.207529118 +0000 UTC m=+28.020221402 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707565 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707577 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207558749 +0000 UTC m=+28.020251043 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707600 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207591709 +0000 UTC m=+28.020283993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: I0216 17:23:59.707037 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.706675 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707644 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207634261 +0000 UTC m=+28.020326555 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707662 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207654651 +0000 UTC m=+28.020346945 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:23:59.716546 master-0 kubenswrapper[3178]: E0216 17:23:59.707703 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707730 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207702602 +0000 UTC m=+28.020394886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707752 3178 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707788 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207778704 +0000 UTC m=+28.020470998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707808 3178 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707830 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.207799795 +0000 UTC m=+28.020492079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707872 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207864207 +0000 UTC m=+28.020556501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: I0216 17:23:59.707875 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707886 3178 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707889 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207881237 +0000 UTC m=+28.020573531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: I0216 17:23:59.707944 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707948 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.207935588 +0000 UTC m=+28.020627883 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707834 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.707997 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.20797679 +0000 UTC m=+28.020669074 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708020 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.208012201 +0000 UTC m=+28.020704485 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: I0216 17:23:59.708066 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708090 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.208080312 +0000 UTC m=+28.020772596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708110 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.208096843 +0000 UTC m=+28.020789127 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708164 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708223 3178 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708233 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.208216806 +0000 UTC m=+28.020909100 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: I0216 17:23:59.708301 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708342 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.208317769 +0000 UTC m=+28.021010053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708379 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.20836667 +0000 UTC m=+28.021058954 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: I0216 17:23:59.708410 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: I0216 17:23:59.708410 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708462 3178 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: I0216 17:23:59.708527 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: I0216 17:23:59.708563 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: I0216 17:23:59.708599 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:23:59.718147 master-0 kubenswrapper[3178]: E0216 17:23:59.708623 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.208613616 +0000 UTC m=+28.021305900 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.708645 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.708692 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.708718 3178 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.708747 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.20873631 +0000 UTC m=+28.021428594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.708819 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.708853 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.708882 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.708904 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.708931 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.708966 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.709158 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709174 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.709191 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.709217 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709238 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.209223333 +0000 UTC m=+28.021915617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709267 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709284 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709312 3178 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.709308 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709389 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.209376617 +0000 UTC m=+28.022068901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709423 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.209414198 +0000 UTC m=+28.022106472 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709512 3178 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709540 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.209532871 +0000 UTC m=+28.022225145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.709542 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709658 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.209606253 +0000 UTC m=+28.022298537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.709703 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.709835 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709903 3178 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.709919 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.709950 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.209938642 +0000 UTC m=+28.022630926 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.709979 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.710014 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.710064 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.710087 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: E0216 17:23:59.710094 3178 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.710062 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.719562 master-0 kubenswrapper[3178]: I0216 17:23:59.710099 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.710163 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.210124367 +0000 UTC m=+28.022816651 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.710165 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.710183 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.210173888 +0000 UTC m=+28.022866172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.710869 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711524 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711615 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711644 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711662 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711681 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711701 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711721 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711756 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711774 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711792 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/810a2275-fae5-45df-a3b8-92860451d33b-host\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711808 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.711519 3178 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.711889 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object 
"openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711936 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.711886 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.711906 3178 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712002 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.211986336 +0000 UTC m=+28.024678640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712029 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.212018687 +0000 UTC m=+28.024710971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.711918 3178 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712038 3178 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712056 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.212050068 +0000 UTC m=+28.024742352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.711961 3178 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712075 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.212062678 +0000 UTC m=+28.024755052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712075 3178 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712096 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.212084799 +0000 UTC m=+28.024777083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.712060 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712115 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.212106989 +0000 UTC m=+28.024799403 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712120 3178 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712131 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.21212455 +0000 UTC m=+28.024816944 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: E0216 17:23:59.712132 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:23:59.721155 master-0 kubenswrapper[3178]: I0216 17:23:59.712154 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: E0216 17:23:59.712167 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.212157611 +0000 UTC m=+28.024849905 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: E0216 17:23:59.712177 3178 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712188 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: E0216 17:23:59.712235 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.212223932 +0000 UTC m=+28.024916256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: E0216 17:23:59.712306 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.212283994 +0000 UTC m=+28.024976298 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712330 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712549 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712633 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712635 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712679 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712701 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712718 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712737 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 
17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712852 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712903 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: E0216 17:23:59.712914 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712933 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: E0216 17:23:59.712943 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.212934071 +0000 UTC m=+28.025626355 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712846 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712963 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.712991 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713018 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713044 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: E0216 17:23:59.713220 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: E0216 17:23:59.713275 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.21326342 +0000 UTC m=+28.025955774 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713216 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713316 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713341 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713363 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713533 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713569 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713589 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713606 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: I0216 17:23:59.713676 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:23:59.722682 master-0 kubenswrapper[3178]: E0216 17:23:59.713727 3178 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.713764 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.213753293 +0000 UTC m=+28.026445657 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.713766 3178 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.713809 3178 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.713723 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.713841 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.213823695 +0000 UTC m=+28.026515979 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"audit" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.713846 3178 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.713860 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.213854006 +0000 UTC m=+28.026546280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.713890 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.713900 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.213889337 +0000 UTC m=+28.026581631 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.713921 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.713950 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.713963 3178 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.713978 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714001 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.213988919 +0000 UTC m=+28.026681303 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714125 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714130 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714153 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714191 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714163 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.214152074 +0000 UTC m=+28.026844448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714214 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714285 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714308 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.214294947 +0000 UTC m=+28.026987231 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714335 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714362 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714383 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714413 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: 
\"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714418 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714411 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714438 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714465 3178 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714468 3178 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714468 3178 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714507 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.214495933 +0000 UTC m=+28.027188217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714525 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: I0216 17:23:59.714548 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:23:59.724204 master-0 kubenswrapper[3178]: E0216 17:23:59.714558 3178 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.714567 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.714587 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.214579335 +0000 UTC m=+28.027271719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.714604 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.214596355 +0000 UTC m=+28.027288739 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.714621 3178 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.714627 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.214610676 +0000 UTC m=+28.027302980 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.714671 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.714687 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.714758 3178 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.714777 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.21475416 +0000 UTC m=+28.027446454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.714802 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.214792141 +0000 UTC m=+28.027484435 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.714831 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67l5\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715022 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.715068 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.715116 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.215106359 +0000 UTC m=+28.027798653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715011 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715175 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715206 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715231 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" 
(UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715294 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715308 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715455 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715491 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715519 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715543 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715570 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715602 3178 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.715633 3178 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.715679 3178 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.715688 3178 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.715701 3178 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.715740 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.215728235 +0000 UTC m=+28.028420579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715735 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: E0216 17:23:59.715769 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.215752216 +0000 UTC m=+28.028444570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715808 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.725670 master-0 kubenswrapper[3178]: I0216 17:23:59.715847 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.715877 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.715902 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.715961 3178 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.715986 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.716002 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.215990142 +0000 UTC m=+28.028682486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.716024 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.216011613 +0000 UTC m=+28.028704007 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716043 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716072 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716098 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716124 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716156 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716296 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716302 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716329 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: 
\"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716365 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.716449 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.716521 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.216509796 +0000 UTC m=+28.029202170 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716547 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716580 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716603 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.716703 3178 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.716714 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 
16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.716734 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.216726542 +0000 UTC m=+28.029418826 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.716746 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.216741002 +0000 UTC m=+28.029433276 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.716763 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.717071 3178 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.717171 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.717224 3178 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.717271 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.217192354 +0000 UTC m=+28.029884648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.717296 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.717317 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.717322 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.217311167 +0000 UTC m=+28.030003461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.717356 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: E0216 17:23:59.717363 3178 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:23:59.726879 master-0 kubenswrapper[3178]: I0216 17:23:59.717376 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717392 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.217383309 +0000 UTC m=+28.030075603 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717412 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717440 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717470 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717480 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717495 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717519 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717548 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717587 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717717 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717598 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717766 3178 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717796 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.21778677 +0000 UTC m=+28.030479064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717597 3178 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717830 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.217819251 +0000 UTC m=+28.030511595 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717687 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717850 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.217839872 +0000 UTC m=+28.030532246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717749 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717869 3178 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717870 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.217858442 +0000 UTC m=+28.030550816 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717915 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717936 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.717961 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717981 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls 
podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.217973115 +0000 UTC m=+28.030665399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.718012 3178 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.718030 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.218025136 +0000 UTC m=+28.030717420 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.718043 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.718064 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.718077 3178 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.718105 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.718119 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.218109489 +0000 UTC m=+28.030801783 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.718134 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.218127239 +0000 UTC m=+28.030819543 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: E0216 17:23:59.717714 3178 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:23:59.728414 master-0 kubenswrapper[3178]: I0216 17:23:59.718082 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: E0216 17:23:59.718170 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.2181623 +0000 UTC m=+28.030854594 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718195 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718194 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718214 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: E0216 17:23:59.718272 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: E0216 17:23:59.718308 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0 podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.218294404 +0000 UTC m=+28.030986688 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718329 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718361 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: E0216 17:23:59.718424 3178 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718433 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: E0216 17:23:59.718456 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.218445748 +0000 UTC m=+28.031138032 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718475 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718503 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718531 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: E0216 17:23:59.718559 3178 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718559 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: E0216 17:23:59.718588 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.218578331 +0000 UTC m=+28.031270615 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718608 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718636 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718672 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718700 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718725 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718750 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718776 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718803 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718831 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718858 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718882 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718940 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718970 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.718998 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.719024 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.719049 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.719078 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.719104 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:23:59.729525 master-0 kubenswrapper[3178]: I0216 17:23:59.719130 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719155 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719201 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719231 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719274 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719304 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719328 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719331 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719368 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719387 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719406 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719426 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719448 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719465 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719483 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719500 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719518 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719532 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719537 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719573 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.719576 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719597 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.719608 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.219598648 +0000 UTC m=+28.032290932 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719627 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719656 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.719634 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719821 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719851 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: I0216 17:23:59.719878 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.719904 3178 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.719938 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.219928717 +0000 UTC m=+28.032621001 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.719958 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.719991 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.219979928 +0000 UTC m=+28.032672222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"metrics-client-certs" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.720016 3178 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.720007 3178 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.720044 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.22003615 +0000 UTC m=+28.032728434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered
Feb 16 17:23:59.730527 master-0 kubenswrapper[3178]: E0216 17:23:59.720061 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.22005435 +0000 UTC m=+28.032746634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720305 3178 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720358 3178 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: I0216 17:23:59.719907 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720497 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220484642 +0000 UTC m=+28.033176936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720525 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720544 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720555 3178 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: I0216 17:23:59.720574 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720556 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720591 3178 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720627 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220512682 +0000 UTC m=+28.033204996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720648 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720654 3178 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720659 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720665 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220654356 +0000 UTC m=+28.033346670 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720690 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220679507 +0000 UTC m=+28.033371801 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720689 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720704 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220697227 +0000 UTC m=+28.033389531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720719 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220712788 +0000 UTC m=+28.033405082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720733 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220727578 +0000 UTC m=+28.033419872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720755 3178 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720767 3178 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: I0216 17:23:59.720757 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720771 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220764659 +0000 UTC m=+28.033456953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720795 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.22078271 +0000 UTC m=+28.033475104 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720816 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720828 3178 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720843 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220836151 +0000 UTC m=+28.033528445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720863 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220851631 +0000 UTC m=+28.033544005 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720886 3178 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: I0216 17:23:59.720869 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720918 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.220911033 +0000 UTC m=+28.033603327 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: I0216 17:23:59.720951 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: I0216 17:23:59.720978 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:23:59.731627 master-0 kubenswrapper[3178]: E0216 17:23:59.720979 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721026 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.221015516 +0000 UTC m=+28.033707930 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721097 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.221086258 +0000 UTC m=+28.033778552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721080 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-marketplace/certified-operators-z69zq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3beb7bf-922f-425d-8a19-fd407a7153a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [registry-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aca57c8abaa83b2d1dce95fa7fe5b9416be70e100957ce48f212e2ec404387bc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"registry-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/extracted-catalog\\\",\\\"name\\\":\\\"catalog-content\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qhz6z\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-marketplace\"/\"certified-operators-z69zq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721112 3178 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721145 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721122 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721175 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.2211647 +0000 UTC m=+28.033856984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721197 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.22118498 +0000 UTC m=+28.033877344 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721221 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721228 3178 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721229 3178 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721264 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721231 3178 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721280 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.221269613 +0000 UTC m=+28.033961977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721402 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.221379265 +0000 UTC m=+28.034071579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-config" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721435 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.221420047 +0000 UTC m=+28.034112341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721472 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721516 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721520 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721566 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.22155107 +0000 UTC m=+28.034243424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721684 3178 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721730 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.221717444 +0000 UTC m=+28.034409738 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721786 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721791 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721820 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjv7\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721826 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: E0216 17:23:59.721855 3178 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721854 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:23:59.733324 master-0 kubenswrapper[3178]: I0216 17:23:59.721912 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.721938 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.221883009 +0000 UTC m=+28.034575293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"service-ca" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.721966 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.721998 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722023 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722050 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722080 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722107 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722135 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722162 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722189 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722216 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722222 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722240 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722324 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722363 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.222350261 +0000 UTC m=+28.035042555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722385 3178 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722400 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722410 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722436 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.222425293 +0000 UTC m=+28.035117587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722460 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.222448864 +0000 UTC m=+28.035141218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722495 3178 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722536 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.222525346 +0000 UTC m=+28.035217710 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722599 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722624 3178 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722662 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.222650499 +0000 UTC m=+28.035342793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722601 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722713 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722743 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722777 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722807 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722835 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722866 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: I0216 17:23:59.722891 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.722996 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Feb 16 17:23:59.734335 master-0 kubenswrapper[3178]: E0216 17:23:59.723035 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.223024089 +0000 UTC m=+28.035716383 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723063 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723091 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723436 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.723444 3178 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723510 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.723546 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.223533663 +0000 UTC m=+28.036225957 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723655 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723747 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723777 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.723787 3178 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723828 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.723834 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.22382197 +0000 UTC m=+28.036514344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723787 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723940 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723963 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.723970 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724033 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724065 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.724067 3178 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.724105 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.224096558 +0000 UTC m=+28.036788842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724138 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724168 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.724160 3178 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724312 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.724330 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.224318683 +0000 UTC m=+28.037010987 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.724333 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724359 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.724380 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.224368775 +0000 UTC m=+28.037061059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724405 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724444 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.724453 3178 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724471 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: E0216 17:23:59.724487 3178 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.224475098 +0000 UTC m=+28.037167382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724505 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.735883 master-0 kubenswrapper[3178]: I0216 17:23:59.724527 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724545 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.724554 3178 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724563 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724584 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.724591 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.224582551 +0000 UTC m=+28.037274915 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724609 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.724705 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724696 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.724743 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.224733355 +0000 UTC m=+28.037425709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724768 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.724775 3178 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.724834 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.224822767 +0000 UTC m=+28.037515121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724866 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724897 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724926 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724957 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.724984 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725036 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.725083 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.725116 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.225106824 +0000 UTC m=+28.037799108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.725201 3178 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725225 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.725299 3178 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725310 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.725302 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.225290989 +0000 UTC m=+28.037983293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: E0216 17:23:59.725363 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.225347071 +0000 UTC m=+28.038039355 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725386 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725542 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725625 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725712 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725743 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725780 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725819 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725847 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: 
\"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725874 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:23:59.737351 master-0 kubenswrapper[3178]: I0216 17:23:59.725901 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.725928 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.725953 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.725965 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726020 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726019 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726054 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.226044179 +0000 UTC m=+28.038736463 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726071 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.22606374 +0000 UTC m=+28.038756024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726098 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726113 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726132 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726153 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.226141572 +0000 UTC m=+28.038833866 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726205 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726241 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726281 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726298 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.226286906 +0000 UTC m=+28.038979230 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726335 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726371 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726528 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726577 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726620 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.226608464 +0000 UTC m=+28.039300768 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726615 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726629 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726667 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726700 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726737 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726749 3178 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726788 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.226777059 +0000 UTC m=+28.039469373 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered
Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726795 3178 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726820 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726829 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.2268195 +0000 UTC m=+28.039511844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726854 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726886 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: I0216 17:23:59.726944 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.726985 3178 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered
Feb 16 17:23:59.738918 master-0 kubenswrapper[3178]: E0216 17:23:59.727028 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.227016775 +0000 UTC m=+28.039709069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727064 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727099 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727132 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727182 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727218 3178 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727233 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727217 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727275 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.227264222 +0000 UTC m=+28.039956516 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727310 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727366 3178 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727422 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727472 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.227460597 +0000 UTC m=+28.040152951 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727511 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727534 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.227524029 +0000 UTC m=+28.040216373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727563 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727595 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727624 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727643 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.227629971 +0000 UTC m=+28.040322255 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"trusted-ca-bundle" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727671 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727705 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.227696123 +0000 UTC m=+28.040388457 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727670 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727758 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727765 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727801 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.727815 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.227805566 +0000 UTC m=+28.040497860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727838 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727872 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727900 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727931 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727958 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.727960 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: I0216 17:23:59.728021 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.728049 3178 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.728090 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.228079183 +0000 UTC m=+28.040771467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered
Feb 16 17:23:59.740295 master-0 kubenswrapper[3178]: E0216 17:23:59.728152 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728187 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.228176776 +0000 UTC m=+28.040869120 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728052 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728209 3178 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728228 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728264 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.228238798 +0000 UTC m=+28.040931152 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728290 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728321 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728405 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728443 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.228432783 +0000 UTC m=+28.041125087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728447 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728454 3178 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728487 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728506 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.228495754 +0000 UTC m=+28.041188108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-serving-cert" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728528 3178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728563 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728466 3178 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728600 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728633 3178 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728634 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728663 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.228654999 +0000 UTC m=+28.041347343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728635 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728700 3178 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728733 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728739 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.228731151 +0000 UTC m=+28.041423435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728775 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.228768622 +0000 UTC m=+28.041460996 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728799 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728851 3178 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.728882 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.228871504 +0000 UTC m=+28.041563788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728914 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.728945 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.729006 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.731180 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.231150415 +0000 UTC m=+28.043842709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.729125 3178 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: E0216 17:23:59.731236 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.231224257 +0000 UTC m=+28.043916551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered
Feb 16 17:23:59.741725 master-0 kubenswrapper[3178]: I0216 17:23:59.729232 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.729311 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.729779 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.729049 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.731311 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.729112 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.731337 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.729195 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: E0216 17:23:59.731462 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: E0216 17:23:59.731501 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.231491974 +0000 UTC m=+28.044184268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.731777 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.732321 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:23:59.743001 master-0 kubenswrapper[3178]: I0216 17:23:59.740077 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r"
Feb 16 17:23:59.743712 master-0 kubenswrapper[3178]: E0216 17:23:59.743664 3178 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered
Feb 16 17:23:59.743712 master-0 kubenswrapper[3178]: E0216 17:23:59.743707 3178 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:23:59.743814 master-0 kubenswrapper[3178]: E0216 17:23:59.743721 3178 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.743814 master-0 kubenswrapper[3178]: E0216 17:23:59.743778 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.24376018 +0000 UTC m=+28.056452464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.766113 master-0 kubenswrapper[3178]: I0216 17:23:59.766072 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:23:59.787523 master-0 kubenswrapper[3178]: I0216 17:23:59.787239 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:23:59.806423 master-0 kubenswrapper[3178]: I0216 17:23:59.806380 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:23:59.823820 master-0 kubenswrapper[3178]: E0216 17:23:59.823779 3178 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered
Feb 16 17:23:59.823820 master-0 kubenswrapper[3178]: E0216 17:23:59.823812 3178 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:23:59.823820 master-0 kubenswrapper[3178]: E0216 17:23:59.823826 3178 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.824082 master-0 kubenswrapper[3178]: E0216 17:23:59.823882 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.323863747 +0000 UTC m=+28.136556041 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.832568 master-0 kubenswrapper[3178]: I0216 17:23:59.832515 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.832723 master-0 kubenswrapper[3178]: I0216 17:23:59.832690 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.832932 master-0 kubenswrapper[3178]: I0216 17:23:59.832905 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:23:59.833003 master-0 kubenswrapper[3178]: I0216 17:23:59.832987 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:23:59.833045 master-0 kubenswrapper[3178]: I0216 17:23:59.833018 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.833337 master-0 kubenswrapper[3178]: I0216 17:23:59.833138 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.833337 master-0 kubenswrapper[3178]: I0216 17:23:59.833070 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:23:59.833337 master-0 kubenswrapper[3178]: I0216 17:23:59.833093 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.833337 master-0 kubenswrapper[3178]: I0216 17:23:59.833082 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.833337 master-0 kubenswrapper[3178]: I0216 17:23:59.833282 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.833337 master-0 kubenswrapper[3178]: I0216 17:23:59.833319 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.833586 master-0 kubenswrapper[3178]: I0216 17:23:59.833367 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.833586 master-0 kubenswrapper[3178]: I0216 17:23:59.833419 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.833586 master-0 kubenswrapper[3178]: I0216 17:23:59.833447 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.833586 master-0 kubenswrapper[3178]: I0216 17:23:59.833506 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.833586 master-0 kubenswrapper[3178]: I0216 17:23:59.833548 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.833586 master-0 kubenswrapper[3178]: I0216 17:23:59.833564 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " 
pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.833879 master-0 kubenswrapper[3178]: I0216 17:23:59.833713 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:23:59.833879 master-0 kubenswrapper[3178]: I0216 17:23:59.833846 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.833879 master-0 kubenswrapper[3178]: I0216 17:23:59.833867 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.833986 master-0 kubenswrapper[3178]: I0216 17:23:59.833894 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.833986 master-0 kubenswrapper[3178]: I0216 17:23:59.833900 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:23:59.833986 master-0 kubenswrapper[3178]: I0216 17:23:59.833950 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.834378 master-0 kubenswrapper[3178]: I0216 17:23:59.834026 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:23:59.834378 master-0 kubenswrapper[3178]: I0216 17:23:59.834056 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:59.834378 master-0 
kubenswrapper[3178]: I0216 17:23:59.834100 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.834378 master-0 kubenswrapper[3178]: I0216 17:23:59.834126 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:23:59.834378 master-0 kubenswrapper[3178]: I0216 17:23:59.834170 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.834378 master-0 kubenswrapper[3178]: I0216 17:23:59.834260 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:23:59.834378 master-0 kubenswrapper[3178]: I0216 17:23:59.834327 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.834378 master-0 kubenswrapper[3178]: I0216 17:23:59.834394 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:23:59.834724 master-0 kubenswrapper[3178]: I0216 17:23:59.834419 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.834724 master-0 kubenswrapper[3178]: I0216 17:23:59.834450 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.834724 master-0 kubenswrapper[3178]: I0216 17:23:59.834515 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " 
pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.834724 master-0 kubenswrapper[3178]: I0216 17:23:59.834547 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.834724 master-0 kubenswrapper[3178]: I0216 17:23:59.834625 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.834724 master-0 kubenswrapper[3178]: I0216 17:23:59.834643 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.834724 master-0 kubenswrapper[3178]: I0216 17:23:59.834701 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:23:59.834724 master-0 kubenswrapper[3178]: I0216 17:23:59.834729 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.835146 master-0 kubenswrapper[3178]: I0216 17:23:59.834885 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:23:59.835146 master-0 kubenswrapper[3178]: I0216 17:23:59.834932 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:23:59.835146 master-0 kubenswrapper[3178]: I0216 17:23:59.834958 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.835146 master-0 kubenswrapper[3178]: I0216 17:23:59.834976 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " 
pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:23:59.835146 master-0 kubenswrapper[3178]: I0216 17:23:59.835094 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.835620 master-0 kubenswrapper[3178]: I0216 17:23:59.835296 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.835620 master-0 kubenswrapper[3178]: I0216 17:23:59.835340 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.835761 master-0 kubenswrapper[3178]: I0216 17:23:59.835730 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.836021 master-0 kubenswrapper[3178]: I0216 17:23:59.835984 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.836139 master-0 kubenswrapper[3178]: I0216 17:23:59.836118 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/810a2275-fae5-45df-a3b8-92860451d33b-host\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:23:59.836176 master-0 kubenswrapper[3178]: I0216 17:23:59.836154 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.836265 master-0 kubenswrapper[3178]: I0216 17:23:59.836230 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.836320 master-0 kubenswrapper[3178]: I0216 17:23:59.836299 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 
17:23:59.836415 master-0 kubenswrapper[3178]: I0216 17:23:59.836388 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.836691 master-0 kubenswrapper[3178]: I0216 17:23:59.836659 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.836724 master-0 kubenswrapper[3178]: I0216 17:23:59.836707 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.836848 master-0 kubenswrapper[3178]: I0216 17:23:59.836820 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.836912 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.836968 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837014 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/810a2275-fae5-45df-a3b8-92860451d33b-host\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837045 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837100 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837151 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837165 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837189 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837202 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837217 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837262 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837241 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837301 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837327 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" 
Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837350 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837375 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837409 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837433 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837472 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837576 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837653 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.837782 master-0 kubenswrapper[3178]: I0216 17:23:59.837659 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.838473 master-0 kubenswrapper[3178]: I0216 17:23:59.837802 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:23:59.838473 master-0 kubenswrapper[3178]: I0216 17:23:59.837895 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.838473 master-0 kubenswrapper[3178]: I0216 17:23:59.837944 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.838473 master-0 kubenswrapper[3178]: I0216 17:23:59.838167 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.838473 master-0 kubenswrapper[3178]: I0216 17:23:59.838229 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.838473 master-0 kubenswrapper[3178]: I0216 17:23:59.838384 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:23:59.838473 master-0 kubenswrapper[3178]: I0216 17:23:59.838464 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.838727 master-0 kubenswrapper[3178]: I0216 17:23:59.838525 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.838727 master-0 kubenswrapper[3178]: I0216 17:23:59.838550 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.838727 master-0 kubenswrapper[3178]: I0216 17:23:59.838646 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:23:59.838727 master-0 kubenswrapper[3178]: I0216 17:23:59.838696 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:23:59.838987 master-0 kubenswrapper[3178]: I0216 17:23:59.838752 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.838987 master-0 kubenswrapper[3178]: I0216 17:23:59.838909 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.839055 master-0 kubenswrapper[3178]: I0216 17:23:59.838988 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.839184 master-0 kubenswrapper[3178]: I0216 17:23:59.839148 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.839220 master-0 kubenswrapper[3178]: I0216 17:23:59.839192 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.839220 master-0 kubenswrapper[3178]: I0216 17:23:59.839199 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.839302 master-0 kubenswrapper[3178]: I0216 17:23:59.839222 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.839302 master-0 kubenswrapper[3178]: I0216 17:23:59.839237 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:23:59.839302 master-0 kubenswrapper[3178]: I0216 17:23:59.839275 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.839302 master-0 kubenswrapper[3178]: I0216 17:23:59.839287 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.839302 master-0 kubenswrapper[3178]: I0216 17:23:59.839299 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.839440 master-0 kubenswrapper[3178]: I0216 17:23:59.839280 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:23:59.839440 master-0 kubenswrapper[3178]: I0216 17:23:59.839310 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.839493 master-0 kubenswrapper[3178]: I0216 17:23:59.839428 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.839493 master-0 kubenswrapper[3178]: I0216 17:23:59.839472 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:23:59.839551 master-0 kubenswrapper[3178]: I0216 17:23:59.839506 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.839551 master-0 kubenswrapper[3178]: I0216 17:23:59.839538 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.839615 master-0 kubenswrapper[3178]: I0216 17:23:59.839602 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.839645 master-0 kubenswrapper[3178]: I0216 17:23:59.839606 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:23:59.839645 master-0 kubenswrapper[3178]: I0216 17:23:59.839626 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.839702 master-0 kubenswrapper[3178]: I0216 17:23:59.839669 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.839782 master-0 kubenswrapper[3178]: I0216 17:23:59.839760 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:23:59.839782 master-0 kubenswrapper[3178]: I0216 17:23:59.839775 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:23:59.839836 master-0 kubenswrapper[3178]: I0216 17:23:59.839825 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:23:59.839878 master-0 kubenswrapper[3178]: I0216 17:23:59.839858 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.840355 master-0 kubenswrapper[3178]: I0216 17:23:59.840324 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.840486 master-0 kubenswrapper[3178]: I0216 17:23:59.840441 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.840721 master-0 kubenswrapper[3178]: I0216 17:23:59.840695 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.840756 master-0 kubenswrapper[3178]: I0216 17:23:59.840741 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.840788 master-0 kubenswrapper[3178]: I0216 17:23:59.840746 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.840788 master-0 kubenswrapper[3178]: I0216 17:23:59.840773 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:23:59.840846 master-0 kubenswrapper[3178]: I0216 17:23:59.840817 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.840877 master-0 kubenswrapper[3178]: I0216 17:23:59.840851 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:23:59.840919 master-0 kubenswrapper[3178]: I0216 17:23:59.840901 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:23:59.840953 master-0 kubenswrapper[3178]: I0216 17:23:59.840910 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:23:59.840982 master-0 kubenswrapper[3178]: I0216 17:23:59.840947 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:23:59.840982 master-0 kubenswrapper[3178]: I0216 17:23:59.840949 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:23:59.845146 master-0 kubenswrapper[3178]: I0216 17:23:59.845121 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:23:59.862021 master-0 kubenswrapper[3178]: E0216 17:23:59.861972 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Feb 16 17:23:59.862021 master-0 kubenswrapper[3178]: E0216 17:23:59.862008 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:23:59.862021 master-0 kubenswrapper[3178]: E0216 17:23:59.862021 3178 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.862369 master-0 kubenswrapper[3178]: E0216 17:23:59.862086 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.362066381 +0000 UTC m=+28.174758665 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.887906 master-0 kubenswrapper[3178]: I0216 17:23:59.887789 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktgm7\" (UniqueName: \"kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv"
Feb 16 17:23:59.901685 master-0 kubenswrapper[3178]: E0216 17:23:59.901136 3178 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered
Feb 16 17:23:59.901685 master-0 kubenswrapper[3178]: E0216 17:23:59.901168 3178 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered
Feb 16 17:23:59.901685 master-0 kubenswrapper[3178]: E0216 17:23:59.901180 3178 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.901685 master-0 kubenswrapper[3178]: E0216 17:23:59.901229 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.401212121 +0000 UTC m=+28.213904405 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.933647 master-0 kubenswrapper[3178]: I0216 17:23:59.933588 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:23:59.948713 master-0 kubenswrapper[3178]: E0216 17:23:59.948559 3178 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Feb 16 17:23:59.948713 master-0 kubenswrapper[3178]: E0216 17:23:59.948603 3178 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Feb 16 17:23:59.948713 master-0 kubenswrapper[3178]: E0216 17:23:59.948615 3178 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.948713 master-0 kubenswrapper[3178]: E0216 17:23:59.948686 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.448670471 +0000 UTC m=+28.261362755 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Feb 16 17:23:59.958754 master-0 kubenswrapper[3178]: I0216 17:23:59.958691 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:23:59.958754 master-0 kubenswrapper[3178]: I0216 17:23:59.958738 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:23:59.959005 master-0 kubenswrapper[3178]: I0216 17:23:59.958857 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
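
Each kube-api-access-* volume above is a projected volume bundling the service-account token with the kube-root-ca.crt and openshift-service-ca.crt configmaps; until the kubelet has those objects registered for the namespace, projected.go cannot assemble the volume, and nestedpendingoperations.go schedules a retry, which is the "No retries permitted until ... (durationBeforeRetry 500ms)" entry. The delay doubles on each consecutive failure of the same operation. A small sketch of that schedule in Python; the initial 500ms comes straight from the log, while the 2m2s cap is my reading of the kubelet's exponential-backoff code, not something stated here:

from datetime import timedelta

INITIAL = timedelta(milliseconds=500)  # matches "durationBeforeRetry 500ms" above
CAP = timedelta(minutes=2, seconds=2)  # assumed cap, per kubelet's exponentialbackoff package

def backoff_schedule(failures):
    """Yield (failure count, delay before the next retry), doubling each time."""
    delay = INITIAL
    for n in range(1, failures + 1):
        yield n, min(delay, CAP)
        delay *= 2

for n, delay in backoff_schedule(10):
    print(f"after failure {n}: wait {delay.total_seconds():.1f}s")

In this log the first retry lands half a second later (17:24:00.36 vs. 17:23:59.86 for kube-api-access-xtk9h), consistent with the initial 500ms step.
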
pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:23:59.959005 master-0 kubenswrapper[3178]: I0216 17:23:59.958898 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:23:59.959005 master-0 kubenswrapper[3178]: I0216 17:23:59.958918 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:23:59.959005 master-0 kubenswrapper[3178]: I0216 17:23:59.958963 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:23:59.959150 master-0 kubenswrapper[3178]: I0216 17:23:59.959102 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:23:59.959150 master-0 kubenswrapper[3178]: I0216 17:23:59.959117 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:23:59.959150 master-0 kubenswrapper[3178]: I0216 17:23:59.959119 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:23:59.959150 master-0 kubenswrapper[3178]: E0216 17:23:59.959112 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:23:59.959150 master-0 kubenswrapper[3178]: I0216 17:23:59.959141 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:23:59.959150 master-0 kubenswrapper[3178]: I0216 17:23:59.959153 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959164 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959161 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959197 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959199 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959226 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959233 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959236 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959267 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959184 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959216 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959230 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959221 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959184 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959302 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959311 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959270 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959287 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959324 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959288 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959300 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959201 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:23:59.959336 master-0 kubenswrapper[3178]: I0216 17:23:59.959239 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959313 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959131 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959344 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.958706 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959491 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: E0216 17:23:59.959502 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959540 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959527 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959571 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959588 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959575 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959614 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959617 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959633 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959673 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959693 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959701 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959731 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959765 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959777 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: I0216 17:23:59.959787 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:23:59.959998 master-0 kubenswrapper[3178]: E0216 17:23:59.959907 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: I0216 17:23:59.959932 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: I0216 17:23:59.959918 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: I0216 17:23:59.960064 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: E0216 17:23:59.960133 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: I0216 17:23:59.960038 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: E0216 17:23:59.960245 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: I0216 17:23:59.959978 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: E0216 17:23:59.960187 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: E0216 17:23:59.960541 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:23:59.960686 master-0 kubenswrapper[3178]: E0216 17:23:59.960621 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:23:59.960976 master-0 kubenswrapper[3178]: E0216 17:23:59.960739 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:23:59.960976 master-0 kubenswrapper[3178]: E0216 17:23:59.960884 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:23:59.961081 master-0 kubenswrapper[3178]: E0216 17:23:59.961027 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:23:59.961194 master-0 kubenswrapper[3178]: E0216 17:23:59.961151 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:23:59.961230 master-0 kubenswrapper[3178]: E0216 17:23:59.961211 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:23:59.961422 master-0 kubenswrapper[3178]: E0216 17:23:59.961369 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:23:59.961620 master-0 kubenswrapper[3178]: E0216 17:23:59.961526 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:23:59.962032 master-0 kubenswrapper[3178]: E0216 17:23:59.961996 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:23:59.962119 master-0 kubenswrapper[3178]: E0216 17:23:59.962098 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:23:59.962189 master-0 kubenswrapper[3178]: E0216 17:23:59.962169 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:23:59.962387 master-0 kubenswrapper[3178]: E0216 17:23:59.962342 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:23:59.962493 master-0 kubenswrapper[3178]: E0216 17:23:59.962469 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:23:59.962612 master-0 kubenswrapper[3178]: E0216 17:23:59.962582 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:23:59.962699 master-0 kubenswrapper[3178]: E0216 17:23:59.962677 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:23:59.962830 master-0 kubenswrapper[3178]: E0216 17:23:59.962783 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:23:59.962919 master-0 kubenswrapper[3178]: E0216 17:23:59.962836 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:23:59.962919 master-0 kubenswrapper[3178]: E0216 17:23:59.962893 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:23:59.963099 master-0 kubenswrapper[3178]: E0216 17:23:59.963008 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:23:59.963193 master-0 kubenswrapper[3178]: E0216 17:23:59.963165 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:23:59.963384 master-0 kubenswrapper[3178]: E0216 17:23:59.963359 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:23:59.963497 master-0 kubenswrapper[3178]: E0216 17:23:59.963466 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:23:59.963649 master-0 kubenswrapper[3178]: E0216 17:23:59.963624 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:23:59.963794 master-0 kubenswrapper[3178]: E0216 17:23:59.963771 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:23:59.963940 master-0 kubenswrapper[3178]: E0216 17:23:59.963911 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:23:59.964025 master-0 kubenswrapper[3178]: E0216 17:23:59.964002 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:23:59.964101 master-0 kubenswrapper[3178]: E0216 17:23:59.964079 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:23:59.964155 master-0 kubenswrapper[3178]: E0216 17:23:59.964132 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:23:59.964220 master-0 kubenswrapper[3178]: E0216 17:23:59.964200 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:23:59.964336 master-0 kubenswrapper[3178]: E0216 17:23:59.964312 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:23:59.964412 master-0 kubenswrapper[3178]: E0216 17:23:59.964392 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:23:59.964511 master-0 kubenswrapper[3178]: E0216 17:23:59.964473 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:23:59.964677 master-0 kubenswrapper[3178]: E0216 17:23:59.964638 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:23:59.964732 master-0 kubenswrapper[3178]: E0216 17:23:59.964704 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:23:59.964836 master-0 kubenswrapper[3178]: E0216 17:23:59.964768 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:23:59.964901 master-0 kubenswrapper[3178]: E0216 17:23:59.964838 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:23:59.964944 master-0 kubenswrapper[3178]: E0216 17:23:59.964917 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:23:59.965125 master-0 kubenswrapper[3178]: E0216 17:23:59.965067 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:23:59.965234 master-0 kubenswrapper[3178]: E0216 17:23:59.965206 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:23:59.965359 master-0 kubenswrapper[3178]: E0216 17:23:59.965327 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:23:59.965411 master-0 kubenswrapper[3178]: E0216 17:23:59.965382 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:23:59.965536 master-0 kubenswrapper[3178]: E0216 17:23:59.965513 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:23:59.965630 master-0 kubenswrapper[3178]: E0216 17:23:59.965597 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:23:59.965692 master-0 kubenswrapper[3178]: E0216 17:23:59.965670 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:23:59.965904 master-0 kubenswrapper[3178]: E0216 17:23:59.965861 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:23:59.971273 master-0 kubenswrapper[3178]: E0216 17:23:59.968524 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:23:59.971273 master-0 kubenswrapper[3178]: E0216 17:23:59.968704 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:23:59.971273 master-0 kubenswrapper[3178]: E0216 17:23:59.968864 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:23:59.971273 master-0 kubenswrapper[3178]: E0216 17:23:59.969024 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:23:59.971273 master-0 kubenswrapper[3178]: E0216 17:23:59.969162 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:23:59.978326 master-0 kubenswrapper[3178]: I0216 17:23:59.978276 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:23:59.988910 master-0 kubenswrapper[3178]: E0216 17:23:59.988861 3178 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:23:59.988910 master-0 kubenswrapper[3178]: E0216 17:23:59.988895 3178 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:23:59.988910 master-0 kubenswrapper[3178]: E0216 17:23:59.988912 3178 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:23:59.989042 master-0 kubenswrapper[3178]: E0216 17:23:59.988978 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.48895709 +0000 UTC m=+28.301649384 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:23:59.999607 master-0 kubenswrapper[3178]: E0216 17:23:59.999556 3178 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:23:59.999607 master-0 kubenswrapper[3178]: E0216 17:23:59.999592 3178 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:23:59.999607 master-0 kubenswrapper[3178]: E0216 17:23:59.999606 3178 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:23:59.999781 master-0 kubenswrapper[3178]: E0216 17:23:59.999663 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.499645664 +0000 UTC m=+28.312337968 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.020263 master-0 kubenswrapper[3178]: E0216 17:24:00.020178 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:00.020263 master-0 kubenswrapper[3178]: E0216 17:24:00.020214 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.020263 master-0 kubenswrapper[3178]: E0216 17:24:00.020228 3178 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.020512 master-0 kubenswrapper[3178]: E0216 17:24:00.020298 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.520281612 +0000 UTC m=+28.332973896 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.026072 master-0 kubenswrapper[3178]: I0216 17:24:00.026029 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:24:00.039959 master-0 kubenswrapper[3178]: W0216 17:24:00.039917 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab80e0fb_09dd_4c93_b235_1487024105d2.slice/crio-8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75 WatchSource:0}: Error finding container 8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75: Status 404 returned error can't find the container with id 8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75 Feb 16 17:24:00.044400 master-0 kubenswrapper[3178]: I0216 17:24:00.044375 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:00.066045 master-0 kubenswrapper[3178]: E0216 17:24:00.066005 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:00.066045 master-0 kubenswrapper[3178]: E0216 17:24:00.066042 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.066151 master-0 kubenswrapper[3178]: E0216 17:24:00.066057 3178 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.066151 master-0 kubenswrapper[3178]: E0216 17:24:00.066112 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.566094869 +0000 UTC m=+28.378787153 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.084472 master-0 kubenswrapper[3178]: I0216 17:24:00.084409 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:24:00.107497 master-0 kubenswrapper[3178]: I0216 17:24:00.107452 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75"} Feb 16 17:24:00.107702 master-0 kubenswrapper[3178]: I0216 17:24:00.107680 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:24:00.110478 master-0 kubenswrapper[3178]: I0216 17:24:00.110440 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:00.112020 master-0 kubenswrapper[3178]: I0216 17:24:00.111983 3178 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838" exitCode=255 Feb 16 17:24:00.112080 master-0 kubenswrapper[3178]: I0216 17:24:00.112029 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838"} Feb 16 17:24:00.112080 master-0 kubenswrapper[3178]: I0216 17:24:00.112062 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b"} Feb 16 17:24:00.120759 master-0 kubenswrapper[3178]: W0216 17:24:00.120721 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod648abb6c_9c81_4e5c_b5f1_3b7eb254f743.slice/crio-3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538 WatchSource:0}: Error finding container 3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538: Status 404 returned error can't find the container with id 3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538 Feb 16 17:24:00.127929 master-0 kubenswrapper[3178]: I0216 17:24:00.127894 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mwd\" (UniqueName: 
\"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:00.138698 master-0 kubenswrapper[3178]: I0216 17:24:00.138596 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:00.145755 master-0 kubenswrapper[3178]: I0216 17:24:00.144031 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:24:00.145755 master-0 kubenswrapper[3178]: I0216 17:24:00.144785 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:00.163339 master-0 kubenswrapper[3178]: W0216 17:24:00.163274 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda94f9b8e_b020_4aab_8373_6c056ec07464.slice/crio-00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710 WatchSource:0}: Error finding container 00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710: Status 404 returned error can't find the container with id 00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710 Feb 16 17:24:00.164532 master-0 kubenswrapper[3178]: W0216 17:24:00.164463 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod810a2275_fae5_45df_a3b8_92860451d33b.slice/crio-9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a WatchSource:0}: Error finding container 9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a: Status 404 returned error can't find the container with id 9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a Feb 16 17:24:00.168680 master-0 kubenswrapper[3178]: I0216 17:24:00.168648 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:00.186654 master-0 kubenswrapper[3178]: E0216 17:24:00.186590 3178 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.186654 master-0 kubenswrapper[3178]: E0216 17:24:00.186623 3178 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.186806 master-0 kubenswrapper[3178]: E0216 17:24:00.186727 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.686703721 +0000 UTC m=+28.499396005 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.205586 master-0 kubenswrapper[3178]: E0216 17:24:00.205550 3178 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.205586 master-0 kubenswrapper[3178]: E0216 17:24:00.205580 3178 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.205726 master-0 kubenswrapper[3178]: E0216 17:24:00.205597 3178 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.205726 master-0 kubenswrapper[3178]: E0216 17:24:00.205657 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.705637664 +0000 UTC m=+28.518329948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.231462 master-0 kubenswrapper[3178]: I0216 17:24:00.231278 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:00.242632 master-0 kubenswrapper[3178]: E0216 17:24:00.242544 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:00.242632 master-0 kubenswrapper[3178]: E0216 17:24:00.242583 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.242632 master-0 kubenswrapper[3178]: E0216 17:24:00.242624 3178 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.242818 master-0 kubenswrapper[3178]: E0216 
17:24:00.242689 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.742663647 +0000 UTC m=+28.555355931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.259678 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.259898 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.259919 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.259937 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.259956 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.259791 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.259984 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: 
\"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.259971 3178 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260029 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260005568 +0000 UTC m=+29.072697852 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260043 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260037128 +0000 UTC m=+29.072729402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-config" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260094 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.260118 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.260162 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260166 3178 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260192 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260201 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:01.260195383 +0000 UTC m=+29.072887667 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.260191 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260225 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260209923 +0000 UTC m=+29.072902207 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260240 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260268 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260242764 +0000 UTC m=+29.072935048 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260284 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260275725 +0000 UTC m=+29.072968119 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260289 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.260303 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260314 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260307526 +0000 UTC m=+29.072999800 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260345 3178 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260366 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260360967 +0000 UTC m=+29.073053251 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.260364 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260399 3178 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260448 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260449 3178 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260464 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260449149 +0000 UTC m=+29.073141433 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: I0216 17:24:00.260409 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260471 3178 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260476 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26047121 +0000 UTC m=+29.073163494 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:00.262270 master-0 kubenswrapper[3178]: E0216 17:24:00.260515 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260507221 +0000 UTC m=+29.073199505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260530 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260557 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260549432 +0000 UTC m=+29.073241716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"service-ca" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260584 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260596 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260617 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260607004 +0000 UTC m=+29.073299368 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260633 3178 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260642 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260656 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260648755 +0000 UTC m=+29.073341169 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260683 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260685 3178 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260735 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260760 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260752937 +0000 UTC m=+29.073445221 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260788 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260792 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260812 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260806329 +0000 UTC m=+29.073498613 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260838 3178 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260842 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260865 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26085721 +0000 UTC m=+29.073549574 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260889 3178 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260896 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260911 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260905251 +0000 UTC m=+29.073597535 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260937 3178 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260957 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260952163 +0000 UTC m=+29.073644447 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260938 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260964 3178 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260985 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.260979883 +0000 UTC m=+29.073672167 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.260981 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.261012 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.261031 3178 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.261012 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.261032 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection 
podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.261026695 +0000 UTC m=+29.073718979 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: E0216 17:24:00.260713 3178 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:00.263446 master-0 kubenswrapper[3178]: I0216 17:24:00.261065 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: E0216 17:24:00.261082 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.261075136 +0000 UTC m=+29.073767550 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: E0216 17:24:00.261094 3178 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: E0216 17:24:00.261114 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.261108757 +0000 UTC m=+29.073801041 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261144 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261165 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261184 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261216 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261285 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261309 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261346 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261377 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261428 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261447 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261487 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261507 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261531 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261551 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261569 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261591 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261609 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261628 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261646 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261667 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261699 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261719 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261737 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261762 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: 
\"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261780 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261798 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261818 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261836 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261855 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261882 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261902 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261922 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: 
\"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:00.264733 master-0 kubenswrapper[3178]: I0216 17:24:00.261950 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.261968 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.261987 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262013 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262040 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262067 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262089 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262113 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " 
pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262138 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262163 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262187 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262216 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262241 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262284 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262309 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262336 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262363 3178 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262411 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262434 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262463 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262490 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262518 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262544 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262571 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262623 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262650 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262676 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262702 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262726 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262766 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262790 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262817 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262844 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " 
pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262872 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262901 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262921 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262948 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262966 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:00.265903 master-0 kubenswrapper[3178]: I0216 17:24:00.262993 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263020 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263038 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263057 3178 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263078 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263099 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263117 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263137 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263153 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263171 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263189 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263206 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263225 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263243 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263296 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263314 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263333 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263353 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263390 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263407 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263425 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263459 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263481 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263501 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263524 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263543 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263562 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263581 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263599 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263617 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263635 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263652 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263669 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263704 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263722 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263740 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod 
\"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263765 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.273265 master-0 kubenswrapper[3178]: I0216 17:24:00.263784 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.263802 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.263820 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.263851 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.263868 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.263887 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.263918 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: 
\"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.263947 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.263966 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.263983 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264001 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264018 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264036 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264053 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264079 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264118 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264143 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264161 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264186 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264205 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264229 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264260 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264285 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264303 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264322 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264339 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264356 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264381 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264407 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264431 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264458 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod 
\"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264490 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264510 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264529 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: I0216 17:24:00.264547 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: E0216 17:24:00.264601 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:00.274773 master-0 kubenswrapper[3178]: E0216 17:24:00.264624 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26461687 +0000 UTC m=+29.077309144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264639 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.264633511 +0000 UTC m=+29.077325785 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264669 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264686 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.264681382 +0000 UTC m=+29.077373666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264706 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264722 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.264718143 +0000 UTC m=+29.077410417 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264753 3178 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264769 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.264764964 +0000 UTC m=+29.077457248 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264800 3178 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264817 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.264812915 +0000 UTC m=+29.077505199 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264840 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264856 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.264851646 +0000 UTC m=+29.077543930 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264878 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264894 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.264890137 +0000 UTC m=+29.077582421 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264923 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264940 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.264934928 +0000 UTC m=+29.077627212 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264970 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.264988 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26498253 +0000 UTC m=+29.077674814 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265017 3178 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265033 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265028771 +0000 UTC m=+29.077721055 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265054 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265070 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265065982 +0000 UTC m=+29.077758266 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265091 3178 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265107 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265101673 +0000 UTC m=+29.077793957 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265135 3178 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265152 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265147684 +0000 UTC m=+29.077839958 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265172 3178 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265187 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265183215 +0000 UTC m=+29.077875499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265220 3178 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265257 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265235536 +0000 UTC m=+29.077927820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265290 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265314 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265306958 +0000 UTC m=+29.077999242 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:00.276390 master-0 kubenswrapper[3178]: E0216 17:24:00.265373 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265393 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26538679 +0000 UTC m=+29.078079074 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265436 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265461 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265453262 +0000 UTC m=+29.078145636 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265500 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265521 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265514364 +0000 UTC m=+29.078206658 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265558 3178 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265580 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265572805 +0000 UTC m=+29.078265209 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265618 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265638 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265632207 +0000 UTC m=+29.078324491 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265677 3178 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265697 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265691399 +0000 UTC m=+29.078383683 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265734 3178 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265755 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26574881 +0000 UTC m=+29.078441094 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265782 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265803 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265796601 +0000 UTC m=+29.078488885 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265830 3178 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265849 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265843373 +0000 UTC m=+29.078535657 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265886 3178 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265910 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265902364 +0000 UTC m=+29.078594648 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265949 3178 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265967 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265962636 +0000 UTC m=+29.078654920 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.265988 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266003 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265999307 +0000 UTC m=+29.078691591 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266026 3178 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266041 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.266036908 +0000 UTC m=+29.078729182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266061 3178 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266077 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.266072329 +0000 UTC m=+29.078764613 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266111 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266127 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.2661223 +0000 UTC m=+29.078814584 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266173 3178 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266182 3178 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266190 3178 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.277625 master-0 kubenswrapper[3178]: E0216 17:24:00.266209 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.266203192 +0000 UTC m=+29.078895476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266240 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266890 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26688251 +0000 UTC m=+29.079574794 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266316 3178 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266345 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266973 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.266959312 +0000 UTC m=+29.079651596 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266990 3178 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267001 3178 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266994 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.266985093 +0000 UTC m=+29.079677377 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267046 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267087 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267095 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267114 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267067 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267053965 +0000 UTC m=+29.079746259 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267107 3178 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267138 3178 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267147 3178 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267154 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267134967 +0000 UTC m=+29.079827261 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267156 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266453 3178 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267178 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267139 3178 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267148 3178 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267197 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266478 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267173 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267163178 +0000 UTC m=+29.079855472 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267160 3178 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266495 3178 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267229 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267220219 +0000 UTC m=+29.079912523 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266562 3178 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266551 3178 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267273 3178 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267289 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26723929 +0000 UTC m=+29.079931594 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267305 3178 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267314 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267306291 +0000 UTC m=+29.079998585 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267330 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267322822 +0000 UTC m=+29.080015116 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266565 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267345 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266603 3178 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266625 3178 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267395 3178 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266629 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object 
"openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266825 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267434 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266961 3178 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266399 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.266399 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:24:00.279004 master-0 kubenswrapper[3178]: E0216 17:24:00.267069 3178 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267511 3178 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267125 3178 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.266452 3178 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.266474 3178 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267236 3178 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267595 3178 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.266594 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.266500 3178 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 
17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267282 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267661 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267296 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267360 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267338052 +0000 UTC m=+29.080030346 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267732 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267735 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267721523 +0000 UTC m=+29.080413877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267748 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267743193 +0000 UTC m=+29.080435477 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267757 3178 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267791 3178 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267760 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0 podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267754613 +0000 UTC m=+29.080446887 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267809 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267802105 +0000 UTC m=+29.080494389 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267821 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267815425 +0000 UTC m=+29.080507709 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267835 3178 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267466 3178 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267580 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267871 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267383 3178 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267895 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267826365 +0000 UTC m=+29.080518649 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267907 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267901577 +0000 UTC m=+29.080593861 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267938 3178 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267896 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267916 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267941 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267935768 +0000 UTC m=+29.080628052 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267979 3178 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267985 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.267975039 +0000 UTC m=+29.080667323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.268001 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26799196 +0000 UTC m=+29.080684244 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.268005 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.268011 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26800653 +0000 UTC m=+29.080698814 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.267987 3178 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.268026 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26801871 +0000 UTC m=+29.080710994 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:00.281011 master-0 kubenswrapper[3178]: E0216 17:24:00.268038 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268032501 +0000 UTC m=+29.080724785 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268049 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268044441 +0000 UTC m=+29.080736725 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268060 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268054611 +0000 UTC m=+29.080746895 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268070 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268065342 +0000 UTC m=+29.080757616 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268082 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268076652 +0000 UTC m=+29.080768926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268091 3178 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268092 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268086832 +0000 UTC m=+29.080779116 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268118 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268112133 +0000 UTC m=+29.080804417 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268130 3178 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268129 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268123903 +0000 UTC m=+29.080816187 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268150 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268144674 +0000 UTC m=+29.080836958 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268160 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268155694 +0000 UTC m=+29.080847978 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268163 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268170 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268166014 +0000 UTC m=+29.080858288 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268182 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268176995 +0000 UTC m=+29.080869279 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268192 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268187265 +0000 UTC m=+29.080879549 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268202 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268197185 +0000 UTC m=+29.080889469 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268205 3178 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268212 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268207245 +0000 UTC m=+29.080899529 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268221 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268217226 +0000 UTC m=+29.080909510 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268231 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268226006 +0000 UTC m=+29.080918290 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268237 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268239 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268235126 +0000 UTC m=+29.080927410 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268270 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268263907 +0000 UTC m=+29.080956291 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268282 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268277957 +0000 UTC m=+29.080970361 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:00.282333 master-0 kubenswrapper[3178]: E0216 17:24:00.268292 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268288238 +0000 UTC m=+29.080980522 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268298 3178 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268194 3178 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268301 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268297548 +0000 UTC m=+29.080989832 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268339 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268342 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268337289 +0000 UTC m=+29.081029573 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268119 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268372 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26836406 +0000 UTC m=+29.081056404 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268390 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26838227 +0000 UTC m=+29.081074674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268402 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26839611 +0000 UTC m=+29.081088494 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268407 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268414 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268408251 +0000 UTC m=+29.081100635 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268426 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268419771 +0000 UTC m=+29.081112055 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268460 3178 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268467 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268433421 +0000 UTC m=+29.081125705 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268478 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268473432 +0000 UTC m=+29.081165716 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268487 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268482953 +0000 UTC m=+29.081175237 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268495 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268491403 +0000 UTC m=+29.081183687 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268501 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268504 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268500123 +0000 UTC m=+29.081192407 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268538 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268532434 +0000 UTC m=+29.081224718 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268548 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268544054 +0000 UTC m=+29.081236338 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268557 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268553235 +0000 UTC m=+29.081245509 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268567 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268563755 +0000 UTC m=+29.081256039 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268581 3178 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268539 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268526 3178 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268559 3178 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268620 3178 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268664 3178 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268699 3178 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268723 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268756 3178 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268789 3178 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268818 3178 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268842 3178 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Feb 16 17:24:00.283436 master-0 kubenswrapper[3178]: E0216 17:24:00.268871 3178 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.268888 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:00.286234 
master-0 kubenswrapper[3178]: E0216 17:24:00.268916 3178 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.268939 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.268969 3178 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.268992 3178 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269014 3178 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269045 3178 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269075 3178 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269106 3178 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269126 3178 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269158 3178 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269187 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.268572145 +0000 UTC m=+29.081264419 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269209 3178 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269233 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269226192 +0000 UTC m=+29.081918466 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269187 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269303 3178 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269347 3178 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269347 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269309205 +0000 UTC m=+29.082001539 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269366 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269360626 +0000 UTC m=+29.082052910 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269378 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269372536 +0000 UTC m=+29.082064940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269389 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269384217 +0000 UTC m=+29.082076501 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269398 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269393777 +0000 UTC m=+29.082086061 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269407 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269402267 +0000 UTC m=+29.082094551 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269417 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:01.269412737 +0000 UTC m=+29.082105021 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269426 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269422778 +0000 UTC m=+29.082115062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269435 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269431258 +0000 UTC m=+29.082123542 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269441 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269514 3178 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269444 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269440058 +0000 UTC m=+29.082132342 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269583 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269567031 +0000 UTC m=+29.082259325 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269629 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269668 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269658414 +0000 UTC m=+29.082350708 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269689 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269681335 +0000 UTC m=+29.082373639 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269710 3178 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:00.286234 master-0 kubenswrapper[3178]: E0216 17:24:00.269712 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269705875 +0000 UTC m=+29.082398169 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269780 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269769957 +0000 UTC m=+29.082462311 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269783 3178 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269795 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269786517 +0000 UTC m=+29.082478891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269835 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269826318 +0000 UTC m=+29.082518702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269849 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269842719 +0000 UTC m=+29.082535113 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269861 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269855519 +0000 UTC m=+29.082547803 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269873 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269867059 +0000 UTC m=+29.082559483 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"audit" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269909 3178 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269911 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.26990234 +0000 UTC m=+29.082594714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269933 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269927231 +0000 UTC m=+29.082619625 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269948 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269940121 +0000 UTC m=+29.082632535 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269963 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269984 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269955642 +0000 UTC m=+29.082648036 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269879 3178 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.270001 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.269994743 +0000 UTC m=+29.082687137 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.270017 3178 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.270019 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.270013093 +0000 UTC m=+29.082705487 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269760 3178 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.269596 3178 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.270037 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.270030614 +0000 UTC m=+29.082723008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.270077 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.270068315 +0000 UTC m=+29.082760689 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.272063 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.270084215 +0000 UTC m=+29.082776499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.272128 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272116179 +0000 UTC m=+29.084808473 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.272160 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.27214087 +0000 UTC m=+29.084833164 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.272180 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272170561 +0000 UTC m=+29.084862865 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:00.291011 master-0 kubenswrapper[3178]: E0216 17:24:00.272198 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272190091 +0000 UTC m=+29.084882385 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272224 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272214812 +0000 UTC m=+29.084907106 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272272 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:01.272235112 +0000 UTC m=+29.084927406 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272357 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272302994 +0000 UTC m=+29.084995288 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272381 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272371396 +0000 UTC m=+29.085063690 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272401 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272391617 +0000 UTC m=+29.085083911 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272425 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272416417 +0000 UTC m=+29.085108721 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272442 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272433648 +0000 UTC m=+29.085125942 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272468 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272451798 +0000 UTC m=+29.085144092 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272486 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272477559 +0000 UTC m=+29.085169863 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272534 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.2725229 +0000 UTC m=+29.085215204 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272600 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:01.272549491 +0000 UTC m=+29.085241795 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272611 3178 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272627 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272616702 +0000 UTC m=+29.085309006 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272773 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.272749086 +0000 UTC m=+29.085441370 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.272995 3178 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.273047 3178 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.273022 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.273064 3178 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.273097 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.273084645 +0000 UTC m=+29.085776929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.273138 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.773109036 +0000 UTC m=+28.585801340 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.273337 3178 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: E0216 17:24:00.273412 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:01.273394083 +0000 UTC m=+29.086086387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:00.292197 master-0 kubenswrapper[3178]: I0216 17:24:00.290431 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:00.305590 master-0 kubenswrapper[3178]: I0216 17:24:00.305541 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:00.320803 master-0 kubenswrapper[3178]: I0216 17:24:00.320757 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-6r7wj" Feb 16 17:24:00.326730 master-0 kubenswrapper[3178]: I0216 17:24:00.326683 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:00.327717 master-0 kubenswrapper[3178]: I0216 17:24:00.327666 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:24:00.331185 master-0 kubenswrapper[3178]: I0216 17:24:00.331152 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67l5\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:00.342518 master-0 kubenswrapper[3178]: W0216 17:24:00.342482 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab5760f1_b2e0_4138_9383_e4827154ac50.slice/crio-fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615 WatchSource:0}: Error finding container fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615: Status 404 returned error can't find the container with id fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615 Feb 16 17:24:00.342955 master-0 kubenswrapper[3178]: W0216 17:24:00.342919 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43f65f23_4ddd_471a_9cb3_b0945382d83c.slice/crio-991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5 WatchSource:0}: Error finding container 991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5: Status 404 returned error can't find the container with id 991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5 Feb 16 17:24:00.348544 master-0 kubenswrapper[3178]: W0216 17:24:00.348508 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39387549_c636_4bd4_b463_f6a93810f277.slice/crio-e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd WatchSource:0}: Error finding container e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd: Status 404 returned error can't find the container with id e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd Feb 16 17:24:00.350764 master-0 kubenswrapper[3178]: E0216 17:24:00.350598 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:00.350764 master-0 kubenswrapper[3178]: E0216 17:24:00.350633 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.350764 master-0 kubenswrapper[3178]: E0216 17:24:00.350645 3178 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.350764 master-0 kubenswrapper[3178]: E0216 17:24:00.350717 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.850697666 +0000 UTC m=+28.663389950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.366926 master-0 kubenswrapper[3178]: I0216 17:24:00.366893 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:00.367097 master-0 kubenswrapper[3178]: E0216 17:24:00.367047 3178 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.367148 master-0 kubenswrapper[3178]: E0216 17:24:00.367098 3178 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.367148 master-0 kubenswrapper[3178]: E0216 17:24:00.367113 3178 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.367297 master-0 kubenswrapper[3178]: I0216 17:24:00.367174 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:00.367297 master-0 kubenswrapper[3178]: E0216 17:24:00.367242 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:00.367297 master-0 kubenswrapper[3178]: E0216 17:24:00.367274 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.367297 master-0 kubenswrapper[3178]: E0216 17:24:00.367285 3178 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.367464 master-0 kubenswrapper[3178]: E0216 17:24:00.367344 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:01.367327447 +0000 UTC m=+29.180019741 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.367464 master-0 kubenswrapper[3178]: E0216 17:24:00.367364 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.367355738 +0000 UTC m=+29.180048042 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.368004 master-0 kubenswrapper[3178]: I0216 17:24:00.367968 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:00.392009 master-0 kubenswrapper[3178]: I0216 17:24:00.391891 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:00.405423 master-0 kubenswrapper[3178]: E0216 17:24:00.405390 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:00.405423 master-0 kubenswrapper[3178]: E0216 17:24:00.405412 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.405423 master-0 kubenswrapper[3178]: E0216 17:24:00.405423 3178 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.405696 master-0 kubenswrapper[3178]: E0216 17:24:00.405468 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:00.90545541 +0000 UTC m=+28.718147694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.423829 master-0 kubenswrapper[3178]: I0216 17:24:00.423787 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:00.443161 master-0 kubenswrapper[3178]: I0216 17:24:00.442968 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:24:00.465270 master-0 kubenswrapper[3178]: E0216 17:24:00.465194 3178 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.465270 master-0 kubenswrapper[3178]: E0216 17:24:00.465225 3178 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.465270 master-0 kubenswrapper[3178]: E0216 17:24:00.465237 3178 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.465553 master-0 kubenswrapper[3178]: E0216 17:24:00.465307 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:00.965289589 +0000 UTC m=+28.777981863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.469205 master-0 kubenswrapper[3178]: I0216 17:24:00.469142 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:00.469393 master-0 kubenswrapper[3178]: I0216 17:24:00.469353 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:00.469508 master-0 kubenswrapper[3178]: E0216 17:24:00.469472 3178 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:24:00.469508 master-0 kubenswrapper[3178]: E0216 17:24:00.469499 3178 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.469508 master-0 kubenswrapper[3178]: E0216 17:24:00.469508 3178 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.469791 master-0 kubenswrapper[3178]: E0216 17:24:00.469731 3178 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:00.469791 master-0 kubenswrapper[3178]: E0216 17:24:00.469785 3178 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.469791 master-0 kubenswrapper[3178]: E0216 17:24:00.469798 3178 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.470329 master-0 kubenswrapper[3178]: E0216 17:24:00.470276 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.470205499 +0000 UTC m=+29.282897793 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.470394 master-0 kubenswrapper[3178]: E0216 17:24:00.470354 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.470334413 +0000 UTC m=+29.283026817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.484148 master-0 kubenswrapper[3178]: I0216 17:24:00.484088 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:24:00.501221 master-0 kubenswrapper[3178]: E0216 17:24:00.501153 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.501221 master-0 kubenswrapper[3178]: E0216 17:24:00.501196 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.501221 master-0 kubenswrapper[3178]: E0216 17:24:00.501213 3178 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.501538 master-0 kubenswrapper[3178]: E0216 17:24:00.501317 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.001289674 +0000 UTC m=+28.813981958 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.525777 master-0 kubenswrapper[3178]: E0216 17:24:00.525703 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:24:00.525777 master-0 kubenswrapper[3178]: E0216 17:24:00.525748 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.525777 master-0 kubenswrapper[3178]: E0216 17:24:00.525761 3178 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.526042 master-0 kubenswrapper[3178]: E0216 17:24:00.525819 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.025799525 +0000 UTC m=+28.838491809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.545870 master-0 kubenswrapper[3178]: I0216 17:24:00.545806 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:00.561991 master-0 kubenswrapper[3178]: E0216 17:24:00.561938 3178 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:00.561991 master-0 kubenswrapper[3178]: E0216 17:24:00.561984 3178 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.561991 master-0 kubenswrapper[3178]: E0216 17:24:00.562000 3178 projected.go:194] Error preparing data for projected volume kube-api-access-st6bv for pod openshift-console/console-599b567ff7-nrcpr: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.562984 master-0 kubenswrapper[3178]: E0216 17:24:00.562070 3178 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.062049918 +0000 UTC m=+28.874742212 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-st6bv" (UniqueName: "kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.583448 master-0 kubenswrapper[3178]: E0216 17:24:00.583400 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.583448 master-0 kubenswrapper[3178]: E0216 17:24:00.583439 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.583448 master-0 kubenswrapper[3178]: E0216 17:24:00.583455 3178 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.583652 master-0 kubenswrapper[3178]: E0216 17:24:00.583531 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.083512868 +0000 UTC m=+28.896205162 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.583992 master-0 kubenswrapper[3178]: I0216 17:24:00.583962 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:00.584183 master-0 kubenswrapper[3178]: E0216 17:24:00.584151 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:00.584183 master-0 kubenswrapper[3178]: E0216 17:24:00.584182 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.584239 master-0 kubenswrapper[3178]: E0216 17:24:00.584192 3178 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.584290 master-0 kubenswrapper[3178]: E0216 17:24:00.584257 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.584229907 +0000 UTC m=+29.396922191 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.585906 master-0 kubenswrapper[3178]: I0216 17:24:00.585876 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:00.585993 master-0 kubenswrapper[3178]: E0216 17:24:00.585971 3178 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.585993 master-0 kubenswrapper[3178]: E0216 17:24:00.585990 3178 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.586070 master-0 kubenswrapper[3178]: E0216 17:24:00.585999 3178 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.586070 master-0 kubenswrapper[3178]: E0216 17:24:00.586051 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.586018054 +0000 UTC m=+29.398710338 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.586313 master-0 kubenswrapper[3178]: I0216 17:24:00.586292 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:00.586364 master-0 kubenswrapper[3178]: I0216 17:24:00.586327 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:00.586364 master-0 kubenswrapper[3178]: E0216 17:24:00.586351 3178 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.586364 master-0 kubenswrapper[3178]: E0216 17:24:00.586364 3178 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.586477 master-0 kubenswrapper[3178]: E0216 17:24:00.586371 3178 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.586477 master-0 kubenswrapper[3178]: E0216 17:24:00.586427 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:00.586477 master-0 kubenswrapper[3178]: E0216 17:24:00.586430 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.586419295 +0000 UTC m=+29.399111579 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.586477 master-0 kubenswrapper[3178]: E0216 17:24:00.586439 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.586477 master-0 kubenswrapper[3178]: E0216 17:24:00.586448 3178 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.586477 master-0 kubenswrapper[3178]: E0216 17:24:00.586471 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.586464416 +0000 UTC m=+29.399156700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.604051 master-0 kubenswrapper[3178]: E0216 17:24:00.603704 3178 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.604051 master-0 kubenswrapper[3178]: E0216 17:24:00.604022 3178 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.604051 master-0 kubenswrapper[3178]: E0216 17:24:00.604036 3178 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.604377 master-0 kubenswrapper[3178]: E0216 17:24:00.604091 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.104076164 +0000 UTC m=+28.916768448 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.624942 master-0 kubenswrapper[3178]: I0216 17:24:00.624863 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:24:00.626091 master-0 kubenswrapper[3178]: I0216 17:24:00.626049 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:24:00.636270 master-0 kubenswrapper[3178]: W0216 17:24:00.636213 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3fa6ac1_781f_446c_b6b4_18bdb7723c23.slice/crio-4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af WatchSource:0}: Error finding container 4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af: Status 404 returned error can't find the container with id 4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af Feb 16 17:24:00.648993 master-0 kubenswrapper[3178]: I0216 17:24:00.648911 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:00.657728 master-0 kubenswrapper[3178]: E0216 17:24:00.657671 3178 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:00.657728 master-0 kubenswrapper[3178]: E0216 17:24:00.657721 3178 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.657728 master-0 kubenswrapper[3178]: E0216 17:24:00.657736 3178 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.657922 master-0 kubenswrapper[3178]: E0216 17:24:00.657797 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.15777845 +0000 UTC m=+28.970470734 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.670506 master-0 kubenswrapper[3178]: I0216 17:24:00.670459 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:00.675657 master-0 kubenswrapper[3178]: I0216 17:24:00.675615 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:24:00.682972 master-0 kubenswrapper[3178]: E0216 17:24:00.682942 3178 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.682972 master-0 kubenswrapper[3178]: E0216 17:24:00.682968 3178 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.683051 master-0 kubenswrapper[3178]: E0216 17:24:00.682979 3178 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.683051 master-0 kubenswrapper[3178]: E0216 17:24:00.683033 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.18301749 +0000 UTC m=+28.995709774 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.690712 master-0 kubenswrapper[3178]: I0216 17:24:00.690662 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:00.690860 master-0 kubenswrapper[3178]: E0216 17:24:00.690803 3178 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.690860 master-0 kubenswrapper[3178]: E0216 17:24:00.690828 3178 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.690972 master-0 kubenswrapper[3178]: E0216 17:24:00.690873 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.690858088 +0000 UTC m=+29.503550372 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.704687 master-0 kubenswrapper[3178]: I0216 17:24:00.704665 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:24:00.717641 master-0 kubenswrapper[3178]: I0216 17:24:00.717603 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:24:00.722478 master-0 kubenswrapper[3178]: I0216 17:24:00.722459 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:00.727050 master-0 kubenswrapper[3178]: E0216 17:24:00.726983 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:00.727050 master-0 kubenswrapper[3178]: E0216 17:24:00.727026 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.727050 master-0 kubenswrapper[3178]: E0216 17:24:00.727040 3178 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.727273 master-0 kubenswrapper[3178]: E0216 17:24:00.727114 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.22709352 +0000 UTC m=+29.039785824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.730745 master-0 kubenswrapper[3178]: I0216 17:24:00.730685 3178 request.go:700] Waited for 1.009579557s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/serviceaccounts/router/token Feb 16 17:24:00.731195 master-0 kubenswrapper[3178]: I0216 17:24:00.730908 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:24:00.752882 master-0 kubenswrapper[3178]: I0216 17:24:00.752843 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:00.765652 master-0 kubenswrapper[3178]: E0216 17:24:00.765608 3178 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:24:00.765652 master-0 kubenswrapper[3178]: E0216 17:24:00.765647 3178 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.765793 master-0 kubenswrapper[3178]: E0216 17:24:00.765660 3178 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.765793 master-0 kubenswrapper[3178]: E0216 17:24:00.765721 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.265703025 +0000 UTC m=+29.078395309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.785789 master-0 kubenswrapper[3178]: I0216 17:24:00.785727 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:00.793616 master-0 kubenswrapper[3178]: I0216 17:24:00.793584 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:00.793854 master-0 kubenswrapper[3178]: E0216 17:24:00.793815 3178 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.793854 master-0 kubenswrapper[3178]: E0216 17:24:00.793855 3178 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 
17:24:00.793952 master-0 kubenswrapper[3178]: E0216 17:24:00.793867 3178 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.793985 master-0 kubenswrapper[3178]: I0216 17:24:00.793940 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:00.793985 master-0 kubenswrapper[3178]: E0216 17:24:00.793972 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:00.793985 master-0 kubenswrapper[3178]: E0216 17:24:00.793985 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.794072 master-0 kubenswrapper[3178]: E0216 17:24:00.793994 3178 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.794072 master-0 kubenswrapper[3178]: I0216 17:24:00.793982 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:00.794072 master-0 kubenswrapper[3178]: E0216 17:24:00.794022 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.794004667 +0000 UTC m=+29.606696951 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.794072 master-0 kubenswrapper[3178]: E0216 17:24:00.794043 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.794037038 +0000 UTC m=+29.606729322 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.794222 master-0 kubenswrapper[3178]: E0216 17:24:00.794084 3178 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:24:00.794222 master-0 kubenswrapper[3178]: E0216 17:24:00.794099 3178 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.794222 master-0 kubenswrapper[3178]: E0216 17:24:00.794109 3178 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.794222 master-0 kubenswrapper[3178]: E0216 17:24:00.794155 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.794142121 +0000 UTC m=+29.606834405 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.821882 master-0 kubenswrapper[3178]: W0216 17:24:00.821830 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod702322ac_7610_4568_9a68_b6acbd1f0c12.slice/crio-7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03 WatchSource:0}: Error finding container 7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03: Status 404 returned error can't find the container with id 7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03 Feb 16 17:24:00.822489 master-0 kubenswrapper[3178]: W0216 17:24:00.822394 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6ad958f_25e4_40cb_89ec_5da9cb6395c7.slice/crio-9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d WatchSource:0}: Error finding container 9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d: Status 404 returned error can't find the container with id 9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d Feb 16 17:24:00.830109 master-0 kubenswrapper[3178]: I0216 17:24:00.830051 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 
17:24:00.847292 master-0 kubenswrapper[3178]: I0216 17:24:00.847181 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjv7\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:00.856344 master-0 kubenswrapper[3178]: W0216 17:24:00.856294 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c48005e_c4df_4332_87fc_ec028f2c6921.slice/crio-86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6 WatchSource:0}: Error finding container 86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6: Status 404 returned error can't find the container with id 86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6 Feb 16 17:24:00.866382 master-0 kubenswrapper[3178]: I0216 17:24:00.866345 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:00.881518 master-0 kubenswrapper[3178]: E0216 17:24:00.881453 3178 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:00.881518 master-0 kubenswrapper[3178]: E0216 17:24:00.881511 3178 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.881694 master-0 kubenswrapper[3178]: E0216 17:24:00.881527 3178 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.881694 master-0 kubenswrapper[3178]: E0216 17:24:00.881603 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.381582622 +0000 UTC m=+29.194274906 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.898873 master-0 kubenswrapper[3178]: W0216 17:24:00.898821 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a939dd0_fc27_4d47_b81b_96e13e4bbca9.slice/crio-441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d WatchSource:0}: Error finding container 441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d: Status 404 returned error can't find the container with id 441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d Feb 16 17:24:00.900796 master-0 kubenswrapper[3178]: I0216 17:24:00.900755 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:00.900998 master-0 kubenswrapper[3178]: E0216 17:24:00.900977 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:00.901042 master-0 kubenswrapper[3178]: E0216 17:24:00.901000 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.901042 master-0 kubenswrapper[3178]: E0216 17:24:00.901011 3178 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.901102 master-0 kubenswrapper[3178]: E0216 17:24:00.901058 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.901042249 +0000 UTC m=+29.713734533 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.901102 master-0 kubenswrapper[3178]: E0216 17:24:00.901071 3178 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:00.901102 master-0 kubenswrapper[3178]: E0216 17:24:00.901086 3178 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.901102 master-0 kubenswrapper[3178]: E0216 17:24:00.901095 3178 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.901296 master-0 kubenswrapper[3178]: E0216 17:24:00.901265 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.401217704 +0000 UTC m=+29.213910018 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.915947 master-0 kubenswrapper[3178]: W0216 17:24:00.915904 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc45ce0e5_c50b_4210_b7bb_82db2b2bc1db.slice/crio-3136aa60c6b6262cc5ee20f5294914b70883fbaf51532aab288a45f884ca3006 WatchSource:0}: Error finding container 3136aa60c6b6262cc5ee20f5294914b70883fbaf51532aab288a45f884ca3006: Status 404 returned error can't find the container with id 3136aa60c6b6262cc5ee20f5294914b70883fbaf51532aab288a45f884ca3006 Feb 16 17:24:00.920945 master-0 kubenswrapper[3178]: E0216 17:24:00.920916 3178 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.920945 master-0 kubenswrapper[3178]: E0216 17:24:00.920941 3178 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:00.921023 master-0 kubenswrapper[3178]: E0216 17:24:00.920952 3178 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:00.921023 master-0 kubenswrapper[3178]: 
Feb 16 17:24:00.946116 master-0 kubenswrapper[3178]: E0216 17:24:00.946082 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Feb 16 17:24:00.946116 master-0 kubenswrapper[3178]: E0216 17:24:00.946108 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Feb 16 17:24:00.946116 master-0 kubenswrapper[3178]: E0216 17:24:00.946118 3178 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:00.946366 master-0 kubenswrapper[3178]: E0216 17:24:00.946172 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.446157037 +0000 UTC m=+29.258849321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:00.958888 master-0 kubenswrapper[3178]: I0216 17:24:00.958834 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:24:00.959008 master-0 kubenswrapper[3178]: I0216 17:24:00.958913 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:00.959008 master-0 kubenswrapper[3178]: E0216 17:24:00.958961 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d"
Feb 16 17:24:00.959183 master-0 kubenswrapper[3178]: I0216 17:24:00.959162 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:24:00.959261 master-0 kubenswrapper[3178]: E0216 17:24:00.959175 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7"
Feb 16 17:24:00.959261 master-0 kubenswrapper[3178]: I0216 17:24:00.959197 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:24:00.959261 master-0 kubenswrapper[3178]: I0216 17:24:00.959232 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:00.959375 master-0 kubenswrapper[3178]: I0216 17:24:00.959285 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:24:00.959405 master-0 kubenswrapper[3178]: E0216 17:24:00.959382 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3"
Feb 16 17:24:00.959442 master-0 kubenswrapper[3178]: I0216 17:24:00.959418 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:24:00.959473 master-0 kubenswrapper[3178]: I0216 17:24:00.959456 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:24:00.959522 master-0 kubenswrapper[3178]: E0216 17:24:00.959494 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5"
Feb 16 17:24:00.959599 master-0 kubenswrapper[3178]: E0216 17:24:00.959570 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605"
Feb 16 17:24:00.959662 master-0 kubenswrapper[3178]: E0216 17:24:00.959639 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447"
pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:24:00.959756 master-0 kubenswrapper[3178]: E0216 17:24:00.959724 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:24:00.959953 master-0 kubenswrapper[3178]: E0216 17:24:00.959915 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:24:00.966745 master-0 kubenswrapper[3178]: I0216 17:24:00.964916 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:24:00.966745 master-0 kubenswrapper[3178]: I0216 17:24:00.966213 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:00.983164 master-0 kubenswrapper[3178]: E0216 17:24:00.983124 3178 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.983300 master-0 kubenswrapper[3178]: E0216 17:24:00.983167 3178 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:00.983300 master-0 kubenswrapper[3178]: E0216 17:24:00.983269 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.483227561 +0000 UTC m=+29.295919845 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.002694 master-0 kubenswrapper[3178]: E0216 17:24:01.002635 3178 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:01.002694 master-0 kubenswrapper[3178]: E0216 17:24:01.002669 3178 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.002694 master-0 kubenswrapper[3178]: E0216 17:24:01.002681 3178 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.002995 master-0 kubenswrapper[3178]: E0216 17:24:01.002742 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.502723589 +0000 UTC m=+29.315415873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.004026 master-0 kubenswrapper[3178]: I0216 17:24:01.003951 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:01.004026 master-0 kubenswrapper[3178]: I0216 17:24:01.004029 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:01.004309 master-0 kubenswrapper[3178]: E0216 17:24:01.004142 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.004309 master-0 kubenswrapper[3178]: E0216 17:24:01.004156 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.004309 master-0 kubenswrapper[3178]: E0216 17:24:01.004164 3178 projected.go:194] 
Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.004309 master-0 kubenswrapper[3178]: E0216 17:24:01.004195 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:01.004309 master-0 kubenswrapper[3178]: E0216 17:24:01.004239 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.004309 master-0 kubenswrapper[3178]: E0216 17:24:01.004269 3178 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.004309 master-0 kubenswrapper[3178]: E0216 17:24:01.004275 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.00426474 +0000 UTC m=+29.816957024 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.004575 master-0 kubenswrapper[3178]: I0216 17:24:01.004318 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:01.004575 master-0 kubenswrapper[3178]: E0216 17:24:01.004346 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.004322632 +0000 UTC m=+29.817015006 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.004575 master-0 kubenswrapper[3178]: E0216 17:24:01.004380 3178 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.004575 master-0 kubenswrapper[3178]: E0216 17:24:01.004391 3178 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.004575 master-0 kubenswrapper[3178]: E0216 17:24:01.004398 3178 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.004575 master-0 kubenswrapper[3178]: E0216 17:24:01.004516 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.004495826 +0000 UTC m=+29.817188160 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.021722 master-0 kubenswrapper[3178]: E0216 17:24:01.021680 3178 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.021722 master-0 kubenswrapper[3178]: E0216 17:24:01.021709 3178 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.021722 master-0 kubenswrapper[3178]: E0216 17:24:01.021721 3178 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.021927 master-0 kubenswrapper[3178]: E0216 17:24:01.021783 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.521766785 +0000 UTC m=+29.334459139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.041931 master-0 kubenswrapper[3178]: E0216 17:24:01.041865 3178 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:01.041931 master-0 kubenswrapper[3178]: E0216 17:24:01.041888 3178 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.041931 master-0 kubenswrapper[3178]: E0216 17:24:01.041897 3178 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.041931 master-0 kubenswrapper[3178]: E0216 17:24:01.041944 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.54193146 +0000 UTC m=+29.354623744 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.063304 master-0 kubenswrapper[3178]: E0216 17:24:01.063270 3178 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.063304 master-0 kubenswrapper[3178]: E0216 17:24:01.063297 3178 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.063304 master-0 kubenswrapper[3178]: E0216 17:24:01.063306 3178 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.063440 master-0 kubenswrapper[3178]: E0216 17:24:01.063338 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.563329378 +0000 UTC m=+29.376021662 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.085433 master-0 kubenswrapper[3178]: E0216 17:24:01.085399 3178 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Feb 16 17:24:01.085583 master-0 kubenswrapper[3178]: E0216 17:24:01.085440 3178 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.085583 master-0 kubenswrapper[3178]: E0216 17:24:01.085459 3178 projected.go:194] Error preparing data for projected volume kube-api-access-7mrkc for pod openshift-authentication/oauth-openshift-64f85b8fc9-n9msn: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.085583 master-0 kubenswrapper[3178]: E0216 17:24:01.085531 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.585511037 +0000 UTC m=+29.398203331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7mrkc" (UniqueName: "kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.109502 master-0 kubenswrapper[3178]: I0216 17:24:01.109435 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:01.109695 master-0 kubenswrapper[3178]: I0216 17:24:01.109642 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:01.109695 master-0 kubenswrapper[3178]: E0216 17:24:01.109651 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:24:01.109763 master-0 kubenswrapper[3178]: E0216 17:24:01.109696 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.109763 master-0 kubenswrapper[3178]: E0216 17:24:01.109717 3178 
projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.109835 master-0 kubenswrapper[3178]: E0216 17:24:01.109787 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.109768291 +0000 UTC m=+29.922460595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.109835 master-0 kubenswrapper[3178]: I0216 17:24:01.109690 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:01.110043 master-0 kubenswrapper[3178]: E0216 17:24:01.109806 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.110043 master-0 kubenswrapper[3178]: I0216 17:24:01.110030 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:01.110149 master-0 kubenswrapper[3178]: E0216 17:24:01.110050 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.110149 master-0 kubenswrapper[3178]: E0216 17:24:01.110069 3178 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.110149 master-0 kubenswrapper[3178]: E0216 17:24:01.109919 3178 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:01.110236 master-0 kubenswrapper[3178]: E0216 17:24:01.110156 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c 
nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.110143121 +0000 UTC m=+29.922835405 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.110236 master-0 kubenswrapper[3178]: E0216 17:24:01.110165 3178 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.110236 master-0 kubenswrapper[3178]: E0216 17:24:01.110179 3178 projected.go:194] Error preparing data for projected volume kube-api-access-st6bv for pod openshift-console/console-599b567ff7-nrcpr: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.110236 master-0 kubenswrapper[3178]: E0216 17:24:01.110211 3178 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.110236 master-0 kubenswrapper[3178]: E0216 17:24:01.110224 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.110207473 +0000 UTC m=+29.922899807 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-st6bv" (UniqueName: "kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.110236 master-0 kubenswrapper[3178]: E0216 17:24:01.110231 3178 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.110441 master-0 kubenswrapper[3178]: E0216 17:24:01.110275 3178 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.110441 master-0 kubenswrapper[3178]: E0216 17:24:01.110329 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.110316606 +0000 UTC m=+29.923008960 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.110441 master-0 kubenswrapper[3178]: I0216 17:24:01.110373 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:01.117896 master-0 kubenswrapper[3178]: I0216 17:24:01.117861 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"5c5bcea03650c0cdda60439583d0a74ffa94b8547b99e2c40adb5f801938dcb3"} Feb 16 17:24:01.119002 master-0 kubenswrapper[3178]: I0216 17:24:01.118965 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" event={"ID":"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a","Type":"ContainerStarted","Data":"650c00271176c30e19eb9cdf1573f9c862bf460c2839e903b286275740a5a883"} Feb 16 17:24:01.119959 master-0 kubenswrapper[3178]: I0216 17:24:01.119936 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d"} Feb 16 17:24:01.121213 master-0 kubenswrapper[3178]: I0216 17:24:01.121191 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"36ecb052054e20edf5b4f7071d4d0da2f770afa6a294fc15e380d4c171f3c6ba"} Feb 16 17:24:01.121283 master-0 kubenswrapper[3178]: I0216 17:24:01.121216 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538"} Feb 16 17:24:01.122686 master-0 kubenswrapper[3178]: I0216 17:24:01.122647 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2ws9r" event={"ID":"9c48005e-c4df-4332-87fc-ec028f2c6921","Type":"ContainerStarted","Data":"86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6"} Feb 16 17:24:01.123756 master-0 kubenswrapper[3178]: I0216 17:24:01.123726 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" event={"ID":"b6ad958f-25e4-40cb-89ec-5da9cb6395c7","Type":"ContainerStarted","Data":"9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d"} Feb 16 17:24:01.124569 master-0 kubenswrapper[3178]: I0216 17:24:01.124541 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd"} Feb 16 17:24:01.125464 master-0 kubenswrapper[3178]: I0216 17:24:01.125442 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xv2wv" event={"ID":"810a2275-fae5-45df-a3b8-92860451d33b","Type":"ContainerStarted","Data":"9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a"} Feb 16 17:24:01.126952 master-0 kubenswrapper[3178]: I0216 17:24:01.126457 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03"} Feb 16 17:24:01.126952 master-0 kubenswrapper[3178]: E0216 17:24:01.126745 3178 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.126952 master-0 kubenswrapper[3178]: E0216 17:24:01.126791 3178 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.126952 master-0 kubenswrapper[3178]: E0216 17:24:01.126818 3178 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.126952 master-0 kubenswrapper[3178]: E0216 17:24:01.126920 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.626892806 +0000 UTC m=+29.439585130 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.128026 master-0 kubenswrapper[3178]: I0216 17:24:01.127989 3178 generic.go:334] "Generic (PLEG): container finished" podID="a94f9b8e-b020-4aab-8373-6c056ec07464" containerID="102a2b3ff0c0802de14c69b4e98a9814b1e46ce4db6fc83e68edccac0436a089" exitCode=0 Feb 16 17:24:01.128086 master-0 kubenswrapper[3178]: I0216 17:24:01.128058 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerDied","Data":"102a2b3ff0c0802de14c69b4e98a9814b1e46ce4db6fc83e68edccac0436a089"} Feb 16 17:24:01.128132 master-0 kubenswrapper[3178]: I0216 17:24:01.128115 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerStarted","Data":"00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710"} Feb 16 17:24:01.129312 master-0 kubenswrapper[3178]: I0216 17:24:01.129270 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" event={"ID":"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db","Type":"ContainerStarted","Data":"3136aa60c6b6262cc5ee20f5294914b70883fbaf51532aab288a45f884ca3006"} Feb 16 17:24:01.130828 master-0 kubenswrapper[3178]: I0216 17:24:01.130790 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-czzz2" event={"ID":"b3fa6ac1-781f-446c-b6b4-18bdb7723c23","Type":"ContainerStarted","Data":"4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af"} Feb 16 17:24:01.132871 master-0 kubenswrapper[3178]: I0216 17:24:01.132824 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615"} Feb 16 17:24:01.133714 master-0 kubenswrapper[3178]: I0216 17:24:01.133673 3178 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r7wj" event={"ID":"43f65f23-4ddd-471a-9cb3-b0945382d83c","Type":"ContainerStarted","Data":"991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5"} Feb 16 17:24:01.143987 master-0 kubenswrapper[3178]: E0216 17:24:01.143949 3178 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.143987 master-0 kubenswrapper[3178]: E0216 17:24:01.143979 3178 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.144081 master-0 kubenswrapper[3178]: E0216 17:24:01.143992 3178 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.144081 
master-0 kubenswrapper[3178]: E0216 17:24:01.144050 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.644034251 +0000 UTC m=+29.456726535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.171816 master-0 kubenswrapper[3178]: E0216 17:24:01.171647 3178 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.171816 master-0 kubenswrapper[3178]: E0216 17:24:01.171702 3178 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.171816 master-0 kubenswrapper[3178]: E0216 17:24:01.171794 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.671767758 +0000 UTC m=+29.484460082 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.197169 master-0 kubenswrapper[3178]: I0216 17:24:01.197074 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:01.217816 master-0 kubenswrapper[3178]: I0216 17:24:01.217732 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:01.218268 master-0 kubenswrapper[3178]: E0216 17:24:01.217958 3178 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:01.218268 master-0 kubenswrapper[3178]: E0216 17:24:01.217995 3178 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.218268 master-0 kubenswrapper[3178]: E0216 17:24:01.218017 3178 projected.go:194] Error preparing data for projected volume 
kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.218268 master-0 kubenswrapper[3178]: E0216 17:24:01.218152 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.218114168 +0000 UTC m=+30.030806552 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.218268 master-0 kubenswrapper[3178]: I0216 17:24:01.218147 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:01.219408 master-0 kubenswrapper[3178]: E0216 17:24:01.218274 3178 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.219408 master-0 kubenswrapper[3178]: E0216 17:24:01.218315 3178 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.219408 master-0 kubenswrapper[3178]: E0216 17:24:01.218335 3178 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.219408 master-0 kubenswrapper[3178]: E0216 17:24:01.218535 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.218501879 +0000 UTC m=+30.031194203 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.221242 master-0 kubenswrapper[3178]: I0216 17:24:01.219654 3178 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:24:01.225628 master-0 kubenswrapper[3178]: E0216 17:24:01.225573 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.225628 master-0 kubenswrapper[3178]: E0216 17:24:01.225616 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.225628 master-0 kubenswrapper[3178]: E0216 17:24:01.225637 3178 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.225919 master-0 kubenswrapper[3178]: E0216 17:24:01.225722 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.72569532 +0000 UTC m=+29.538387644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.237629 master-0 kubenswrapper[3178]: I0216 17:24:01.237580 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:01.247386 master-0 kubenswrapper[3178]: E0216 17:24:01.247313 3178 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:24:01.247386 master-0 kubenswrapper[3178]: E0216 17:24:01.247376 3178 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.247386 master-0 kubenswrapper[3178]: E0216 17:24:01.247395 3178 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.247661 master-0 kubenswrapper[3178]: E0216 17:24:01.247474 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.747452967 +0000 UTC m=+29.560145261 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.250718 master-0 kubenswrapper[3178]: W0216 17:24:01.250658 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9bf4ab_5415_4616_aa36_ea387c699ea9.slice/crio-37867a9b89ca658d12f1765647ed5e15e132bb4023f3490c258e8e8c2d9cc767 WatchSource:0}: Error finding container 37867a9b89ca658d12f1765647ed5e15e132bb4023f3490c258e8e8c2d9cc767: Status 404 returned error can't find the container with id 37867a9b89ca658d12f1765647ed5e15e132bb4023f3490c258e8e8c2d9cc767 Feb 16 17:24:01.257869 master-0 kubenswrapper[3178]: I0216 17:24:01.257815 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:24:01.268935 master-0 kubenswrapper[3178]: E0216 17:24:01.268877 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:01.268935 master-0 kubenswrapper[3178]: E0216 17:24:01.268903 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.268935 master-0 kubenswrapper[3178]: E0216 17:24:01.268914 3178 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.269237 master-0 kubenswrapper[3178]: E0216 17:24:01.268959 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:01.768944548 +0000 UTC m=+29.581636832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.276982 master-0 kubenswrapper[3178]: W0216 17:24:01.276679 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6fe41b0_1a42_4f07_8220_d9aaa50788ad.slice/crio-3df38b68e675184d173d581679d4cdced10e660e88d294708f50f7b9553e8d93 WatchSource:0}: Error finding container 3df38b68e675184d173d581679d4cdced10e660e88d294708f50f7b9553e8d93: Status 404 returned error can't find the container with id 3df38b68e675184d173d581679d4cdced10e660e88d294708f50f7b9553e8d93 Feb 16 17:24:01.283124 master-0 kubenswrapper[3178]: E0216 17:24:01.283048 3178 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.283124 master-0 kubenswrapper[3178]: E0216 17:24:01.283078 3178 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.283124 master-0 kubenswrapper[3178]: E0216 17:24:01.283092 3178 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.283412 master-0 kubenswrapper[3178]: E0216 17:24:01.283147 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.783130335 +0000 UTC m=+29.595822619 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.305314 master-0 kubenswrapper[3178]: E0216 17:24:01.305218 3178 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.305314 master-0 kubenswrapper[3178]: E0216 17:24:01.305275 3178 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.305314 master-0 kubenswrapper[3178]: E0216 17:24:01.305290 3178 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.305605 master-0 kubenswrapper[3178]: E0216 17:24:01.305353 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.805331114 +0000 UTC m=+29.618023398 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.320961 master-0 kubenswrapper[3178]: I0216 17:24:01.320745 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:01.320961 master-0 kubenswrapper[3178]: I0216 17:24:01.320794 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:01.320961 master-0 kubenswrapper[3178]: I0216 17:24:01.320836 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:01.320961 master-0 kubenswrapper[3178]: E0216 17:24:01.320948 3178 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:01.321196 master-0 kubenswrapper[3178]: E0216 17:24:01.320988 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:01.321196 master-0 kubenswrapper[3178]: E0216 17:24:01.321051 3178 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:01.321196 master-0 kubenswrapper[3178]: E0216 17:24:01.321002 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32098746 +0000 UTC m=+31.133679744 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:01.321196 master-0 kubenswrapper[3178]: I0216 17:24:01.321111 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:01.321196 master-0 kubenswrapper[3178]: I0216 17:24:01.321134 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:01.321408 master-0 kubenswrapper[3178]: I0216 17:24:01.321198 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.321408 master-0 kubenswrapper[3178]: I0216 17:24:01.321280 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:01.321408 master-0 kubenswrapper[3178]: E0216 17:24:01.321318 3178 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:01.321408 master-0 kubenswrapper[3178]: E0216 17:24:01.321323 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.321288998 +0000 UTC m=+31.133981292 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:01.321408 master-0 kubenswrapper[3178]: E0216 17:24:01.321359 3178 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:01.321408 master-0 kubenswrapper[3178]: E0216 17:24:01.321375 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32136008 +0000 UTC m=+31.134052434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:01.321408 master-0 kubenswrapper[3178]: E0216 17:24:01.321405 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.321392111 +0000 UTC m=+31.134084505 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:01.321408 master-0 kubenswrapper[3178]: E0216 17:24:01.321414 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:01.321642 master-0 kubenswrapper[3178]: E0216 17:24:01.321441 3178 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:01.321642 master-0 kubenswrapper[3178]: E0216 17:24:01.321462 3178 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:01.321642 master-0 kubenswrapper[3178]: I0216 17:24:01.321463 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:01.321642 master-0 kubenswrapper[3178]: E0216 17:24:01.321488 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:03.321477873 +0000 UTC m=+31.134170157 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:01.321642 master-0 kubenswrapper[3178]: E0216 17:24:01.321511 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.321497413 +0000 UTC m=+31.134189817 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:01.321642 master-0 kubenswrapper[3178]: E0216 17:24:01.321539 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.321526034 +0000 UTC m=+31.134218328 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:01.321642 master-0 kubenswrapper[3178]: I0216 17:24:01.321571 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.321642 master-0 kubenswrapper[3178]: E0216 17:24:01.321597 3178 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:01.321642 master-0 kubenswrapper[3178]: E0216 17:24:01.321633 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.321623657 +0000 UTC m=+31.134316041 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:01.321927 master-0 kubenswrapper[3178]: I0216 17:24:01.321685 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:01.321927 master-0 kubenswrapper[3178]: I0216 17:24:01.321722 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:01.321927 master-0 kubenswrapper[3178]: I0216 17:24:01.321781 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:01.321927 master-0 kubenswrapper[3178]: I0216 17:24:01.321808 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.321927 master-0 kubenswrapper[3178]: I0216 17:24:01.321850 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:01.321927 master-0 kubenswrapper[3178]: I0216 17:24:01.321877 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:01.321927 master-0 kubenswrapper[3178]: I0216 17:24:01.321904 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:01.322147 master-0 kubenswrapper[3178]: I0216 17:24:01.321933 3178 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:01.322147 master-0 kubenswrapper[3178]: I0216 17:24:01.322025 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:01.322147 master-0 kubenswrapper[3178]: I0216 17:24:01.322071 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:01.322147 master-0 kubenswrapper[3178]: I0216 17:24:01.322098 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:01.322147 master-0 kubenswrapper[3178]: I0216 17:24:01.322118 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:01.322147 master-0 kubenswrapper[3178]: I0216 17:24:01.322145 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322166 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322186 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322210 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" 
(UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322235 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322269 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322297 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322318 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322389 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.322381547 +0000 UTC m=+31.135073821 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322406 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322426 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322438 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322503 3178 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322516 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32249888 +0000 UTC m=+31.135191164 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322533 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.322526221 +0000 UTC m=+31.135218505 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322445 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322451 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322585 3178 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322601 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322616 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322646 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.322630554 +0000 UTC m=+31.135322938 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322667 3178 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322687 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.322681285 +0000 UTC m=+31.135373559 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322688 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322712 3178 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322734 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: I0216 17:24:01.322741 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322750 3178 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322517 3178 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322796 3178 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322800 3178 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322595 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322763 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.322756957 +0000 UTC m=+31.135449241 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322844 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322649 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322650 3178 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322905 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322849 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.322837729 +0000 UTC m=+31.135530013 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322950 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.322936482 +0000 UTC m=+31.135628886 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:01.323093 master-0 kubenswrapper[3178]: E0216 17:24:01.322977 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.322964662 +0000 UTC m=+31.135657046 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.322766 3178 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323003 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.322990533 +0000 UTC m=+31.135682927 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.322951 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323035 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323019264 +0000 UTC m=+31.135711688 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323059 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323046845 +0000 UTC m=+31.135739259 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323092 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323069905 +0000 UTC m=+31.135762359 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323112 3178 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323113 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323103276 +0000 UTC m=+31.135795580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323146 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323129087 +0000 UTC m=+31.135821371 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323158 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323153147 +0000 UTC m=+31.135845431 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323172 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323165708 +0000 UTC m=+31.135857992 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323182 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323177228 +0000 UTC m=+31.135869502 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323190 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323196 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323191238 +0000 UTC m=+31.135883522 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323287 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323301 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323285691 +0000 UTC m=+31.135977975 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323329 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323316682 +0000 UTC m=+31.136009126 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323352 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323342072 +0000 UTC m=+31.136034466 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323370 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323371 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323361943 +0000 UTC m=+31.136054347 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323410 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323403734 +0000 UTC m=+31.136096018 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323425 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: I0216 17:24:01.323475 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323545 3178 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323562 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323554918 +0000 UTC m=+31.136247312 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323443 3178 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323590 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323577999 +0000 UTC m=+31.136270343 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323503 3178 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:01.324621 master-0 kubenswrapper[3178]: E0216 17:24:01.323629 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:03.32362194 +0000 UTC m=+31.136314224 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.323643 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32363861 +0000 UTC m=+31.136330894 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.323625 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.323673 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.323695 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.323735 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.323698 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.323785 3178 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.323741 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:03.323727843 +0000 UTC m=+31.136420237 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.323810 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323800535 +0000 UTC m=+31.136492819 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.323826 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.323819295 +0000 UTC m=+31.136511709 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.323851 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.323877 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.323901 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.323921 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod 
\"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.323962 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.323982 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324013 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32400438 +0000 UTC m=+31.136696654 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324016 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324037 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.324031101 +0000 UTC m=+31.136723385 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.323981 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324058 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324064 3178 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324205 3178 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324216 3178 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324236 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.324229606 +0000 UTC m=+31.136921880 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: I0216 17:24:01.324228 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324083 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324269 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.324256047 +0000 UTC m=+31.136948421 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324286 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.324279117 +0000 UTC m=+31.136971401 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324089 3178 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324287 3178 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324306 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.324300398 +0000 UTC m=+31.136992682 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324307 3178 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324320 3178 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.325835 master-0 kubenswrapper[3178]: E0216 17:24:01.324322 3178 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.324369 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.824359399 +0000 UTC m=+29.637051783 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.324404 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32438955 +0000 UTC m=+31.137081914 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.324564 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.324745 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.324915 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.324969 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.325017 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.325056 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.325097 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.325137 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.325177 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.325214 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325382 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325433 3178 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325450 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.325452 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325511 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325456 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.325447308 +0000 UTC m=+31.138139592 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325549 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325559 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.325532251 +0000 UTC m=+31.138224535 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325579 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325583 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.325567171 +0000 UTC m=+31.138259455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325547 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325598 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.325590872 +0000 UTC m=+31.138283156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325609 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:03.325604542 +0000 UTC m=+31.138296826 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325612 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325622 3178 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325665 3178 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325621 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.325615703 +0000 UTC m=+31.138307987 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: I0216 17:24:01.325715 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325756 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.325747976 +0000 UTC m=+31.138440260 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325768 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.325762797 +0000 UTC m=+31.138455081 (durationBeforeRetry 2s). 
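Each failed operation is parked by nestedpendingoperations with a "No retries permitted until" deadline, and the durationBeforeRetry grows on repeated failures: 500ms appears for one volume earlier in this excerpt, 2s for the rest. Here is a minimal sketch of that doubling backoff; the initial and maximum values are assumptions, since only the 500ms and 2s delays are attested in the log.

```go
package main

import (
	"fmt"
	"time"
)

// expBackoff tracks a per-operation retry deadline the way the
// "No retries permitted until ..." records do: each failure doubles
// the wait, up to a cap.
type expBackoff struct {
	delay    time.Duration
	deadline time.Time
}

const (
	initialDelay = 500 * time.Millisecond // matches the 500ms seen above
	maxDelay     = 2 * time.Minute        // assumed cap
)

// fail records a failure and returns the next durationBeforeRetry.
func (b *expBackoff) fail(now time.Time) time.Duration {
	if b.delay == 0 {
		b.delay = initialDelay
	} else {
		b.delay *= 2
		if b.delay > maxDelay {
			b.delay = maxDelay
		}
	}
	b.deadline = now.Add(b.delay)
	return b.delay
}

// allowed reports whether a retry is permitted yet.
func (b *expBackoff) allowed(now time.Time) bool { return !now.Before(b.deadline) }

func main() {
	var b expBackoff
	now := time.Now()
	for i := 0; i < 4; i++ {
		d := b.fail(now)
		fmt.Printf("failure %d: durationBeforeRetry %v\n", i+1, d) // 500ms, 1s, 2s, 4s
		now = now.Add(d) // pretend we waited exactly until the deadline
	}
	fmt.Println("retry allowed:", b.allowed(now))
}
```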
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325779 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.325774577 +0000 UTC m=+31.138466861 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:01.328099 master-0 kubenswrapper[3178]: E0216 17:24:01.325780 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.325805 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.325799848 +0000 UTC m=+31.138492132 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.325801 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.325841 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.325882 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.325913 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod 
\"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.325934 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.325957 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.325998 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326007 3178 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326033 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326025714 +0000 UTC m=+31.138717998 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326051 3178 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.326052 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326075 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326068005 +0000 UTC m=+31.138760289 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.326092 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326102 3178 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326111 3178 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326101 3178 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326111 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326104126 +0000 UTC m=+31.138796410 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326132 3178 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326137 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326131546 +0000 UTC m=+31.138823830 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326149 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326143867 +0000 UTC m=+31.138836151 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326162 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326156697 +0000 UTC m=+31.138848981 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326170 3178 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326170 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326194 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326185858 +0000 UTC m=+31.138878132 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326197 3178 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326209 3178 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326225 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.326177 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326227 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:03.326220029 +0000 UTC m=+31.138912413 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326316 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326302071 +0000 UTC m=+31.138994355 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326327 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326322252 +0000 UTC m=+31.139014536 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: E0216 17:24:01.326344 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.326363 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.329514 master-0 kubenswrapper[3178]: I0216 17:24:01.326385 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326406 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326389633 +0000 UTC m=+31.139081967 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326429 3178 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326454 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326444385 +0000 UTC m=+31.139136659 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.326447 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326496 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.326525 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326529 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326549 3178 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326582 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326575348 +0000 UTC m=+31.139267632 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.326580 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326604 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326588909 +0000 UTC m=+31.139281263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326629 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326615889 +0000 UTC m=+31.139308293 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326650 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.326664 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326694 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326680931 +0000 UTC m=+31.139373295 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326712 3178 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.326724 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326749 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326742293 +0000 UTC m=+31.139434577 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326799 3178 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326802 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326757043 +0000 UTC m=+31.139449437 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.326842 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.326828485 +0000 UTC m=+31.139520859 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.326873 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.326923 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.327013 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.327049 3178 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.327057 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.327068 3178 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.327100 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327083762 +0000 UTC m=+31.139776096 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.327124 3178 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.327137 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.327167 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327153894 +0000 UTC m=+31.139846188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.327199 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.327217 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: E0216 17:24:01.327271 3178 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:01.332619 master-0 kubenswrapper[3178]: I0216 17:24:01.327275 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327290 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327276027 +0000 UTC m=+31.139968351 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327327 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327331 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327317908 +0000 UTC m=+31.140010262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327358 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327346799 +0000 UTC m=+31.140039103 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327403 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327448 3178 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327473 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327496 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327483052 +0000 UTC m=+31.140175346 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327520 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327508383 +0000 UTC m=+31.140200737 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327536 3178 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327552 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327583 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327570265 +0000 UTC m=+31.140262569 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327614 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327637 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327660 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327679 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327666857 +0000 UTC m=+31.140359161 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327710 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327746 3178 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327754 3178 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327776 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32776867 +0000 UTC m=+31.140460954 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327795 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327797 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32778546 +0000 UTC m=+31.140477814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327759 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327845 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327836872 +0000 UTC m=+31.140529156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327877 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327895 3178 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327923 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:03.327916854 +0000 UTC m=+31.140609138 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327921 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327963 3178 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.327967 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327984 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.327978915 +0000 UTC m=+31.140671199 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: E0216 17:24:01.327987 3178 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:01.334922 master-0 kubenswrapper[3178]: I0216 17:24:01.328012 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.328055 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328073 3178 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328080 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.328063838 +0000 UTC m=+31.140756192 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328097 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.328091258 +0000 UTC m=+31.140783542 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328206 3178 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328199 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328262 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.328234942 +0000 UTC m=+31.140927316 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328301 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.328283044 +0000 UTC m=+31.140975328 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328338 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.328378 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328390 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.328376096 +0000 UTC m=+31.141068460 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328448 3178 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328474 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.328467638 +0000 UTC m=+31.141159922 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.328468 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328552 3178 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.328575 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328595 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.328583952 +0000 UTC m=+31.141276226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328645 3178 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328680 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:03.328673054 +0000 UTC m=+31.141365338 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.328762 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328886 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.328947 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.328933191 +0000 UTC m=+31.141625545 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.328981 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.329025 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.329069 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.329101 3178 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.329111 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.329131 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329124316 +0000 UTC m=+31.141816700 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.329158 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.329172 3178 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.329191 3178 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.329210 3178 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.329207 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: E0216 17:24:01.329233 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329219178 +0000 UTC m=+31.141911542 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:01.336620 master-0 kubenswrapper[3178]: I0216 17:24:01.329275 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329301 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32929367 +0000 UTC m=+31.141985954 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329313 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329306911 +0000 UTC m=+31.141999195 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"audit" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: I0216 17:24:01.329330 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329344 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329376 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329368482 +0000 UTC m=+31.142060766 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: I0216 17:24:01.329349 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329380 3178 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329387 3178 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329428 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329406 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329400903 +0000 UTC m=+31.142093187 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329407 3178 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: I0216 17:24:01.329456 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: I0216 17:24:01.329479 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329490 3178 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329495 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329477515 +0000 UTC m=+31.142169889 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329512 3178 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329513 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329506926 +0000 UTC m=+31.142199350 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329543 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329536037 +0000 UTC m=+31.142228321 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: I0216 17:24:01.329557 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: I0216 17:24:01.329581 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329614 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329592318 +0000 UTC m=+31.142284632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329630 3178 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329648 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.32963631 +0000 UTC m=+31.142328674 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329679 3178 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329707 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329695161 +0000 UTC m=+31.142387455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: I0216 17:24:01.329685 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329762 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329774 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329719252 +0000 UTC m=+31.142411556 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: I0216 17:24:01.329881 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.329917 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.329900417 +0000 UTC m=+31.142592761 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: I0216 17:24:01.329953 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.330038 3178 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.330050 3178 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.330058 3178 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.337919 master-0 kubenswrapper[3178]: E0216 17:24:01.330024 3178 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.330083 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.330076091 +0000 UTC m=+31.142768375 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.330036 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.330116 3178 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.330117 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:03.330100822 +0000 UTC m=+31.142793186 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.330224 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.330336 3178 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.330351 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.330342108 +0000 UTC m=+31.143034392 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.330892 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.330925 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.330946 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.330953 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.330947564 +0000 UTC m=+31.143639848 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.330989 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.330977775 +0000 UTC m=+31.143670079 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.330994 3178 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331011 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.331015 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.331009566 +0000 UTC m=+31.143701850 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.331058 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331117 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.331128 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.331121889 +0000 UTC m=+31.143814173 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331151 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331178 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.331244 3178 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.331305 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.331295854 +0000 UTC m=+31.143988148 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331300 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331342 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331372 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.331378 3178 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331399 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331427 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.331437 3178 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: I0216 17:24:01.331455 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:01.340217 master-0 
kubenswrapper[3178]: E0216 17:24:01.331464 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.331455698 +0000 UTC m=+31.144148072 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.331490 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.331480928 +0000 UTC m=+31.144173222 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:01.340217 master-0 kubenswrapper[3178]: E0216 17:24:01.331507 3178 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331531 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.33152458 +0000 UTC m=+31.144216994 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.331572 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331625 3178 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331648 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:03.331641013 +0000 UTC m=+31.144333407 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.331671 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.331700 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.331736 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.331772 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.331797 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.331825 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331702 3178 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.331851 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod 
\"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.331878 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331908 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.331895769 +0000 UTC m=+31.144588193 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331914 3178 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331934 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331944 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.331934531 +0000 UTC m=+31.144626825 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331960 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0 podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.331951781 +0000 UTC m=+31.144644195 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331740 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332001 3178 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332026 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332019933 +0000 UTC m=+31.144712347 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: I0216 17:24:01.332020 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332085 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332057534 +0000 UTC m=+31.144749818 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332087 3178 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332123 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332134 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332122906 +0000 UTC m=+31.144815210 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332166 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332155856 +0000 UTC m=+31.144848140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331777 3178 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332205 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332199408 +0000 UTC m=+31.144891692 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331812 3178 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332216 3178 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.332255 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332234748 +0000 UTC m=+31.144927133 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331836 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:01.342033 master-0 kubenswrapper[3178]: E0216 17:24:01.331991 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332289 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.33227345 +0000 UTC m=+31.144965794 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.332325 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332348 3178 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332373 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332365802 +0000 UTC m=+31.145058196 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332389 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332383412 +0000 UTC m=+31.145075836 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332356 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.332370 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332423 3178 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332404 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332397103 +0000 UTC m=+31.145089517 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332440 3178 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332472 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332464255 +0000 UTC m=+31.145156539 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332489 3178 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332510 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332495625 +0000 UTC m=+31.145187969 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.332484 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332540 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332527216 +0000 UTC m=+31.145219510 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332574 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332563537 +0000 UTC m=+31.145255911 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.332608 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.332679 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.332725 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332842 3178 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.332874 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332887 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.332873645 +0000 UTC m=+31.145565999 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.332923 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332939 3178 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332952 3178 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332960 3178 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.332966 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332977 3178 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.332982 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.332975998 +0000 UTC m=+30.145668282 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.333011 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.333002929 +0000 UTC m=+31.145695213 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.333046 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.333059 3178 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.333069 3178 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: I0216 17:24:01.333072 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:01.343858 master-0 kubenswrapper[3178]: E0216 17:24:01.333090 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333111 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333121 3178 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333128 3178 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333095 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.333079021 +0000 UTC m=+31.145771325 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333195 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.333174983 +0000 UTC m=+31.145867287 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333274 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.333264006 +0000 UTC m=+31.145956290 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.333276 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333302 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.333291097 +0000 UTC m=+30.145983381 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333358 3178 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.333370 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333410 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.333395329 +0000 UTC m=+31.146087703 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.333445 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333455 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333473 3178 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333486 3178 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333537 3178 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.333524533 +0000 UTC m=+31.146216827 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333776 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.333901 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.333873962 +0000 UTC m=+31.146566266 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.334004 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.334045 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.334076 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.334080 3178 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.334103 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: 
\"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.334121 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.334110108 +0000 UTC m=+31.146802392 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.334163 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.334207 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: I0216 17:24:01.334262 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.334168 3178 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.334379 3178 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.334388 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.334376905 +0000 UTC m=+31.147069179 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.334206 3178 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.334417 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.334405506 +0000 UTC m=+31.147097890 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:01.346159 master-0 kubenswrapper[3178]: E0216 17:24:01.334231 3178 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334456 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.334437787 +0000 UTC m=+31.147130151 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334521 3178 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.334341 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334313 3178 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334561 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.33455036 +0000 UTC m=+31.147242754 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334340 3178 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.334587 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334617 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.334601421 +0000 UTC m=+31.147293715 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334643 3178 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.334660 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334671 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.334663883 +0000 UTC m=+31.147356297 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334719 3178 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334755 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.334746065 +0000 UTC m=+31.147438359 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334797 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.334764556 +0000 UTC m=+31.147456850 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.334750 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.334867 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.334903 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334873 3178 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.334905 3178 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335043 3178 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.335243 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.335333 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335359 3178 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.335390 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335413 3178 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: I0216 17:24:01.335423 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335434 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.335419533 +0000 UTC m=+31.148111867 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335456 3178 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335457 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.335449784 +0000 UTC m=+31.148142068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335474 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.335469424 +0000 UTC m=+31.148161708 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335477 3178 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335487 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.335483305 +0000 UTC m=+31.148175589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"service-ca" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335500 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.335494905 +0000 UTC m=+31.148187179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:01.347352 master-0 kubenswrapper[3178]: E0216 17:24:01.335520 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.335515296 +0000 UTC m=+31.148207580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-config" not registered Feb 16 17:24:01.348482 master-0 kubenswrapper[3178]: E0216 17:24:01.335533 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.335527226 +0000 UTC m=+31.148219510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:01.348482 master-0 kubenswrapper[3178]: E0216 17:24:01.335543 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.335539366 +0000 UTC m=+31.148231650 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:01.348482 master-0 kubenswrapper[3178]: E0216 17:24:01.346571 3178 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:01.348482 master-0 kubenswrapper[3178]: E0216 17:24:01.346593 3178 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.348482 master-0 kubenswrapper[3178]: E0216 17:24:01.346602 3178 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.348482 master-0 kubenswrapper[3178]: E0216 17:24:01.346648 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.846636961 +0000 UTC m=+29.659329295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.367467 master-0 kubenswrapper[3178]: E0216 17:24:01.367413 3178 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:01.367467 master-0 kubenswrapper[3178]: E0216 17:24:01.367449 3178 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.367467 master-0 kubenswrapper[3178]: E0216 17:24:01.367463 3178 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.368150 master-0 kubenswrapper[3178]: E0216 17:24:01.367519 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.867503035 +0000 UTC m=+29.680195319 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.385368 master-0 kubenswrapper[3178]: E0216 17:24:01.385320 3178 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:01.385368 master-0 kubenswrapper[3178]: E0216 17:24:01.385352 3178 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.385368 master-0 kubenswrapper[3178]: E0216 17:24:01.385363 3178 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.385676 master-0 kubenswrapper[3178]: E0216 17:24:01.385423 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:01.88540192 +0000 UTC m=+29.698094204 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.396372 master-0 kubenswrapper[3178]: I0216 17:24:01.396290 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-image-registry-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc03f91dbf08df9907c0ebad30c54a7fa92285b19ec4e440ed762b197378a861\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-image-registry-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"image-registry-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b5mwd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-image-registry\"/\"cluster-image-registry-operator-96c8c64b8-zwwnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.436796 master-0 kubenswrapper[3178]: I0216 17:24:01.436736 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:01.436796 master-0 kubenswrapper[3178]: I0216 17:24:01.436785 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:01.437059 master-0 kubenswrapper[3178]: E0216 17:24:01.436953 3178 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.437059 master-0 kubenswrapper[3178]: E0216 17:24:01.436985 3178 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.437059 master-0 kubenswrapper[3178]: E0216 17:24:01.436995 3178 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.437215 master-0 kubenswrapper[3178]: E0216 17:24:01.437067 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.437040221 +0000 UTC m=+30.249732525 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.437215 master-0 kubenswrapper[3178]: E0216 17:24:01.437077 3178 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:01.437215 master-0 kubenswrapper[3178]: E0216 17:24:01.437123 3178 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.437215 master-0 kubenswrapper[3178]: E0216 17:24:01.437141 3178 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.437420 master-0 kubenswrapper[3178]: E0216 17:24:01.437306 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.437290258 +0000 UTC m=+30.249982642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.438153 master-0 kubenswrapper[3178]: I0216 17:24:01.438109 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:01.438307 master-0 kubenswrapper[3178]: E0216 17:24:01.438280 3178 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.438307 master-0 kubenswrapper[3178]: E0216 17:24:01.438301 3178 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.438307 master-0 kubenswrapper[3178]: E0216 17:24:01.438309 3178 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.438417 master-0 kubenswrapper[3178]: E0216 17:24:01.438339 3178 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.438331366 +0000 UTC m=+31.251023650 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.438574 master-0 kubenswrapper[3178]: I0216 17:24:01.438536 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:01.439208 master-0 kubenswrapper[3178]: E0216 17:24:01.438660 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:01.439208 master-0 kubenswrapper[3178]: E0216 17:24:01.438680 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.439208 master-0 kubenswrapper[3178]: E0216 17:24:01.438687 3178 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.439208 master-0 kubenswrapper[3178]: E0216 17:24:01.438802 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.438788728 +0000 UTC m=+31.251481012 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.439935 master-0 kubenswrapper[3178]: I0216 17:24:01.439864 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/service-ca-bundle\\\",\\\"name\\\":\\\"service-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f42cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-755d954778-lf4cb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.440018 master-0 kubenswrapper[3178]: I0216 17:24:01.439984 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:01.440163 master-0 kubenswrapper[3178]: E0216 17:24:01.440132 3178 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:01.440163 master-0 kubenswrapper[3178]: E0216 17:24:01.440158 3178 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.440232 master-0 kubenswrapper[3178]: E0216 17:24:01.440169 3178 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.440232 master-0 kubenswrapper[3178]: E0216 17:24:01.440213 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.440202095 +0000 UTC m=+30.252894399 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.476932 master-0 kubenswrapper[3178]: I0216 17:24:01.476868 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:24:01.476932 master-0 kubenswrapper[3178]: I0216 17:24:01.476853 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9609a4f3-b947-47af-a685-baae26c50fa3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ingress-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3d21c51712e6e0cdd9c776479f6d1ab55bc1085df5bb5f583e69ee192d11fd3\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ingress-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"trusted-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/openshift/serviceaccount\\\",\\\"name\\\":\\\"bound-sa-token\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t24jh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"metrics-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t24jh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-ingress-operator\"/\"ingress-operator-c588d8cb4-wjr7d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.488403 master-0 kubenswrapper[3178]: W0216 17:24:01.488361 3178 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4549ea98_7379_49e1_8452_5efb643137ca.slice/crio-8a66c9c6ab0ffb6c022d21572d9ecd028be9e07d99ed15c25f8c09f001677ac9 WatchSource:0}: Error finding container 8a66c9c6ab0ffb6c022d21572d9ecd028be9e07d99ed15c25f8c09f001677ac9: Status 404 returned error can't find the container with id 8a66c9c6ab0ffb6c022d21572d9ecd028be9e07d99ed15c25f8c09f001677ac9 Feb 16 17:24:01.522227 master-0 kubenswrapper[3178]: I0216 17:24:01.522101 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [oauth-openshift]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2969828f1fcae82b7ef16d3588046ace3cf51b9ea578658c42475386e0ee1fc7\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"oauth-openshift\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/audit\\\",\\\"name\\\":\\\"audit-policies\\\"},{\\\"mountPath\\\":\\\"/var/log/oauth-server\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/var/config/system/secrets/v4-0-config-system-session\\\",\\\"name\\\":\\\"v4-0-config-system-session\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/configmaps/v4-0-config-system-cliconfig\\\",\\\"name\\\":\\\"v4-0-config-system-cliconfig\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/secrets/v4-0-config-system-serving-cert\\\",\\\"name\\\":\\\"v4-0-config-system-serving-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/configmaps/v4-0-config-system-service-ca\\\",\\\"name\\\":\\\"v4-0-config-system-service-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/secrets/v4-0-config-system-router-certs\\\",\\\"name\\\":\\\"v4-0-config-system-router-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/secrets/v4-0-config-system-ocp-branding-template\\\",\\\"name\\\":\\\"v4-0-config-system-ocp-branding-template\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/user/template/secret/v4-0-config-user-template-login\\\",\\\"name\\\":\\\"v4-0-config-user-template-login\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/user/template/secret/v4-0-config-user-template-provider-selection\\\",\\\"name\\\":\\\"v4-0-config-user-template-provider-selection\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/user/template/secret/v4-0-config-user-template-error\\\",\\\"name\\\":\\\"v4-0-config-user-template-error\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/config/system/configmaps/v4-0-config-system-trusted-ca-bundle\\\",\\\"name\\\":\\\"v4-0-config-system-trusted-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7mrkc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication\"/\"oauth-openshift-64f85b8fc9-n9msn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.541836 master-0 kubenswrapper[3178]: I0216 17:24:01.541786 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:01.542173 
master-0 kubenswrapper[3178]: E0216 17:24:01.542110 3178 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:24:01.542173 master-0 kubenswrapper[3178]: E0216 17:24:01.542142 3178 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.542173 master-0 kubenswrapper[3178]: E0216 17:24:01.542158 3178 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.543299 master-0 kubenswrapper[3178]: I0216 17:24:01.542185 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:01.543299 master-0 kubenswrapper[3178]: E0216 17:24:01.542223 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.542203194 +0000 UTC m=+31.354895558 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.543299 master-0 kubenswrapper[3178]: E0216 17:24:01.542338 3178 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:01.543299 master-0 kubenswrapper[3178]: E0216 17:24:01.542369 3178 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.543299 master-0 kubenswrapper[3178]: E0216 17:24:01.542386 3178 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.543299 master-0 kubenswrapper[3178]: E0216 17:24:01.542526 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.542508562 +0000 UTC m=+31.355200846 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.543510 master-0 kubenswrapper[3178]: I0216 17:24:01.543454 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:01.543540 master-0 kubenswrapper[3178]: I0216 17:24:01.543515 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:01.543567 master-0 kubenswrapper[3178]: I0216 17:24:01.543549 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:01.543699 master-0 kubenswrapper[3178]: E0216 17:24:01.543661 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:01.543735 master-0 kubenswrapper[3178]: I0216 17:24:01.543697 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:01.543735 master-0 kubenswrapper[3178]: E0216 17:24:01.543702 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.543797 master-0 kubenswrapper[3178]: E0216 17:24:01.543735 3178 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.543797 master-0 kubenswrapper[3178]: E0216 17:24:01.543735 3178 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:01.543797 master-0 kubenswrapper[3178]: E0216 17:24:01.543751 3178 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.543797 master-0 kubenswrapper[3178]: E0216 
17:24:01.543760 3178 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.543797 master-0 kubenswrapper[3178]: E0216 17:24:01.543771 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.543764565 +0000 UTC m=+30.356456849 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.543797 master-0 kubenswrapper[3178]: E0216 17:24:01.543680 3178 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.543797 master-0 kubenswrapper[3178]: E0216 17:24:01.543788 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.543781016 +0000 UTC m=+30.356473300 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.543797 master-0 kubenswrapper[3178]: E0216 17:24:01.543791 3178 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.544105 master-0 kubenswrapper[3178]: E0216 17:24:01.543814 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.543809236 +0000 UTC m=+30.356501520 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.544105 master-0 kubenswrapper[3178]: E0216 17:24:01.543787 3178 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.544105 master-0 kubenswrapper[3178]: E0216 17:24:01.543861 3178 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.544105 master-0 kubenswrapper[3178]: E0216 17:24:01.543875 3178 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.544105 master-0 kubenswrapper[3178]: I0216 17:24:01.543881 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.544105 master-0 kubenswrapper[3178]: E0216 17:24:01.543918 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.543903879 +0000 UTC m=+30.356596233 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.544105 master-0 kubenswrapper[3178]: E0216 17:24:01.543989 3178 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:01.544105 master-0 kubenswrapper[3178]: E0216 17:24:01.544008 3178 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.544105 master-0 kubenswrapper[3178]: E0216 17:24:01.544022 3178 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.544476 master-0 kubenswrapper[3178]: E0216 17:24:01.544441 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:02.544428643 +0000 UTC m=+30.357120927 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.557476 master-0 kubenswrapper[3178]: I0216 17:24:01.557422 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f18a5c41-3a62-4c14-88f5-cc9c09e81d38\\\"},\\\"status\\\":{\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"collect-profiles-29521035-zdh6r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.617665 master-0 kubenswrapper[3178]: I0216 17:24:01.617556 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"404c402a-705f-4352-b9df-b89562070d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy machine-api-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"machine-api-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkqml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa28b66298c8b34f2c7b357b012e663e3954cfc7c85aa1e44651a79aeaf8b2a9\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-api-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/machine-api-operator-config/images\\\",\\\"name\\\":\\\"images\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vkqml\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-machine-api\"/\"machine-api-operator-bd7dd5c46-92rqx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.658846 master-0 kubenswrapper[3178]: I0216 17:24:01.658760 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e73ee493-de15-44c2-bd51-e12fcbb27a15\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [packageserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc51710a07f6a46a72c7802009f13f26aa351caaa4adaebc0d4983c3601e8a2c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"packageserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmpfs\\\"},{\\\"mountPath\\\":\\\"/apiserver.local.config/certificates\\\",\\\"name\\\":\\\"apiservice-cert\\\"},{\\\"mountPath\\\":\\\"/tmp/k8s-webhook-server/serving-certs\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57xvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-operator-lifecycle-manager\"/\"packageserver-6d5d8c8c95-kzfjw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.659877 master-0 kubenswrapper[3178]: I0216 17:24:01.659819 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:01.660039 master-0 kubenswrapper[3178]: E0216 17:24:01.660001 3178 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.660039 master-0 kubenswrapper[3178]: E0216 17:24:01.660024 3178 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.660039 master-0 kubenswrapper[3178]: E0216 17:24:01.660035 3178 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.660401 master-0 kubenswrapper[3178]: E0216 17:24:01.660079 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.660067063 +0000 UTC m=+31.472759337 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.660401 master-0 kubenswrapper[3178]: I0216 17:24:01.660322 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:01.660401 master-0 kubenswrapper[3178]: I0216 17:24:01.660350 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: I0216 17:24:01.660437 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660455 3178 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660475 3178 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660489 3178 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660491 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660503 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660512 3178 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660512 3178 
projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660535 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660539 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.660523886 +0000 UTC m=+31.473216190 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660547 3178 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660562 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.660552566 +0000 UTC m=+31.473244870 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.660617 master-0 kubenswrapper[3178]: E0216 17:24:01.660583 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.660570727 +0000 UTC m=+31.473263031 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.662227 master-0 kubenswrapper[3178]: I0216 17:24:01.662178 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: I0216 17:24:01.662237 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: I0216 17:24:01.662302 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: E0216 17:24:01.662338 3178 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: E0216 17:24:01.662351 3178 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: E0216 17:24:01.662359 3178 projected.go:194] Error preparing data for projected volume kube-api-access-7mrkc for pod openshift-authentication/oauth-openshift-64f85b8fc9-n9msn: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: E0216 17:24:01.662383 3178 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: I0216 17:24:01.662336 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: E0216 17:24:01.662392 3178 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.662426 master-0 
kubenswrapper[3178]: E0216 17:24:01.662419 3178 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: E0216 17:24:01.662425 3178 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: E0216 17:24:01.662403 3178 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.662426 master-0 kubenswrapper[3178]: E0216 17:24:01.662449 3178 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.663219 master-0 kubenswrapper[3178]: E0216 17:24:01.662385 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.662377555 +0000 UTC m=+30.475069829 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7mrkc" (UniqueName: "kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.663219 master-0 kubenswrapper[3178]: E0216 17:24:01.662567 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.66255584 +0000 UTC m=+30.475248134 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.663219 master-0 kubenswrapper[3178]: E0216 17:24:01.662579 3178 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.663219 master-0 kubenswrapper[3178]: E0216 17:24:01.662587 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.66257982 +0000 UTC m=+30.475272124 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.663219 master-0 kubenswrapper[3178]: E0216 17:24:01.662590 3178 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.663219 master-0 kubenswrapper[3178]: E0216 17:24:01.662602 3178 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.663219 master-0 kubenswrapper[3178]: E0216 17:24:01.662626 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.662619891 +0000 UTC m=+30.475312175 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.697884 master-0 kubenswrapper[3178]: I0216 17:24:01.697685 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-server-2ws9r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c48005e-c4df-4332-87fc-ec028f2c6921\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-server]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9250bc5d881852654c420b833aa018257e927522e9d8e1b74307dd7b4b0bfc42\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"machine-config-server\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/mcs\\\",\\\"name\\\":\\\"certs\\\"},{\\\"mountPath\\\":\\\"/etc/mcs/bootstrap-token\\\",\\\"name\\\":\\\"node-bootstrap-token\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gvw4s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-server-2ws9r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.758926 master-0 kubenswrapper[3178]: I0216 17:24:01.758770 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d1636c0-f34d-444c-822d-77f1d203ddc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"prometheus-operator-tls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-policy\\\",\\\"name\\\":\\\"prometheus-operator-kube-rbac-proxy-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbtld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19c3c8392b72ccf9a518d1d60fab0fd1e58a05b544caa79eb11bb68f00981d9d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prometheus-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbtld\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"prometheus-operator-7485d645b8-zxxwd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.765380 master-0 kubenswrapper[3178]: I0216 17:24:01.765300 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:01.765559 master-0 kubenswrapper[3178]: E0216 17:24:01.765468 3178 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.765559 master-0 kubenswrapper[3178]: E0216 17:24:01.765507 3178 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.765559 master-0 kubenswrapper[3178]: I0216 17:24:01.765542 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:01.765680 master-0 kubenswrapper[3178]: E0216 17:24:01.765568 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.765552504 +0000 UTC m=+30.578244778 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.765680 master-0 kubenswrapper[3178]: I0216 17:24:01.765654 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:01.765785 master-0 kubenswrapper[3178]: E0216 17:24:01.765706 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.765785 master-0 kubenswrapper[3178]: E0216 17:24:01.765774 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.765845 master-0 kubenswrapper[3178]: E0216 17:24:01.765795 3178 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.765875 master-0 kubenswrapper[3178]: E0216 17:24:01.765864 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.765840822 +0000 UTC m=+30.578533146 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.765949 master-0 kubenswrapper[3178]: E0216 17:24:01.765908 3178 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:24:01.765983 master-0 kubenswrapper[3178]: E0216 17:24:01.765950 3178 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.765983 master-0 kubenswrapper[3178]: E0216 17:24:01.765969 3178 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.766159 master-0 kubenswrapper[3178]: E0216 17:24:01.766129 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.766109589 +0000 UTC m=+30.578801913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.767634 master-0 kubenswrapper[3178]: I0216 17:24:01.767594 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:01.767744 master-0 kubenswrapper[3178]: E0216 17:24:01.767710 3178 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.767781 master-0 kubenswrapper[3178]: E0216 17:24:01.767743 3178 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.767937 master-0 kubenswrapper[3178]: E0216 17:24:01.767882 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.767864066 +0000 UTC m=+31.580556380 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.803375 master-0 kubenswrapper[3178]: I0216 17:24:01.803287 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/node-exporter-8256c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a94f9b8e-b020-4aab-8373-6c056ec07464\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-exporter kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-exporter kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"node-exporter-tls\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-policy\\\",\\\"name\\\":\\\"node-exporter-kube-rbac-proxy-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nfk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a64a70eb2fef4095ba241021e37c52034c067c57121d6c588f8c7fd3dc24b55f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"node-exporter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/sys\\\",\\\"name\\\":\\\"sys\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/root\\\",\\\"name\\\":\\\"root\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/node_exporter/textfile\\\",\\\"name\\\":\\\"node-exporter-textfile\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nfk2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-monitoring\"/\"node-exporter-8256c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.842646 master-0 kubenswrapper[3178]: I0216 17:24:01.842552 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4549ea98-7379-49e1-8452-5efb643137ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zt8mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-6fcf4c966-6bmf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.869225 master-0 kubenswrapper[3178]: I0216 17:24:01.869057 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:01.869225 master-0 kubenswrapper[3178]: I0216 17:24:01.869174 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869367 3178 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869425 3178 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869438 3178 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869541 3178 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869568 3178 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: I0216 17:24:01.869537 3178 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869577 3178 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869778 3178 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869586 3178 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869814 3178 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.869593 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.869560725 +0000 UTC m=+30.682253049 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: I0216 17:24:01.870136 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.870161 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.87013906 +0000 UTC m=+30.682831375 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.870280 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.870215893 +0000 UTC m=+30.682908217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.870338 3178 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.870367 3178 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: E0216 17:24:01.870386 3178 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.870337 master-0 kubenswrapper[3178]: I0216 17:24:01.870433 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:01.872948 master-0 kubenswrapper[3178]: E0216 17:24:01.870446 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.870421978 +0000 UTC m=+30.683114312 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.872948 master-0 kubenswrapper[3178]: E0216 17:24:01.870540 3178 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:01.872948 master-0 kubenswrapper[3178]: E0216 17:24:01.870560 3178 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.872948 master-0 kubenswrapper[3178]: E0216 17:24:01.870574 3178 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.872948 master-0 kubenswrapper[3178]: E0216 17:24:01.870632 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.870616743 +0000 UTC m=+30.683309067 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.873629 master-0 kubenswrapper[3178]: I0216 17:24:01.873299 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:01.873629 master-0 kubenswrapper[3178]: I0216 17:24:01.873437 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:01.873873 master-0 kubenswrapper[3178]: I0216 17:24:01.873686 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:01.873873 master-0 kubenswrapper[3178]: E0216 17:24:01.873751 3178 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object 
"openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:01.873873 master-0 kubenswrapper[3178]: E0216 17:24:01.873820 3178 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.873873 master-0 kubenswrapper[3178]: E0216 17:24:01.873847 3178 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.873873 master-0 kubenswrapper[3178]: E0216 17:24:01.873858 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:01.874397 master-0 kubenswrapper[3178]: E0216 17:24:01.873890 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.874397 master-0 kubenswrapper[3178]: E0216 17:24:01.873910 3178 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.874397 master-0 kubenswrapper[3178]: E0216 17:24:01.873945 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.873912671 +0000 UTC m=+31.686604995 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.874397 master-0 kubenswrapper[3178]: E0216 17:24:01.873966 3178 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:24:01.874397 master-0 kubenswrapper[3178]: E0216 17:24:01.873983 3178 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.874397 master-0 kubenswrapper[3178]: E0216 17:24:01.873992 3178 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.874397 master-0 kubenswrapper[3178]: E0216 17:24:01.874026 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.874012473 +0000 UTC m=+31.686704757 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.874397 master-0 kubenswrapper[3178]: E0216 17:24:01.874204 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.874184508 +0000 UTC m=+31.686876802 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.875608 master-0 kubenswrapper[3178]: I0216 17:24:01.875483 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:01.875860 master-0 kubenswrapper[3178]: E0216 17:24:01.875641 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:01.875860 master-0 kubenswrapper[3178]: E0216 17:24:01.875675 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.875860 master-0 kubenswrapper[3178]: E0216 17:24:01.875694 3178 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.875860 master-0 kubenswrapper[3178]: E0216 17:24:01.875768 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.875748809 +0000 UTC m=+30.688441153 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.877509 master-0 kubenswrapper[3178]: I0216 17:24:01.877393 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-version-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-release@sha256:6177c447b98c36a42fd45fa2ba413da73d14d0a7ad3aecfa977554f5ae9583cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-version-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ssl/certs\\\",\\\"name\\\":\\\"etc-ssl-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cvo/updatepayloads\\\",\\\"name\\\":\\\"etc-cvo-updatepayloads\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/service-ca\\\",\\\"name\\\":\\\"service-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-cluster-version\"/\"cluster-version-operator-649c4f5445-vt6wb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.920138 master-0 kubenswrapper[3178]: I0216 17:24:01.920020 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a275679-b7b6-4c28-b389-94cd2b014d6c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-storage-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-storage-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a90d19460fbc705172df7759a3da394930623c6b6974620b79ffa07bab53c51f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-storage-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"cluster-storage-operator-serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pmbll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-storage-operator\"/\"cluster-storage-operator-75b869db96-twmsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.958582 master-0 kubenswrapper[3178]: I0216 17:24:01.958330 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"544c6815-81d7-422a-9e4a-5fcbfabe8da8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator-admission-webhook]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [prometheus-operator-admission-webhook]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:99ad83497ea12819957ccba33c807c6e4c5297621db568e5635202cb9cc69f8f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"prometheus-operator-admission-webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"tls-certificates\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"prometheus-operator-admission-webhook-695b766898-h94zg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:01.958582 master-0 kubenswrapper[3178]: I0216 17:24:01.958558 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958610 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958643 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958691 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958689 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958829 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958844 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.958841 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958868 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958918 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958882 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958877 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958946 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958942 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959003 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959010 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959056 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958925 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959087 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959028 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958946 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.958962 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959310 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.959347 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959393 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959431 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959434 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959469 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959403 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959500 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959375 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959443 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959529 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959453 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959454 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959543 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959585 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959483 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959606 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959498 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959524 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959667 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.959671 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959532 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959712 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959539 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959553 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959472 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959470 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959568 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959377 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959653 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.959810 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959511 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959456 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959679 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959693 3178 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959721 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959541 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959558 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959753 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: I0216 17:24:01.959728 3178 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.959976 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.960132 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.960228 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.960426 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.960610 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:24:01.960897 master-0 kubenswrapper[3178]: E0216 17:24:01.960818 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.960925 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.961001 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.961092 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.961162 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.961362 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.961580 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.961666 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.962064 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.962131 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.962230 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.962431 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.962588 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.962709 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.962896 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.963021 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.963205 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.963356 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.963501 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.963657 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.963796 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.963999 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.964069 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.964166 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.964305 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.964389 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.964530 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.964663 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.964800 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.964938 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.965123 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.965230 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:24:01.970549 master-0 kubenswrapper[3178]: E0216 17:24:01.965400 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.965528 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.965725 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.965856 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.965988 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.966130 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.966209 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.966358 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.966465 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.966654 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.966829 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.966954 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.967145 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.967275 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.967359 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.967525 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:24:01.974107 master-0 kubenswrapper[3178]: E0216 17:24:01.967735 3178 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:24:01.976684 master-0 kubenswrapper[3178]: I0216 17:24:01.976613 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:01.976891 master-0 kubenswrapper[3178]: E0216 17:24:01.976840 3178 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:01.976891 master-0 kubenswrapper[3178]: E0216 17:24:01.976876 3178 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.976891 master-0 kubenswrapper[3178]: E0216 17:24:01.976889 3178 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.977157 master-0 kubenswrapper[3178]: E0216 17:24:01.976954 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:03.976932756 +0000 UTC m=+31.789625110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.978669 master-0 kubenswrapper[3178]: I0216 17:24:01.978613 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:01.978958 master-0 kubenswrapper[3178]: E0216 17:24:01.978787 3178 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:01.978958 master-0 kubenswrapper[3178]: E0216 17:24:01.978819 3178 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:01.978958 master-0 kubenswrapper[3178]: E0216 17:24:01.978832 3178 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:01.979231 master-0 kubenswrapper[3178]: E0216 17:24:01.978991 3178 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:02.97896236 +0000 UTC m=+30.791654654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:02.025229 master-0 kubenswrapper[3178]: I0216 17:24:02.025093 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06067627-6ccf-4cc8-bd20-dabdd776bb46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [telemeter-client reload kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [telemeter-client reload kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"telemeter-client-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-policy\\\",\\\"name\\\":\\\"secret-telemeter-client-kube-rbac-proxy-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/tls/client\\\",\\\"name\\\":\\\"metrics-client-ca\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq4dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52ba78768a24efe94f7f95fe5bdd3c6156919979d5882682e06ae4a8a8d3fb4a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"reload\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/serving-certs-ca-bundle\\\",\\\"name\\\":\\\"serving-certs-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq4dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9899b0f08b1202d149e16f09616ee7b8f37e3cda642386d93a6d3f63d72a316b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"telemeter-client\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/serving-certs-ca-bundle\\\",\\\"name\\\":\\\"serving-certs-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/etc/telemeter\\\",\\\"name\\\":\\\"secret-telemeter-client\\\"},{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"federate-client-tls\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/\\\",\\\"name\\\":\\\"telemeter-trusted-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq4dn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-monitoring\"/\"telemeter-client-6bbd87b65b-mt2mz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:02.060564 master-0 kubenswrapper[3178]: I0216 17:24:02.060419 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e623376-9e14-4341-9dcf-7a7c218b6f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-cd5474998-829l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:02.083089 master-0 kubenswrapper[3178]: I0216 17:24:02.082983 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:02.083436 master-0 kubenswrapper[3178]: E0216 17:24:02.083149 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:02.083436 master-0 kubenswrapper[3178]: E0216 17:24:02.083185 3178 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:02.083436 master-0 kubenswrapper[3178]: E0216 17:24:02.083198 3178 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:02.083436 master-0 kubenswrapper[3178]: I0216 17:24:02.083296 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:02.083436 master-0 kubenswrapper[3178]: E0216 17:24:02.083347 3178 projected.go:288] Couldn't get configMap 
openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:02.083436 master-0 kubenswrapper[3178]: E0216 17:24:02.083361 3178 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:02.083436 master-0 kubenswrapper[3178]: I0216 17:24:02.083356 3178 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:02.083436 master-0 kubenswrapper[3178]: E0216 17:24:02.083369 3178 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:02.083436 master-0 kubenswrapper[3178]: E0216 17:24:02.083392 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.083374932 +0000 UTC m=+31.896067216 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:02.084305 master-0 kubenswrapper[3178]: E0216 17:24:02.083481 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:02.084305 master-0 kubenswrapper[3178]: E0216 17:24:02.083498 3178 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:02.084305 master-0 kubenswrapper[3178]: E0216 17:24:02.083510 3178 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:02.084305 master-0 kubenswrapper[3178]: E0216 17:24:02.083549 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.083534017 +0000 UTC m=+31.896226311 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:02.084305 master-0 kubenswrapper[3178]: E0216 17:24:02.083567 3178 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.083559897 +0000 UTC m=+31.896252191 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:02.101945 master-0 kubenswrapper[3178]: I0216 17:24:02.101796 3178 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"442600dc-09b2-4fee-9f89-777296b2ee40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-78ff47c7c5-txr5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:02.132367 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 17:24:02.163039 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 17:24:02.163313 master-0 systemd[1]: Stopped Kubernetes Kubelet. Feb 16 17:24:02.164945 master-0 systemd[1]: kubelet.service: Consumed 3.725s CPU time. Feb 16 17:24:02.179691 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 17:24:02.327058 master-0 kubenswrapper[4546]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:24:02.327058 master-0 kubenswrapper[4546]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 17:24:02.327444 master-0 kubenswrapper[4546]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:24:02.327444 master-0 kubenswrapper[4546]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:24:02.327444 master-0 kubenswrapper[4546]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 17:24:02.327444 master-0 kubenswrapper[4546]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
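The six "Flag ... has been deprecated" warnings above reappear on every kubelet start: the unit still passes --container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, --register-with-taints, --pod-infra-container-image and --system-reserved on the command line, while upstream expects most of them in the file named by --config (here /etc/kubernetes/kubelet.conf, per the FLAG dump below). A minimal KubeletConfiguration sketch carrying the same values, using the upstream kubelet.config.k8s.io/v1beta1 field names; this is an illustration, not the file actually deployed on this node:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: /var/run/crio/crio.sock             # replaces --container-runtime-endpoint
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec  # replaces --volume-plugin-dir
    registerWithTaints:                                           # replaces --register-with-taints
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    systemReserved:                                               # replaces --system-reserved
      cpu: 500m
      ephemeral-storage: 1Gi
      memory: 1Gi

--minimum-container-ttl-duration has no config-file field (the warning points to the eviction settings as its replacement), and --pod-infra-container-image is being dropped outright: as the server.go:211 line below notes, the sandbox image must also be configured in the remote runtime so the image garbage collector can see it.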
Feb 16 17:24:02.327444 master-0 kubenswrapper[4546]: I0216 17:24:02.327236 4546 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 17:24:02.332014 master-0 kubenswrapper[4546]: W0216 17:24:02.331963 4546 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:24:02.332014 master-0 kubenswrapper[4546]: W0216 17:24:02.331990 4546 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:24:02.332014 master-0 kubenswrapper[4546]: W0216 17:24:02.332005 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:24:02.332014 master-0 kubenswrapper[4546]: W0216 17:24:02.332014 4546 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332022 4546 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332030 4546 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332038 4546 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332044 4546 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332051 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332068 4546 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332083 4546 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332095 4546 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332105 4546 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332114 4546 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332123 4546 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332130 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332137 4546 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332144 4546 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332151 4546 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332160 4546 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
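The projected.go errors logged just before this restart (pid 3178, 17:24:02.083) all concern auto-generated kube-api-access-* volumes: each is a projected volume bundling the pod's service-account token, the cluster CA bundle from the kube-root-ca.crt ConfigMap, the OpenShift service CA from openshift-service-ca.crt, and the namespace via the downward API. The mounts fail only because the kubelet has not yet re-registered those ConfigMaps in its internal object store; nestedpendingoperations then schedules a retry with backoff ("No retries permitted until ... durationBeforeRetry 2s"). A sketch of such a volume as it would appear in the pod spec (the shape is the standard upstream one plus the OpenShift ConfigMap; typical defaults are filled in as assumptions):

    volumes:
    - name: kube-api-access-6bbcf            # name from the log
      projected:
        defaultMode: 420                     # assumed default (0644)
        sources:
        - serviceAccountToken:
            expirationSeconds: 3607          # assumed default
            path: token
        - configMap:
            name: kube-root-ca.crt           # first "not registered" object
            items:
            - key: ca.crt
              path: ca.crt
        - configMap:
            name: openshift-service-ca.crt   # second "not registered" object
            items:
            - key: service-ca.crt
              path: service-ca.crt
        - downwardAPI:
            items:
            - fieldRef:
                fieldPath: metadata.namespace
              path: namespace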
Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332169 4546 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332176 4546 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:24:02.332317 master-0 kubenswrapper[4546]: W0216 17:24:02.332183 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332189 4546 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332196 4546 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332203 4546 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332209 4546 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332216 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332223 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332230 4546 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332236 4546 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332278 4546 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332288 4546 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332295 4546 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332301 4546 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332309 4546 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332315 4546 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332324 4546 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332331 4546 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332338 4546 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332344 4546 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332355 4546 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
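The two status_manager.go:875 failures in this section (for kube-storage-version-migrator-operator-cd5474998-829l6 and kube-controller-manager-operator-78ff47c7c5-txr5k) share one root cause: before accepting the kubelet's status patch, the API server must call the pod.network-node-identity.openshift.io admission webhook, whose node-local endpoint https://127.0.0.1:9743 is not yet serving during this boot, so every Post is refused. Schematically, such a registration looks like the sketch below; only the webhook name, URL and timeout are taken from the log, while the kind and rules are assumptions and the operator's real manifest will differ:

    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration      # kind assumed; a mutating registration would look the same
    metadata:
      name: network-node-identity.openshift.io
    webhooks:
    - name: pod.network-node-identity.openshift.io
      clientConfig:
        url: https://127.0.0.1:9743/pod       # node-local endpoint, from the failed Post
      timeoutSeconds: 10                      # matches ?timeout=10s in the log
      rules:                                  # assumed scope, inferred from the failing status patches
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["UPDATE"]
        resources: ["pods", "pods/status"]
      failurePolicy: Fail                     # assumed; consistent with the patch being rejected
      sideEffects: None
      admissionReviewVersions: ["v1"]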
Feb 16 17:24:02.333129 master-0 kubenswrapper[4546]: W0216 17:24:02.332366 4546 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332373 4546 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332382 4546 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332389 4546 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332396 4546 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332403 4546 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332410 4546 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332417 4546 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332423 4546 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332430 4546 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332439 4546 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332446 4546 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332453 4546 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332459 4546 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332468 4546 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
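The feature_gate.go:330 warnings above (and continuing below) are expected noise on OpenShift: the rendered kubelet configuration carries the cluster-wide feature-gate set, which mixes upstream Kubernetes gates with OpenShift-only ones (InsightsConfig, GatewayAPI, PinnedImages, and so on). The kubelet recognizes only the upstream names, logs one warning per unknown gate, and ignores it. The feature_gate.go:351/:353 variants cover gates the kubelet does know but which are deprecated (KMSv1) or already GA (CloudDualStackNodeIPs, ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders); setting those is accepted today but will stop being accepted in a future release. In the config file this is a plain featureGates map; a hypothetical fragment reproducing a few of the gates seen here (values illustrative):

    featureGates:
      CloudDualStackNodeIPs: true       # upstream, GA         -> feature_gate.go:353 warning
      KMSv1: true                       # upstream, deprecated -> feature_gate.go:351 warning
      GatewayAPI: true                  # OpenShift-only       -> "unrecognized feature gate"
      InsightsConfig: true              # OpenShift-only       -> "unrecognized feature gate"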
Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332477 4546 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332485 4546 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332493 4546 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332500 4546 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332508 4546 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:24:02.333926 master-0 kubenswrapper[4546]: W0216 17:24:02.332515 4546 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: W0216 17:24:02.332522 4546 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: W0216 17:24:02.332529 4546 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: W0216 17:24:02.332538 4546 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: W0216 17:24:02.332546 4546 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: W0216 17:24:02.332554 4546 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: W0216 17:24:02.332561 4546 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: W0216 17:24:02.332569 4546 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: W0216 17:24:02.332576 4546 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: W0216 17:24:02.332583 4546 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332739 4546 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332755 4546 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332769 4546 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332776 4546 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332784 4546 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332791 4546 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332798 4546 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332807 4546 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332813 4546 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332819 4546 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332826 4546 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332832 4546 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 17:24:02.335176 master-0 kubenswrapper[4546]: I0216 17:24:02.332837 4546 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332843 4546 flags.go:64] FLAG: --cgroup-root="" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332848 4546 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332853 4546 flags.go:64] FLAG: --client-ca-file="" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332858 4546 flags.go:64] FLAG: --cloud-config="" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332863 4546 flags.go:64] FLAG: --cloud-provider="" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332868 4546 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332875 4546 flags.go:64] FLAG: --cluster-domain="" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332880 4546 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332886 4546 flags.go:64] FLAG: --config-dir="" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332891 4546 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332897 4546 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332904 4546 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332910 4546 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332916 4546 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332922 4546 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332928 4546 flags.go:64] FLAG: --contention-profiling="false" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332933 4546 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332939 4546 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332944 4546 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332950 4546 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332957 4546 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332963 4546 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332968 4546 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332974 4546 flags.go:64] FLAG: 
--enable-load-reader="false" Feb 16 17:24:02.335828 master-0 kubenswrapper[4546]: I0216 17:24:02.332979 4546 flags.go:64] FLAG: --enable-server="true" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.332985 4546 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.332992 4546 flags.go:64] FLAG: --event-burst="100" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.332998 4546 flags.go:64] FLAG: --event-qps="50" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333003 4546 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333008 4546 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333014 4546 flags.go:64] FLAG: --eviction-hard="" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333021 4546 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333026 4546 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333032 4546 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333038 4546 flags.go:64] FLAG: --eviction-soft="" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333043 4546 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333049 4546 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333054 4546 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333059 4546 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333064 4546 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333069 4546 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333074 4546 flags.go:64] FLAG: --feature-gates="" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333082 4546 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333087 4546 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333093 4546 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333099 4546 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333160 4546 flags.go:64] FLAG: --healthz-port="10248" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333168 4546 flags.go:64] FLAG: --help="false" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333175 4546 flags.go:64] FLAG: --hostname-override="" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333180 4546 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 17:24:02.336602 master-0 kubenswrapper[4546]: I0216 17:24:02.333186 4546 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 
17:24:02.333191 4546 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333197 4546 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333202 4546 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333208 4546 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333213 4546 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333218 4546 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333224 4546 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333229 4546 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333235 4546 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333240 4546 flags.go:64] FLAG: --kube-reserved="" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333258 4546 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333265 4546 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333270 4546 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333276 4546 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333281 4546 flags.go:64] FLAG: --lock-file="" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333286 4546 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333292 4546 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333297 4546 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333308 4546 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333314 4546 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333319 4546 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333324 4546 flags.go:64] FLAG: --logging-format="text" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333330 4546 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333336 4546 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 17:24:02.337411 master-0 kubenswrapper[4546]: I0216 17:24:02.333341 4546 flags.go:64] FLAG: --manifest-url="" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333347 4546 flags.go:64] FLAG: --manifest-url-header="" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333354 4546 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333359 4546 
flags.go:64] FLAG: --max-open-files="1000000" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333366 4546 flags.go:64] FLAG: --max-pods="110" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333372 4546 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333377 4546 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333382 4546 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333388 4546 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333419 4546 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333426 4546 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333431 4546 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333445 4546 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333451 4546 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333456 4546 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333462 4546 flags.go:64] FLAG: --pod-cidr="" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333467 4546 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333476 4546 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333482 4546 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333487 4546 flags.go:64] FLAG: --pods-per-core="0" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333492 4546 flags.go:64] FLAG: --port="10250" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333498 4546 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333503 4546 flags.go:64] FLAG: --provider-id="" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333508 4546 flags.go:64] FLAG: --qos-reserved="" Feb 16 17:24:02.338322 master-0 kubenswrapper[4546]: I0216 17:24:02.333514 4546 flags.go:64] FLAG: --read-only-port="10255" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333520 4546 flags.go:64] FLAG: --register-node="true" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333525 4546 flags.go:64] FLAG: --register-schedulable="true" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333530 4546 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333547 4546 flags.go:64] FLAG: --registry-burst="10" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333553 4546 flags.go:64] FLAG: --registry-qps="5" Feb 
16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333559 4546 flags.go:64] FLAG: --reserved-cpus="" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333564 4546 flags.go:64] FLAG: --reserved-memory="" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333571 4546 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333577 4546 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333583 4546 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333588 4546 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333594 4546 flags.go:64] FLAG: --runonce="false" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333599 4546 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333605 4546 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333611 4546 flags.go:64] FLAG: --seccomp-default="false" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333616 4546 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333622 4546 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333628 4546 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333633 4546 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333639 4546 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333644 4546 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333650 4546 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333655 4546 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333660 4546 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 17:24:02.339085 master-0 kubenswrapper[4546]: I0216 17:24:02.333666 4546 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333672 4546 flags.go:64] FLAG: --system-cgroups="" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333677 4546 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333685 4546 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333692 4546 flags.go:64] FLAG: --tls-cert-file="" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333697 4546 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333705 4546 flags.go:64] FLAG: --tls-min-version="" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333710 4546 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: 
I0216 17:24:02.333715 4546 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333720 4546 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333725 4546 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333731 4546 flags.go:64] FLAG: --v="2" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333738 4546 flags.go:64] FLAG: --version="false" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333745 4546 flags.go:64] FLAG: --vmodule="" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333751 4546 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: I0216 17:24:02.333757 4546 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: W0216 17:24:02.333882 4546 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: W0216 17:24:02.333892 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: W0216 17:24:02.333898 4546 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: W0216 17:24:02.333903 4546 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: W0216 17:24:02.333908 4546 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: W0216 17:24:02.333913 4546 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: W0216 17:24:02.333918 4546 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:24:02.340083 master-0 kubenswrapper[4546]: W0216 17:24:02.333923 4546 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333927 4546 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333932 4546 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333936 4546 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333941 4546 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333945 4546 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333949 4546 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333954 4546 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333959 4546 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333964 4546 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 
16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333969 4546 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333978 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333984 4546 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333990 4546 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.333995 4546 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.334000 4546 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.334006 4546 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.334012 4546 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.334018 4546 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:24:02.341057 master-0 kubenswrapper[4546]: W0216 17:24:02.334023 4546 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334028 4546 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334033 4546 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334038 4546 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334044 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334050 4546 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334055 4546 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334060 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334065 4546 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334070 4546 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334075 4546 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334084 4546 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334088 4546 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334093 4546 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:24:02.341749 master-0 
kubenswrapper[4546]: W0216 17:24:02.334098 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334102 4546 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334106 4546 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334111 4546 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334116 4546 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334120 4546 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:24:02.341749 master-0 kubenswrapper[4546]: W0216 17:24:02.334125 4546 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334129 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334135 4546 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334140 4546 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334147 4546 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334152 4546 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334157 4546 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334161 4546 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334165 4546 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334170 4546 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334175 4546 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334179 4546 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334185 4546 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
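The flags.go:64 block earlier in this restart (17:24:02.332739 through .333757) is the kubelet echoing the effective value of every command-line flag at verbosity --v=2, defaults included; it is not a list of what was explicitly set. Flags given on the command line win outright, and for everything else the config file at /etc/kubernetes/kubelet.conf overrides the built-in defaults printed in the dump. A hypothetical config fragment echoing three of the dumped values, using the upstream field names:

    maxPods: 110                       # --max-pods="110"
    nodeStatusUpdateFrequency: 10s     # --node-status-update-frequency="10s"
    serializeImagePulls: true          # --serialize-image-pulls="true"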
Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334191 4546 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334196 4546 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334239 4546 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334266 4546 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334272 4546 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334277 4546 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:24:02.342464 master-0 kubenswrapper[4546]: W0216 17:24:02.334283 4546 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.334288 4546 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.334293 4546 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.334298 4546 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.334307 4546 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.334312 4546 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.334317 4546 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: I0216 17:24:02.334325 4546 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: I0216 17:24:02.341321 4546 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: I0216 17:24:02.341361 4546 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.341467 4546 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.341476 4546 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.341480 4546 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.341484 4546 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.341505 4546 feature_gate.go:330] unrecognized feature 
gate: PlatformOperators Feb 16 17:24:02.343720 master-0 kubenswrapper[4546]: W0216 17:24:02.341511 4546 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341517 4546 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341521 4546 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341525 4546 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341529 4546 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341532 4546 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341536 4546 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341540 4546 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341543 4546 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341547 4546 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341550 4546 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341554 4546 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341558 4546 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341561 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341565 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341585 4546 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341589 4546 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341593 4546 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341597 4546 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341600 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:24:02.344174 master-0 kubenswrapper[4546]: W0216 17:24:02.341604 4546 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341608 4546 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341616 4546 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341620 
4546 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341624 4546 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341628 4546 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341632 4546 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341636 4546 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341640 4546 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341644 4546 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341667 4546 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341671 4546 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341675 4546 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341679 4546 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341682 4546 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341686 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341690 4546 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341693 4546 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341697 4546 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341700 4546 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:24:02.344810 master-0 kubenswrapper[4546]: W0216 17:24:02.341704 4546 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341707 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341711 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341714 4546 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341718 4546 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341738 4546 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341744 4546 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341748 4546 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341752 4546 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341757 4546 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341761 4546 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341765 4546 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341769 4546 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341773 4546 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341776 4546 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341780 4546 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341783 4546 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341787 4546 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341793 4546 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:24:02.346005 master-0 kubenswrapper[4546]: W0216 17:24:02.341797 4546 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.341817 4546 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.341823 4546 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.341827 4546 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.341831 4546 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.341835 4546 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.341839 4546 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.341843 4546 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: I0216 17:24:02.341849 4546 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.341992 4546 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.341998 4546 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.342002 4546 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.342006 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.342009 4546 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.342013 4546 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:24:02.346530 master-0 kubenswrapper[4546]: W0216 17:24:02.342017 4546 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342020 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342024 4546 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342028 4546 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342031 4546 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342053 4546 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342059 4546 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
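The feature_gate.go:386 "feature gates: {map[...]}" entry above is the effective result after all the warnings: only gates this kubelet recognizes remain, each resolved to a boolean. A minimal sketch of pulling that map out of such an entry; the sample line is truncated from the one above, and the simple Name:true|false regex is an assumption that holds for this print format:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Truncated sample of the feature_gate.go:386 entry above.
	line := `feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true NodeSwap:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}`

	// Each element inside map[...] is Name:true|false, space-separated.
	re := regexp.MustCompile(`([A-Za-z0-9]+):(true|false)`)
	gates := map[string]bool{}
	for _, m := range re.FindAllStringSubmatch(line, -1) {
		gates[m[1]] = m[2] == "true"
	}

	fmt.Println(gates["KMSv1"], gates["NodeSwap"]) // true false
}
```

Diffing two such parsed maps from consecutive restarts is a quick way to spot a gate flip between kubelet invocations.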
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342065 4546 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342068 4546 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342072 4546 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342076 4546 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342080 4546 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342084 4546 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342088 4546 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342091 4546 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342096 4546 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342099 4546 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342103 4546 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342107 4546 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342110 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:24:02.346912 master-0 kubenswrapper[4546]: W0216 17:24:02.342132 4546 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342135 4546 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342140 4546 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342144 4546 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342147 4546 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342152 4546 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342155 4546 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342159 4546 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342163 4546 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342166 4546 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342170 4546 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342174 4546 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342177 4546 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342181 4546 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342184 4546 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342188 4546 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342207 4546 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342210 4546 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342214 4546 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342217 4546 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342221 4546 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:24:02.347759 master-0 kubenswrapper[4546]: W0216 17:24:02.342224 4546 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342228 4546 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342231 4546 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342235 4546 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342238 4546 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342242 4546 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342267 4546 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342273 4546 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342278 4546 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342282 4546 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342287 4546 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342292 4546 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342296 4546 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342299 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342303 4546 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342308 4546 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342312 4546 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342315 4546 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:24:02.352632 master-0 kubenswrapper[4546]: W0216 17:24:02.342320 4546 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: W0216 17:24:02.342343 4546 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: W0216 17:24:02.342348 4546 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: W0216 17:24:02.342353 4546 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: W0216 17:24:02.342356 4546 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: W0216 17:24:02.342360 4546 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: W0216 17:24:02.342363 4546 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: I0216 17:24:02.342371 4546 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: I0216 17:24:02.342556 4546 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: I0216 17:24:02.344503 4546 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: I0216 17:24:02.344584 4546 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
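The certificate_manager.go:356 entries just below record the client certificate's expiration, a rotation deadline that client-go places at a jittered point before expiry, and the wait the kubelet computes as deadline minus now. That arithmetic can be checked directly from the logged values; in this sketch the sub-microsecond "now" is reconstructed from the logged deadline and wait, since the journal timestamp itself only carries microsecond precision:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Deadline copied from the certificate_manager.go:356 entry below.
	deadline, _ := time.Parse(time.RFC3339Nano, "2026-02-17T11:41:49.226182306Z")

	// Logged wait: "Waiting 18h17m46.881026162s for next certificate rotation".
	wait := 18*time.Hour + 17*time.Minute + 46*time.Second + 881026162*time.Nanosecond

	// now = deadline - wait; this should land at the moment the entry was logged.
	now := deadline.Add(-wait)
	fmt.Println(now.Format(time.RFC3339Nano)) // 2026-02-16T17:24:02.345156144Z, matching the I0216 17:24:02.345158 entry
	fmt.Println(deadline.Sub(now))            // 18h17m46.881026162s, the logged wait
}
```

The deadline (2026-02-17 11:41:49) falls a few hours before the expiration (2026-02-17 16:50:49), which is consistent with the randomized early-rotation jitter rather than rotating at the moment of expiry.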
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: I0216 17:24:02.344847 4546 server.go:997] "Starting client certificate rotation"
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: I0216 17:24:02.344856 4546 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 17:24:02.353291 master-0 kubenswrapper[4546]: I0216 17:24:02.345061 4546 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 11:41:49.226182306 +0000 UTC
Feb 16 17:24:02.353730 master-0 kubenswrapper[4546]: I0216 17:24:02.345158 4546 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h17m46.881026162s for next certificate rotation
Feb 16 17:24:02.353730 master-0 kubenswrapper[4546]: I0216 17:24:02.345624 4546 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 17:24:02.353730 master-0 kubenswrapper[4546]: I0216 17:24:02.348056 4546 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 17:24:02.353730 master-0 kubenswrapper[4546]: I0216 17:24:02.351496 4546 log.go:25] "Validated CRI v1 runtime API"
Feb 16 17:24:02.356285 master-0 kubenswrapper[4546]: I0216 17:24:02.356216 4546 log.go:25] "Validated CRI v1 image API"
Feb 16 17:24:02.357483 master-0 kubenswrapper[4546]: I0216 17:24:02.357466 4546 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 17:24:02.361586 master-0 kubenswrapper[4546]: I0216 17:24:02.361549 4546 fs.go:135] Filesystem UUIDs: map[35a0b0cc-84b1-4374-a18a-0f49ad7a8333:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 16 17:24:02.361836 master-0 kubenswrapper[4546]: I0216 17:24:02.361577 4546 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710/userdata/shm major:0 minor:198 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/36d50210d5c52db4b7e6fdca90b019b559fb61ad6d363fa02b488e76691be827/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/36d50210d5c52db4b7e6fdca90b019b559fb61ad6d363fa02b488e76691be827/userdata/shm major:0 minor:63 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/37867a9b89ca658d12f1765647ed5e15e132bb4023f3490c258e8e8c2d9cc767/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37867a9b89ca658d12f1765647ed5e15e132bb4023f3490c258e8e8c2d9cc767/userdata/shm major:0 minor:305 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/3df38b68e675184d173d581679d4cdced10e660e88d294708f50f7b9553e8d93/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3df38b68e675184d173d581679d4cdced10e660e88d294708f50f7b9553e8d93/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538/userdata/shm major:0 minor:190 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/650c00271176c30e19eb9cdf1573f9c862bf460c2839e903b286275740a5a883/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/650c00271176c30e19eb9cdf1573f9c862bf460c2839e903b286275740a5a883/userdata/shm major:0 minor:294 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75/userdata/shm major:0 minor:181 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8a66c9c6ab0ffb6c022d21572d9ecd028be9e07d99ed15c25f8c09f001677ac9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8a66c9c6ab0ffb6c022d21572d9ecd028be9e07d99ed15c25f8c09f001677ac9/userdata/shm major:0 minor:317 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5/userdata/shm major:0 minor:230 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a/userdata/shm major:0 minor:199 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd/userdata/shm major:0 minor:234 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615/userdata/shm major:0 minor:231 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn:{mountpoint:/var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn major:0 minor:177 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q:{mountpoint:/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q major:0 minor:300 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld:{mountpoint:/var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld major:0 minor:171 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2:{mountpoint:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 major:0 minor:197 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~empty-dir/config-out major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~projected/kube-api-access-l67l5:{mountpoint:/var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~projected/kube-api-access-l67l5 major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl major:0 minor:188 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert major:0 minor:159 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x:{mountpoint:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x major:0 minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt major:0 minor:304 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls major:0 minor:73 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x:{mountpoint:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg:{mountpoint:/var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg major:0 minor:286 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:168 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd major:0 minor:194 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x major:0 minor:173 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls major:0 minor:158 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld 
major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:153 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/810a2275-fae5-45df-a3b8-92860451d33b/volumes/kubernetes.io~projected/kube-api-access-ktgm7:{mountpoint:/var/lib/kubelet/pods/810a2275-fae5-45df-a3b8-92860451d33b/volumes/kubernetes.io~projected/kube-api-access-ktgm7 major:0 minor:178 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs major:0 minor:165 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:170 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:75 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g:{mountpoint:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g major:0 minor:293 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2 major:0 minor:180 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:163 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm:{mountpoint:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm major:0 minor:189 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl major:0 minor:179 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:76 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5:{mountpoint:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb:{mountpoint:/var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb major:0 minor:207 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~empty-dir/config-out major:0 minor:169 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~projected/kube-api-access-vpjv7:{mountpoint:/var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~projected/kube-api-access-vpjv7 major:0 minor:285 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg:{mountpoint:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert major:0 minor:80 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455:{mountpoint:/var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455 major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:162 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp major:0 minor:157 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52:{mountpoint:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 major:0 minor:172 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67:{mountpoint:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs major:0 minor:164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth major:0 minor:160 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz:{mountpoint:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz major:0 minor:185 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl:{mountpoint:/var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl major:0 minor:280 fsType:tmpfs blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/223b14f09bd92b571bec07573e4d64e8f29e65918658ad1c2acc2878e11564d1/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/b0db791901f45fd5654338dd2da0eb3508d79b78d3f8cb72d22a21451c683bc0/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/3217e1ee7dd287e9bfa0a12eda546957253ddc939f6ad78b28d6688c7525bf90/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/bc5437cae35eb901d78c295b2b28eff9c541c40ba0583829acd0d4f2fce77ecc/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-128:{mountpoint:/var/lib/containers/storage/overlay/84c0f97d7e9bea3b48805bbfc88ba77bd77d8963075ddcb3676991e52345b90e/merged major:0 minor:128 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/5ca6250e835705734911177363196d788d36817b0e28274fe1471db7fa0c4177/merged major:0 minor:133 
fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/45b977858c8f299fd63d108fc14bdb142050d92358e003e626592510a29ed84b/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-143:{mountpoint:/var/lib/containers/storage/overlay/2351df5d7875ee95502d968aa1fcb30d43c16a136cb2e8f85a6cac2a2ce069bb/merged major:0 minor:143 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/e89801cb52095636fa5cea58f77c36ca9198c8fca2ac9b4be1ee7769fd7dcbb5/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/e1226576351d7a031d95593d76ca981db72a4b6e1d59c1a178a2a66400319d01/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/67144324a812f946917e246b404371a610a5818469f5db04b4ce602c5c49a71d/merged major:0 minor:186 fsType:overlay blockSize:0} overlay_0-192:{mountpoint:/var/lib/containers/storage/overlay/827a9eb23593e0d91a55834b438c9ce334abca97ba94fc7c7ae26f5b3ac6d6da/merged major:0 minor:192 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/b9d7dc517425fa6bedc9ce661dfe29b8e3282f0914baabf66db7b8ac60242ad4/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-202:{mountpoint:/var/lib/containers/storage/overlay/34b4cf0f6787135236ddb100b16d6779e8361c14138a7442dc9d7f4d34a79500/merged major:0 minor:202 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/a209e21dead87f52e40f2b52b8165ff7cae5db925c3788792df9a8cef69316ae/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/d91c0e28eab38161af331e9cc70d581d4f3530fa41a5225e83a9d6656ae6c50b/merged major:0 minor:208 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/225f17838488c74ae37a35ac3744e93567d27d165ebc5a088828857cdfe7a520/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/436bcce2c5884dc66f1ab95dfb98466f0ede2261b7ff6c0cac19945310a856b8/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/62c57bb30e77191cccb3ff2f90d7682bf736ae48bb0a418f65d05356705619cb/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-235:{mountpoint:/var/lib/containers/storage/overlay/e6da6301aed8074d3fd311137450d4cb57f55b1a3e2ee14b502ad7317d8550c0/merged major:0 minor:235 fsType:overlay blockSize:0} overlay_0-237:{mountpoint:/var/lib/containers/storage/overlay/4f7015e552eb0be13a2897a2e51bfd736d7d8b706ee4f5e0fa1fac25fdb6eb32/merged major:0 minor:237 fsType:overlay blockSize:0} overlay_0-240:{mountpoint:/var/lib/containers/storage/overlay/b0bf2ca1a482610de6fa3bd76f99d89649d9ce67b4bd1cc022d7660e55ac843e/merged major:0 minor:240 fsType:overlay blockSize:0} overlay_0-242:{mountpoint:/var/lib/containers/storage/overlay/f66174227e4516f481dfec45e9056d578468fd08b39786f06f943466ffad1b1e/merged major:0 minor:242 fsType:overlay blockSize:0} overlay_0-244:{mountpoint:/var/lib/containers/storage/overlay/561699ce7451f43e70b35e91ababd90ee1823acf0e80e130e54721e0cea2f4ca/merged major:0 minor:244 fsType:overlay blockSize:0} overlay_0-247:{mountpoint:/var/lib/containers/storage/overlay/cf1e3c776d39e6deb224fa718ebf5b8c39b938c9587d033d945d89857ee931e5/merged major:0 minor:247 fsType:overlay blockSize:0} 
overlay_0-254:{mountpoint:/var/lib/containers/storage/overlay/f02ddf8beaf1e955c5e3fb42304d2241c08d3b5bac5e2f1c55b538ea1609f340/merged major:0 minor:254 fsType:overlay blockSize:0} overlay_0-256:{mountpoint:/var/lib/containers/storage/overlay/212936f4b65e6d6d856fd8a6e416922df81878df8f740a3ac07ecf2f0f3365d1/merged major:0 minor:256 fsType:overlay blockSize:0} overlay_0-258:{mountpoint:/var/lib/containers/storage/overlay/34d0129db2a2e80bfec09bc66e873b338869ffd64d1e6089b0e351a72080da60/merged major:0 minor:258 fsType:overlay blockSize:0} overlay_0-263:{mountpoint:/var/lib/containers/storage/overlay/d6b0c9ffc7461d15a9cda7a0df2fc35149944470f7f1884104ad995497066bd6/merged major:0 minor:263 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/18292d86671ac228c78cdb29c79a509f5cf306ad2f0fcebbc5de1222d3702fb0/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/df260ba957fcad41c488a422ad109784dba60d4c5a6764911e09dcc74aede15a/merged major:0 minor:272 fsType:overlay blockSize:0} overlay_0-276:{mountpoint:/var/lib/containers/storage/overlay/204b3eccbe1aac37ea8384527dbd1118a204430f58516d97d3edb8fe6be0d4f4/merged major:0 minor:276 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/a65dea2abb27e36ea39fceae552fe04e76fa3619d551380f842b66a76c271601/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/72f58253862cf97daf3deb8e3b3076ba2c9bd192a2562f7e0bf92d7b16f01cea/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/f2ee7c3532497a29918747011941f07af864cab4fe52c415029ccbb32b12c15c/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/664e5f37ea2b1a0025a414ce40aa0188e92a062430a290c447bc3df865bf2e14/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/d15e19c57b5fad57b3465846974602c0355855b31d1faf9a4804e0f1193f2cb1/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/b5c78aa6efe9c10e3e4c0a49fe98281e83dbad2e1125401569d52fd664fa1df8/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/23fd83666f0607e9073b237c5f3d2dbeedceabdb8e42f1da0c7c8c7e2ec56ae4/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/7532dde41c7173802167d77a4b796ab2f36bdc1b79005f5c852190cda02d8345/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/e809714da25c0b6c67c4aaad33e85bf712783cc8fe6823c7eea78e2b6654ddff/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/270c988e6ff7290d7bd0516ba407ee6ea0d10fa3a00ce3f8b68162bb1b6658e1/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/fa2d948d294b6135eb5f1bc5dab6a66e48355f7573187307aebc28f51a5ef1bb/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/70cd4935443e1cb66cfcef36452ce39b022240fe264fc3a70ec7cc6a8b5d8074/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/989ecb2824e8f201ff415bb2d40db9f828dd388db97777f97daba5848300c5eb/merged 
major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/f4da991c975295317952534da7850506c233b6dbe6465a4de13ce848336a915c/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/032d34e37ac1961cb9976fa1ab6f8f64e19f0a8a7339bd89beb9e1b2d6ec8bc2/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/cf011d722fb721e24b397773813a77c67ff3e9e0b15c084ff23de99dc64b3298/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/13903d96e709eb25fd6a8bf8efa13fb3d7bef214f7f473da1f8323842a93cf7d/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/c1f339898417d1d18dd85b46049080ce34a4158ae19f1e71dc51d7d7f9bed451/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/85923ebbcba601b19a533bae1a81399803e745ef6d9ed9f09a814b426d0c2ed8/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-57:{mountpoint:/var/lib/containers/storage/overlay/6a50798e11b78174b946a23ab93379e690ad6765372c074ed2dc3971c35935bd/merged major:0 minor:57 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/bfffbf2b09dcea91c746c2147bd09e944a60aafd9775eb13da0ab85d9e38c731/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/f8478984223dfcdb31850f42f8c6ec515a6b75e29246668dacac0837a72a8b96/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/a5de566836074329c1f2cb1ed3899418c1921f50df937b19ffc50692a307de16/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/796ccd19bfca6ca91aa5fe72897547e306a2ecfe6bf5eed00caf95c24d3c7e89/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/d94ebbc245212a395a308b43c98c1b6a901dcacba93db21c4a1187b94856f2c2/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/b73b320d06822aecbc1af158bf2064c759fb0306afc38dde62f0ece4bd1f4dc5/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/82d6ceaccb29716141b0fe7890d347ac191904e00c084d9526c635719057c9af/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/ffe173c2993a3466ab10eb204d3aa5665d4413b635839c645bdb9b0b5adec3c7/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/6a4bfe5200f6146240ca4a60e5d5917561bcb3880d2f2e866af91769b863eb4e/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/e0053ea6a980ca911506223a8a6073ab84a417b0c8662c665d5da6245f9b31f4/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/eefdf392125223da77614dbff0dff99c4cd3d38e0da1aed93a92dddf16b86eb6/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/f9fb3b36252fe85ee0690e452bee88f86dc705968822e4a5fa68091324ed2256/merged major:0 minor:88 fsType:overlay blockSize:0} 
overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/29f53996616af0af76d2d191f60d4ffb31ca8b99a98d88e09ba44be991f4b5e1/merged major:0 minor:90 fsType:overlay blockSize:0}]
Feb 16 17:24:02.399989 master-0 kubenswrapper[4546]: I0216 17:24:02.399423 4546 manager.go:217] Machine: {Timestamp:2026-02-16 17:24:02.398277974 +0000 UTC m=+0.127750803 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514141184 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f BootID:bff30cf7-71da-4e66-9940-13ec1ab42f05 Filesystems:[{Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/36d50210d5c52db4b7e6fdca90b019b559fb61ad6d363fa02b488e76691be827/userdata/shm DeviceMajor:0 DeviceMinor:63 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd/userdata/shm DeviceMajor:0 DeviceMinor:234 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a/userdata/shm DeviceMajor:0 DeviceMinor:199 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl DeviceMajor:0 DeviceMinor:280 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~projected/kube-api-access-vpjv7 DeviceMajor:0 DeviceMinor:285 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl DeviceMajor:0 DeviceMinor:188 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:166 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x DeviceMajor:0 DeviceMinor:173 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 DeviceMajor:0 DeviceMinor:220 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:158 Capacity:49335554048 Type:vfs
Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:163 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75/userdata/shm DeviceMajor:0 DeviceMinor:181 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:164 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd DeviceMajor:0 DeviceMinor:194 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710/userdata/shm DeviceMajor:0 DeviceMinor:198 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q DeviceMajor:0 DeviceMinor:300 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:76 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/810a2275-fae5-45df-a3b8-92860451d33b/volumes/kubernetes.io~projected/kube-api-access-ktgm7 DeviceMajor:0 DeviceMinor:178 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x DeviceMajor:0 DeviceMinor:206 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-242 DeviceMajor:0 DeviceMinor:242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-276 DeviceMajor:0 DeviceMinor:276 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g DeviceMajor:0 DeviceMinor:293 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:80 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg DeviceMajor:0 DeviceMinor:286 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:73 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:161 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld DeviceMajor:0 DeviceMinor:171 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-244 DeviceMajor:0 DeviceMinor:244 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/37867a9b89ca658d12f1765647ed5e15e132bb4023f3490c258e8e8c2d9cc767/userdata/shm DeviceMajor:0 DeviceMinor:305 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw DeviceMajor:0 DeviceMinor:221 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-240 DeviceMajor:0 DeviceMinor:240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257070592 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 DeviceMajor:0 DeviceMinor:172 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-202 DeviceMajor:0 DeviceMinor:202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5/userdata/shm DeviceMajor:0 DeviceMinor:230 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 DeviceMajor:0 DeviceMinor:260 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt DeviceMajor:0 DeviceMinor:304 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl DeviceMajor:0 DeviceMinor:179 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz DeviceMajor:0 DeviceMinor:185 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/650c00271176c30e19eb9cdf1573f9c862bf460c2839e903b286275740a5a883/userdata/shm DeviceMajor:0 DeviceMinor:294 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8a66c9c6ab0ffb6c022d21572d9ecd028be9e07d99ed15c25f8c09f001677ac9/userdata/shm DeviceMajor:0 DeviceMinor:317 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-57 DeviceMajor:0 DeviceMinor:57 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-263 DeviceMajor:0 DeviceMinor:263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:167 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-237 DeviceMajor:0 DeviceMinor:237 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257070592 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-235 DeviceMajor:0 DeviceMinor:235 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:222 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb DeviceMajor:0 DeviceMinor:207 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:227 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-256 DeviceMajor:0 DeviceMinor:256 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-143 DeviceMajor:0 DeviceMinor:143 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm DeviceMajor:0 DeviceMinor:189 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:168 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn DeviceMajor:0 DeviceMinor:177 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz DeviceMajor:0 DeviceMinor:278 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:160 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2 DeviceMajor:0 DeviceMinor:180 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} 
{Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 DeviceMajor:0 DeviceMinor:303 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:75 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:162 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:165 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:170 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455 DeviceMajor:0 DeviceMinor:228 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-258 DeviceMajor:0 DeviceMinor:258 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:159 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615/userdata/shm DeviceMajor:0 DeviceMinor:231 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3df38b68e675184d173d581679d4cdced10e660e88d294708f50f7b9553e8d93/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-128 DeviceMajor:0 DeviceMinor:128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~projected/kube-api-access-l67l5 
DeviceMajor:0 DeviceMinor:229 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-247 DeviceMajor:0 DeviceMinor:247 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:157 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:153 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:169 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538/userdata/shm DeviceMajor:0 DeviceMinor:190 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-192 DeviceMajor:0 DeviceMinor:192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 DeviceMajor:0 DeviceMinor:197 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-254 DeviceMajor:0 DeviceMinor:254 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:52:47:03:db:66:8a Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:b9:e2 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:4a:2e:ce Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:46:8e:c9:b8:46:79 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514141184 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 
BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 17:24:02.399989 master-0 kubenswrapper[4546]: I0216 17:24:02.399979 4546 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 17:24:02.400348 master-0 kubenswrapper[4546]: I0216 17:24:02.400146 4546 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 17:24:02.400443 master-0 kubenswrapper[4546]: I0216 17:24:02.400418 4546 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 17:24:02.400632 master-0 kubenswrapper[4546]: I0216 17:24:02.400592 4546 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 17:24:02.400825 master-0 kubenswrapper[4546]: I0216 17:24:02.400624 4546 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 17:24:02.400896 master-0 kubenswrapper[4546]: I0216 17:24:02.400847 4546 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 17:24:02.400896 master-0 kubenswrapper[4546]: I0216 17:24:02.400861 4546 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 17:24:02.400896 master-0 kubenswrapper[4546]: I0216 17:24:02.400872 4546 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:24:02.401052 master-0 kubenswrapper[4546]: I0216 17:24:02.400898 4546 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:24:02.401166 master-0 kubenswrapper[4546]: I0216 17:24:02.401137 4546 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:24:02.401602 master-0 kubenswrapper[4546]: I0216 17:24:02.401577 4546 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 17:24:02.401657 master-0 kubenswrapper[4546]: I0216 17:24:02.401646 4546 kubelet.go:418] "Attempting to sync node with API server" Feb 16 17:24:02.401698 master-0 kubenswrapper[4546]: I0216 17:24:02.401660 4546 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 17:24:02.401698 master-0 kubenswrapper[4546]: I0216 17:24:02.401676 4546 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 17:24:02.401698 master-0 kubenswrapper[4546]: I0216 17:24:02.401690 4546 kubelet.go:324] "Adding apiserver pod source" Feb 16 17:24:02.401809 master-0 kubenswrapper[4546]: I0216 17:24:02.401702 4546 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 17:24:02.408447 master-0 kubenswrapper[4546]: I0216 17:24:02.408401 4546 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 17:24:02.408656 master-0 
kubenswrapper[4546]: I0216 17:24:02.408575 4546 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 16 17:24:02.409709 master-0 kubenswrapper[4546]: I0216 17:24:02.409651 4546 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 17:24:02.410055 master-0 kubenswrapper[4546]: I0216 17:24:02.409978 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 17:24:02.410126 master-0 kubenswrapper[4546]: I0216 17:24:02.410074 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 17:24:02.410126 master-0 kubenswrapper[4546]: I0216 17:24:02.410091 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 17:24:02.410126 master-0 kubenswrapper[4546]: I0216 17:24:02.410105 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 17:24:02.410126 master-0 kubenswrapper[4546]: I0216 17:24:02.410118 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 17:24:02.410301 master-0 kubenswrapper[4546]: I0216 17:24:02.410132 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 17:24:02.410301 master-0 kubenswrapper[4546]: I0216 17:24:02.410146 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 17:24:02.410301 master-0 kubenswrapper[4546]: I0216 17:24:02.410159 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 17:24:02.410301 master-0 kubenswrapper[4546]: I0216 17:24:02.410177 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 17:24:02.410301 master-0 kubenswrapper[4546]: I0216 17:24:02.410192 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 17:24:02.410301 master-0 kubenswrapper[4546]: I0216 17:24:02.410218 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 17:24:02.410301 master-0 kubenswrapper[4546]: I0216 17:24:02.410241 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 17:24:02.410564 master-0 kubenswrapper[4546]: I0216 17:24:02.410353 4546 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 17:24:02.411541 master-0 kubenswrapper[4546]: I0216 17:24:02.410830 4546 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 17:24:02.411611 master-0 kubenswrapper[4546]: I0216 17:24:02.411548 4546 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 17:24:02.411671 master-0 kubenswrapper[4546]: I0216 17:24:02.411570 4546 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 17:24:02.411726 master-0 kubenswrapper[4546]: I0216 17:24:02.411371 4546 server.go:1280] "Started kubelet"
Feb 16 17:24:02.411778 master-0 kubenswrapper[4546]: I0216 17:24:02.411724 4546 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 16 17:24:02.412474 master-0 kubenswrapper[4546]: I0216 17:24:02.412412 4546 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 17:24:02.413166 master-0 kubenswrapper[4546]: I0216 17:24:02.413021 4546 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
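Aside (annotation, not journal output): the records above show the kubelet bringing up the podresources gRPC endpoint on the logged unix socket, rate-limited server-side at qps=100/burst=10. A minimal client sketch, assuming the published k8s.io/kubelet podresources v1 API and google.golang.org/grpc (would need to run as root on the node to reach the socket):

package main

import (
    "context"
    "fmt"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()
    // Socket path as logged by server.go:236 above; no TLS on the local socket.
    conn, err := grpc.DialContext(ctx, "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        panic(err)
    }
    defer conn.Close()
    client := podresourcesv1.NewPodResourcesListerClient(conn)
    resp, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
    if err != nil {
        panic(err)
    }
    for _, p := range resp.GetPodResources() {
        fmt.Printf("%s/%s: %d containers\n", p.GetNamespace(), p.GetName(), len(p.GetContainers()))
    }
}

Feb 16 17:24:02.413166 master-0 kubenswrapper[4546]: I0216 17:24:02.413082 4546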
server.go:449] "Adding debug handlers to kubelet server"
Feb 16 17:24:02.418039 master-0 kubenswrapper[4546]: I0216 17:24:02.417968 4546 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 17:24:02.418039 master-0 kubenswrapper[4546]: I0216 17:24:02.418027 4546 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 17:24:02.418412 master-0 kubenswrapper[4546]: I0216 17:24:02.418350 4546 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 14:17:39.378578463 +0000 UTC
Feb 16 17:24:02.418412 master-0 kubenswrapper[4546]: I0216 17:24:02.418399 4546 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h53m36.960182635s for next certificate rotation
Feb 16 17:24:02.419020 master-0 kubenswrapper[4546]: I0216 17:24:02.418987 4546 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 17:24:02.419020 master-0 kubenswrapper[4546]: I0216 17:24:02.419005 4546 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 17:24:02.419157 master-0 kubenswrapper[4546]: I0216 17:24:02.419051 4546 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Feb 16 17:24:02.420824 master-0 kubenswrapper[4546]: I0216 17:24:02.420643 4546 factory.go:55] Registering systemd factory
Feb 16 17:24:02.420824 master-0 kubenswrapper[4546]: I0216 17:24:02.420663 4546 factory.go:221] Registration of the systemd container factory successfully
Feb 16 17:24:02.421699 master-0 kubenswrapper[4546]: I0216 17:24:02.421029 4546 factory.go:153] Registering CRI-O factory
Feb 16 17:24:02.421699 master-0 kubenswrapper[4546]: I0216 17:24:02.421049 4546 factory.go:221] Registration of the crio container factory successfully
Feb 16 17:24:02.421699 master-0 kubenswrapper[4546]: I0216 17:24:02.421124 4546 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 17:24:02.421699 master-0 kubenswrapper[4546]: I0216 17:24:02.421147 4546 factory.go:103] Registering Raw factory
Feb 16 17:24:02.421699 master-0 kubenswrapper[4546]: I0216 17:24:02.421163 4546 manager.go:1196] Started watching for new ooms in manager
Feb 16 17:24:02.422159 master-0 kubenswrapper[4546]: I0216 17:24:02.421808 4546 manager.go:319] Starting recovery of all containers
Feb 16 17:24:02.423284 master-0 kubenswrapper[4546]: I0216 17:24:02.422785 4546 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 17:24:02.431114 master-0 kubenswrapper[4546]: E0216 17:24:02.430215 4546 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
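Aside (annotation, not journal output): the certificate_manager records above schedule the next kubelet-serving rotation well before expiry. client-go's certificate manager documents picking that deadline at a uniformly random point in roughly 70-90% of the certificate's validity window; the sketch below reproduces that scheme from the documented behavior, not the actual implementation, and the 24h certificate lifetime is an assumption for illustration:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

// rotationDeadline mimics the documented scheme: rotate at a uniformly
// random point in ~70-90% of the certificate's validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
    total := notAfter.Sub(notBefore)
    jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
    return notBefore.Add(jittered)
}

func main() {
    // Expiration as logged above; assume a 24h serving cert for illustration.
    notAfter, err := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-17 16:50:49 +0000 UTC")
    if err != nil {
        panic(err)
    }
    notBefore := notAfter.Add(-24 * time.Hour)
    d := rotationDeadline(notBefore, notAfter)
    fmt.Println("deadline:", d, "wait:", time.Until(d))
}

Under that assumed 24h lifetime, the logged deadline of 2026-02-17 14:17:39 sits at about 89% of the window, consistent with the 70-90% range.

Feb 16 17:24:02.438071 master-0 kubenswrapper[4546]: I0216 17:24:02.437448 4546 kubelet_network_linux.go:50] "Initialized iptables rules."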
protocol="IPv4" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439560 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439640 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439656 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439693 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439705 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439717 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439729 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="544c6815-81d7-422a-9e4a-5fcbfabe8da8" volumeName="kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439760 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439776 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439787 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439797 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" 
volumeName="kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439860 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439874 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439890 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439925 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439942 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.439977 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440019 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440033 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440045 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440057 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440094 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440111 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440125 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440137 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440174 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440193 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440208 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440221 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440303 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440323 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440338 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf" seLinuxMountContext="" Feb 16 
17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440350 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440617 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440688 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440708 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440721 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca" seLinuxMountContext="" Feb 16 17:24:02.440721 master-0 kubenswrapper[4546]: I0216 17:24:02.440735 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440779 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440797 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440811 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440823 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440835 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" 
volumeName="kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440846 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440887 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440900 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440913 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440925 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440937 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440952 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440966 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.440981 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="970d4376-f299-412c-a8ee-90aa980c689e" volumeName="kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441026 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441041 4546 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441054 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441067 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441080 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441094 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441106 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441118 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441130 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441142 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441154 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54fba066-0e9e-49f6-8a86-34d5b4b660df" volumeName="kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441167 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz" 
seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441180 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441192 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441202 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441238 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441269 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441289 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441303 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441316 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441380 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f44170a-3c1c-4944-b971-251f75a51fc3" volumeName="kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441395 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441408 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441427 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441440 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441453 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441466 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441478 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441491 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441504 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441516 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441546 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441559 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 
kubenswrapper[4546]: I0216 17:24:02.441572 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441587 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441623 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441638 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441650 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441703 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441721 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441756 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441768 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441795 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441807 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" 
volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441822 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441834 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441845 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441872 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441883 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441895 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441909 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441950 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441971 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441983 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441989 4546 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.441995 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442008 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442020 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442033 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442072 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442085 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442143 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442159 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442192 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442204 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442216 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442271 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442285 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442297 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442327 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442338 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442350 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442361 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442371 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442385 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442396 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5" seLinuxMountContext="" Feb 16 
17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442408 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442416 4546 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442423 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442435 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442449 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442459 4546 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442461 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442473 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442486 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442496 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442507 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442519 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" 
volumeName="kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442532 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442585 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442608 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442621 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442633 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: E0216 17:24:02.442662 4546 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442925 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442942 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442954 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442963 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442972 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442981 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442990 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.442999 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443008 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443017 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443029 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443040 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443052 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="810a2275-fae5-45df-a3b8-92860451d33b" volumeName="kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443063 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443074 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 
17:24:02.443096 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443107 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443119 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443134 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443145 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443179 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443193 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443205 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443218 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443230 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443241 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" 
volumeName="kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443268 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443310 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443321 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443329 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f44170a-3c1c-4944-b971-251f75a51fc3" volumeName="kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443339 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443370 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443380 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443389 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443399 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443446 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc29f4-557f-4a75-8b78-6ca425c81b81" volumeName="kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 
17:24:02.443456 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443466 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443477 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443485 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67" seLinuxMountContext="" Feb 16 17:24:02.443306 master-0 kubenswrapper[4546]: I0216 17:24:02.443496 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443505 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443515 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443524 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443533 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443600 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443611 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" 
volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443638 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443668 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443677 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443686 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443695 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443707 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443717 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443726 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443735 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443749 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443759 4546 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443770 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443780 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443790 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443798 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443807 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443815 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443824 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443833 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443845 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443855 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca" 
seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443864 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443874 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443884 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443894 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443906 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443916 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443927 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443935 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443945 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443954 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443963 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443974 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443984 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.443993 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444002 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444012 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444023 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444032 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444041 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444051 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444061 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 
kubenswrapper[4546]: I0216 17:24:02.444070 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444116 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444124 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444134 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444142 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444151 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ff68421-1741-41c1-93d5-5c722dfd295e" volumeName="kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444159 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444170 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444178 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444186 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444195 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" 
volumeName="kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444204 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444213 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444221 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444229 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444239 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444266 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444279 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444289 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444299 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444308 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444318 4546 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444327 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444336 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444346 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444387 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444398 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444408 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444417 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444427 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444436 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444445 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" 
volumeName="kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444455 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444464 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444473 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444475 4546 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444485 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444509 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444521 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444532 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444544 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444556 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444568 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444579 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444593 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444606 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444617 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.444628 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446016 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446042 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446057 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446069 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446082 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 
kubenswrapper[4546]: I0216 17:24:02.446097 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446110 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446122 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446134 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446147 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446161 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446172 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08a90dc5-b0d8-4aad-a002-736492b6c1a9" volumeName="kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446186 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446195 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="810a2275-fae5-45df-a3b8-92860451d33b" volumeName="kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446204 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446214 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs" 
seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446229 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446238 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446283 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446298 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446313 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446326 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446338 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446351 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446364 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446377 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446388 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446401 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80d3b238-70c3-4e71-96a1-99405352033f" volumeName="kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446412 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446425 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446438 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446450 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446465 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" volumeName="kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446478 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446492 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c303189e-adae-4fe2-8dd7-cc9b80f73e66" volumeName="kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446505 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446518 4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446531 
4546 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv" seLinuxMountContext="" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446544 4546 reconstruct.go:97] "Volume reconstruction finished" Feb 16 17:24:02.448298 master-0 kubenswrapper[4546]: I0216 17:24:02.446552 4546 reconciler.go:26] "Reconciler: start to sync state" Feb 16 17:24:02.455173 master-0 kubenswrapper[4546]: I0216 17:24:02.450242 4546 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 17:24:02.460644 master-0 kubenswrapper[4546]: I0216 17:24:02.460594 4546 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="024918b99b0960332808509aca9a4a206a98049b3cbbd79cb59ca43d40614ee8" exitCode=0 Feb 16 17:24:02.486984 master-0 kubenswrapper[4546]: I0216 17:24:02.484022 4546 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0" exitCode=1 Feb 16 17:24:02.486984 master-0 kubenswrapper[4546]: I0216 17:24:02.484068 4546 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838" exitCode=255 Feb 16 17:24:02.490366 master-0 kubenswrapper[4546]: I0216 17:24:02.490316 4546 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="8418547cd53261f1b77929899a0ab7c7d55cf1c91b349c65456cae4040067db4" exitCode=0 Feb 16 17:24:02.495629 master-0 systemd[1]: Started Kubernetes Kubelet. Feb 16 17:24:02.497038 master-0 kubenswrapper[4546]: I0216 17:24:02.497014 4546 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576" exitCode=0 Feb 16 17:24:02.501405 master-0 kubenswrapper[4546]: I0216 17:24:02.501373 4546 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="aa4ad8fed1eda81c2562cc6c50ee8eff149a61c6fa1ef5cf233edb4d1184264a" exitCode=0 Feb 16 17:24:02.501496 master-0 kubenswrapper[4546]: I0216 17:24:02.501404 4546 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="b1d0578227c4edafa8bba585414b028b5fae4c055f5a0b9d56187660cf9393ff" exitCode=0 Feb 16 17:24:02.501496 master-0 kubenswrapper[4546]: I0216 17:24:02.501418 4546 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="2f9e036184f8cd2fd14e3ee4e8e0984726c748a2f48514f7099254370b0935ca" exitCode=0 Feb 16 17:24:02.504967 master-0 kubenswrapper[4546]: I0216 17:24:02.503353 4546 generic.go:334] "Generic (PLEG): container finished" podID="a94f9b8e-b020-4aab-8373-6c056ec07464" containerID="102a2b3ff0c0802de14c69b4e98a9814b1e46ce4db6fc83e68edccac0436a089" exitCode=0 Feb 16 17:24:02.517331 master-0 systemd[1]: Stopping Kubernetes Kubelet... Feb 16 17:24:02.517432 master-0 kubenswrapper[4546]: I0216 17:24:02.517139 4546 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 17:24:02.528766 master-0 systemd[1]: kubelet.service: Deactivated successfully. Feb 16 17:24:02.529117 master-0 systemd[1]: Stopped Kubernetes Kubelet. 
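The long run of reconstruct.go:130 entries above is the restarted kubelet rebuilding its actual state of the world from the volume directories left under /var/lib/kubelet (the --root-dir shown in the flag dump below): each mount is first recorded as "uncertain" until the reconciler, started right after "Volume reconstruction finished", verifies or cleans it up. The generic.go:334 "container finished" entries that follow are the PLEG relisting containers that exited while the kubelet was down (exitCode 0 for clean exits, 1 and 255 for failures). To summarize a reconstruction burst like this one, a small stand-alone filter can tally uncertain volumes per pod UID. The following is an illustrative sketch (not kubelet code) that reads the journal on stdin and relies only on the field layout visible above:

    // parse-reconstruct.go — illustrative sketch, not part of the kubelet:
    // tally "Volume is marked as uncertain" entries by pod UID.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Matches the reconstruct.go:130 line format as it appears in this journal.
        re := regexp.MustCompile(`reconstruct\.go:130\].*?podName="([^"]+)" volumeName="([^"]+)"`)
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            // FindAll handles several entries run together on one line.
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                counts[m[1]]++ // m[1] = pod UID; m[2] = volume plugin path, unused here
            }
        }
        for pod, n := range counts {
            fmt.Printf("%s\t%d volumes reconstructed as uncertain\n", pod, n)
        }
    }

Fed with something like journalctl -u kubelet | go run parse-reconstruct.go, it prints one line per pod UID; switching the key to m[2] gives a per-volume-plugin breakdown instead.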
Feb 16 17:24:02.555939 master-0 systemd[1]: Starting Kubernetes Kubelet... Feb 16 17:24:02.652459 master-0 kubenswrapper[4652]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:24:02.652459 master-0 kubenswrapper[4652]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 17:24:02.652459 master-0 kubenswrapper[4652]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:24:02.652459 master-0 kubenswrapper[4652]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:24:02.652459 master-0 kubenswrapper[4652]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 17:24:02.652459 master-0 kubenswrapper[4652]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 17:24:02.653644 master-0 kubenswrapper[4652]: I0216 17:24:02.652555 4652 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 17:24:02.655625 master-0 kubenswrapper[4652]: W0216 17:24:02.655598 4652 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:24:02.655625 master-0 kubenswrapper[4652]: W0216 17:24:02.655616 4652 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:24:02.655625 master-0 kubenswrapper[4652]: W0216 17:24:02.655623 4652 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:24:02.655625 master-0 kubenswrapper[4652]: W0216 17:24:02.655628 4652 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655634 4652 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655639 4652 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655643 4652 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655648 4652 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655653 4652 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655657 4652 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655662 4652 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:24:02.655892 master-0 
kubenswrapper[4652]: W0216 17:24:02.655667 4652 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655671 4652 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655676 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655680 4652 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655685 4652 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655690 4652 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655694 4652 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655707 4652 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655711 4652 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655716 4652 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655720 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655725 4652 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:24:02.655892 master-0 kubenswrapper[4652]: W0216 17:24:02.655730 4652 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655735 4652 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655739 4652 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655744 4652 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655748 4652 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655753 4652 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655758 4652 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655762 4652 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655767 4652 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655771 4652 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655776 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655780 4652 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 
17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655787 4652 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655793 4652 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655799 4652 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655804 4652 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655811 4652 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655817 4652 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655823 4652 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:24:02.657043 master-0 kubenswrapper[4652]: W0216 17:24:02.655828 4652 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655833 4652 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655839 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655844 4652 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655849 4652 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655854 4652 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655859 4652 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655864 4652 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655868 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655874 4652 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655878 4652 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655883 4652 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655888 4652 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655892 4652 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655897 4652 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655901 4652 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:24:02.657828 master-0 
kubenswrapper[4652]: W0216 17:24:02.655906 4652 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655910 4652 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655916 4652 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655921 4652 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:24:02.657828 master-0 kubenswrapper[4652]: W0216 17:24:02.655928 4652 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: W0216 17:24:02.655935 4652 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: W0216 17:24:02.655940 4652 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: W0216 17:24:02.655946 4652 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: W0216 17:24:02.655951 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: W0216 17:24:02.655957 4652 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: W0216 17:24:02.655962 4652 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: W0216 17:24:02.655969 4652 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: W0216 17:24:02.655974 4652 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: W0216 17:24:02.655980 4652 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
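Both kubelet instances emit the same wall of feature_gate.go:330 warnings because the cluster-wide gate list carries many OpenShift-side names (GatewayAPI, PinnedImages, AutomatedEtcdBackup, ...) that the kubelet's own registry does not know; here an unknown name is logged at warning level and skipped rather than treated as fatal. Names the registry does know but that have graduated (ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders, CloudDualStackNodeIPs) or been deprecated (KMSv1) draw the feature_gate.go:353/351 removal warnings instead. A minimal sketch of this observed behavior, with a hypothetical two-entry registry standing in for the real gate table (this mimics the log output only; it is not the kubelet's actual feature_gate.go):

    // gates.go — sketch of the gate-application behavior visible above.
    package main

    import "log"

    type stability int

    const (
        alpha stability = iota
        beta
        ga
        deprecated
    )

    // known is a hypothetical registry; the real kubelet registry holds the
    // upstream Kubernetes gates, which is why OpenShift-only names warn above.
    var known = map[string]stability{
        "CloudDualStackNodeIPs": ga,
        "KMSv1":                 deprecated,
    }

    func set(name string, value bool) {
        s, ok := known[name]
        if !ok {
            log.Printf("W feature_gate: unrecognized feature gate: %s", name)
            return // skipped, not fatal
        }
        switch s {
        case ga:
            log.Printf("W feature_gate: Setting GA feature gate %s=%t. It will be removed in a future release.", name, value)
        case deprecated:
            log.Printf("W feature_gate: Setting deprecated feature gate %s=%t. It will be removed in a future release.", name, value)
        }
        // A real registry would record the value here for later Enabled(name) lookups.
    }

    func main() {
        set("GatewayAPI", true)            // unknown to this registry: "unrecognized"
        set("CloudDualStackNodeIPs", true) // GA: removal warning
        set("KMSv1", true)                 // deprecated: removal warning
    }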
Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656109 4652 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656121 4652 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656133 4652 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656141 4652 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656148 4652 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656154 4652 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656161 4652 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656167 4652 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656173 4652 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656179 4652 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656184 4652 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 17:24:02.658584 master-0 kubenswrapper[4652]: I0216 17:24:02.656191 4652 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656196 4652 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656202 4652 flags.go:64] FLAG: --cgroup-root="" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656207 4652 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656212 4652 flags.go:64] FLAG: --client-ca-file="" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656217 4652 flags.go:64] FLAG: --cloud-config="" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656222 4652 flags.go:64] FLAG: --cloud-provider="" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656236 4652 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656270 4652 flags.go:64] FLAG: --cluster-domain="" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656277 4652 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656283 4652 flags.go:64] FLAG: --config-dir="" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656287 4652 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656294 4652 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656301 4652 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656306 4652 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656312 4652 flags.go:64] FLAG: 
--containerd="/run/containerd/containerd.sock" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656318 4652 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656323 4652 flags.go:64] FLAG: --contention-profiling="false" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656345 4652 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656351 4652 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656357 4652 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656363 4652 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656370 4652 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656375 4652 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656381 4652 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 17:24:02.659586 master-0 kubenswrapper[4652]: I0216 17:24:02.656385 4652 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656391 4652 flags.go:64] FLAG: --enable-server="true" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656396 4652 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656406 4652 flags.go:64] FLAG: --event-burst="100" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656411 4652 flags.go:64] FLAG: --event-qps="50" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656416 4652 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656421 4652 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656426 4652 flags.go:64] FLAG: --eviction-hard="" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656432 4652 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656438 4652 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656442 4652 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656447 4652 flags.go:64] FLAG: --eviction-soft="" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656452 4652 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656457 4652 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656462 4652 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656467 4652 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656472 4652 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656476 4652 flags.go:64] FLAG: 
--fail-swap-on="true" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656490 4652 flags.go:64] FLAG: --feature-gates="" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656496 4652 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656501 4652 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656506 4652 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656511 4652 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656516 4652 flags.go:64] FLAG: --healthz-port="10248" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656521 4652 flags.go:64] FLAG: --help="false" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656526 4652 flags.go:64] FLAG: --hostname-override="" Feb 16 17:24:02.661319 master-0 kubenswrapper[4652]: I0216 17:24:02.656531 4652 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656535 4652 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656540 4652 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656545 4652 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656550 4652 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656555 4652 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656559 4652 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656564 4652 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656568 4652 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656573 4652 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656578 4652 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656583 4652 flags.go:64] FLAG: --kube-reserved="" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656588 4652 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656593 4652 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656598 4652 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656603 4652 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656608 4652 flags.go:64] FLAG: --lock-file="" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656612 4652 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656617 4652 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: 
I0216 17:24:02.656622 4652 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656629 4652 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656635 4652 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656639 4652 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656644 4652 flags.go:64] FLAG: --logging-format="text" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656649 4652 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 17:24:02.662698 master-0 kubenswrapper[4652]: I0216 17:24:02.656655 4652 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656660 4652 flags.go:64] FLAG: --manifest-url="" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656664 4652 flags.go:64] FLAG: --manifest-url-header="" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656676 4652 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656681 4652 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656687 4652 flags.go:64] FLAG: --max-pods="110" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656692 4652 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656696 4652 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656701 4652 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656706 4652 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656711 4652 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656716 4652 flags.go:64] FLAG: --node-ip="192.168.32.10" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656721 4652 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656743 4652 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656747 4652 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656753 4652 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656762 4652 flags.go:64] FLAG: --pod-cidr="" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656768 4652 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1593b6aac7bb18c1bbb5d41693e8b8c7f0c0410fcc09e15de52d8bd53e356541" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656776 4652 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656780 4652 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 17:24:02.665537 master-0 
kubenswrapper[4652]: I0216 17:24:02.656788 4652 flags.go:64] FLAG: --pods-per-core="0" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656793 4652 flags.go:64] FLAG: --port="10250" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656798 4652 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656809 4652 flags.go:64] FLAG: --provider-id="" Feb 16 17:24:02.665537 master-0 kubenswrapper[4652]: I0216 17:24:02.656814 4652 flags.go:64] FLAG: --qos-reserved="" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656818 4652 flags.go:64] FLAG: --read-only-port="10255" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656823 4652 flags.go:64] FLAG: --register-node="true" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656828 4652 flags.go:64] FLAG: --register-schedulable="true" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656832 4652 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656841 4652 flags.go:64] FLAG: --registry-burst="10" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656846 4652 flags.go:64] FLAG: --registry-qps="5" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656850 4652 flags.go:64] FLAG: --reserved-cpus="" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656855 4652 flags.go:64] FLAG: --reserved-memory="" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656861 4652 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656866 4652 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656871 4652 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656875 4652 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656880 4652 flags.go:64] FLAG: --runonce="false" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656885 4652 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656895 4652 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656900 4652 flags.go:64] FLAG: --seccomp-default="false" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656905 4652 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656909 4652 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656914 4652 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656919 4652 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656924 4652 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656929 4652 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656934 4652 flags.go:64] FLAG: 
--storage-driver-table="stats" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656941 4652 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 17:24:02.666741 master-0 kubenswrapper[4652]: I0216 17:24:02.656946 4652 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.656950 4652 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.656956 4652 flags.go:64] FLAG: --system-cgroups="" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.656962 4652 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.656970 4652 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.656974 4652 flags.go:64] FLAG: --tls-cert-file="" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.656979 4652 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.656988 4652 flags.go:64] FLAG: --tls-min-version="" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.656993 4652 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.656997 4652 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.657002 4652 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.657007 4652 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.657012 4652 flags.go:64] FLAG: --v="2" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.657018 4652 flags.go:64] FLAG: --version="false" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.657024 4652 flags.go:64] FLAG: --vmodule="" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.657030 4652 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: I0216 17:24:02.657035 4652 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: W0216 17:24:02.657180 4652 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: W0216 17:24:02.657186 4652 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: W0216 17:24:02.657191 4652 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: W0216 17:24:02.657195 4652 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: W0216 17:24:02.657201 4652 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
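The flags.go:64 block above is the kubelet echoing every registered command-line flag with its parsed value (defaults included) at -v=2, which is useful for auditing what actually reached the process: it confirms, for instance, that --container-runtime-endpoint=/var/run/crio/crio.sock, --register-with-taints, and --system-reserved are still passed on the command line, matching the deprecation warnings at startup that ask for them to move into the --config file /etc/kubernetes/kubelet.conf. The dump is a VisitAll walk over the parsed flag set; a stand-alone sketch of the pattern using the spf13/pflag library (illustrative, not the kubelet's exact helper):

    // printflags.go — sketch of the "FLAG: --name=value" startup dump, assuming
    // the github.com/spf13/pflag library the kubelet's flag handling is built on.
    package main

    import (
        "log"

        "github.com/spf13/pflag"
    )

    func main() {
        fs := pflag.NewFlagSet("kubelet-sketch", pflag.ExitOnError)
        fs.String("container-runtime-endpoint", "", "CRI endpoint")
        fs.String("config", "", "kubelet config file")
        _ = fs.Parse([]string{
            "--container-runtime-endpoint=/var/run/crio/crio.sock",
            "--config=/etc/kubernetes/kubelet.conf",
        })
        // Walk every registered flag, set or not, mirroring the flags.go:64 lines.
        fs.VisitAll(func(f *pflag.Flag) {
            log.Printf("FLAG: --%s=%q", f.Name, f.Value)
        })
    }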
Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: W0216 17:24:02.657207 4652 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:24:02.668090 master-0 kubenswrapper[4652]: W0216 17:24:02.657231 4652 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657236 4652 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657240 4652 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657272 4652 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657279 4652 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657283 4652 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657288 4652 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657293 4652 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657298 4652 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657302 4652 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657307 4652 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
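The warning set then repeats under the same PID 4652, so the gates are evidently applied more than once during startup (presumably once while the command line is parsed and again when the configuration from --config is resolved). When checking which gate names are actually involved, deduplicating the passes beats reading them inline; a companion sketch to the reconstruction filter above, again reading the journal on stdin:

    // uniqgates.go — sketch: collapse the repeated warning passes into the
    // distinct, sorted set of unrecognized gate names.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "sort"
    )

    func main() {
        re := regexp.MustCompile(`unrecognized feature gate: (\w+)`)
        seen := map[string]bool{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                seen[m[1]] = true
            }
        }
        names := make([]string, 0, len(seen))
        for n := range seen {
            names = append(names, n)
        }
        sort.Strings(names)
        for _, n := range names {
            fmt.Println(n)
        }
    }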
Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657312 4652 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657319 4652 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657324 4652 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657328 4652 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657332 4652 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657337 4652 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657341 4652 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657345 4652 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:24:02.669408 master-0 kubenswrapper[4652]: W0216 17:24:02.657349 4652 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657354 4652 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657358 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657362 4652 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657366 4652 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657371 4652 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657375 4652 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657379 4652 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657383 4652 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657388 4652 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657392 4652 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657396 4652 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657400 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657405 4652 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657409 4652 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 
17:24:02.657413 4652 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657417 4652 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657422 4652 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657426 4652 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657430 4652 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:24:02.670225 master-0 kubenswrapper[4652]: W0216 17:24:02.657439 4652 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657445 4652 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657450 4652 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657455 4652 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657459 4652 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657465 4652 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657470 4652 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657474 4652 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657479 4652 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657483 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657487 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657492 4652 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657496 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657500 4652 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657505 4652 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657509 4652 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657514 4652 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657518 4652 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657522 4652 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657527 4652 
feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:24:02.671060 master-0 kubenswrapper[4652]: W0216 17:24:02.657531 4652 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.657537 4652 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.657542 4652 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.657548 4652 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.657552 4652 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.657557 4652 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.657562 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: I0216 17:24:02.657569 4652 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: I0216 17:24:02.663949 4652 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: I0216 17:24:02.663974 4652 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.664064 4652 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.664074 4652 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.664084 4652 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.664091 4652 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.664097 4652 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:24:02.671908 master-0 kubenswrapper[4652]: W0216 17:24:02.664102 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664107 4652 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664112 4652 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664117 4652 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664123 4652 feature_gate.go:330] unrecognized feature 
gate: SetEIPForNLBIngressController Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664128 4652 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664135 4652 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664142 4652 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664148 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664153 4652 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664159 4652 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664164 4652 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664169 4652 feature_gate.go:330] unrecognized feature gate: Example Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664174 4652 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664180 4652 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664185 4652 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664190 4652 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664197 4652 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664206 4652 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 17:24:02.672699 master-0 kubenswrapper[4652]: W0216 17:24:02.664213 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664218 4652 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664224 4652 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664230 4652 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664235 4652 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664241 4652 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664280 4652 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664289 4652 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664296 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664302 4652 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664310 4652 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664318 4652 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664324 4652 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664331 4652 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664338 4652 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664344 4652 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664351 4652 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664358 4652 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664364 4652 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664371 4652 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:24:02.673732 master-0 kubenswrapper[4652]: W0216 17:24:02.664377 4652 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664384 4652 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664391 4652 
feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664397 4652 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664404 4652 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664413 4652 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664422 4652 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664432 4652 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664440 4652 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664448 4652 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664456 4652 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664464 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664471 4652 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664479 4652 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664487 4652 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664494 4652 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664501 4652 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664509 4652 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664515 4652 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 17:24:02.674965 master-0 kubenswrapper[4652]: W0216 17:24:02.664523 4652 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664531 4652 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664537 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664544 4652 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664550 4652 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664557 4652 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664564 4652 feature_gate.go:330] unrecognized 
feature gate: PersistentIPsForVirtualization Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664572 4652 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664579 4652 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: I0216 17:24:02.664591 4652 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664784 4652 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664797 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664804 4652 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664811 4652 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664817 4652 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 17:24:02.676198 master-0 kubenswrapper[4652]: W0216 17:24:02.664824 4652 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664831 4652 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664837 4652 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664844 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664850 4652 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664857 4652 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664864 4652 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664871 4652 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664877 4652 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664883 4652 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664892 4652 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
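The repeated blocks above show the same cluster-level feature-gate list being applied more than once during startup. Gate names the kubelet's own registry does not know (OpenShift-side gates such as RouteAdvertisements or PinnedImages) are logged as "unrecognized feature gate" and dropped; GA or deprecated gates that are explicitly set produce the feature_gate.go:351/353 notices; only the gates the kubelet actually knows survive into the "feature gates: {map[...]}" summary. A self-contained sketch of that warn-and-drop behavior follows (the registry contents and requested set here are a hypothetical subset, not the kubelet's real gate tables):

package main

import (
	"fmt"
	"sort"
)

// spec is a toy stand-in for an entry in the kubelet's feature-gate registry.
type spec struct {
	ga         bool
	deprecated bool
}

func main() {
	// Illustrative "known" set; the real registry is much larger.
	known := map[string]spec{
		"CloudDualStackNodeIPs":                  {ga: true},
		"DisableKubeletCloudCredentialProviders": {ga: true},
		"KMSv1":                                  {deprecated: true},
		"NodeSwap":                               {},
	}
	// Cluster-level gates handed to the kubelet, including a name it
	// does not recognize.
	requested := map[string]bool{
		"CloudDualStackNodeIPs":                  true,
		"DisableKubeletCloudCredentialProviders": true,
		"KMSv1":                                  true,
		"NodeSwap":                               false,
		"RouteAdvertisements":                    true,
	}
	effective := map[string]bool{}
	for name, val := range requested {
		s, ok := known[name]
		switch {
		case !ok:
			fmt.Printf("W] unrecognized feature gate: %s\n", name)
			continue // warned and dropped, cf. feature_gate.go:330
		case s.ga && val:
			fmt.Printf("W] Setting GA feature gate %s=true. It will be removed in a future release.\n", name)
		case s.deprecated:
			fmt.Printf("W] Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, val)
		}
		effective[name] = val
	}
	// Deterministic summary, mirroring the feature_gate.go:386 record.
	names := make([]string, 0, len(effective))
	for n := range effective {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		fmt.Printf("%s:%v ", n, effective[n])
	}
	fmt.Println()
}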
Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664902 4652 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664910 4652 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664917 4652 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664924 4652 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664932 4652 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664939 4652 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664947 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664954 4652 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 17:24:02.677806 master-0 kubenswrapper[4652]: W0216 17:24:02.664961 4652 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.664968 4652 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.664974 4652 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.664981 4652 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.664988 4652 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.664994 4652 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665000 4652 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665007 4652 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665013 4652 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665022 4652 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665029 4652 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665035 4652 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665042 4652 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665049 4652 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665055 4652 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665062 4652 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665068 4652 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665075 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665083 4652 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665089 4652 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 17:24:02.679096 master-0 kubenswrapper[4652]: W0216 17:24:02.665096 4652 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665102 4652 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665109 4652 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665116 4652 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665122 4652 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665129 4652 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665136 4652 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665143 4652 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665150 4652 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665157 4652 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665164 4652 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665173 4652 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665179 4652 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665189 4652 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665198 4652 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665242 4652 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665274 4652 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665282 4652 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665290 4652 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 17:24:02.679707 master-0 kubenswrapper[4652]: W0216 17:24:02.665298 4652 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: W0216 17:24:02.665305 4652 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: W0216 17:24:02.665312 4652 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: W0216 17:24:02.665321 4652 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: W0216 17:24:02.665330 4652 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: W0216 17:24:02.665338 4652 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: W0216 17:24:02.665347 4652 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: W0216 17:24:02.665353 4652 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: W0216 17:24:02.665360 4652 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: I0216 17:24:02.665371 4652 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: I0216 17:24:02.665598 4652 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: I0216 17:24:02.669592 4652 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: I0216 17:24:02.669684 4652 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
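The certificate_manager.go records below show why the rotation deadline (2026-02-17 10:22:05 UTC) falls well before the certificate's expiry (2026-02-17 16:50:49 UTC): client-go's certificate manager schedules renewal at a jittered fraction, roughly 70-90%, of the certificate's validity window, so the kubelet can obtain a new client certificate before the old one lapses. A minimal sketch of that computation (the 24h lifetime and the exact jitter constants are assumptions for illustration):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline sketches a client-go style schedule: renew at a
// jittered fraction (about 70-90%) of the certificate's validity
// window rather than at expiry. The constants are illustrative.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// Expiry taken from the certificate_manager.go record below; the
	// issue time is assumed (not logged here) to be 24h earlier.
	notAfter := time.Date(2026, 2, 17, 16, 50, 49, 0, time.UTC)
	notBefore := notAfter.Add(-24 * time.Hour)
	deadline := rotationDeadline(notBefore, notAfter)
	fmt.Println("rotation deadline:", deadline) // cf. 2026-02-17 10:22:05 UTC (~73% of 24h)
	fmt.Println("offset into lifetime:", deadline.Sub(notBefore))
}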
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: I0216 17:24:02.669924 4652 server.go:997] "Starting client certificate rotation"
Feb 16 17:24:02.680194 master-0 kubenswrapper[4652]: I0216 17:24:02.669933 4652 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 17:24:02.680585 master-0 kubenswrapper[4652]: I0216 17:24:02.670125 4652 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 10:22:05.62474866 +0000 UTC
Feb 16 17:24:02.680585 master-0 kubenswrapper[4652]: I0216 17:24:02.670211 4652 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 16h58m2.954542965s for next certificate rotation
Feb 16 17:24:02.680585 master-0 kubenswrapper[4652]: I0216 17:24:02.670586 4652 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 17:24:02.680585 master-0 kubenswrapper[4652]: I0216 17:24:02.671800 4652 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 17:24:02.680585 master-0 kubenswrapper[4652]: I0216 17:24:02.675668 4652 log.go:25] "Validated CRI v1 runtime API"
Feb 16 17:24:02.682733 master-0 kubenswrapper[4652]: I0216 17:24:02.682677 4652 log.go:25] "Validated CRI v1 image API"
Feb 16 17:24:02.684115 master-0 kubenswrapper[4652]: I0216 17:24:02.684082 4652 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 17:24:02.688679 master-0 kubenswrapper[4652]: I0216 17:24:02.688637 4652 fs.go:135] Filesystem UUIDs: map[35a0b0cc-84b1-4374-a18a-0f49ad7a8333:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Feb 16 17:24:02.689021 master-0 kubenswrapper[4652]: I0216 17:24:02.688687 4652 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710/userdata/shm major:0 minor:198 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/36d50210d5c52db4b7e6fdca90b019b559fb61ad6d363fa02b488e76691be827/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/36d50210d5c52db4b7e6fdca90b019b559fb61ad6d363fa02b488e76691be827/userdata/shm major:0 minor:63 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/37867a9b89ca658d12f1765647ed5e15e132bb4023f3490c258e8e8c2d9cc767/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37867a9b89ca658d12f1765647ed5e15e132bb4023f3490c258e8e8c2d9cc767/userdata/shm major:0 minor:305 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/3df38b68e675184d173d581679d4cdced10e660e88d294708f50f7b9553e8d93/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3df38b68e675184d173d581679d4cdced10e660e88d294708f50f7b9553e8d93/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538/userdata/shm major:0 minor:190 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/650c00271176c30e19eb9cdf1573f9c862bf460c2839e903b286275740a5a883/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/650c00271176c30e19eb9cdf1573f9c862bf460c2839e903b286275740a5a883/userdata/shm major:0 minor:294 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6/userdata/shm major:0 minor:274 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75/userdata/shm major:0 minor:181 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8a66c9c6ab0ffb6c022d21572d9ecd028be9e07d99ed15c25f8c09f001677ac9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8a66c9c6ab0ffb6c022d21572d9ecd028be9e07d99ed15c25f8c09f001677ac9/userdata/shm major:0 minor:317 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5/userdata/shm major:0 minor:230 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a/userdata/shm major:0 minor:199 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd/userdata/shm major:0 minor:234 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615/userdata/shm major:0 minor:231 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn:{mountpoint:/var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn major:0 minor:177 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q:{mountpoint:/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q major:0 minor:300 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld:{mountpoint:/var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld major:0 minor:171 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2:{mountpoint:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 major:0 minor:197 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~empty-dir/config-out major:0 minor:167 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~projected/kube-api-access-l67l5:{mountpoint:/var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~projected/kube-api-access-l67l5 major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl major:0 minor:188 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert major:0 minor:159 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x:{mountpoint:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x major:0 minor:206 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt major:0 minor:304 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls major:0 minor:73 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x:{mountpoint:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg:{mountpoint:/var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg major:0 minor:286 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:168 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd:{mountpoint:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd major:0 minor:194 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x major:0 minor:173 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls major:0 minor:158 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld 
major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:153 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/810a2275-fae5-45df-a3b8-92860451d33b/volumes/kubernetes.io~projected/kube-api-access-ktgm7:{mountpoint:/var/lib/kubelet/pods/810a2275-fae5-45df-a3b8-92860451d33b/volumes/kubernetes.io~projected/kube-api-access-ktgm7 major:0 minor:178 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:279 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s major:0 minor:251 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs major:0 minor:165 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:170 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:75 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g:{mountpoint:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g major:0 minor:293 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2 major:0 minor:180 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:161 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:163 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm:{mountpoint:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm major:0 minor:189 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl major:0 minor:179 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:76 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5:{mountpoint:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb:{mountpoint:/var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb major:0 minor:207 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~empty-dir/config-out:{mountpoint:/var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~empty-dir/config-out major:0 minor:169 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~projected/kube-api-access-vpjv7:{mountpoint:/var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~projected/kube-api-access-vpjv7 major:0 minor:285 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg:{mountpoint:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert major:0 minor:80 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455:{mountpoint:/var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455 major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:162 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp major:0 minor:157 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n:{mountpoint:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52:{mountpoint:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 major:0 minor:172 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67:{mountpoint:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz major:0 minor:278 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate major:0 minor:166 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs major:0 minor:164 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth major:0 minor:160 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz:{mountpoint:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz major:0 minor:185 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl:{mountpoint:/var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl major:0 minor:280 fsType:tmpfs blockSize:0} overlay_0-101:{mountpoint:/var/lib/containers/storage/overlay/223b14f09bd92b571bec07573e4d64e8f29e65918658ad1c2acc2878e11564d1/merged major:0 minor:101 fsType:overlay blockSize:0} overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/b0db791901f45fd5654338dd2da0eb3508d79b78d3f8cb72d22a21451c683bc0/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/3217e1ee7dd287e9bfa0a12eda546957253ddc939f6ad78b28d6688c7525bf90/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/bc5437cae35eb901d78c295b2b28eff9c541c40ba0583829acd0d4f2fce77ecc/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-128:{mountpoint:/var/lib/containers/storage/overlay/84c0f97d7e9bea3b48805bbfc88ba77bd77d8963075ddcb3676991e52345b90e/merged major:0 minor:128 fsType:overlay blockSize:0} overlay_0-133:{mountpoint:/var/lib/containers/storage/overlay/5ca6250e835705734911177363196d788d36817b0e28274fe1471db7fa0c4177/merged major:0 minor:133 
fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/45b977858c8f299fd63d108fc14bdb142050d92358e003e626592510a29ed84b/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-143:{mountpoint:/var/lib/containers/storage/overlay/2351df5d7875ee95502d968aa1fcb30d43c16a136cb2e8f85a6cac2a2ce069bb/merged major:0 minor:143 fsType:overlay blockSize:0} overlay_0-148:{mountpoint:/var/lib/containers/storage/overlay/e89801cb52095636fa5cea58f77c36ca9198c8fca2ac9b4be1ee7769fd7dcbb5/merged major:0 minor:148 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/e1226576351d7a031d95593d76ca981db72a4b6e1d59c1a178a2a66400319d01/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/67144324a812f946917e246b404371a610a5818469f5db04b4ce602c5c49a71d/merged major:0 minor:186 fsType:overlay blockSize:0} overlay_0-192:{mountpoint:/var/lib/containers/storage/overlay/827a9eb23593e0d91a55834b438c9ce334abca97ba94fc7c7ae26f5b3ac6d6da/merged major:0 minor:192 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/b9d7dc517425fa6bedc9ce661dfe29b8e3282f0914baabf66db7b8ac60242ad4/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-202:{mountpoint:/var/lib/containers/storage/overlay/34b4cf0f6787135236ddb100b16d6779e8361c14138a7442dc9d7f4d34a79500/merged major:0 minor:202 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/a209e21dead87f52e40f2b52b8165ff7cae5db925c3788792df9a8cef69316ae/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-208:{mountpoint:/var/lib/containers/storage/overlay/d91c0e28eab38161af331e9cc70d581d4f3530fa41a5225e83a9d6656ae6c50b/merged major:0 minor:208 fsType:overlay blockSize:0} overlay_0-210:{mountpoint:/var/lib/containers/storage/overlay/225f17838488c74ae37a35ac3744e93567d27d165ebc5a088828857cdfe7a520/merged major:0 minor:210 fsType:overlay blockSize:0} overlay_0-215:{mountpoint:/var/lib/containers/storage/overlay/436bcce2c5884dc66f1ab95dfb98466f0ede2261b7ff6c0cac19945310a856b8/merged major:0 minor:215 fsType:overlay blockSize:0} overlay_0-223:{mountpoint:/var/lib/containers/storage/overlay/62c57bb30e77191cccb3ff2f90d7682bf736ae48bb0a418f65d05356705619cb/merged major:0 minor:223 fsType:overlay blockSize:0} overlay_0-235:{mountpoint:/var/lib/containers/storage/overlay/e6da6301aed8074d3fd311137450d4cb57f55b1a3e2ee14b502ad7317d8550c0/merged major:0 minor:235 fsType:overlay blockSize:0} overlay_0-237:{mountpoint:/var/lib/containers/storage/overlay/4f7015e552eb0be13a2897a2e51bfd736d7d8b706ee4f5e0fa1fac25fdb6eb32/merged major:0 minor:237 fsType:overlay blockSize:0} overlay_0-240:{mountpoint:/var/lib/containers/storage/overlay/b0bf2ca1a482610de6fa3bd76f99d89649d9ce67b4bd1cc022d7660e55ac843e/merged major:0 minor:240 fsType:overlay blockSize:0} overlay_0-242:{mountpoint:/var/lib/containers/storage/overlay/f66174227e4516f481dfec45e9056d578468fd08b39786f06f943466ffad1b1e/merged major:0 minor:242 fsType:overlay blockSize:0} overlay_0-244:{mountpoint:/var/lib/containers/storage/overlay/561699ce7451f43e70b35e91ababd90ee1823acf0e80e130e54721e0cea2f4ca/merged major:0 minor:244 fsType:overlay blockSize:0} overlay_0-247:{mountpoint:/var/lib/containers/storage/overlay/cf1e3c776d39e6deb224fa718ebf5b8c39b938c9587d033d945d89857ee931e5/merged major:0 minor:247 fsType:overlay blockSize:0} 
overlay_0-254:{mountpoint:/var/lib/containers/storage/overlay/f02ddf8beaf1e955c5e3fb42304d2241c08d3b5bac5e2f1c55b538ea1609f340/merged major:0 minor:254 fsType:overlay blockSize:0} overlay_0-256:{mountpoint:/var/lib/containers/storage/overlay/212936f4b65e6d6d856fd8a6e416922df81878df8f740a3ac07ecf2f0f3365d1/merged major:0 minor:256 fsType:overlay blockSize:0} overlay_0-258:{mountpoint:/var/lib/containers/storage/overlay/34d0129db2a2e80bfec09bc66e873b338869ffd64d1e6089b0e351a72080da60/merged major:0 minor:258 fsType:overlay blockSize:0} overlay_0-263:{mountpoint:/var/lib/containers/storage/overlay/d6b0c9ffc7461d15a9cda7a0df2fc35149944470f7f1884104ad995497066bd6/merged major:0 minor:263 fsType:overlay blockSize:0} overlay_0-269:{mountpoint:/var/lib/containers/storage/overlay/18292d86671ac228c78cdb29c79a509f5cf306ad2f0fcebbc5de1222d3702fb0/merged major:0 minor:269 fsType:overlay blockSize:0} overlay_0-272:{mountpoint:/var/lib/containers/storage/overlay/df260ba957fcad41c488a422ad109784dba60d4c5a6764911e09dcc74aede15a/merged major:0 minor:272 fsType:overlay blockSize:0} overlay_0-276:{mountpoint:/var/lib/containers/storage/overlay/204b3eccbe1aac37ea8384527dbd1118a204430f58516d97d3edb8fe6be0d4f4/merged major:0 minor:276 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/a65dea2abb27e36ea39fceae552fe04e76fa3619d551380f842b66a76c271601/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/72f58253862cf97daf3deb8e3b3076ba2c9bd192a2562f7e0bf92d7b16f01cea/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/f2ee7c3532497a29918747011941f07af864cab4fe52c415029ccbb32b12c15c/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/664e5f37ea2b1a0025a414ce40aa0188e92a062430a290c447bc3df865bf2e14/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/d15e19c57b5fad57b3465846974602c0355855b31d1faf9a4804e0f1193f2cb1/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-296:{mountpoint:/var/lib/containers/storage/overlay/b5c78aa6efe9c10e3e4c0a49fe98281e83dbad2e1125401569d52fd664fa1df8/merged major:0 minor:296 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/23fd83666f0607e9073b237c5f3d2dbeedceabdb8e42f1da0c7c8c7e2ec56ae4/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/7532dde41c7173802167d77a4b796ab2f36bdc1b79005f5c852190cda02d8345/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/e809714da25c0b6c67c4aaad33e85bf712783cc8fe6823c7eea78e2b6654ddff/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/270c988e6ff7290d7bd0516ba407ee6ea0d10fa3a00ce3f8b68162bb1b6658e1/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/fa2d948d294b6135eb5f1bc5dab6a66e48355f7573187307aebc28f51a5ef1bb/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-315:{mountpoint:/var/lib/containers/storage/overlay/70cd4935443e1cb66cfcef36452ce39b022240fe264fc3a70ec7cc6a8b5d8074/merged major:0 minor:315 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/989ecb2824e8f201ff415bb2d40db9f828dd388db97777f97daba5848300c5eb/merged 
major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/f4da991c975295317952534da7850506c233b6dbe6465a4de13ce848336a915c/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-43:{mountpoint:/var/lib/containers/storage/overlay/032d34e37ac1961cb9976fa1ab6f8f64e19f0a8a7339bd89beb9e1b2d6ec8bc2/merged major:0 minor:43 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/cf011d722fb721e24b397773813a77c67ff3e9e0b15c084ff23de99dc64b3298/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/13903d96e709eb25fd6a8bf8efa13fb3d7bef214f7f473da1f8323842a93cf7d/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/c1f339898417d1d18dd85b46049080ce34a4158ae19f1e71dc51d7d7f9bed451/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-54:{mountpoint:/var/lib/containers/storage/overlay/85923ebbcba601b19a533bae1a81399803e745ef6d9ed9f09a814b426d0c2ed8/merged major:0 minor:54 fsType:overlay blockSize:0} overlay_0-57:{mountpoint:/var/lib/containers/storage/overlay/6a50798e11b78174b946a23ab93379e690ad6765372c074ed2dc3971c35935bd/merged major:0 minor:57 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/bfffbf2b09dcea91c746c2147bd09e944a60aafd9775eb13da0ab85d9e38c731/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/f8478984223dfcdb31850f42f8c6ec515a6b75e29246668dacac0837a72a8b96/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/a5de566836074329c1f2cb1ed3899418c1921f50df937b19ffc50692a307de16/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/796ccd19bfca6ca91aa5fe72897547e306a2ecfe6bf5eed00caf95c24d3c7e89/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-70:{mountpoint:/var/lib/containers/storage/overlay/d94ebbc245212a395a308b43c98c1b6a901dcacba93db21c4a1187b94856f2c2/merged major:0 minor:70 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/b73b320d06822aecbc1af158bf2064c759fb0306afc38dde62f0ece4bd1f4dc5/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/82d6ceaccb29716141b0fe7890d347ac191904e00c084d9526c635719057c9af/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/ffe173c2993a3466ab10eb204d3aa5665d4413b635839c645bdb9b0b5adec3c7/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/6a4bfe5200f6146240ca4a60e5d5917561bcb3880d2f2e866af91769b863eb4e/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/e0053ea6a980ca911506223a8a6073ab84a417b0c8662c665d5da6245f9b31f4/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-86:{mountpoint:/var/lib/containers/storage/overlay/eefdf392125223da77614dbff0dff99c4cd3d38e0da1aed93a92dddf16b86eb6/merged major:0 minor:86 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/f9fb3b36252fe85ee0690e452bee88f86dc705968822e4a5fa68091324ed2256/merged major:0 minor:88 fsType:overlay blockSize:0} 
overlay_0-90:{mountpoint:/var/lib/containers/storage/overlay/29f53996616af0af76d2d191f60d4ffb31ca8b99a98d88e09ba44be991f4b5e1/merged major:0 minor:90 fsType:overlay blockSize:0}] Feb 16 17:24:02.708627 master-0 kubenswrapper[4652]: I0216 17:24:02.708063 4652 manager.go:217] Machine: {Timestamp:2026-02-16 17:24:02.707148596 +0000 UTC m=+0.095317132 CPUVendorID:AuthenticAMD NumCores:16 NumPhysicalCores:1 NumSockets:16 CpuFrequency:2800000 MemoryCapacity:50514141184 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:47bfea951bd14de8bb3b008f6812b13f SystemUUID:47bfea95-1bd1-4de8-bb3b-008f6812b13f BootID:bff30cf7-71da-4e66-9940-13ec1ab42f05 Filesystems:[{Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~projected/kube-api-access-gvw4s DeviceMajor:0 DeviceMinor:251 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3df38b68e675184d173d581679d4cdced10e660e88d294708f50f7b9553e8d93/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~projected/kube-api-access-94kdz DeviceMajor:0 DeviceMinor:278 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/55d635cd-1f0d-4086-96f2-9f3524f3f18c/volumes/kubernetes.io~projected/kube-api-access-76rtg DeviceMajor:0 DeviceMinor:286 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-296 DeviceMajor:0 DeviceMinor:296 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/991e68de3901c2fa1007e2e130ec8671c0a957ba6c92997b14008db00c17ebb5/userdata/shm DeviceMajor:0 DeviceMinor:230 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e858d51ba7f7c1ad0ca843ee57bf6eb31850a3a502ba6e109fa74505612f66cd/userdata/shm DeviceMajor:0 DeviceMinor:234 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-258 DeviceMajor:0 DeviceMinor:258 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-57 DeviceMajor:0 DeviceMinor:57 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:227 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-254 DeviceMajor:0 DeviceMinor:254 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-272 DeviceMajor:0 DeviceMinor:272 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ba37ef0e-373c-4ccc-b082-668630399765/volumes/kubernetes.io~projected/kube-api-access-57455 DeviceMajor:0 DeviceMinor:228 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~projected/kube-api-access-6ftld DeviceMajor:0 DeviceMinor:265 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-269 DeviceMajor:0 DeviceMinor:269 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-202 DeviceMajor:0 DeviceMinor:202 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-256 DeviceMajor:0 DeviceMinor:256 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:157 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-244 DeviceMajor:0 DeviceMinor:244 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-133 DeviceMajor:0 DeviceMinor:133 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~projected/kube-api-access-vk7xl DeviceMajor:0 DeviceMinor:188 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-237 DeviceMajor:0 DeviceMinor:237 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-143 DeviceMajor:0 DeviceMinor:143 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:158 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~projected/kube-api-access-l67l5 DeviceMajor:0 DeviceMinor:229 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~projected/kube-api-access-wn82n DeviceMajor:0 DeviceMinor:250 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:10102829056 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-54 DeviceMajor:0 DeviceMinor:54 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:164 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c8729b1a-e365-4cf7-8a05-91a9987dabe9/volumes/kubernetes.io~projected/kube-api-access-hmj52 DeviceMajor:0 DeviceMinor:172 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-242 DeviceMajor:0 DeviceMinor:242 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~projected/kube-api-access-9xrw2 DeviceMajor:0 DeviceMinor:303 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d96ccdc-0b09-437d-bfca-1958af5d9953/volumes/kubernetes.io~projected/kube-api-access-zl5w2 DeviceMajor:0 DeviceMinor:197 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-247 DeviceMajor:0 DeviceMinor:247 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:222 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:25257070592 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-70 DeviceMajor:0 DeviceMinor:70 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-101 DeviceMajor:0 DeviceMinor:101 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-148 DeviceMajor:0 DeviceMinor:148 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:165 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3f5274f53616b7f3f394dd7e765c70a0d9d9d82d26946040a2390d3b98008538/userdata/shm DeviceMajor:0 DeviceMinor:190 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ad805251-19d0-4d2f-b741-7d11158f1f03/volumes/kubernetes.io~projected/kube-api-access-bnnc5 DeviceMajor:0 DeviceMinor:220 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~projected/kube-api-access-r87zw DeviceMajor:0 DeviceMinor:221 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:25257070592 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:160 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/37867a9b89ca658d12f1765647ed5e15e132bb4023f3490c258e8e8c2d9cc767/userdata/shm DeviceMajor:0 DeviceMinor:305 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-128 DeviceMajor:0 
DeviceMinor:128 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6ad958f-25e4-40cb-89ec-5da9cb6395c7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:80 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/06067627-6ccf-4cc8-bd20-dabdd776bb46/volumes/kubernetes.io~projected/kube-api-access-pq4dn DeviceMajor:0 DeviceMinor:177 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fc1859bea800a3c6a414cc64bcfd32dfbc9f487ecf2e012603f9cd17e1541615/userdata/shm DeviceMajor:0 DeviceMinor:231 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-43 DeviceMajor:0 DeviceMinor:43 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:73 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ab5760f1-b2e0-4138-9383-e4827154ac50/volumes/kubernetes.io~projected/kube-api-access-j5qxm DeviceMajor:0 DeviceMinor:189 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:162 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-276 DeviceMajor:0 DeviceMinor:276 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fe8e8e5d-cebb-4361-b765-5ff737f5e838/volumes/kubernetes.io~projected/kube-api-access-j99jl DeviceMajor:0 DeviceMinor:280 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/9609a4f3-b947-47af-a685-baae26c50fa3/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:279 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-315 DeviceMajor:0 DeviceMinor:315 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-90 DeviceMajor:0 DeviceMinor:90 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9f9bf4ab-5415-4616-aa36-ea387c699ea9/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:75 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/volumes/kubernetes.io~projected/kube-api-access-8p2jz DeviceMajor:0 DeviceMinor:185 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/36d50210d5c52db4b7e6fdca90b019b559fb61ad6d363fa02b488e76691be827/userdata/shm DeviceMajor:0 DeviceMinor:63 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/5192fa49-d81c-47ce-b2ab-f90996cc0bd5/volumes/kubernetes.io~projected/kube-api-access-2gq8x DeviceMajor:0 DeviceMinor:246 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/e10d0b0c-4c2a-45b3-8d69-3070d566b97d/volumes/kubernetes.io~projected/kube-api-access-j7w67 DeviceMajor:0 DeviceMinor:260 
Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/86caf26a899fa2ef707de37b05830737248aa086d2a7fc23bbea1ac0ba7504f6/userdata/shm DeviceMajor:0 DeviceMinor:274 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9c48005e-c4df-4332-87fc-ec028f2c6921/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:170 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/648abb6c-9c81-4e5c-b5f1-3b7eb254f743/volumes/kubernetes.io~projected/kube-api-access-sx92x DeviceMajor:0 DeviceMinor:173 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8a66c9c6ab0ffb6c022d21572d9ecd028be9e07d99ed15c25f8c09f001677ac9/userdata/shm DeviceMajor:0 DeviceMinor:317 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-192 DeviceMajor:0 DeviceMinor:192 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9b5d8f819f97cebd14131d50dc1935b79709a51c884f493fa2fa58cc6a695b9a/userdata/shm DeviceMajor:0 DeviceMinor:199 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/43f65f23-4ddd-471a-9cb3-b0945382d83c/volumes/kubernetes.io~projected/kube-api-access-8r28x DeviceMajor:0 DeviceMinor:206 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/a6fe41b0-1a42-4f07-8220-d9aaa50788ad/volumes/kubernetes.io~projected/kube-api-access-8m29g DeviceMajor:0 DeviceMinor:293 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:161 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~projected/kube-api-access-vpjv7 DeviceMajor:0 DeviceMinor:285 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/441fa2dcc2054ce74afe608caccc7ace43169040cc77c644b15838983f1c426d/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8829b7ba3dde2781a29cd29841cecd44ba49a0453c7b226cb4e93d3298990b75/userdata/shm DeviceMajor:0 DeviceMinor:181 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/volumes/kubernetes.io~projected/kube-api-access-b5mwd DeviceMajor:0 DeviceMinor:194 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-208 DeviceMajor:0 DeviceMinor:208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/7cb9dc3e1cfd504ac51740676aa8abfea42a74cb0bb3c1ae429538ab24b08f03/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b04ee64e-5e83-499c-812d-749b2b6824c6/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:169 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-210 DeviceMajor:0 DeviceMinor:210 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/702322ac-7610-4568-9a68-b6acbd1f0c12/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:153 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ae20b683-dac8-419e-808a-ddcdb3c564e1/volumes/kubernetes.io~projected/kube-api-access-f69cb DeviceMajor:0 DeviceMinor:207 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-86 DeviceMajor:0 DeviceMinor:86 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes/kubernetes.io~projected/kube-api-access-t7l6q DeviceMajor:0 DeviceMinor:300 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-215 DeviceMajor:0 DeviceMinor:215 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-240 DeviceMajor:0 DeviceMinor:240 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/650c00271176c30e19eb9cdf1573f9c862bf460c2839e903b286275740a5a883/userdata/shm DeviceMajor:0 DeviceMinor:294 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/39387549-c636-4bd4-b463-f6a93810f277/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:159 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-223 DeviceMajor:0 DeviceMinor:223 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-263 DeviceMajor:0 DeviceMinor:263 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4ab1b3b76e20d135df3b1131111388991974b01e8267bfd94f88542db725e3af/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4549ea98-7379-49e1-8452-5efb643137ca/volumes/kubernetes.io~projected/kube-api-access-zt8mt DeviceMajor:0 DeviceMinor:304 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:166 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/5a939dd0-fc27-4d47-b81b-96e13e4bbca9/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:168 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/810a2275-fae5-45df-a3b8-92860451d33b/volumes/kubernetes.io~projected/kube-api-access-ktgm7 DeviceMajor:0 DeviceMinor:178 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/00316e53224294770a34d485da1701e46dd1e2fb2c2bd8ae7389a5dd2d782710/userdata/shm DeviceMajor:0 DeviceMinor:198 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:76 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:163 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e/volumes/kubernetes.io~empty-dir/config-out DeviceMajor:0 DeviceMinor:167 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/a94f9b8e-b020-4aab-8373-6c056ec07464/volumes/kubernetes.io~projected/kube-api-access-8nfk2 DeviceMajor:0 DeviceMinor:180 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/b3fa6ac1-781f-446c-b6b4-18bdb7723c23/volumes/kubernetes.io~projected/kube-api-access-q46jg DeviceMajor:0 DeviceMinor:249 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9c1f9646afbb62e247cefae88e6ea50065550d78f7935c044f7dcb7faa56701d/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:6166277 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/2d1636c0-f34d-444c-822d-77f1d203ddc4/volumes/kubernetes.io~projected/kube-api-access-vbtld DeviceMajor:0 DeviceMinor:171 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:/var/lib/kubelet/pods/ab80e0fb-09dd-4c93-b235-1487024105d2/volumes/kubernetes.io~projected/kube-api-access-fkwxl DeviceMajor:0 DeviceMinor:179 Capacity:49335554048 Type:vfs Inodes:6166277 HasInodes:true} {Device:overlay_0-235 DeviceMajor:0 DeviceMinor:235 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 
Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:52:47:03:db:66:8a Speed:0 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:2c:b9:e2 Speed:-1 Mtu:9000} {Name:eth2 MacAddress:fa:16:3e:4a:2e:ce Speed:-1 Mtu:9000} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:46:8e:c9:b8:46:79 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:50514141184 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[12] Caches:[{Id:12 Size:32768 Type:Data Level:1} {Id:12 Size:32768 Type:Instruction Level:1} {Id:12 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:12 Size:16777216 Type:Unified Level:3}] SocketID:12 BookID: DrawerID:} {Id:0 Threads:[13] Caches:[{Id:13 Size:32768 Type:Data Level:1} {Id:13 Size:32768 Type:Instruction Level:1} {Id:13 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:13 Size:16777216 Type:Unified Level:3}] SocketID:13 BookID: DrawerID:} {Id:0 Threads:[14] Caches:[{Id:14 Size:32768 Type:Data Level:1} {Id:14 Size:32768 Type:Instruction Level:1} {Id:14 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:14 Size:16777216 Type:Unified Level:3}] SocketID:14 BookID: DrawerID:} {Id:0 Threads:[15] Caches:[{Id:15 Size:32768 Type:Data Level:1} {Id:15 Size:32768 Type:Instruction Level:1} {Id:15 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:15 Size:16777216 Type:Unified Level:3}] SocketID:15 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 
Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 16 17:24:02.708627 master-0 kubenswrapper[4652]: I0216 17:24:02.708607 4652 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Feb 16 17:24:02.708943 master-0 kubenswrapper[4652]: I0216 17:24:02.708722 4652 manager.go:233] Version: {KernelVersion:5.14.0-427.107.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202601202224-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 16 17:24:02.708943 master-0 kubenswrapper[4652]: I0216 17:24:02.708914 4652 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 16 17:24:02.709081 master-0 kubenswrapper[4652]: I0216 17:24:02.709036 4652 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 16 17:24:02.709277 master-0 kubenswrapper[4652]: I0216 17:24:02.709074 4652 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 16 17:24:02.709332 master-0 kubenswrapper[4652]: I0216 17:24:02.709294 4652 topology_manager.go:138] "Creating topology manager with none policy" Feb 16 17:24:02.709332 master-0 kubenswrapper[4652]: I0216 17:24:02.709303 4652 container_manager_linux.go:303] "Creating device plugin manager" Feb 16 17:24:02.709332 master-0 kubenswrapper[4652]: I0216 17:24:02.709310 4652 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:24:02.709332 master-0 kubenswrapper[4652]: I0216 17:24:02.709331 4652 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 16 17:24:02.709503 master-0 kubenswrapper[4652]: I0216 17:24:02.709473 4652 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:24:02.709573 master-0 kubenswrapper[4652]: I0216 17:24:02.709550 4652 server.go:1245] "Using root directory" path="/var/lib/kubelet" Feb 16 17:24:02.709623 master-0 kubenswrapper[4652]: I0216 17:24:02.709609 4652 kubelet.go:418] "Attempting to sync node with API server" Feb 16 17:24:02.709654 master-0 kubenswrapper[4652]: I0216 17:24:02.709624 4652 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 16 17:24:02.709654 master-0 kubenswrapper[4652]: I0216 17:24:02.709637 4652 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 16 17:24:02.709654 master-0 kubenswrapper[4652]: I0216 17:24:02.709648 4652 kubelet.go:324] "Adding apiserver pod source" Feb 16 17:24:02.709751 master-0 kubenswrapper[4652]: I0216 17:24:02.709664 4652 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 16 17:24:02.710658 master-0 kubenswrapper[4652]: I0216 17:24:02.710627 4652 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-3.rhaos4.18.gite0b87e5.el9" apiVersion="v1" Feb 16 17:24:02.710780 master-0 
kubenswrapper[4652]: I0216 17:24:02.710759 4652 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711037 4652 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711151 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711166 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711172 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711178 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711184 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711191 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711197 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711204 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711212 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711218 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711268 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711281 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711306 4652 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711638 4652 server.go:1280] "Started kubelet" Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711728 4652 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711771 4652 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.711845 4652 server_v1.go:47] "podresources" method="list" useActivePods=true Feb 16 17:24:02.711393 master-0 kubenswrapper[4652]: I0216 17:24:02.712373 4652 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 16 17:24:02.712827 master-0 systemd[1]: Started Kubernetes Kubelet. 
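The records above show kubelet PID 4652 completing startup on master-0: the server listens on 0.0.0.0:10250, and the podresources gRPC service is brought up on unix:/var/lib/kubelet/pod-resources/kubelet.sock with the "list" endpoint rate-limited to qps=100, burstTokens=10. As an illustration only (this sketch is not part of the captured journal), a minimal Go client for listing pods over that socket from the node itself could look like the following. It assumes the generated k8s.io/kubelet/pkg/apis/podresources/v1 client and google.golang.org/grpc modules are available, and that it runs locally on the node with permission to open the socket (typically root, since the socket lives under /var/lib/kubelet):

    // podres-list.go: hedged sketch of a podresources List call against the
    // socket path logged by server.go:236 above. Not from the journal capture.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        v1 "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Socket path taken verbatim from the kubelet log record above.
        conn, err := grpc.DialContext(ctx,
            "unix:///var/lib/kubelet/pod-resources/kubelet.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := v1.NewPodResourcesListerClient(conn)
        resp, err := client.List(ctx, &v1.ListPodResourcesRequest{})
        if err != nil {
            panic(err)
        }
        // One line per pod the kubelet is currently tracking.
        for _, p := range resp.GetPodResources() {
            fmt.Printf("%s/%s: %d containers\n",
                p.GetNamespace(), p.GetName(), len(p.GetContainers()))
        }
    }

On a healthy node this would print one line per pod reported by the kubelet; a timeout here usually means the socket path differs, the kubelet has not yet reached "Started kubelet" as logged above, or the caller lacks permission on the socket.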
Feb 16 17:24:02.716423 master-0 kubenswrapper[4652]: I0216 17:24:02.716305 4652 server.go:449] "Adding debug handlers to kubelet server" Feb 16 17:24:02.719306 master-0 kubenswrapper[4652]: I0216 17:24:02.719269 4652 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:24:02.720466 master-0 kubenswrapper[4652]: I0216 17:24:02.720274 4652 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 16 17:24:02.720466 master-0 kubenswrapper[4652]: I0216 17:24:02.720376 4652 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-17 16:50:49 +0000 UTC, rotation deadline is 2026-02-17 13:21:56.935115343 +0000 UTC Feb 16 17:24:02.720466 master-0 kubenswrapper[4652]: I0216 17:24:02.720413 4652 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h57m54.214706425s for next certificate rotation Feb 16 17:24:02.721326 master-0 kubenswrapper[4652]: I0216 17:24:02.720741 4652 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:24:02.721326 master-0 kubenswrapper[4652]: E0216 17:24:02.720747 4652 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Feb 16 17:24:02.721326 master-0 kubenswrapper[4652]: I0216 17:24:02.720904 4652 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 17:24:02.721326 master-0 kubenswrapper[4652]: I0216 17:24:02.721112 4652 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 16 17:24:02.721326 master-0 kubenswrapper[4652]: I0216 17:24:02.721129 4652 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 16 17:24:02.721326 master-0 kubenswrapper[4652]: I0216 17:24:02.721161 4652 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Feb 16 17:24:02.722048 master-0 kubenswrapper[4652]: I0216 17:24:02.721837 4652 factory.go:55] Registering systemd factory Feb 16 17:24:02.722048 master-0 kubenswrapper[4652]: I0216 17:24:02.721877 4652 factory.go:221] Registration of the systemd container factory successfully Feb 16 17:24:02.722138 master-0 kubenswrapper[4652]: I0216 17:24:02.722100 4652 factory.go:153] Registering CRI-O factory Feb 16 17:24:02.722138 master-0 kubenswrapper[4652]: I0216 17:24:02.722118 4652 factory.go:221] Registration of the crio container factory successfully Feb 16 17:24:02.722877 master-0 kubenswrapper[4652]: I0216 17:24:02.722197 4652 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 17:24:02.722877 master-0 kubenswrapper[4652]: I0216 17:24:02.722228 4652 factory.go:103] Registering Raw factory Feb 16 17:24:02.722877 master-0 kubenswrapper[4652]: I0216 17:24:02.722266 4652 manager.go:1196] Started watching for new ooms in manager Feb 16 17:24:02.722877 master-0 kubenswrapper[4652]: I0216 17:24:02.722763 4652 manager.go:319] Starting recovery of all containers Feb 16 17:24:02.724130 master-0 kubenswrapper[4652]: I0216 17:24:02.723925 4652 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:24:02.737231 master-0 kubenswrapper[4652]: I0216 17:24:02.737157 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb" seLinuxMountContext="" Feb 16 17:24:02.737231 master-0 kubenswrapper[4652]: I0216 17:24:02.737219 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert" seLinuxMountContext="" Feb 16 17:24:02.737231 master-0 kubenswrapper[4652]: I0216 17:24:02.737231 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca" seLinuxMountContext="" Feb 16 17:24:02.737231 master-0 kubenswrapper[4652]: I0216 17:24:02.737240 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737268 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737277 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737285 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737293 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c303189e-adae-4fe2-8dd7-cc9b80f73e66" volumeName="kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737303 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737312 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737321 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737329 
4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737337 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737348 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737356 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737364 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737374 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737385 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737393 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737402 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737410 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737438 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" 
volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737447 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737461 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737480 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737493 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737509 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737519 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets" seLinuxMountContext="" Feb 16 17:24:02.737510 master-0 kubenswrapper[4652]: I0216 17:24:02.737530 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737559 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737569 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737577 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737586 4652 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737608 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737618 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f44170a-3c1c-4944-b971-251f75a51fc3" volumeName="kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737626 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737634 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737656 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737666 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54fba066-0e9e-49f6-8a86-34d5b4b660df" volumeName="kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737675 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737684 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737693 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737701 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle" seLinuxMountContext="" Feb 16 
17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737709 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737719 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737728 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737739 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737748 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737758 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737766 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737775 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737783 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737796 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737805 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" 
volumeName="kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737816 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737825 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737834 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737843 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737853 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737861 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737869 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737878 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737887 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737895 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 
kubenswrapper[4652]: I0216 17:24:02.737905 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737937 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737947 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737956 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737965 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737975 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737983 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.737994 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738007 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738028 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738040 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738052 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738064 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738076 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738087 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738098 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738109 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738119 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738131 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738145 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738164 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config" 
seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738181 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738193 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738206 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738218 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle" seLinuxMountContext="" Feb 16 17:24:02.738166 master-0 kubenswrapper[4652]: I0216 17:24:02.738227 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738236 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738271 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0517b180-00ee-47fe-a8e7-36a3931b7e72" volumeName="kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738281 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738290 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738300 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738309 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738320 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738331 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738339 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738348 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738356 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738366 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738374 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738384 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738397 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738407 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 
kubenswrapper[4652]: I0216 17:24:02.738416 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738426 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738434 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738444 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738453 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738462 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738473 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738482 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738491 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738501 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738511 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" 
volumeName="kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738520 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738529 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738539 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738548 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738557 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738566 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738583 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738592 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738600 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738608 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738616 4652 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738624 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738632 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738640 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738648 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738656 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738664 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738672 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738681 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738690 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738698 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" 
volumeName="kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738706 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738713 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738721 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738729 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738739 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738747 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738756 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738763 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc9a20f4-255a-4312-8f43-174a28c06340" volumeName="kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738771 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738778 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738786 4652 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738794 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738802 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738817 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738826 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738834 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" volumeName="kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738842 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738850 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738861 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738872 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738882 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 
kubenswrapper[4652]: I0216 17:24:02.738892 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738903 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d980a9a-2574-41b9-b970-0718cd97c8cd" volumeName="kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738912 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738927 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738945 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738956 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738967 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738980 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="404c402a-705f-4352-b9df-b89562070d9c" volumeName="kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.738991 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739000 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739011 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" 
volumeName="kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739021 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739030 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3beb7bf-922f-425d-8a19-fd407a7153a8" volumeName="kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739039 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62220aa5-4065-472c-8a17-c0a58942ab8a" volumeName="kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739051 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6b3e071c-1c62-489b-91c1-aef0d197f40b" volumeName="kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739061 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739070 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739078 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d96ccdc-0b09-437d-bfca-1958af5d9953" volumeName="kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739086 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739094 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739103 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739111 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739118 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c8729b1a-e365-4cf7-8a05-91a9987dabe9" volumeName="kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739126 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739135 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="62fc29f4-557f-4a75-8b78-6ca425c81b81" volumeName="kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739142 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="702322ac-7610-4568-9a68-b6acbd1f0c12" volumeName="kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739151 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739159 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739168 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739176 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739184 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739193 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 
17:24:02.739201 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739208 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739218 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739226 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739234 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9609a4f3-b947-47af-a685-baae26c50fa3" volumeName="kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739258 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739268 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739276 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739286 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739294 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739303 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" 
volumeName="kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739310 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739318 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739326 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739336 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739345 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739354 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a939dd0-fc27-4d47-b81b-96e13e4bbca9" volumeName="kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739365 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="29402454-a920-471e-895e-764235d16eb4" volumeName="kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739377 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" volumeName="kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739391 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="737fcc7d-d850-4352-9f17-383c85d5bc28" volumeName="kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739404 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="970d4376-f299-412c-a8ee-90aa980c689e" volumeName="kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739416 4652 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739429 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739440 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eaf7edff-0a89-4ac0-b9dd-511e098b5434" volumeName="kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739451 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739462 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739471 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9c48005e-c4df-4332-87fc-ec028f2c6921" volumeName="kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739481 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739496 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d9859457-f0d1-4754-a6c5-cf05d5abf447" volumeName="kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739506 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e69d8c51-e2a6-4f61-9c26-072784f6cf40" volumeName="kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739515 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739525 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" 
volumeName="kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739536 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739548 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739560 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0393fe12-2533-4c9c-a8e4-a58003c88f36" volumeName="kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739572 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739586 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739608 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ff68421-1741-41c1-93d5-5c722dfd295e" volumeName="kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739619 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739637 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739656 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739668 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="544c6815-81d7-422a-9e4a-5fcbfabe8da8" volumeName="kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739678 4652 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739688 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="188e42e5-9f9c-42af-ba15-5548c4fa4b52" volumeName="kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739701 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739713 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739725 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a94f9b8e-b020-4aab-8373-6c056ec07464" volumeName="kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739736 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06067627-6ccf-4cc8-bd20-dabdd776bb46" volumeName="kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739747 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739755 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739764 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f9bf4ab-5415-4616-aa36-ea387c699ea9" volumeName="kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739772 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739781 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" 
volumeName="kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739789 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" volumeName="kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739798 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739807 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e623376-9e14-4341-9dcf-7a7c218b6f9f" volumeName="kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739817 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739825 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739834 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739842 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ee84198d-6357-4429-a90c-455c3850a788" volumeName="kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739850 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08a90dc5-b0d8-4aad-a002-736492b6c1a9" volumeName="kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739858 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739868 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="55d635cd-1f0d-4086-96f2-9f3524f3f18c" volumeName="kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739876 4652 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739885 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739894 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="442600dc-09b2-4fee-9f89-777296b2ee40" volumeName="kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739903 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab5760f1-b2e0-4138-9383-e4827154ac50" volumeName="kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739912 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739922 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739932 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fe8e8e5d-cebb-4361-b765-5ff737f5e838" volumeName="kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739942 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.739889 master-0 kubenswrapper[4652]: I0216 17:24:02.739950 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.739959 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4549ea98-7379-49e1-8452-5efb643137ca" volumeName="kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.739968 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" volumeName="kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd" 
seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.739976 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ab80e0fb-09dd-4c93-b235-1487024105d2" volumeName="kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.739983 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" volumeName="kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.739992 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740001 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740010 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740018 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="39387549-c636-4bd4-b463-f6a93810f277" volumeName="kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740027 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" volumeName="kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740036 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5a275679-b7b6-4c28-b389-94cd2b014d6c" volumeName="kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740044 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740052 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" volumeName="kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740062 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" volumeName="kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740070 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ad805251-19d0-4d2f-b741-7d11158f1f03" volumeName="kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740079 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c45ce0e5-c50b-4210-b7bb-82db2b2bc1db" volumeName="kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740087 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4e51bba5-0ebe-4e55-a588-38b71548c605" volumeName="kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740096 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b04ee64e-5e83-499c-812d-749b2b6824c6" volumeName="kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740150 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d1524fc1-d157-435a-8bf8-7e877c45909d" volumeName="kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740159 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1a7c783-2e23-4284-b648-147984cf1022" volumeName="kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740168 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="642e5115-b7f2-4561-bc6b-1a74b6d891c4" volumeName="kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740176 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74b2561b-933b-4c58-a63a-7a8c671d0ae9" volumeName="kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740184 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="78be97a3-18d1-4962-804f-372974dc8ccc" volumeName="kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740192 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80d3b238-70c3-4e71-96a1-99405352033f" volumeName="kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 
kubenswrapper[4652]: I0216 17:24:02.740200 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d020c902-2adb-4919-8dd9-0c2109830580" volumeName="kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740208 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dce85b5e-6e92-4e0e-bee7-07b1a3634302" volumeName="kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740217 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740225 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54f29618-42c2-4270-9af7-7d82852d7cec" volumeName="kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740234 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="810a2275-fae5-45df-a3b8-92860451d33b" volumeName="kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740256 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ae20b683-dac8-419e-808a-ddcdb3c564e1" volumeName="kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740265 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ba37ef0e-373c-4ccc-b082-668630399765" volumeName="kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740274 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" volumeName="kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740284 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740293 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="810a2275-fae5-45df-a3b8-92860451d33b" volumeName="kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740303 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="822e1750-652e-4ceb-8fea-b2c1c905b0f1" 
volumeName="kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740312 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18e9a9d3-9b18-4c19-9558-f33c68101922" volumeName="kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740320 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="48801344-a48a-493e-aea4-19d998d0b708" volumeName="kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740330 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6f44170a-3c1c-4944-b971-251f75a51fc3" volumeName="kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740339 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740348 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" volumeName="kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740357 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6ad958f-25e4-40cb-89ec-5da9cb6395c7" volumeName="kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740366 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c2511146-1d04-4ecd-a28e-79662ef7b9d3" volumeName="kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740375 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" volumeName="kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740385 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e73ee493-de15-44c2-bd51-e12fcbb27a15" volumeName="kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740394 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" volumeName="kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740402 4652 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" volumeName="kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740411 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3fa6ac1-781f-446c-b6b4-18bdb7723c23" volumeName="kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740419 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" volumeName="kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740426 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d1636c0-f34d-444c-822d-77f1d203ddc4" volumeName="kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740436 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43f65f23-4ddd-471a-9cb3-b0945382d83c" volumeName="kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740445 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" volumeName="kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740453 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6fe41b0-1a42-4f07-8220-d9aaa50788ad" volumeName="kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740461 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7390ccc6-dfbe-4f51-960c-7628f49bffb7" volumeName="kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740469 4652 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" volumeName="kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz" seLinuxMountContext="" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740477 4652 reconstruct.go:97] "Volume reconstruction finished" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.740483 4652 reconciler.go:26] "Reconciler: start to sync state" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.742306 4652 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.743086 4652 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.744190 4652 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.744243 4652 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: I0216 17:24:02.744300 4652 kubelet.go:2335] "Starting kubelet main sync loop" Feb 16 17:24:02.745080 master-0 kubenswrapper[4652]: E0216 17:24:02.744467 4652 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 17:24:02.747098 master-0 kubenswrapper[4652]: I0216 17:24:02.745936 4652 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:24:02.752759 master-0 kubenswrapper[4652]: I0216 17:24:02.752724 4652 generic.go:334] "Generic (PLEG): container finished" podID="b3322fd3717f4aec0d8f54ec7862c07e" containerID="024918b99b0960332808509aca9a4a206a98049b3cbbd79cb59ca43d40614ee8" exitCode=0 Feb 16 17:24:02.755844 master-0 kubenswrapper[4652]: I0216 17:24:02.755800 4652 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0" exitCode=1 Feb 16 17:24:02.755844 master-0 kubenswrapper[4652]: I0216 17:24:02.755837 4652 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838" exitCode=255 Feb 16 17:24:02.760315 master-0 kubenswrapper[4652]: I0216 17:24:02.760286 4652 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="aa4ad8fed1eda81c2562cc6c50ee8eff149a61c6fa1ef5cf233edb4d1184264a" exitCode=0 Feb 16 17:24:02.760400 master-0 kubenswrapper[4652]: I0216 17:24:02.760387 4652 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="b1d0578227c4edafa8bba585414b028b5fae4c055f5a0b9d56187660cf9393ff" exitCode=0 Feb 16 17:24:02.760457 master-0 kubenswrapper[4652]: I0216 17:24:02.760446 4652 generic.go:334] "Generic (PLEG): container finished" podID="7adecad495595c43c57c30abd350e987" containerID="2f9e036184f8cd2fd14e3ee4e8e0984726c748a2f48514f7099254370b0935ca" exitCode=0 Feb 16 17:24:02.767100 master-0 kubenswrapper[4652]: I0216 17:24:02.767071 4652 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="8418547cd53261f1b77929899a0ab7c7d55cf1c91b349c65456cae4040067db4" exitCode=0 Feb 16 17:24:02.795890 master-0 kubenswrapper[4652]: I0216 17:24:02.795840 4652 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576" exitCode=0 Feb 16 17:24:02.799887 master-0 kubenswrapper[4652]: I0216 17:24:02.799849 4652 generic.go:334] "Generic (PLEG): container finished" podID="a94f9b8e-b020-4aab-8373-6c056ec07464" containerID="102a2b3ff0c0802de14c69b4e98a9814b1e46ce4db6fc83e68edccac0436a089" exitCode=0 Feb 16 17:24:02.844923 master-0 kubenswrapper[4652]: E0216 17:24:02.844725 4652 kubelet.go:2359] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Feb 16 17:24:02.854139 master-0 kubenswrapper[4652]: I0216 17:24:02.854108 4652 manager.go:324] Recovery completed Feb 16 17:24:02.883005 master-0 kubenswrapper[4652]: I0216 17:24:02.882973 4652 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 16 17:24:02.883005 master-0 kubenswrapper[4652]: I0216 17:24:02.882995 4652 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 16 17:24:02.883005 master-0 kubenswrapper[4652]: I0216 17:24:02.883012 4652 state_mem.go:36] "Initialized new in-memory state store" Feb 16 17:24:02.883264 master-0 kubenswrapper[4652]: I0216 17:24:02.883164 4652 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 16 17:24:02.883264 master-0 kubenswrapper[4652]: I0216 17:24:02.883174 4652 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 16 17:24:02.883264 master-0 kubenswrapper[4652]: I0216 17:24:02.883205 4652 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Feb 16 17:24:02.883264 master-0 kubenswrapper[4652]: I0216 17:24:02.883211 4652 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Feb 16 17:24:02.883264 master-0 kubenswrapper[4652]: I0216 17:24:02.883217 4652 policy_none.go:49] "None policy: Start" Feb 16 17:24:02.885172 master-0 kubenswrapper[4652]: I0216 17:24:02.885136 4652 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 16 17:24:02.885172 master-0 kubenswrapper[4652]: I0216 17:24:02.885169 4652 state_mem.go:35] "Initializing new in-memory state store" Feb 16 17:24:02.885405 master-0 kubenswrapper[4652]: I0216 17:24:02.885375 4652 state_mem.go:75] "Updated machine memory state" Feb 16 17:24:02.885405 master-0 kubenswrapper[4652]: I0216 17:24:02.885393 4652 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Feb 16 17:24:02.895826 master-0 kubenswrapper[4652]: I0216 17:24:02.895788 4652 manager.go:334] "Starting Device Plugin manager" Feb 16 17:24:02.896452 master-0 kubenswrapper[4652]: I0216 17:24:02.896406 4652 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 16 17:24:02.896452 master-0 kubenswrapper[4652]: I0216 17:24:02.896435 4652 server.go:79] "Starting device plugin registration server" Feb 16 17:24:02.896951 master-0 kubenswrapper[4652]: I0216 17:24:02.896917 4652 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 17:24:02.897330 master-0 kubenswrapper[4652]: I0216 17:24:02.896934 4652 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 17:24:02.897330 master-0 kubenswrapper[4652]: I0216 17:24:02.897235 4652 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 17:24:02.898472 master-0 kubenswrapper[4652]: I0216 17:24:02.898431 4652 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 17:24:02.898472 master-0 kubenswrapper[4652]: I0216 17:24:02.898455 4652 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 17:24:02.899319 master-0 kubenswrapper[4652]: E0216 17:24:02.899142 4652 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
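The long run of reconstruct.go:130 entries above is the kubelet rebuilding its actual state of the world after a restart: it walks the pod directories left on disk, re-adds every volume it finds as "uncertain" (mounted, but not yet verified against the desired state), and only then starts the reconciler ("Volume reconstruction finished", "Reconciler: start to sync state"). The volumeName fields map directly onto the on-disk layout under /var/lib/kubelet/pods. A minimal sketch of that walk, as an illustration of where those names come from rather than the kubelet's actual implementation:

// reconstructwalk.go - illustrative sketch (not kubelet source) of the layout
// behind the reconstructed volume names in the log: each entry corresponds to
// /var/lib/kubelet/pods/<podUID>/volumes/<plugin>/<volumeName> on disk.
// Note: on disk the plugin directory uses '~' (e.g. kubernetes.io~projected)
// where the log's volumeName field shows '/'.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	root := "/var/lib/kubelet/pods" // standard kubelet pod directory
	pods, err := os.ReadDir(root)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, pod := range pods {
		volRoot := filepath.Join(root, pod.Name(), "volumes")
		plugins, err := os.ReadDir(volRoot)
		if err != nil {
			continue // pod has no volumes directory
		}
		for _, plugin := range plugins { // e.g. kubernetes.io~projected
			vols, err := os.ReadDir(filepath.Join(volRoot, plugin.Name()))
			if err != nil {
				continue
			}
			for _, v := range vols {
				// Mirrors the log's volumeName field, e.g.
				// kubernetes.io/projected/<podUID>-kube-api-access-hmj52
				fmt.Printf("reconstructed volume: podUID=%s volume=%s/%s\n",
					pod.Name(), plugin.Name(), v.Name())
			}
		}
	}
}

The "Skipping pod synchronization" and "Container runtime network not ready" errors that follow are expected at this stage of startup: PLEG has not completed a successful relist yet, and the network plugin has not yet written a CNI configuration into /etc/kubernetes/cni/net.d/.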
Feb 16 17:24:02.998091 master-0 kubenswrapper[4652]: I0216 17:24:02.997995 4652 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 17:24:02.999666 master-0 kubenswrapper[4652]: I0216 17:24:02.999613 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Feb 16 17:24:02.999666 master-0 kubenswrapper[4652]: I0216 17:24:02.999645 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Feb 16 17:24:02.999666 master-0 kubenswrapper[4652]: I0216 17:24:02.999656 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Feb 16 17:24:02.999940 master-0 kubenswrapper[4652]: I0216 17:24:02.999770 4652 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Feb 16 17:24:03.020455 master-0 kubenswrapper[4652]: I0216 17:24:03.020333 4652 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Feb 16 17:24:03.020598 master-0 kubenswrapper[4652]: I0216 17:24:03.020485 4652 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Feb 16 17:24:03.021921 master-0 kubenswrapper[4652]: I0216 17:24:03.021872 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:24:03.022057 master-0 kubenswrapper[4652]: I0216 17:24:03.021917 4652 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:24:03Z","lastTransitionTime":"2026-02-16T17:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:24:03.037519 master-0 kubenswrapper[4652]: E0216 17:24:03.037460 4652 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bff30cf7-71da-4e66-9940-13ec1ab42f05\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:03.042418 master-0 kubenswrapper[4652]: I0216 17:24:03.042384 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:24:03.042536 master-0 kubenswrapper[4652]: I0216 17:24:03.042421 4652 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:24:03Z","lastTransitionTime":"2026-02-16T17:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:24:03.045922 master-0 kubenswrapper[4652]: I0216 17:24:03.045817 4652 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 17:24:03.046670 master-0 kubenswrapper[4652]: I0216 17:24:03.046591 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"5fa8b867f6c7632908fe33e45a5de76207c3a49f016816d7a95a271132f5f9bc"} Feb 16 17:24:03.046670 master-0 kubenswrapper[4652]: I0216 17:24:03.046648 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerDied","Data":"024918b99b0960332808509aca9a4a206a98049b3cbbd79cb59ca43d40614ee8"} Feb 16 17:24:03.046670 master-0 kubenswrapper[4652]: I0216 17:24:03.046660 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"b3322fd3717f4aec0d8f54ec7862c07e","Type":"ContainerStarted","Data":"7390c7d89f79e636baa8c58deafe3fc046c5d3959b31e83d9fd704ba232e7cc1"} Feb 16 17:24:03.046670 master-0 kubenswrapper[4652]: I0216 17:24:03.046669 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b"} Feb 16 17:24:03.046670 master-0 kubenswrapper[4652]: I0216 17:24:03.046678 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc"} Feb 16 17:24:03.046670 master-0 kubenswrapper[4652]: I0216 17:24:03.046688 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046698 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046708 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"06cdc79aff420eb5730cf93c10b791911677809cb3e311984f04d7223bea2df7"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046716 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"92efa28d5ccdf6a2d1f34efa5ec12c219983a97ee3917a992682d4d798721c42"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046725 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"022e1fddc422dc77252f1d7b260702feb66ffc90c31448ea87e5739cd23f3805"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046733 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"2e99ae292d3b4bbd76a9fa68cff04a8bd972ff36354aa7a07d342bf6c90a37c3"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046741 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"78c4c28ec182d11145fdbcfed4e0587dbd19c642c7d08933143edfffac5518da"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046748 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"1208371bf87d9b91c91faffa32fd11198d3867d9e1e74ab3e3e862ddf72963a2"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046756 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"aa4ad8fed1eda81c2562cc6c50ee8eff149a61c6fa1ef5cf233edb4d1184264a"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046764 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"b1d0578227c4edafa8bba585414b028b5fae4c055f5a0b9d56187660cf9393ff"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046772 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerDied","Data":"2f9e036184f8cd2fd14e3ee4e8e0984726c748a2f48514f7099254370b0935ca"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046780 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"7adecad495595c43c57c30abd350e987","Type":"ContainerStarted","Data":"cd3d71a6084ee248a560124746ee307460625fa3d9ee1fe1d378dbd98e43a0fb"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.046790 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"df0e3c6ca3dd8af42c8e9ac9cdc311a5a319df0fa4ca786ea177a90a6aefea49"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.047081 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"208e46a3c641476f9960cdd4e77a82fbdec0a87c2f2f91e56dfe5eb0ed0268f8"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.047109 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"94ea5b6007080bd428cd7b6fe066cc75a9a70841a978304207907aa746d9ac27"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.047120 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"8418547cd53261f1b77929899a0ab7c7d55cf1c91b349c65456cae4040067db4"} Feb 16 17:24:03.047105 master-0 kubenswrapper[4652]: I0216 17:24:03.047130 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82"} Feb 16 17:24:03.048061 master-0 kubenswrapper[4652]: I0216 17:24:03.047152 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5"} Feb 16 17:24:03.048061 master-0 kubenswrapper[4652]: I0216 17:24:03.047161 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d"} Feb 16 17:24:03.048061 master-0 kubenswrapper[4652]: I0216 17:24:03.047169 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7"} Feb 16 17:24:03.048061 master-0 kubenswrapper[4652]: I0216 17:24:03.047177 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988"} Feb 16 17:24:03.048061 master-0 kubenswrapper[4652]: I0216 17:24:03.047185 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134"} Feb 16 17:24:03.048061 
master-0 kubenswrapper[4652]: I0216 17:24:03.047194 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerDied","Data":"bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576"} Feb 16 17:24:03.048061 master-0 kubenswrapper[4652]: I0216 17:24:03.047203 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"10e298020284b0e8ffa6a0bc184059d9","Type":"ContainerStarted","Data":"36d50210d5c52db4b7e6fdca90b019b559fb61ad6d363fa02b488e76691be827"} Feb 16 17:24:03.055558 master-0 kubenswrapper[4652]: E0216 17:24:03.055482 4652 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bff30cf7-71da-4e66-9940-13ec1ab42f05\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:03.088707 master-0 kubenswrapper[4652]: E0216 17:24:03.088572 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:24:03.089949 master-0 kubenswrapper[4652]: I0216 17:24:03.089929 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:24:03.090224 master-0 kubenswrapper[4652]: I0216 17:24:03.089951 4652 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:24:03Z","lastTransitionTime":"2026-02-16T17:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 17:24:03.103617 master-0 kubenswrapper[4652]: E0216 17:24:03.103452 4652 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bff30cf7-71da-4e66-9940-13ec1ab42f05\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:03.107680 master-0 kubenswrapper[4652]: I0216 17:24:03.107649 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:24:03.107803 master-0 kubenswrapper[4652]: I0216 17:24:03.107691 4652 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:24:03Z","lastTransitionTime":"2026-02-16T17:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:24:03.116170 master-0 kubenswrapper[4652]: E0216 17:24:03.116106 4652 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bff30cf7-71da-4e66-9940-13ec1ab42f05\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:03.119262 master-0 kubenswrapper[4652]: I0216 17:24:03.119204 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Feb 16 17:24:03.119352 master-0 kubenswrapper[4652]: I0216 17:24:03.119266 4652 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:24:03Z","lastTransitionTime":"2026-02-16T17:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 17:24:03.127762 master-0 kubenswrapper[4652]: E0216 17:24:03.127706 4652 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"memory\\\":\\\"48179240Ki\\\"},\\\"capacity\\\":{\\\"memory\\\":\\\"49330216Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"bff30cf7-71da-4e66-9940-13ec1ab42f05\\\"}}}\" for node \"master-0\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:03.127762 master-0 kubenswrapper[4652]: E0216 17:24:03.127750 4652 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 17:24:03.143974 master-0 kubenswrapper[4652]: I0216 17:24:03.143922 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.143974 master-0 kubenswrapper[4652]: I0216 17:24:03.143965 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.144111 master-0 kubenswrapper[4652]: I0216 17:24:03.143987 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:24:03.144111 master-0 kubenswrapper[4652]: I0216 17:24:03.144003 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:24:03.144111 master-0 kubenswrapper[4652]: I0216 17:24:03.144018 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.144111 master-0 kubenswrapper[4652]: I0216 17:24:03.144037 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.144314 master-0 kubenswrapper[4652]: I0216 17:24:03.144127 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.144314 master-0 kubenswrapper[4652]: I0216 17:24:03.144171 4652 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.144314 master-0 kubenswrapper[4652]: I0216 17:24:03.144203 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.144314 master-0 kubenswrapper[4652]: I0216 17:24:03.144227 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.144314 master-0 kubenswrapper[4652]: I0216 17:24:03.144260 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:24:03.144516 master-0 kubenswrapper[4652]: I0216 17:24:03.144314 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.144516 master-0 kubenswrapper[4652]: I0216 17:24:03.144357 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:24:03.144516 master-0 kubenswrapper[4652]: I0216 17:24:03.144381 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.144516 master-0 kubenswrapper[4652]: I0216 17:24:03.144430 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.144516 master-0 kubenswrapper[4652]: I0216 17:24:03.144463 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: 
\"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.144516 master-0 kubenswrapper[4652]: I0216 17:24:03.144481 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.144516 master-0 kubenswrapper[4652]: I0216 17:24:03.144499 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.164717 master-0 kubenswrapper[4652]: E0216 17:24:03.164653 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.165579 master-0 kubenswrapper[4652]: E0216 17:24:03.165526 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:24:03.165715 master-0 kubenswrapper[4652]: E0216 17:24:03.165692 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.165966 master-0 kubenswrapper[4652]: E0216 17:24:03.165923 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.245143 master-0 kubenswrapper[4652]: I0216 17:24:03.245074 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.245381 master-0 kubenswrapper[4652]: I0216 17:24:03.245150 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.245381 master-0 kubenswrapper[4652]: I0216 17:24:03.245174 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.245381 master-0 kubenswrapper[4652]: I0216 17:24:03.245190 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:24:03.245381 master-0 kubenswrapper[4652]: 
I0216 17:24:03.245270 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:24:03.245381 master-0 kubenswrapper[4652]: I0216 17:24:03.245285 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:24:03.245381 master-0 kubenswrapper[4652]: I0216 17:24:03.245299 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.245381 master-0 kubenswrapper[4652]: I0216 17:24:03.245345 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:24:03.245381 master-0 kubenswrapper[4652]: I0216 17:24:03.245370 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.245712 master-0 kubenswrapper[4652]: I0216 17:24:03.245439 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.245712 master-0 kubenswrapper[4652]: I0216 17:24:03.245487 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-log-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.245712 master-0 kubenswrapper[4652]: I0216 17:24:03.245490 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-resource-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.245712 master-0 kubenswrapper[4652]: I0216 17:24:03.245522 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.245712 master-0 kubenswrapper[4652]: I0216 17:24:03.245555 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.245712 master-0 kubenswrapper[4652]: I0216 17:24:03.245648 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-usr-local-bin\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245738 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245732 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245773 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245775 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245832 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245801 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245862 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245895 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" 
(UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245909 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245926 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.245942 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.246060 master-0 kubenswrapper[4652]: I0216 17:24:03.246023 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b3322fd3717f4aec0d8f54ec7862c07e-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"b3322fd3717f4aec0d8f54ec7862c07e\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:24:03.246917 master-0 kubenswrapper[4652]: I0216 17:24:03.246097 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.246917 master-0 kubenswrapper[4652]: I0216 17:24:03.246125 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-static-pod-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.246917 master-0 kubenswrapper[4652]: I0216 17:24:03.246147 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.246917 master-0 kubenswrapper[4652]: I0216 17:24:03.246170 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-cert-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.246917 master-0 kubenswrapper[4652]: I0216 17:24:03.246190 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod 
\"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.246917 master-0 kubenswrapper[4652]: I0216 17:24:03.246209 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/7adecad495595c43c57c30abd350e987-data-dir\") pod \"etcd-master-0\" (UID: \"7adecad495595c43c57c30abd350e987\") " pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.246917 master-0 kubenswrapper[4652]: I0216 17:24:03.246235 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.246917 master-0 kubenswrapper[4652]: I0216 17:24:03.246286 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.246917 master-0 kubenswrapper[4652]: I0216 17:24:03.246332 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.634137 master-0 kubenswrapper[4652]: I0216 17:24:03.634027 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.710852 master-0 kubenswrapper[4652]: I0216 17:24:03.710753 4652 apiserver.go:52] "Watching apiserver" Feb 16 17:24:03.754509 master-0 kubenswrapper[4652]: I0216 17:24:03.754433 4652 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 17:24:03.757376 master-0 kubenswrapper[4652]: I0216 17:24:03.757291 4652 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2","openshift-console/console-599b567ff7-nrcpr","openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd","openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd","openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b","openshift-ovn-kubernetes/ovnkube-node-flr86","openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9","openshift-machine-config-operator/machine-config-daemon-98q6v","openshift-marketplace/redhat-operators-lnzfx","openshift-network-node-identity/network-node-identity-hhcpr","openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9","openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp","openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl","openshift-authentication/oauth-openshift-64f85b8fc9-n9msn","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf","openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6","openshift-network-operator/iptables-alerter-czzz2","openshift-dns-operator/dns-operator-86b8869b79-nhxlp","openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d","openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx","openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm","openshift-monitoring/prometheus-operator-7485d645b8-zxxwd","openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs","openshift-service-ca/service-ca-676cd8b9b5-cp9rb","openshift-console/downloads-dcd7b7d95-dhhfh","openshift-kube-apiserver/installer-1-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg","openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz","openshift-network-diagnostics/network-check-target-vwvwx","openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw","openshift-marketplace/redhat-marketplace-4kd66","openshift-monitoring/monitoring-plugin-555857f695-nlrnr","openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl","openshift-kube-apiserver/installer-3-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-monitoring/alertmanager-main-0","openshift-multus/multus-admission-controller-6d678b8d67-5n9cl","openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r","openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5","openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf","openshift-dns/node-resolver-vfxj4","openshift-etcd/installer-2-master-0","openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc","openshift-monitoring/node-exporter-8256c","openshift-multus/network-metrics-daemon-279g6","openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz","openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9","openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8","openshift-multus/multus-additional-cni-plugins-rjdlk","openshift-network-operator/network-operator-6fcf4c966-6bmf9","kube-system/bootstrap-kube-controller-manager-master-0","openshift-apiserver/apiserver-fc4bf7f79-tqnlw","openshift-cluster-olm-operator/cluster-olm-operator-55b69
c6c48-7chjv","openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb","openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv","openshift-kube-controller-manager/installer-2-master-0","openshift-monitoring/prometheus-k8s-0","openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc","assisted-installer/assisted-installer-controller-thhq2","openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v","openshift-authentication-operator/authentication-operator-755d954778-lf4cb","openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr","openshift-ingress/router-default-864ddd5f56-pm4rt","openshift-kube-apiserver/installer-4-master-0","openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn","openshift-machine-config-operator/machine-config-server-2ws9r","openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2","openshift-etcd/etcd-master-0","openshift-marketplace/community-operators-7w4km","openshift-monitoring/metrics-server-745bd8d89b-qr4zh","openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h","openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6","openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8","openshift-console-operator/console-operator-7777d5cc66-64vhv","openshift-etcd/installer-2-retry-1-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k","openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx","openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48","openshift-marketplace/certified-operators-z69zq","openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct","openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq","openshift-cluster-node-tuning-operator/tuned-l5kbz","openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk","openshift-ingress-canary/ingress-canary-qqvg4","openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk","openshift-multus/multus-6r7wj","openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k","openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts","openshift-dns/dns-default-qcgxx","openshift-image-registry/node-ca-xv2wv","openshift-insights/insights-operator-cb4f7b4cf-6qrw5","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6","openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"] Feb 16 17:24:03.757773 master-0 kubenswrapper[4652]: I0216 17:24:03.757723 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-thhq2" Feb 16 17:24:03.757920 master-0 kubenswrapper[4652]: I0216 17:24:03.757870 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:03.757979 master-0 kubenswrapper[4652]: I0216 17:24:03.757961 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:03.758084 master-0 kubenswrapper[4652]: E0216 17:24:03.758039 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:24:03.759028 master-0 kubenswrapper[4652]: E0216 17:24:03.758795 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: I0216 17:24:03.759429 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: I0216 17:24:03.759457 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: I0216 17:24:03.759555 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: E0216 17:24:03.759545 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: E0216 17:24:03.759626 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: I0216 17:24:03.759708 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: E0216 17:24:03.759717 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: E0216 17:24:03.759796 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: I0216 17:24:03.759818 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:03.759932 master-0 kubenswrapper[4652]: E0216 17:24:03.759891 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.760907 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: E0216 17:24:03.760994 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.761040 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.761102 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.761183 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: E0216 17:24:03.761179 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.761213 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: E0216 17:24:03.761271 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.761272 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.761317 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: E0216 17:24:03.761354 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.761542 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: E0216 17:24:03.761562 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.761629 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:03.761644 master-0 kubenswrapper[4652]: I0216 17:24:03.761606 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: E0216 17:24:03.761685 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: E0216 17:24:03.761761 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: E0216 17:24:03.761834 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: I0216 17:24:03.761881 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: I0216 17:24:03.761867 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: E0216 17:24:03.761963 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: E0216 17:24:03.762000 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: I0216 17:24:03.762134 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: I0216 17:24:03.762311 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: E0216 17:24:03.762405 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: I0216 17:24:03.762604 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: E0216 17:24:03.762676 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: I0216 17:24:03.764115 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: E0216 17:24:03.764197 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: I0216 17:24:03.764704 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: I0216 17:24:03.764736 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:03.765152 master-0 kubenswrapper[4652]: E0216 17:24:03.764946 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:24:03.766971 master-0 kubenswrapper[4652]: E0216 17:24:03.765233 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:24:03.766971 master-0 kubenswrapper[4652]: I0216 17:24:03.766737 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf" Feb 16 17:24:03.770669 master-0 kubenswrapper[4652]: I0216 17:24:03.766943 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:03.771323 master-0 kubenswrapper[4652]: I0216 17:24:03.770892 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:03.771323 master-0 kubenswrapper[4652]: E0216 17:24:03.770992 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:24:03.771323 master-0 kubenswrapper[4652]: E0216 17:24:03.771056 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:24:03.771572 master-0 kubenswrapper[4652]: I0216 17:24:03.771387 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 17:24:03.771662 master-0 kubenswrapper[4652]: I0216 17:24:03.771626 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:24:03.772193 master-0 kubenswrapper[4652]: I0216 17:24:03.771826 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:03.772193 master-0 kubenswrapper[4652]: E0216 17:24:03.771890 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:24:03.772451 master-0 kubenswrapper[4652]: I0216 17:24:03.772287 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.772743 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.772993 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.773053 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.773041 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.773154 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.773378 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: E0216 17:24:03.773455 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.773652 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.773652 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.773999 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774066 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774236 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774267 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774410 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774220 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774495 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: E0216 17:24:03.774545 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774583 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774673 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774751 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:24:03.776337 master-0 kubenswrapper[4652]: I0216 17:24:03.774969 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 17:24:03.778283 master-0 kubenswrapper[4652]: I0216 17:24:03.776578 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 17:24:03.779586 master-0 kubenswrapper[4652]: I0216 17:24:03.779524 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:03.779709 master-0 kubenswrapper[4652]: E0216 17:24:03.779657 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:24:03.779818 master-0 kubenswrapper[4652]: I0216 17:24:03.779704 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:03.779921 master-0 kubenswrapper[4652]: E0216 17:24:03.779809 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:24:03.779921 master-0 kubenswrapper[4652]: I0216 17:24:03.779831 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:24:03.779921 master-0 kubenswrapper[4652]: I0216 17:24:03.779851 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 17:24:03.781124 master-0 kubenswrapper[4652]: I0216 17:24:03.781046 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:03.781308 master-0 kubenswrapper[4652]: E0216 17:24:03.781207 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:24:03.781892 master-0 kubenswrapper[4652]: I0216 17:24:03.781828 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 17:24:03.781999 master-0 kubenswrapper[4652]: I0216 17:24:03.781935 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 17:24:03.782090 master-0 kubenswrapper[4652]: I0216 17:24:03.782047 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 17:24:03.782308 master-0 kubenswrapper[4652]: I0216 17:24:03.782227 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 17:24:03.782430 master-0 kubenswrapper[4652]: I0216 17:24:03.782310 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 17:24:03.782430 master-0 kubenswrapper[4652]: I0216 17:24:03.782260 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 17:24:03.783193 master-0 kubenswrapper[4652]: I0216 17:24:03.783083 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:03.783371 master-0 kubenswrapper[4652]: E0216 17:24:03.783304 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:24:03.784187 master-0 kubenswrapper[4652]: I0216 17:24:03.783908 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:03.784187 master-0 kubenswrapper[4652]: E0216 17:24:03.784038 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:24:03.784187 master-0 kubenswrapper[4652]: I0216 17:24:03.784086 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:03.784187 master-0 kubenswrapper[4652]: I0216 17:24:03.784124 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Feb 16 17:24:03.784187 master-0 kubenswrapper[4652]: I0216 17:24:03.784164 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Feb 16 17:24:03.784187 master-0 kubenswrapper[4652]: I0216 17:24:03.784138 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Feb 16 17:24:03.784187 master-0 kubenswrapper[4652]: E0216 17:24:03.784159 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:24:03.785109 master-0 kubenswrapper[4652]: I0216 17:24:03.784230 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:03.785109 master-0 kubenswrapper[4652]: E0216 17:24:03.784378 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:24:03.785238 master-0 kubenswrapper[4652]: I0216 17:24:03.785126 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:03.785238 master-0 kubenswrapper[4652]: E0216 17:24:03.785165 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:24:03.785826 master-0 kubenswrapper[4652]: I0216 17:24:03.785780 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:03.785920 master-0 kubenswrapper[4652]: E0216 17:24:03.785827 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:24:03.785920 master-0 kubenswrapper[4652]: I0216 17:24:03.785876 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:03.785920 master-0 kubenswrapper[4652]: E0216 17:24:03.785899 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:24:03.786528 master-0 kubenswrapper[4652]: I0216 17:24:03.786480 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:03.786636 master-0 kubenswrapper[4652]: E0216 17:24:03.786573 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:24:03.786959 master-0 kubenswrapper[4652]: I0216 17:24:03.786908 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:03.787074 master-0 kubenswrapper[4652]: E0216 17:24:03.787020 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:24:03.787301 master-0 kubenswrapper[4652]: I0216 17:24:03.787219 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:24:03.787449 master-0 kubenswrapper[4652]: I0216 17:24:03.787342 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:03.787518 master-0 kubenswrapper[4652]: E0216 17:24:03.787434 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:24:03.787518 master-0 kubenswrapper[4652]: I0216 17:24:03.787492 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:24:03.788202 master-0 kubenswrapper[4652]: I0216 17:24:03.787882 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:03.788202 master-0 kubenswrapper[4652]: I0216 17:24:03.787895 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 17:24:03.788202 master-0 kubenswrapper[4652]: E0216 17:24:03.787952 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:24:03.788551 master-0 kubenswrapper[4652]: I0216 17:24:03.788213 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:03.788551 master-0 kubenswrapper[4652]: E0216 17:24:03.788374 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:24:03.789493 master-0 kubenswrapper[4652]: I0216 17:24:03.789191 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:03.789493 master-0 kubenswrapper[4652]: E0216 17:24:03.789345 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:24:03.789493 master-0 kubenswrapper[4652]: I0216 17:24:03.789364 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:03.789493 master-0 kubenswrapper[4652]: E0216 17:24:03.789488 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:24:03.790561 master-0 kubenswrapper[4652]: I0216 17:24:03.790238 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Feb 16 17:24:03.790561 master-0 kubenswrapper[4652]: I0216 17:24:03.790373 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:03.790645 master-0 kubenswrapper[4652]: E0216 17:24:03.790487 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:24:03.791130 master-0 kubenswrapper[4652]: I0216 17:24:03.791088 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:03.791516 master-0 kubenswrapper[4652]: E0216 17:24:03.791218 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:24:03.793094 master-0 kubenswrapper[4652]: I0216 17:24:03.792976 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:03.793094 master-0 kubenswrapper[4652]: E0216 17:24:03.793054 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:24:03.795291 master-0 kubenswrapper[4652]: I0216 17:24:03.794997 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:24:03.795291 master-0 kubenswrapper[4652]: I0216 17:24:03.795026 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 17:24:03.795291 master-0 kubenswrapper[4652]: I0216 17:24:03.795182 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t" Feb 16 17:24:03.795291 master-0 kubenswrapper[4652]: I0216 17:24:03.795184 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:24:03.795692 master-0 kubenswrapper[4652]: I0216 17:24:03.795429 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:03.795692 master-0 kubenswrapper[4652]: I0216 17:24:03.795465 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:24:03.795692 master-0 kubenswrapper[4652]: E0216 17:24:03.795495 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:24:03.796150 master-0 kubenswrapper[4652]: I0216 17:24:03.795875 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:03.796150 master-0 kubenswrapper[4652]: I0216 17:24:03.795947 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:03.796150 master-0 kubenswrapper[4652]: E0216 17:24:03.795982 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:24:03.796150 master-0 kubenswrapper[4652]: E0216 17:24:03.795999 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:24:03.796892 master-0 kubenswrapper[4652]: I0216 17:24:03.796566 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:24:03.798744 master-0 kubenswrapper[4652]: I0216 17:24:03.797452 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:03.798744 master-0 kubenswrapper[4652]: E0216 17:24:03.797524 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:24:03.798744 master-0 kubenswrapper[4652]: I0216 17:24:03.797986 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:24:03.798744 master-0 kubenswrapper[4652]: I0216 17:24:03.798150 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7" Feb 16 17:24:03.798744 master-0 kubenswrapper[4652]: I0216 17:24:03.798558 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:03.798744 master-0 kubenswrapper[4652]: I0216 17:24:03.798666 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:03.798744 master-0 kubenswrapper[4652]: E0216 17:24:03.798667 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:24:03.798744 master-0 kubenswrapper[4652]: E0216 17:24:03.798711 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: I0216 17:24:03.798916 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: I0216 17:24:03.798991 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: I0216 17:24:03.798893 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"442600dc-09b2-4fee-9f89-777296b2ee40\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8ae0b5bab647ff989f276cead5f360bcb88c813f181d75dc3106eb5dbde0b39\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-78ff47c7c5-txr5k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: I0216 17:24:03.799010 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: I0216 17:24:03.799025 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: I0216 17:24:03.799167 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: I0216 17:24:03.799305 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: E0216 17:24:03.799340 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: I0216 17:24:03.799371 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Feb 16 17:24:03.800096 master-0 kubenswrapper[4652]: I0216 17:24:03.799399 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:24:03.800997 master-0 kubenswrapper[4652]: I0216 17:24:03.800280 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:24:03.800997 master-0 kubenswrapper[4652]: I0216 17:24:03.800297 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:03.800997 master-0 kubenswrapper[4652]: E0216 17:24:03.800394 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:24:03.801282 master-0 kubenswrapper[4652]: I0216 17:24:03.801150 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 17:24:03.801282 master-0 kubenswrapper[4652]: I0216 17:24:03.801191 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-retry-1-master-0" Feb 16 17:24:03.802300 master-0 kubenswrapper[4652]: I0216 17:24:03.802209 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:03.802421 master-0 kubenswrapper[4652]: E0216 17:24:03.802336 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:24:03.802528 master-0 kubenswrapper[4652]: I0216 17:24:03.802440 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:03.802528 master-0 kubenswrapper[4652]: I0216 17:24:03.802459 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:03.802528 master-0 kubenswrapper[4652]: E0216 17:24:03.802491 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:24:03.802811 master-0 kubenswrapper[4652]: E0216 17:24:03.802527 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:24:03.802811 master-0 kubenswrapper[4652]: I0216 17:24:03.801407 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2" Feb 16 17:24:03.802811 master-0 kubenswrapper[4652]: I0216 17:24:03.802609 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:03.802811 master-0 kubenswrapper[4652]: E0216 17:24:03.802730 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:24:03.803199 master-0 kubenswrapper[4652]: I0216 17:24:03.803147 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:03.803318 master-0 kubenswrapper[4652]: E0216 17:24:03.803225 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:24:03.803663 master-0 kubenswrapper[4652]: I0216 17:24:03.803613 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:03.803933 master-0 kubenswrapper[4652]: I0216 17:24:03.803892 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Feb 16 17:24:03.804547 master-0 kubenswrapper[4652]: E0216 17:24:03.804315 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:24:03.805922 master-0 kubenswrapper[4652]: I0216 17:24:03.805412 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:03.805922 master-0 kubenswrapper[4652]: I0216 17:24:03.805467 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 17:24:03.805922 master-0 kubenswrapper[4652]: E0216 17:24:03.805573 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:24:03.805922 master-0 kubenswrapper[4652]: I0216 17:24:03.805638 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 17:24:03.805922 master-0 kubenswrapper[4652]: I0216 17:24:03.805659 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 17:24:03.806793 master-0 kubenswrapper[4652]: I0216 17:24:03.806071 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:03.806793 master-0 kubenswrapper[4652]: E0216 17:24:03.806152 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:24:03.806793 master-0 kubenswrapper[4652]: I0216 17:24:03.806507 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:03.811766 master-0 kubenswrapper[4652]: E0216 17:24:03.806560 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:24:03.811766 master-0 kubenswrapper[4652]: I0216 17:24:03.807415 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 17:24:03.811766 master-0 kubenswrapper[4652]: I0216 17:24:03.807701 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 17:24:03.811766 master-0 kubenswrapper[4652]: I0216 17:24:03.807895 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 17:24:03.811766 master-0 kubenswrapper[4652]: I0216 17:24:03.808045 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 17:24:03.811766 master-0 kubenswrapper[4652]: I0216 17:24:03.808445 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 17:24:03.811766 master-0 kubenswrapper[4652]: I0216 17:24:03.809301 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 17:24:03.811766 master-0 kubenswrapper[4652]: I0216 17:24:03.809432 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-gvwqd" Feb 16 17:24:03.811766 master-0 kubenswrapper[4652]: I0216 17:24:03.809673 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 17:24:03.812604 master-0 kubenswrapper[4652]: I0216 17:24:03.812376 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r" Feb 16 17:24:03.812704 master-0 kubenswrapper[4652]: I0216 17:24:03.812653 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:03.812953 master-0 kubenswrapper[4652]: E0216 17:24:03.812846 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:24:03.814585 master-0 kubenswrapper[4652]: I0216 17:24:03.814403 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.814585 master-0 kubenswrapper[4652]: E0216 17:24:03.814524 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:24:03.831220 master-0 kubenswrapper[4652]: E0216 17:24:03.831014 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:03.831220 master-0 kubenswrapper[4652]: I0216 17:24:03.831010 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dce85b5e-6e92-4e0e-bee7-07b1a3634302\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-apiserver openshift-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b41a8ae60c0eafa4a13e6dcd0e79ba63b0d7bd2bdc28aaed434b3bef98a5dc95\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/kubelet/\\\",\\\"name\\\":\\\"node-pullsecrets\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/audit\\\",\\\"name\\\":\\\"audit\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/etcd-client\\\",\\\"name\\\":\\\"etcd-client\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/etcd-serving-ca\\\",\\\"name\\\":\\\"etcd-serving-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/image-import-ca\\\",\\\"name\\\":\\\"image-import-ca\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/encryption-config\\\",\\\"name\\\":\\\"encryption-config\\\"},{\\\"mountPath\\\":\\\"/var/log/openshift-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhcw6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-apiserver\"/\"apiserver-fc4bf7f79-tqnlw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:03.831434 master-0 kubenswrapper[4652]: E0216 17:24:03.831234 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:24:03.831434 master-0 kubenswrapper[4652]: E0216 17:24:03.831287 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:03.831434 master-0 kubenswrapper[4652]: E0216 17:24:03.831425 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Feb 16 17:24:03.831546 master-0 kubenswrapper[4652]: E0216 17:24:03.831437 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Feb 16 17:24:03.834794 master-0 kubenswrapper[4652]: I0216 17:24:03.834747 4652 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Feb 16 17:24:03.843821 master-0 kubenswrapper[4652]: I0216 17:24:03.843736 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"295dd2cc-4b35-40bc-959c-aa8ad90fc453\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5fa8b867f6c7632908fe33e45a5de76207c3a49f016816d7a95a271132f5f9bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":7,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T17:23:35Z\\\"}}}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://024918b99b0960332808509aca9a4a206a98049b3cbbd79cb59ca43d40614ee8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://024918b99b0960332808509aca9a4a206a98049b3cbbd79cb59ca43d40614ee8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T17:23:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T17:23:33Z\\\"}}}],\\\"startTime\\\":\\\"2026-02-16T17:24:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-master-0\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:03.850552 master-0 kubenswrapper[4652]: I0216 17:24:03.850501 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.850552 master-0 kubenswrapper[4652]: I0216 17:24:03.850553 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:03.850701 master-0 kubenswrapper[4652]: I0216 17:24:03.850579 4652 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:03.850701 master-0 kubenswrapper[4652]: I0216 17:24:03.850619 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:03.850701 master-0 kubenswrapper[4652]: I0216 17:24:03.850643 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:03.850964 master-0 kubenswrapper[4652]: E0216 17:24:03.850827 4652 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:03.850964 master-0 kubenswrapper[4652]: E0216 17:24:03.850942 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.350917076 +0000 UTC m=+1.739085662 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:03.851077 master-0 kubenswrapper[4652]: I0216 17:24:03.850976 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:03.851077 master-0 kubenswrapper[4652]: I0216 17:24:03.851021 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:03.851077 master-0 kubenswrapper[4652]: I0216 17:24:03.851052 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:24:03.851202 master-0 kubenswrapper[4652]: I0216 17:24:03.851103 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:03.851202 master-0 kubenswrapper[4652]: I0216 17:24:03.851149 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:03.851202 master-0 kubenswrapper[4652]: I0216 17:24:03.851178 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:03.851427 master-0 kubenswrapper[4652]: E0216 17:24:03.851204 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:03.851427 master-0 kubenswrapper[4652]: I0216 17:24:03.851209 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod 
\"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:03.851427 master-0 kubenswrapper[4652]: I0216 17:24:03.851240 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:03.851427 master-0 kubenswrapper[4652]: E0216 17:24:03.851287 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.351269575 +0000 UTC m=+1.739438101 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:03.851427 master-0 kubenswrapper[4652]: E0216 17:24:03.851420 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:03.851614 master-0 kubenswrapper[4652]: E0216 17:24:03.851457 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.351447 +0000 UTC m=+1.739615546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:03.851614 master-0 kubenswrapper[4652]: E0216 17:24:03.851511 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:03.851614 master-0 kubenswrapper[4652]: E0216 17:24:03.851557 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.351542182 +0000 UTC m=+1.739710698 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:03.851614 master-0 kubenswrapper[4652]: I0216 17:24:03.851575 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:03.851614 master-0 kubenswrapper[4652]: I0216 17:24:03.851605 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.851885 master-0 kubenswrapper[4652]: I0216 17:24:03.851629 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:24:03.851885 master-0 kubenswrapper[4652]: E0216 17:24:03.851684 4652 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:03.851885 master-0 kubenswrapper[4652]: E0216 17:24:03.851694 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:03.851885 master-0 kubenswrapper[4652]: E0216 17:24:03.851718 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.351708387 +0000 UTC m=+1.739876903 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:03.851885 master-0 kubenswrapper[4652]: E0216 17:24:03.851733 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.351725097 +0000 UTC m=+1.739893613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:03.851885 master-0 kubenswrapper[4652]: I0216 17:24:03.851748 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.851885 master-0 kubenswrapper[4652]: I0216 17:24:03.851788 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:03.851885 master-0 kubenswrapper[4652]: I0216 17:24:03.851820 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:03.851885 master-0 kubenswrapper[4652]: I0216 17:24:03.851844 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: E0216 17:24:03.852168 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: I0216 17:24:03.851873 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: E0216 17:24:03.852232 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.35221303 +0000 UTC m=+1.740381566 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: I0216 17:24:03.852281 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: I0216 17:24:03.852315 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: E0216 17:24:03.852336 4652 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: I0216 17:24:03.852348 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-ovnkube-config\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: E0216 17:24:03.852395 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.352381075 +0000 UTC m=+1.740549651 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: I0216 17:24:03.852343 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: I0216 17:24:03.852427 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: I0216 17:24:03.852446 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:03.852455 master-0 kubenswrapper[4652]: I0216 17:24:03.852464 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: E0216 17:24:03.852478 4652 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: E0216 17:24:03.852526 4652 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: E0216 17:24:03.852591 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.35257124 +0000 UTC m=+1.740739766 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: E0216 17:24:03.852608 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: E0216 17:24:03.852612 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.35260461 +0000 UTC m=+1.740773136 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852484 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: E0216 17:24:03.852635 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.352626251 +0000 UTC m=+1.740794837 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852659 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852695 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852725 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852803 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852832 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: E0216 17:24:03.852835 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852857 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852884 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852906 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-whereabouts-configmap\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: E0216 17:24:03.852916 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.352893558 +0000 UTC m=+1.741062134 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.852966 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:03.853210 master-0 kubenswrapper[4652]: I0216 17:24:03.853012 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.853293 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: E0216 17:24:03.853808 4652 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: E0216 17:24:03.853863 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.353849194 +0000 UTC m=+1.742017730 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: E0216 17:24:03.853813 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.853905 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: E0216 17:24:03.853915 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.353904735 +0000 UTC m=+1.742073341 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.853938 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.853970 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.853999 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854027 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854049 4652 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4549ea98-7379-49e1-8452-5efb643137ca-metrics-tls\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854423 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-auth-proxy-config\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854454 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854460 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854511 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854532 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854551 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854570 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: E0216 17:24:03.854567 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered 
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854600 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: E0216 17:24:03.854620 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.354607034 +0000 UTC m=+1.742775560 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854644 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854669 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854689 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854707 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854725 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854743 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854759 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854778 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854794 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854813 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854832 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: I0216 17:24:03.854849 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-auth-proxy-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: E0216 17:24:03.854977 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Feb 16 17:24:03.854959 master-0 kubenswrapper[4652]: E0216 17:24:03.855003 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.354996384 +0000 UTC m=+1.743164900 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855060 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.855075 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855095 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855101 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-service-ca-bundle\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.855105 4652 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.855159 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.355145038 +0000 UTC m=+1.743313644 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.855181 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.355172899 +0000 UTC m=+1.743341515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.855295 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.855361 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.355340113 +0000 UTC m=+1.743508709 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855482 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855516 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855543 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855567 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855595 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-catalog-content\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855610 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855621 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-out\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855696 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4549ea98-7379-49e1-8452-5efb643137ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zt8mt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-6fcf4c966-6bmf9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855718 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-metrics-client-ca\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.855822 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.855875 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.355859337 +0000 UTC m=+1.744027853 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.855968 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.856013 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.355998781 +0000 UTC m=+1.744167287 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.855997 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.856055 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.856075 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.356068502 +0000 UTC m=+1.744237108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.856071 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.856105 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/39387549-c636-4bd4-b463-f6a93810f277-webhook-cert\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.856117 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.856109 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.856145 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.856163 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.356152885 +0000 UTC m=+1.744321481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.856185 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.856212 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.856218 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-utilities\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.856288 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: I0216 17:24:03.856389 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/ba37ef0e-373c-4ccc-b082-668630399765-audit-log\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.856492 4652 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.856525 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.356516034 +0000 UTC m=+1.744684550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.856652 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Feb 16 17:24:03.856673 master-0 kubenswrapper[4652]: E0216 17:24:03.856697 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.356685999 +0000 UTC m=+1.744854525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.856724 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.856773 4652 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.856798 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.356789662 +0000 UTC m=+1.744958178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.856864 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.856896 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.856920 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.856944 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.856970 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.856991 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.856998 4652 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.856992 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.857105 4652 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered
Feb 16
17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.857140 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.357130101 +0000 UTC m=+1.745298627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-config" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.857170 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.357162462 +0000 UTC m=+1.745330988 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.857168 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktgm7\" (UniqueName: \"kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.857280 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.357237774 +0000 UTC m=+1.745406370 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.857557 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.857621 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.857651 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.857679 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.857799 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.857880 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.857930 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.857935 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.357924552 +0000 UTC m=+1.746093068 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858037 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858064 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858087 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858111 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.858168 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.858200 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.358192239 +0000 UTC m=+1.746360755 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.858293 4652 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.858318 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.358311802 +0000 UTC m=+1.746480318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858340 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858370 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.858424 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.858461 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.358450036 +0000 UTC m=+1.746618562 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858479 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858501 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858519 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858535 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858554 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858572 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858591 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858608 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: 
\"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858624 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858645 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858663 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858681 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858698 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858716 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858731 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858747 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858764 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858781 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858809 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858830 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858834 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858851 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858874 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.858880 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 
17:24:03.858892 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: E0216 17:24:03.858908 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.358899378 +0000 UTC m=+1.747067894 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:03.858832 master-0 kubenswrapper[4652]: I0216 17:24:03.858925 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.858945 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.858958 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a20f4-255a-4312-8f43-174a28c06340-utilities\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.858962 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.858982 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.858998 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " 
pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859016 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859034 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859052 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859068 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859087 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859106 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859126 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859144 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.862058 master-0 
kubenswrapper[4652]: I0216 17:24:03.859160 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859178 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859199 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859224 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859337 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859377 4652 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859389 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.35937477 +0000 UTC m=+1.747543336 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859413 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359404171 +0000 UTC m=+1.747572747 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859416 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859429 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/ae20b683-dac8-419e-808a-ddcdb3c564e1-metrics-client-ca\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859437 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859459 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359447572 +0000 UTC m=+1.747616178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859478 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859485 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859486 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359472553 +0000 UTC m=+1.747641119 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859500 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/702322ac-7610-4568-9a68-b6acbd1f0c12-config\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859505 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-serving-cert\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859241 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859570 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859606 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359598556 +0000 UTC m=+1.747767072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859623 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359615657 +0000 UTC m=+1.747784173 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859649 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359637527 +0000 UTC m=+1.747806143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859661 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859678 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859692 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859729 4652 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859762 4652 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859767 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859785 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359776511 +0000 UTC m=+1.747945027 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859799 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859816 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859819 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859837 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359830902 +0000 UTC m=+1.747999418 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859850 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859873 4652 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859878 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859893 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359886764 +0000 UTC m=+1.748055280 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859905 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859925 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859926 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.859944 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859947 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359942015 +0000 UTC m=+1.748110531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859971 4652 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859982 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859989 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359984176 +0000 UTC m=+1.748152692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.859999 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.359994157 +0000 UTC m=+1.748162673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.860011 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860055 4652 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860076 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360070219 +0000 UTC m=+1.748238735 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860104 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860120 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.36011546 +0000 UTC m=+1.748283976 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.860134 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.860163 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-certs\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860179 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860214 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360204112 +0000 UTC m=+1.748372708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860240 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.860316 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-daemon-config\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860353 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360343416 +0000 UTC m=+1.748511932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860369 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.860373 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860387 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360382477 +0000 UTC m=+1.748550993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.860366 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-catalog-content\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860402 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360392317 +0000 UTC m=+1.748560923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860425 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360416858 +0000 UTC m=+1.748585464 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860443 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360436208 +0000 UTC m=+1.748604814 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860448 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860430 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860467 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360461629 +0000 UTC m=+1.748630145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860483 4652 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860488 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.36047812 +0000 UTC m=+1.748646636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860507 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.36049931 +0000 UTC m=+1.748667936 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860449 4652 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860579 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360569222 +0000 UTC m=+1.748737828 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860514 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860626 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360617813 +0000 UTC m=+1.748786419 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: E0216 17:24:03.860672 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.360663814 +0000 UTC m=+1.748832420 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.860987 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.860895 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-images\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.861021 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.861061 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:03.862058 master-0 kubenswrapper[4652]: I0216 17:24:03.861083 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.862460 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.862494 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.362485763 +0000 UTC m=+1.750654279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862565 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862595 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862625 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862648 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862720 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862744 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862767 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862790 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: 
\"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862814 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862836 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862862 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862886 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.862911 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.863226 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-stats-auth\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863315 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863350 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.363340326 +0000 UTC m=+1.751508842 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863352 4652 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863397 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.363385897 +0000 UTC m=+1.751554463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.863438 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.863470 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863387 4652 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863527 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.36351839 +0000 UTC m=+1.751686896 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863534 4652 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.863709 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-cache\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863794 4652 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863839 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.363828129 +0000 UTC m=+1.751996745 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.863937 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864161 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-config\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.864204 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.364193838 +0000 UTC m=+1.752362404 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.864276 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.864319 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.364310541 +0000 UTC m=+1.752479057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.864378 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.364366883 +0000 UTC m=+1.752535389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864352 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gq8x\" (UniqueName: \"kubernetes.io/projected/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-kube-api-access-2gq8x\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864578 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864634 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864671 4652 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864715 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864735 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864757 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864775 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864795 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864813 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864832 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864845 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-mcd-auth-proxy-config\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864849 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864886 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.864917 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.864959 4652 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.864978 4652 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.864985 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.364977929 +0000 UTC m=+1.753146445 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.865010 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.3650007 +0000 UTC m=+1.753169216 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.865021 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.865040 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.365034661 +0000 UTC m=+1.753203177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.865048 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.865074 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.365065981 +0000 UTC m=+1.753234497 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.865141 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: E0216 17:24:03.865172 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.365160604 +0000 UTC m=+1.753329120 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865195 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865221 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865265 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865293 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865316 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865341 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865364 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865389 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5qxm\" (UniqueName: 
\"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865411 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865435 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865460 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865483 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865502 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865522 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865540 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865561 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865584 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865608 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865633 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865655 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865678 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865701 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865728 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865752 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod 
\"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865774 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865791 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865809 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865826 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865842 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865857 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865874 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865891 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 
17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865907 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865922 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865939 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865954 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865971 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.865989 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866007 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866023 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866039 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866058 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866077 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866094 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866111 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866128 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866144 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866162 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866179 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866195 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866212 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866229 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866261 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866279 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866297 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866324 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866342 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866367 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866392 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866415 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67l5\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866440 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866462 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866486 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866509 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866532 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:24:03.868262 master-0 kubenswrapper[4652]: I0216 17:24:03.866556 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866578 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866604 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866628 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866649 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866665 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866682 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866701 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866718 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866741 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866759 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866776 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866793 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866809 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866829 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866848 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866866 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866886 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866904 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpjv7\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866920 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866938 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866954 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866971 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.866989 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867006 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867024 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867039 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867057 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867073 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867093 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867109 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867128 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867146 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867164 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867182 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867204 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867228 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867266 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/810a2275-fae5-45df-a3b8-92860451d33b-host\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867288 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867312 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867329 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867349 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867367 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867386 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867403 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867423 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867439 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867457 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867475 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867492 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867511 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867527 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867545 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867564 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867581 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867599 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867616 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867639 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867659 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867675 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867693 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867710 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867727 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867747 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867764 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867781 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867798 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867816 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867833 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867850 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867868 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867886 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867902 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867922 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867941 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867960 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867979 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.867996 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868015 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868032 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868051 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868071 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868087 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868105 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868125 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868145 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868161 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868178 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868198 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868218 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868238 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868286 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868314 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868339 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868366 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868391 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868417 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868444 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868462 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868482 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868501 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868519 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868560 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868579 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868598 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868619 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868639 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868663 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868691 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868712 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868731 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868748 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868767 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868785 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868802 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868821 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868840 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868859 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868877 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868895 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868914 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868934 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868953 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868971 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.868992 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.869008 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.869027 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.869048 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.869066 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:03.875526 master-0 kubenswrapper[4652]: I0216 17:24:03.869086 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869106 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869127 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869146 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869164 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869182 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869199 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869218 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869241 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869280 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869299 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod
\"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869527 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c8729b1a-e365-4cf7-8a05-91a9987dabe9-mcc-auth-proxy-config\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.869565 4652 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.869587 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.369580511 +0000 UTC m=+1.757749027 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.869745 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ab80e0fb-09dd-4c93-b235-1487024105d2-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.869789 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.869816 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.369807927 +0000 UTC m=+1.757976443 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.870039 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e73ee493-de15-44c2-bd51-e12fcbb27a15-tmpfs\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.870193 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-metrics-client-ca\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870238 4652 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870276 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.37026837 +0000 UTC m=+1.758436886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.870327 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-tmp\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870363 4652 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870382 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.370376362 +0000 UTC m=+1.758544878 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870461 4652 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870481 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.370474905 +0000 UTC m=+1.758643421 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.870634 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-env-overrides\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870679 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870699 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.370692571 +0000 UTC m=+1.758861087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870723 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870739 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.370734182 +0000 UTC m=+1.758902698 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870769 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870786 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.370781683 +0000 UTC m=+1.758950189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870819 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870835 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.370830615 +0000 UTC m=+1.758999121 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870894 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870913 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.370907577 +0000 UTC m=+1.759076093 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870951 4652 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870967 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.370962468 +0000 UTC m=+1.759130984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.870995 4652 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871015 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.371010599 +0000 UTC m=+1.759179115 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.871188 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/9c48005e-c4df-4332-87fc-ec028f2c6921-node-bootstrap-token\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871221 4652 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871241 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.371235035 +0000 UTC m=+1.759403551 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871341 4652 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871361 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.371355938 +0000 UTC m=+1.759524454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871417 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871435 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.37143012 +0000 UTC m=+1.759598636 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871500 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871518 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.371512903 +0000 UTC m=+1.759681419 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871549 4652 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871565 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.371560854 +0000 UTC m=+1.759729370 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.871633 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-utilities\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871668 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871686 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.371680477 +0000 UTC m=+1.759848993 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.871753 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/54f29618-42c2-4270-9af7-7d82852d7cec-cache\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.871840 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-textfile\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871884 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.871902 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.371896943 +0000 UTC m=+1.760065449 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872029 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872049 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372043947 +0000 UTC m=+1.760212453 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.872100 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872131 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872149 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372143179 +0000 UTC m=+1.760311695 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.872407 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/c2511146-1d04-4ecd-a28e-79662ef7b9d3-snapshots\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872453 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872473 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372467488 +0000 UTC m=+1.760636004 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872499 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872515 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372510259 +0000 UTC m=+1.760678765 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872564 4652 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872581 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372576031 +0000 UTC m=+1.760744547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872611 4652 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872631 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372625292 +0000 UTC m=+1.760793808 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872664 4652 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872671 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872679 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872704 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372698744 +0000 UTC m=+1.760867260 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872776 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872785 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872791 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872809 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372803777 +0000 UTC m=+1.760972293 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872840 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872857 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372852738 +0000 UTC m=+1.761021254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872886 4652 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.872903 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.372898179 +0000 UTC m=+1.761066695 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.873041 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e69d8c51-e2a6-4f61-9c26-072784f6cf40-available-featuregates\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873085 4652 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873092 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873110 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.373104195 +0000 UTC m=+1.761272711 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873140 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873158 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.373152736 +0000 UTC m=+1.761321252 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873181 4652 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873197 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:04.373192277 +0000 UTC m=+1.761360793 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.873366 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-default-certificate\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.873508 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/43f65f23-4ddd-471a-9cb3-b0945382d83c-cni-binary-copy\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873546 4652 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873566 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.373560137 +0000 UTC m=+1.761728653 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"service-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873598 4652 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873615 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.373609728 +0000 UTC m=+1.761778244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873672 4652 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873691 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.37368563 +0000 UTC m=+1.761854146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.873749 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/55d635cd-1f0d-4086-96f2-9f3524f3f18c-volume-directive-shadow\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.873803 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-config-out\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873882 4652 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.873912 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.373905536 +0000 UTC m=+1.762074052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.874061 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-iptables-alerter-script\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: I0216 17:24:03.874215 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-metrics-certs\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874263 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874285 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0 podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.374277436 +0000 UTC m=+1.762445952 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874317 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874335 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.374329047 +0000 UTC m=+1.762497553 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874357 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874374 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.374369578 +0000 UTC m=+1.762538094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874406 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874424 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.37441957 +0000 UTC m=+1.762588086 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874551 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874571 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.374566354 +0000 UTC m=+1.762734870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874608 4652 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874626 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.374621395 +0000 UTC m=+1.762789911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:03.882861 master-0 kubenswrapper[4652]: E0216 17:24:03.874648 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.874663 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.374658886 +0000 UTC m=+1.762827392 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.874840 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/4e51bba5-0ebe-4e55-a588-38b71548c605-operand-assets\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.874871 4652 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.874891 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.374885652 +0000 UTC m=+1.763054168 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"audit" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.874966 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.874993 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.374986245 +0000 UTC m=+1.763154761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.875074 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-tuned\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875137 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875157 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.375151269 +0000 UTC m=+1.763319775 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875219 4652 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875236 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.375231411 +0000 UTC m=+1.763399927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875371 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875394 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.375388656 +0000 UTC m=+1.763557172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.875547 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-proxy-tls\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875597 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875619 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.375612921 +0000 UTC m=+1.763781437 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875650 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875673 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.375665683 +0000 UTC m=+1.763834199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875704 4652 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875724 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.375719284 +0000 UTC m=+1.763887800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875757 4652 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875779 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.375772636 +0000 UTC m=+1.763941152 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875821 4652 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875840 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.375835237 +0000 UTC m=+1.764003753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875887 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.875914 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.375906109 +0000 UTC m=+1.764074625 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.875974 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3beb7bf-922f-425d-8a19-fd407a7153a8-utilities\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876074 4652 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876102 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.376093914 +0000 UTC m=+1.764262430 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876156 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876181 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.376173126 +0000 UTC m=+1.764341642 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876314 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876345 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.376334571 +0000 UTC m=+1.764503087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876385 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876411 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.376404062 +0000 UTC m=+1.764572578 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876452 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876476 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.376469424 +0000 UTC m=+1.764637940 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.876724 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a94f9b8e-b020-4aab-8373-6c056ec07464-metrics-client-ca\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.876903 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovn-node-metrics-cert\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876943 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.876964 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.376957467 +0000 UTC m=+1.765125983 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.877193 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/810a2275-fae5-45df-a3b8-92860451d33b-serviceca\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877225 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877271 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.377239855 +0000 UTC m=+1.765408381 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877317 4652 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877344 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.377336627 +0000 UTC m=+1.765505143 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877383 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877406 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.377399559 +0000 UTC m=+1.765568075 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877451 4652 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877461 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877483 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.377476361 +0000 UTC m=+1.765644877 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877602 4652 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.877626 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.377620175 +0000 UTC m=+1.765788691 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.877699 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.877904 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-tls\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.878104 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fe8e8e5d-cebb-4361-b765-5ff737f5e838-metrics-client-ca\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.878193 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/822e1750-652e-4ceb-8fea-b2c1c905b0f1-catalog-content\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878235 4652 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878281 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.378271762 +0000 UTC m=+1.766440278 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878391 4652 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878420 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.378412946 +0000 UTC m=+1.766581462 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878462 4652 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878486 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.378479278 +0000 UTC m=+1.766647794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878527 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878551 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.378543269 +0000 UTC m=+1.766711785 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878679 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878702 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.378694723 +0000 UTC m=+1.766863239 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878732 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878751 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.378745895 +0000 UTC m=+1.766914401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878772 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878787 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.378782776 +0000 UTC m=+1.766951282 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878817 4652 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878833 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.378828717 +0000 UTC m=+1.766997233 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878867 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.878887 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.378882798 +0000 UTC m=+1.767051314 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.879106 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-env-overrides\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.879221 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab5760f1-b2e0-4138-9383-e4827154ac50-cni-binary-copy\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879283 4652 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879305 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.379298859 +0000 UTC m=+1.767467375 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879334 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879352 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.379347111 +0000 UTC m=+1.767515627 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879383 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879413 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.379397192 +0000 UTC m=+1.767565818 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879445 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879465 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.379459134 +0000 UTC m=+1.767627650 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879497 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879522 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.379513445 +0000 UTC m=+1.767682071 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879562 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879581 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.379575397 +0000 UTC m=+1.767743913 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.879603 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-service-ca\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879705 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.879767 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/702322ac-7610-4568-9a68-b6acbd1f0c12-machine-approver-tls\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.879780 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.379762302 +0000 UTC m=+1.767930848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.879916 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0393fe12-2533-4c9c-a8e4-a58003c88f36-catalog-content\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880018 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880047 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380038699 +0000 UTC m=+1.768207325 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.880085 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/2d1636c0-f34d-444c-822d-77f1d203ddc4-metrics-client-ca\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880161 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880193 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380184473 +0000 UTC m=+1.768353109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880163 4652 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880205 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880223 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380215764 +0000 UTC m=+1.768384400 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880273 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380237164 +0000 UTC m=+1.768405700 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880388 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880431 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380418689 +0000 UTC m=+1.768587305 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.880454 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9f9bf4ab-5415-4616-aa36-ea387c699ea9-ovnkube-script-lib\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880544 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880558 4652 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880569 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380562943 +0000 UTC m=+1.768731459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880601 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380589394 +0000 UTC m=+1.768758010 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880619 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880651 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880665 4652 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880724 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380703417 +0000 UTC m=+1.768871993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: I0216 17:24:03.880744 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/39387549-c636-4bd4-b463-f6a93810f277-ovnkube-identity-cm\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880758 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880770 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.890358 master-0 kubenswrapper[4652]: E0216 17:24:03.880778 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880806 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc 
podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380798589 +0000 UTC m=+1.768967225 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880818 4652 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880867 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380852601 +0000 UTC m=+1.769021187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880890 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880902 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880911 4652 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880923 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880930 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880965 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380956293 +0000 UTC m=+1.769124919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880976 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880987 4652 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880993 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881001 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.880987 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.380979634 +0000 UTC m=+1.769148250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881030 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881043 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881044 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.381034615 +0000 UTC m=+1.769203221 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881052 4652 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881061 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.381056816 +0000 UTC m=+1.769225332 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881083 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.381076127 +0000 UTC m=+1.769244733 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.881196 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ab80e0fb-09dd-4c93-b235-1487024105d2-env-overrides\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881298 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881340 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.381327313 +0000 UTC m=+1.769495919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881384 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881399 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881407 4652 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881436 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.381427406 +0000 UTC m=+1.769595922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881488 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881497 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881512 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881521 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881552 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:04.381542539 +0000 UTC m=+1.769711125 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.881841 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j99jl\" (UniqueName: \"kubernetes.io/projected/fe8e8e5d-cebb-4361-b765-5ff737f5e838-kube-api-access-j99jl\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.881875 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-bound-sa-token\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881500 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881904 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881927 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.381920499 +0000 UTC m=+1.770089015 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881960 4652 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881968 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.881986 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.381980911 +0000 UTC m=+1.770149427 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.882468 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rtg\" (UniqueName: \"kubernetes.io/projected/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-api-access-76rtg\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.882574 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94kdz\" (UniqueName: \"kubernetes.io/projected/f0b1ebd3-1068-4624-9b6d-3e9f45ded76a-kube-api-access-94kdz\") pod \"router-default-864ddd5f56-pm4rt\" (UID: \"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a\") " pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.882789 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq4dn\" (UniqueName: \"kubernetes.io/projected/06067627-6ccf-4cc8-bd20-dabdd776bb46-kube-api-access-pq4dn\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.882846 4652 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.882858 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object 
"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.882881 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.382874344 +0000 UTC m=+1.771042860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.883168 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvw4s\" (UniqueName: \"kubernetes.io/projected/9c48005e-c4df-4332-87fc-ec028f2c6921-kube-api-access-gvw4s\") pod \"machine-config-server-2ws9r\" (UID: \"9c48005e-c4df-4332-87fc-ec028f2c6921\") " pod="openshift-machine-config-operator/machine-config-server-2ws9r" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.883405 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57455\" (UniqueName: \"kubernetes.io/projected/ba37ef0e-373c-4ccc-b082-668630399765-kube-api-access-57455\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.885285 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwxl\" (UniqueName: \"kubernetes.io/projected/ab80e0fb-09dd-4c93-b235-1487024105d2-kube-api-access-fkwxl\") pod \"ovnkube-control-plane-bb7ffbb8d-lzgs9\" (UID: \"ab80e0fb-09dd-4c93-b235-1487024105d2\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.886103 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnnc5\" (UniqueName: \"kubernetes.io/projected/ad805251-19d0-4d2f-b741-7d11158f1f03-kube-api-access-bnnc5\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.886121 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q46jg\" (UniqueName: \"kubernetes.io/projected/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-kube-api-access-q46jg\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.887158 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.887173 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.887183 4652 projected.go:194] Error 
preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.887226 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.38721345 +0000 UTC m=+1.775381966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.887327 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk7xl\" (UniqueName: \"kubernetes.io/projected/39387549-c636-4bd4-b463-f6a93810f277-kube-api-access-vk7xl\") pod \"network-node-identity-hhcpr\" (UID: \"39387549-c636-4bd4-b463-f6a93810f277\") " pod="openshift-network-node-identity/network-node-identity-hhcpr" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.890375 4652 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.890389 4652 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.890398 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.890403 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m29g\" (UniqueName: \"kubernetes.io/projected/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-kube-api-access-8m29g\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: E0216 17:24:03.890431 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.390423365 +0000 UTC m=+1.778591881 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.893650 master-0 kubenswrapper[4652]: I0216 17:24:03.890740 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktgm7\" (UniqueName: \"kubernetes.io/projected/810a2275-fae5-45df-a3b8-92860451d33b-kube-api-access-ktgm7\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv" Feb 16 17:24:03.895572 master-0 kubenswrapper[4652]: E0216 17:24:03.895545 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.895572 master-0 kubenswrapper[4652]: E0216 17:24:03.895564 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.895572 master-0 kubenswrapper[4652]: E0216 17:24:03.895573 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.895662 master-0 kubenswrapper[4652]: E0216 17:24:03.895616 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.395605542 +0000 UTC m=+1.783774118 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.909332 master-0 kubenswrapper[4652]: E0216 17:24:03.909297 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:03.909332 master-0 kubenswrapper[4652]: E0216 17:24:03.909327 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.909418 master-0 kubenswrapper[4652]: E0216 17:24:03.909340 4652 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.909418 master-0 kubenswrapper[4652]: E0216 17:24:03.909387 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.409374288 +0000 UTC m=+1.797542794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.931414 master-0 kubenswrapper[4652]: E0216 17:24:03.931360 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:03.931414 master-0 kubenswrapper[4652]: E0216 17:24:03.931383 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.931414 master-0 kubenswrapper[4652]: E0216 17:24:03.931393 4652 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.931738 master-0 kubenswrapper[4652]: E0216 17:24:03.931434 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.431423543 +0000 UTC m=+1.819592059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.952935 master-0 kubenswrapper[4652]: E0216 17:24:03.952877 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:03.952935 master-0 kubenswrapper[4652]: E0216 17:24:03.952903 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:03.952935 master-0 kubenswrapper[4652]: E0216 17:24:03.952913 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.952935 master-0 kubenswrapper[4652]: E0216 17:24:03.952949 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.452939575 +0000 UTC m=+1.841108091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:03.970034 master-0 kubenswrapper[4652]: I0216 17:24:03.969972 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:24:03.970034 master-0 kubenswrapper[4652]: I0216 17:24:03.970011 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.970034 master-0 kubenswrapper[4652]: I0216 17:24:03.970029 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.970302 master-0 kubenswrapper[4652]: I0216 17:24:03.970114 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-rootfs\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " 
pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:24:03.970302 master-0 kubenswrapper[4652]: I0216 17:24:03.970203 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-node-log\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.970302 master-0 kubenswrapper[4652]: I0216 17:24:03.970230 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-etc-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.970302 master-0 kubenswrapper[4652]: I0216 17:24:03.970234 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.970302 master-0 kubenswrapper[4652]: I0216 17:24:03.970286 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-conf-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.970448 master-0 kubenswrapper[4652]: I0216 17:24:03.970309 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.970448 master-0 kubenswrapper[4652]: I0216 17:24:03.970351 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-multus-certs\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.970448 master-0 kubenswrapper[4652]: I0216 17:24:03.970417 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:03.970541 master-0 kubenswrapper[4652]: I0216 17:24:03.970523 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.970541 master-0 kubenswrapper[4652]: I0216 17:24:03.970529 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-sys\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:03.970601 master-0 kubenswrapper[4652]: I0216 
17:24:03.970576 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-systemd-units\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.970704 master-0 kubenswrapper[4652]: I0216 17:24:03.970662 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.970745 master-0 kubenswrapper[4652]: I0216 17:24:03.970716 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:24:03.970857 master-0 kubenswrapper[4652]: I0216 17:24:03.970836 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.970894 master-0 kubenswrapper[4652]: I0216 17:24:03.970866 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-ssl-certs\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:24:03.970894 master-0 kubenswrapper[4652]: I0216 17:24:03.970874 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-bin\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.970955 master-0 kubenswrapper[4652]: I0216 17:24:03.970938 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.971004 master-0 kubenswrapper[4652]: I0216 17:24:03.970982 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.971040 master-0 kubenswrapper[4652]: I0216 17:24:03.971016 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.971074 master-0 kubenswrapper[4652]: I0216 17:24:03.971060 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-run\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.971119 master-0 kubenswrapper[4652]: I0216 17:24:03.971097 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-netns\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.971213 master-0 kubenswrapper[4652]: I0216 17:24:03.971189 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.971401 master-0 kubenswrapper[4652]: I0216 17:24:03.971377 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:03.971511 master-0 kubenswrapper[4652]: I0216 17:24:03.971497 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.971589 master-0 kubenswrapper[4652]: I0216 17:24:03.971574 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.971589 master-0 kubenswrapper[4652]: I0216 17:24:03.971579 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-docker\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:03.971650 master-0 kubenswrapper[4652]: I0216 17:24:03.971589 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-hostroot\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.971650 master-0 kubenswrapper[4652]: I0216 17:24:03.971589 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-lib-modules\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.971650 master-0 kubenswrapper[4652]: I0216 17:24:03.971631 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysconfig\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.971737 master-0 kubenswrapper[4652]: I0216 17:24:03.971708 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.971825 master-0 kubenswrapper[4652]: I0216 17:24:03.971807 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-cnibin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.971856 master-0 kubenswrapper[4652]: I0216 17:24:03.971826 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.971890 master-0 kubenswrapper[4652]: I0216 17:24:03.971861 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-run-k8s-cni-cncf-io\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.971920 master-0 kubenswrapper[4652]: I0216 17:24:03.971887 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.972009 master-0 kubenswrapper[4652]: I0216 17:24:03.971982 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.972041 master-0 kubenswrapper[4652]: I0216 17:24:03.972011 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-cni-netd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.972102 master-0 kubenswrapper[4652]: I0216 17:24:03.971982 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.972138 master-0 kubenswrapper[4652]: I0216 17:24:03.972086 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:03.972138 master-0 kubenswrapper[4652]: I0216 17:24:03.972123 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/54f29618-42c2-4270-9af7-7d82852d7cec-etc-containers\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:03.972193 master-0 kubenswrapper[4652]: I0216 17:24:03.972156 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:24:03.972293 master-0 kubenswrapper[4652]: I0216 17:24:03.972274 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a6fe41b0-1a42-4f07-8220-d9aaa50788ad-hosts-file\") pod \"node-resolver-vfxj4\" (UID: \"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\") " pod="openshift-dns/node-resolver-vfxj4" Feb 16 17:24:03.972293 master-0 kubenswrapper[4652]: I0216 17:24:03.972281 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.972577 master-0 kubenswrapper[4652]: I0216 17:24:03.972337 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-var-lib-kubelet\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.972577 master-0 kubenswrapper[4652]: I0216 17:24:03.972359 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.972577 master-0 kubenswrapper[4652]: I0216 17:24:03.972392 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-kubernetes\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.972577 master-0 kubenswrapper[4652]: I0216 17:24:03.972469 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.972577 master-0 
kubenswrapper[4652]: I0216 17:24:03.972530 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.972577 master-0 kubenswrapper[4652]: I0216 17:24:03.972534 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-cnibin\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.972577 master-0 kubenswrapper[4652]: I0216 17:24:03.972567 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-systemd\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.972577 master-0 kubenswrapper[4652]: I0216 17:24:03.972589 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.972935 master-0 kubenswrapper[4652]: I0216 17:24:03.972615 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.972935 master-0 kubenswrapper[4652]: I0216 17:24:03.972637 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-ovn\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:03.972935 master-0 kubenswrapper[4652]: I0216 17:24:03.972674 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-system-cni-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk" Feb 16 17:24:03.972935 master-0 kubenswrapper[4652]: I0216 17:24:03.972681 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:03.973110 master-0 kubenswrapper[4652]: I0216 17:24:03.972850 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" Feb 16 17:24:03.973304 master-0 kubenswrapper[4652]: I0216 
17:24:03.973200 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.973304 master-0 kubenswrapper[4652]: I0216 17:24:03.972853 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-os-release\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.973304 master-0 kubenswrapper[4652]: I0216 17:24:03.973201 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.973452 master-0 kubenswrapper[4652]: I0216 17:24:03.973310 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-system-cni-dir\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.973452 master-0 kubenswrapper[4652]: I0216 17:24:03.973339 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.973575 master-0 kubenswrapper[4652]: I0216 17:24:03.973430 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.974078 master-0 kubenswrapper[4652]: I0216 17:24:03.973470 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-host\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.974078 master-0 kubenswrapper[4652]: I0216 17:24:03.973668 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.974078 master-0 kubenswrapper[4652]: I0216 17:24:03.973669 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xrw2\" (UniqueName: \"kubernetes.io/projected/9f9bf4ab-5415-4616-aa36-ea387c699ea9-kube-api-access-9xrw2\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.974078 master-0 kubenswrapper[4652]: I0216 17:24:03.973697 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-run-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.974078 master-0 kubenswrapper[4652]: I0216 17:24:03.973498 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-run-netns\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.974078 master-0 kubenswrapper[4652]: I0216 17:24:03.973914 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.974078 master-0 kubenswrapper[4652]: I0216 17:24:03.973960 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/810a2275-fae5-45df-a3b8-92860451d33b-host\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974109 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974165 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974189 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-root\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974231 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974290 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-multus-socket-dir-parent\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974316 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974341 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/810a2275-fae5-45df-a3b8-92860451d33b-host\") pod \"node-ca-xv2wv\" (UID: \"810a2275-fae5-45df-a3b8-92860451d33b\") " pod="openshift-image-registry/node-ca-xv2wv"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974375 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974384 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit-dir\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974426 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-systemd\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974438 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-log-socket\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974503 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-os-release\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974606 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974660 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974766 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974799 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974830 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.974922 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975032 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975119 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975152 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975335 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975366 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975521 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975718 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975772 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975798 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.975945 master-0 kubenswrapper[4652]: I0216 17:24:03.975879 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976058 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-containers\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976114 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-multus\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976125 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976133 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-kubelet\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976512 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab5760f1-b2e0-4138-9383-e4827154ac50-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976582 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-sys\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976645 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976683 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-slash\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976708 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-var-lib-openvswitch\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976736 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-host-var-lib-cni-bin\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976770 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-etc-docker\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976832 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976840 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.976885 master-0 kubenswrapper[4652]: I0216 17:24:03.976867 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/43f65f23-4ddd-471a-9cb3-b0945382d83c-etc-kubernetes\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj"
Feb 16 17:24:03.977355 master-0 kubenswrapper[4652]: I0216 17:24:03.976830 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-sysctl-conf\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.977472 master-0 kubenswrapper[4652]: I0216 17:24:03.977450 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9f9bf4ab-5415-4616-aa36-ea387c699ea9-host-kubelet\") pod \"ovnkube-node-flr86\" (UID: \"9f9bf4ab-5415-4616-aa36-ea387c699ea9\") " pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:03.977510 master-0 kubenswrapper[4652]: I0216 17:24:03.977486 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/4549ea98-7379-49e1-8452-5efb643137ca-host-etc-kube\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:24:03.977510 master-0 kubenswrapper[4652]: I0216 17:24:03.977505 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b3fa6ac1-781f-446c-b6b4-18bdb7723c23-host-slash\") pod \"iptables-alerter-czzz2\" (UID: \"b3fa6ac1-781f-446c-b6b4-18bdb7723c23\") " pod="openshift-network-operator/iptables-alerter-czzz2"
Feb 16 17:24:03.977566 master-0 kubenswrapper[4652]: I0216 17:24:03.977539 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:03.977652 master-0 kubenswrapper[4652]: I0216 17:24:03.977599 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-dir\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:03.977741 master-0 kubenswrapper[4652]: I0216 17:24:03.977718 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.977792 master-0 kubenswrapper[4652]: I0216 17:24:03.977776 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:24:03.977831 master-0 kubenswrapper[4652]: I0216 17:24:03.977808 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.978035 master-0 kubenswrapper[4652]: I0216 17:24:03.977913 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.978035 master-0 kubenswrapper[4652]: I0216 17:24:03.978003 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/dce85b5e-6e92-4e0e-bee7-07b1a3634302-node-pullsecrets\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:03.978035 master-0 kubenswrapper[4652]: I0216 17:24:03.978013 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz"
Feb 16 17:24:03.978169 master-0 kubenswrapper[4652]: I0216 17:24:03.978119 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/a94f9b8e-b020-4aab-8373-6c056ec07464-node-exporter-wtmp\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c"
Feb 16 17:24:03.978260 master-0 kubenswrapper[4652]: I0216 17:24:03.978187 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-etc-modprobe-d\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:03.992051 master-0 kubenswrapper[4652]: I0216 17:24:03.992008 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7w67\" (UniqueName: \"kubernetes.io/projected/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-kube-api-access-j7w67\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:24:04.015767 master-0 kubenswrapper[4652]: I0216 17:24:04.015721 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:24:04.017330 master-0 kubenswrapper[4652]: E0216 17:24:04.017284 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.017330 master-0 kubenswrapper[4652]: E0216 17:24:04.017329 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.017485 master-0 kubenswrapper[4652]: E0216 17:24:04.017346 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.017485 master-0 kubenswrapper[4652]: E0216 17:24:04.017423 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.517401216 +0000 UTC m=+1.905569742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.020551 master-0 kubenswrapper[4652]: I0216 17:24:04.020440 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:24:04.033907 master-0 kubenswrapper[4652]: I0216 17:24:04.033882 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbtld\" (UniqueName: \"kubernetes.io/projected/2d1636c0-f34d-444c-822d-77f1d203ddc4-kube-api-access-vbtld\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:24:04.050890 master-0 kubenswrapper[4652]: I0216 17:24:04.050819 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-controller-manager-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f353131d8a1223db7f637c9851016b3a348d80c2b2be663a2db6d01e14ddca88\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"openshift-controller-manager-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dptnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-5f5f84757d-ktmm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 17:24:04.065332 master-0 kubenswrapper[4652]: I0216 17:24:04.065293 4652 scope.go:117] "RemoveContainer" containerID="b4315f83666dc83dd0d090c586d4190807ce457fde99424e3d29daa05720934f"
Feb 16 17:24:04.076150 master-0 kubenswrapper[4652]: I0216 17:24:04.075513 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt8mt\" (UniqueName: \"kubernetes.io/projected/4549ea98-7379-49e1-8452-5efb643137ca-kube-api-access-zt8mt\") pod \"network-operator-6fcf4c966-6bmf9\" (UID: \"4549ea98-7379-49e1-8452-5efb643137ca\") " pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9"
Feb 16 17:24:04.095604 master-0 kubenswrapper[4652]: E0216 17:24:04.095567 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.095604 master-0 kubenswrapper[4652]: E0216 17:24:04.095598 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.095727 master-0 kubenswrapper[4652]: E0216 17:24:04.095610 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.095727 master-0 kubenswrapper[4652]: E0216 17:24:04.095671 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.595651724 +0000 UTC m=+1.983820250 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.099808 master-0 kubenswrapper[4652]: I0216 17:24:04.099748 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Feb 16 17:24:04.111817 master-0 kubenswrapper[4652]: E0216 17:24:04.111785 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.111817 master-0 kubenswrapper[4652]: E0216 17:24:04.111811 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.111918 master-0 kubenswrapper[4652]: E0216 17:24:04.111825 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.111918 master-0 kubenswrapper[4652]: E0216 17:24:04.111879 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.611862425 +0000 UTC m=+2.000030941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.116311 master-0 kubenswrapper[4652]: I0216 17:24:04.116264 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:24:04.120350 master-0 kubenswrapper[4652]: I0216 17:24:04.120319 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:24:04.131732 master-0 kubenswrapper[4652]: I0216 17:24:04.131612 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn82n\" (UniqueName: \"kubernetes.io/projected/c45ce0e5-c50b-4210-b7bb-82db2b2bc1db-kube-api-access-wn82n\") pod \"tuned-l5kbz\" (UID: \"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db\") " pod="openshift-cluster-node-tuning-operator/tuned-l5kbz"
Feb 16 17:24:04.154416 master-0 kubenswrapper[4652]: E0216 17:24:04.154029 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.154416 master-0 kubenswrapper[4652]: E0216 17:24:04.154066 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.154416 master-0 kubenswrapper[4652]: E0216 17:24:04.154080 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.154416 master-0 kubenswrapper[4652]: E0216 17:24:04.154157 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.654135657 +0000 UTC m=+2.042304173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.175422 master-0 kubenswrapper[4652]: I0216 17:24:04.175378 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5qxm\" (UniqueName: \"kubernetes.io/projected/ab5760f1-b2e0-4138-9383-e4827154ac50-kube-api-access-j5qxm\") pod \"multus-additional-cni-plugins-rjdlk\" (UID: \"ab5760f1-b2e0-4138-9383-e4827154ac50\") " pod="openshift-multus/multus-additional-cni-plugins-rjdlk"
Feb 16 17:24:04.190071 master-0 kubenswrapper[4652]: E0216 17:24:04.190021 4652 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.190071 master-0 kubenswrapper[4652]: E0216 17:24:04.190052 4652 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.190071 master-0 kubenswrapper[4652]: E0216 17:24:04.190064 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.190292 master-0 kubenswrapper[4652]: E0216 17:24:04.190112 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.690098732 +0000 UTC m=+2.078267248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.212572 master-0 kubenswrapper[4652]: I0216 17:24:04.212532 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2jz\" (UniqueName: \"kubernetes.io/projected/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-kube-api-access-8p2jz\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:24:04.235410 master-0 kubenswrapper[4652]: E0216 17:24:04.235344 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.235410 master-0 kubenswrapper[4652]: E0216 17:24:04.235394 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.235410 master-0 kubenswrapper[4652]: E0216 17:24:04.235412 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.235664 master-0 kubenswrapper[4652]: E0216 17:24:04.235491 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.735468457 +0000 UTC m=+2.123636983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.254813 master-0 kubenswrapper[4652]: E0216 17:24:04.254768 4652 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.254813 master-0 kubenswrapper[4652]: E0216 17:24:04.254797 4652 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.254813 master-0 kubenswrapper[4652]: E0216 17:24:04.254809 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.254986 master-0 kubenswrapper[4652]: E0216 17:24:04.254866 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.754850321 +0000 UTC m=+2.143018837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.270869 master-0 kubenswrapper[4652]: E0216 17:24:04.270822 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.270869 master-0 kubenswrapper[4652]: E0216 17:24:04.270850 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.270869 master-0 kubenswrapper[4652]: E0216 17:24:04.270861 4652 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.271007 master-0 kubenswrapper[4652]: E0216 17:24:04.270917 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.770901988 +0000 UTC m=+2.159070504 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.289612 master-0 kubenswrapper[4652]: E0216 17:24:04.289572 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.289612 master-0 kubenswrapper[4652]: E0216 17:24:04.289603 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.289612 master-0 kubenswrapper[4652]: E0216 17:24:04.289617 4652 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.289811 master-0 kubenswrapper[4652]: E0216 17:24:04.289750 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.789732408 +0000 UTC m=+2.177900924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.312343 master-0 kubenswrapper[4652]: E0216 17:24:04.312302 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.312465 master-0 kubenswrapper[4652]: E0216 17:24:04.312356 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.312465 master-0 kubenswrapper[4652]: E0216 17:24:04.312371 4652 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.312465 master-0 kubenswrapper[4652]: E0216 17:24:04.312451 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.81243171 +0000 UTC m=+2.200600226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.331662 master-0 kubenswrapper[4652]: E0216 17:24:04.331636 4652 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.331805 master-0 kubenswrapper[4652]: E0216 17:24:04.331794 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.331871 master-0 kubenswrapper[4652]: E0216 17:24:04.331861 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.332014 master-0 kubenswrapper[4652]: E0216 17:24:04.331999 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.831978519 +0000 UTC m=+2.220147035 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.358816 master-0 kubenswrapper[4652]: E0216 17:24:04.358781 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.358816 master-0 kubenswrapper[4652]: E0216 17:24:04.358809 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.358816 master-0 kubenswrapper[4652]: E0216 17:24:04.358820 4652 projected.go:194] Error preparing data for projected volume kube-api-access-st6bv for pod openshift-console/console-599b567ff7-nrcpr: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.359009 master-0 kubenswrapper[4652]: E0216 17:24:04.358873 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.858855763 +0000 UTC m=+2.247024279 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-st6bv" (UniqueName: "kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.391322 master-0 kubenswrapper[4652]: I0216 17:24:04.391279 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:24:04.391417 master-0 kubenswrapper[4652]: I0216 17:24:04.391325 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:24:04.391417 master-0 kubenswrapper[4652]: I0216 17:24:04.391347 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:24:04.391534 master-0 kubenswrapper[4652]: E0216 17:24:04.391502 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.391582 master-0 kubenswrapper[4652]: E0216 17:24:04.391555 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.391631 master-0 kubenswrapper[4652]: E0216 17:24:04.391581 4652 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.391672 master-0 kubenswrapper[4652]: E0216 17:24:04.391634 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Feb 16 17:24:04.391672 master-0 kubenswrapper[4652]: E0216 17:24:04.391649 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.391626253 +0000 UTC m=+2.779794799 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.391672 master-0 kubenswrapper[4652]: E0216 17:24:04.391669 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.391660264 +0000 UTC m=+2.779828780 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered
Feb 16 17:24:04.391672 master-0 kubenswrapper[4652]: I0216 17:24:04.391579 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:04.391837 master-0 kubenswrapper[4652]: I0216 17:24:04.391702 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:04.391837 master-0 kubenswrapper[4652]: I0216 17:24:04.391726 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:04.391837 master-0 kubenswrapper[4652]: I0216 17:24:04.391752 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:04.391949 master-0 kubenswrapper[4652]: E0216 17:24:04.391834 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Feb 16 17:24:04.391949 master-0 kubenswrapper[4652]: E0216 17:24:04.391914 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.39186719 +0000 UTC m=+2.780035736 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered
Feb 16 17:24:04.391949 master-0 kubenswrapper[4652]: E0216 17:24:04.391914 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Feb 16 17:24:04.392074 master-0 kubenswrapper[4652]: E0216 17:24:04.391831 4652 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered
Feb 16 17:24:04.392074 master-0 kubenswrapper[4652]: E0216 17:24:04.391950 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Feb 16 17:24:04.392155 master-0 kubenswrapper[4652]: E0216 17:24:04.391961 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.391949742 +0000 UTC m=+2.780118288 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered
Feb 16 17:24:04.392993 master-0 kubenswrapper[4652]: I0216 17:24:04.392922 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:24:04.393128 master-0 kubenswrapper[4652]: I0216 17:24:04.393015 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:24:04.393128 master-0 kubenswrapper[4652]: E0216 17:24:04.393051 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered
Feb 16 17:24:04.393128 master-0 kubenswrapper[4652]: E0216 17:24:04.393057 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.39301415 +0000 UTC m=+2.781182686 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: E0216 17:24:04.393161 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.393136633 +0000 UTC m=+2.781305149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: E0216 17:24:04.393231 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: E0216 17:24:04.393269 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.393238926 +0000 UTC m=+2.781407442 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: E0216 17:24:04.393285 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: E0216 17:24:04.393303 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: E0216 17:24:04.393342 4652 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: E0216 17:24:04.393363 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.393348079 +0000 UTC m=+2.781516605 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: I0216 17:24:04.393215 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: E0216 17:24:04.393386 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.3933755 +0000 UTC m=+2.781544026 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered
Feb 16 17:24:04.393450 master-0 kubenswrapper[4652]: I0216 17:24:04.393430 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: I0216 17:24:04.393480 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: I0216 17:24:04.393518 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: E0216 17:24:04.393484 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: E0216 17:24:04.393562 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: E0216 17:24:04.393574 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: E0216 17:24:04.393616 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.393606356 +0000 UTC m=+2.781774882 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: E0216 17:24:04.393657 4652 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: E0216 17:24:04.393743 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.393705898 +0000 UTC m=+2.781874414 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: I0216 17:24:04.393785 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: E0216 17:24:04.393953 4652 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: E0216 17:24:04.393993 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.393982206 +0000 UTC m=+2.782150722 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: I0216 17:24:04.394024 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: I0216 17:24:04.394065 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: I0216 17:24:04.394098 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: I0216 17:24:04.394131 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: I0216 17:24:04.394158 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:04.394360 master-0 kubenswrapper[4652]: I0216 17:24:04.394188 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:04.399182 master-0 kubenswrapper[4652]: E0216 17:24:04.399018 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.399552 master-0 kubenswrapper[4652]: E0216 17:24:04.399453 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:04.400003 master-0 kubenswrapper[4652]: E0216 
17:24:04.399894 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:04.400003 master-0 kubenswrapper[4652]: E0216 17:24:04.399939 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.399930194 +0000 UTC m=+2.788098700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:04.400003 master-0 kubenswrapper[4652]: E0216 17:24:04.399968 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400038 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400063 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400079 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.400056377 +0000 UTC m=+2.788224893 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: I0216 17:24:04.399815 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400105 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.400095428 +0000 UTC m=+2.788263944 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.399896 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400125 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.400112839 +0000 UTC m=+2.788281355 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400136 4652 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: I0216 17:24:04.400158 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400173 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.40016085 +0000 UTC m=+2.788329366 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.399756 4652 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400216 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.400205351 +0000 UTC m=+2.788373857 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.399772 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400287 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.400281483 +0000 UTC m=+2.788449999 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.399617 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400307 4652 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400311 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400338 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.400326634 +0000 UTC m=+2.788495150 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400368 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.400359545 +0000 UTC m=+2.788528061 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:04.400356 master-0 kubenswrapper[4652]: E0216 17:24:04.400394 4652 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.400419 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.400413327 +0000 UTC m=+2.788581843 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.400426 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.400502 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.400543 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.400542 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.400650 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.400709 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.400692994 +0000 UTC m=+2.788861530 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.400763 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.400818 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.400898 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.400867509 +0000 UTC m=+2.789036065 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.400907 4652 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.400939 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.400952 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.40091874 +0000 UTC m=+2.789087296 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.401013 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.401122 4652 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402113 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402193 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.402159823 +0000 UTC m=+2.790328339 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.401799 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.402318 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.402352 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.402385 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402398 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.402416 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402425 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.40241648 +0000 UTC m=+2.790584996 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.402446 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402454 4652 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.402478 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402504 4652 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402512 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.402492512 +0000 UTC m=+2.790661038 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-config" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402525 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.402519552 +0000 UTC m=+2.790688068 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.402555 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402571 4652 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402571 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402603 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402608 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.402598175 +0000 UTC m=+2.790766701 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: I0216 17:24:04.402580 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402623 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.402617425 +0000 UTC m=+2.790785931 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:04.402549 master-0 kubenswrapper[4652]: E0216 17:24:04.402634 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.402629085 +0000 UTC m=+2.790797601 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.402680 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.402694 4652 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.402710 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.402729 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.402718658 +0000 UTC m=+2.790887284 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.402728 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.402768 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.402762929 +0000 UTC m=+2.790931445 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.402764 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.402811 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.402841 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.402865 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.402887 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.402870292 +0000 UTC m=+2.791038918 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.402931 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.402962 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.402978 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.402988 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403013 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403037 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403061 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403081 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: 
\"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.402992 4652 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403142 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403158 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403177 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403186 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403194 4652 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403204 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403165 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.403159209 +0000 UTC m=+2.791327715 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403234 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403026 4652 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403264 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.403231941 +0000 UTC m=+2.791400527 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403137 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403189 4652 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403312 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403235 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403036 4652 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403063 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403127 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403263 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403436 4652 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403285 4652 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403470 4652 
projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403305 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.403293193 +0000 UTC m=+2.791461719 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403510 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.403499919 +0000 UTC m=+2.791668525 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.403528 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.403519609 +0000 UTC m=+2.791688235 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403552 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403587 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403620 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403673 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403703 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403735 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403766 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403796 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403827 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403857 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403885 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403914 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403946 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.403978 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404000 4652 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.404009 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " 
pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404032 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.404022702 +0000 UTC m=+2.792191218 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.404068 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.404095 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.404128 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.404150 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.404172 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.404193 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.404223 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.404281 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404073 4652 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404476 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404497 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404564 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404612 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404620 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404636 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404646 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404659 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404684 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404691 4652 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: 
E0216 17:24:04.404174 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404141 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404297 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404354 4652 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404214 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404563 4652 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404381 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404388 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404400 4652 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404428 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404441 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404763 4652 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404315 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.40430829 +0000 UTC m=+2.792476806 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404987 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.404974538 +0000 UTC m=+2.793143054 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405007 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.404998088 +0000 UTC m=+2.793166674 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405023 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405014989 +0000 UTC m=+2.793183645 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405038 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405030489 +0000 UTC m=+2.793199115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405054 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.40504606 +0000 UTC m=+2.793214716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405070 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.40506137 +0000 UTC m=+2.793230006 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405085 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.40507751 +0000 UTC m=+2.793246166 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405100 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405092201 +0000 UTC m=+2.793260837 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405114 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405106851 +0000 UTC m=+2.793275487 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405130 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405122232 +0000 UTC m=+2.793290878 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405241 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405138972 +0000 UTC m=+2.793307598 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405272 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405263895 +0000 UTC m=+2.793432411 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405289 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405280376 +0000 UTC m=+2.793448892 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.404354 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405310 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405299146 +0000 UTC m=+2.793467792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405372 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405357008 +0000 UTC m=+2.793525534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405391 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405382128 +0000 UTC m=+2.793550654 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: E0216 17:24:04.405407 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.405399669 +0000 UTC m=+2.793568195 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405434 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405471 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405518 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405547 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405575 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405602 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405632 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405661 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405692 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405771 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405811 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405851 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405896 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405935 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.405978 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.406019 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:04.407101 master-0 kubenswrapper[4652]: I0216 17:24:04.406047 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.406109 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.406146 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.406202 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.406272 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407300 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407323 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407344 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod 
\"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407369 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407411 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407442 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407480 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407520 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407544 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407575 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407612 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 
17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407653 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407680 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407707 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407738 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407767 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407792 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407820 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407858 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407893 4652 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407916 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407935 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407954 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407972 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.407995 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408025 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.408044 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408056 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.408048149 +0000 UTC m=+2.796216665 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408079 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.40806969 +0000 UTC m=+2.796238326 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408090 4652 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408124 4652 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408135 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408166 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408192 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408221 4652 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408284 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408344 4652 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408365 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408373 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:04.416895 master-0 
kubenswrapper[4652]: E0216 17:24:04.408425 4652 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408429 4652 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408468 4652 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408475 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408504 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408511 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408532 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408550 4652 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408571 4652 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408092 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.40808555 +0000 UTC m=+2.796254056 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408593 4652 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408601 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408610 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408636 4652 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408664 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408685 4652 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408689 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408721 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408743 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408750 4652 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408779 4652 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408806 4652 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408815 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod 
openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408854 4652 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408898 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408664 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408927 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408935 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408937 4652 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408901 4652 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408343 4652 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408976 4652 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408996 4652 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.409015 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.409042 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.409067 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: 
E0216 17:24:04.408194 4652 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.409101 4652 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.409108 4652 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: I0216 17:24:04.409014 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.409045 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408612 4652 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408998 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.408596 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.408588244 +0000 UTC m=+2.796756760 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.409170 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.409161849 +0000 UTC m=+2.797330365 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410476 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.410464883 +0000 UTC m=+2.798633499 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410492 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410485714 +0000 UTC m=+2.798654360 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410501 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410496864 +0000 UTC m=+2.798665380 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410511 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410506045 +0000 UTC m=+2.798674561 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410521 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410516285 +0000 UTC m=+2.798684801 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410531 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410525795 +0000 UTC m=+2.798694311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410539 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410535305 +0000 UTC m=+2.798703821 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410553 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410546046 +0000 UTC m=+2.798714692 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410564 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410558756 +0000 UTC m=+2.798727412 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410577 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.410570616 +0000 UTC m=+2.798739252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410590 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410584107 +0000 UTC m=+2.798752623 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410602 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410596397 +0000 UTC m=+2.798764913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410613 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410607897 +0000 UTC m=+2.798776413 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410626 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410619938 +0000 UTC m=+2.798788554 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410638 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410632378 +0000 UTC m=+2.798800894 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410652 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410644598 +0000 UTC m=+2.798813244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410662 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410658639 +0000 UTC m=+2.798827155 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410672 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410668959 +0000 UTC m=+2.798837475 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410682 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410677769 +0000 UTC m=+2.798846285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410695 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410689649 +0000 UTC m=+2.798858285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410707 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.41070146 +0000 UTC m=+2.798869976 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410719 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.41071361 +0000 UTC m=+2.798882246 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410731 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.41072582 +0000 UTC m=+2.798894346 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410743 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410736991 +0000 UTC m=+2.798905507 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410755 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410749751 +0000 UTC m=+2.798918277 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410767 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410761271 +0000 UTC m=+2.798929787 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410779 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410773702 +0000 UTC m=+2.798942328 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410791 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.410785292 +0000 UTC m=+2.798953808 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410803 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410797492 +0000 UTC m=+2.798966008 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410814 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410810093 +0000 UTC m=+2.798978599 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410824 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410819953 +0000 UTC m=+2.798988469 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410837 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410831483 +0000 UTC m=+2.798999999 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410846 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410841603 +0000 UTC m=+2.799010119 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410856 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410852144 +0000 UTC m=+2.799020660 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:04.416895 master-0 kubenswrapper[4652]: E0216 17:24:04.410866 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410861714 +0000 UTC m=+2.799030230 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410874 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410870564 +0000 UTC m=+2.799039070 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410883 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.410879154 +0000 UTC m=+2.799047670 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410892 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410888515 +0000 UTC m=+2.799057031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410900 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410896875 +0000 UTC m=+2.799065391 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"service-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410910 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410906015 +0000 UTC m=+2.799074531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410918 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410914535 +0000 UTC m=+2.799083051 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410927 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.410923596 +0000 UTC m=+2.799092112 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410936 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410931886 +0000 UTC m=+2.799100402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410944 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410940706 +0000 UTC m=+2.799109222 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410953 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410948996 +0000 UTC m=+2.799117512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410962 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410958217 +0000 UTC m=+2.799126723 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410971 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.410966777 +0000 UTC m=+2.799135293 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410980 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410975957 +0000 UTC m=+2.799144473 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410988 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410984327 +0000 UTC m=+2.799152843 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.410999 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.410993047 +0000 UTC m=+2.799161693 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411012 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411007028 +0000 UTC m=+2.799175534 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411024 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411018368 +0000 UTC m=+2.799186894 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411044 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411037349 +0000 UTC m=+2.799205985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411056 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411050139 +0000 UTC m=+2.799218655 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411068 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411061939 +0000 UTC m=+2.799230465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.410448 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67l5\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-kube-api-access-l67l5\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.409081 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411119 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411113481 +0000 UTC m=+2.799281997 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411204 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411226 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411219994 +0000 UTC m=+2.799388510 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411239 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411280 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411299 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411351 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411380 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411418 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411445 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411472 4652 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411510 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411566 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411607 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411647 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411687 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411713 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411738 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411870 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: 
\"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411880 4652 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411904 4652 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411927 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411916162 +0000 UTC m=+2.800084768 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411950 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.411900 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411958 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411980 4652 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412001 4652 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412006 4652 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.411958 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411946213 +0000 UTC m=+2.800114819 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"audit" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412041 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412042 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412033155 +0000 UTC m=+2.800201781 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412091 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412117 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412129 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412090887 +0000 UTC m=+2.800259403 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412150 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412177 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0 podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412167579 +0000 UTC m=+2.800336205 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.412171 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412073 4652 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412279 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412196 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412189019 +0000 UTC m=+2.800357665 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412314 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412301842 +0000 UTC m=+2.800470458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412336 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412281 4652 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412342 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412333143 +0000 UTC m=+2.800501749 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412386 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412376524 +0000 UTC m=+2.800545170 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412398 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412426 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412401 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412394665 +0000 UTC m=+2.800563311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412463 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412452856 +0000 UTC m=+2.800621492 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.412500 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412533 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412519758 +0000 UTC m=+2.800688284 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412558 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412551819 +0000 UTC m=+2.800720335 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412563 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412576 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412570769 +0000 UTC m=+2.800739285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412594 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.41258685 +0000 UTC m=+2.800755366 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412607 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.41260142 +0000 UTC m=+2.800769926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412618 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412613881 +0000 UTC m=+2.800782397 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.412650 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412676 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412662162 +0000 UTC m=+2.800830778 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412699 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.412714 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412722 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412715893 +0000 UTC m=+2.800884519 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412751 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.412756 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412770 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412765265 +0000 UTC m=+2.800933781 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.412787 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.412809 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412816 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.412827 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.412855 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412860 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412851607 +0000 UTC m=+2.801020233 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412905 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412914 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412950 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.412936959 +0000 UTC m=+2.801105595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412972 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.41296316 +0000 UTC m=+2.801131836 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.412986 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.413021 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413011921 +0000 UTC m=+2.801180537 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.413068 4652 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.413095 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413088003 +0000 UTC m=+2.801256629 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.413152 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.413182 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.413213 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.413238 4652 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: I0216 17:24:04.413275 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.413294 4652 secret.go:189] Couldn't get secret 
openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:04.424900 master-0 kubenswrapper[4652]: E0216 17:24:04.413308 4652 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413319 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413329 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413320699 +0000 UTC m=+2.801489215 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413309 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413350 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.41334204 +0000 UTC m=+2.801510666 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413350 4652 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413372 4652 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413375 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.41336795 +0000 UTC m=+2.801536466 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413438 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413485 4652 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413497 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413522 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413512094 +0000 UTC m=+2.801680700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413543 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413553 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413574 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413564126 +0000 UTC m=+2.801732752 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413608 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413628 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413617657 +0000 UTC m=+2.801786263 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413606 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413645 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413638728 +0000 UTC m=+2.801807344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413660 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413652388 +0000 UTC m=+2.801821034 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413679 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413689 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413706 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413698159 +0000 UTC m=+2.801866675 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413751 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413756 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413781 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413811 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413834 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413861 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.413846033 +0000 UTC m=+2.802014549 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413896 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413918 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413937 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.413968 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.413994 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414012 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414032 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls 
podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414023788 +0000 UTC m=+2.802192304 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414047 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414041568 +0000 UTC m=+2.802210084 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414053 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414074 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414067779 +0000 UTC m=+2.802236285 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414100 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414101 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414153 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414162 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414185 4652 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414195 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414184072 +0000 UTC m=+2.802352638 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414157 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414216 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414216 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414205613 +0000 UTC m=+2.802374219 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414262 4652 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414276 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414284 4652 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414300 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414288895 +0000 UTC m=+2.802457521 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414338 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414328556 +0000 UTC m=+2.802497152 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414353 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414346496 +0000 UTC m=+2.802515103 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414369 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414361237 +0000 UTC m=+2.802529873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414384 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414377877 +0000 UTC m=+2.802546393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414408 4652 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414430 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414443 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414431439 +0000 UTC m=+2.802599955 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414469 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414513 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414568 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414573 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414594 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414601 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414592123 +0000 UTC m=+2.802760749 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414632 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414658 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414689 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414637 4652 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414750 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414775 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414716 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414700846 +0000 UTC m=+2.802869422 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414802 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:05.414795438 +0000 UTC m=+2.802963954 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414814 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414807979 +0000 UTC m=+2.802976495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414823 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414819079 +0000 UTC m=+2.802987595 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414834 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414840 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414858 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.41485079 +0000 UTC m=+2.803019306 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414879 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414881 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414896 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414891441 +0000 UTC m=+2.803059957 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414917 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414918 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414934 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414929022 +0000 UTC m=+2.803097538 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414949 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414964 4652 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.414971 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.414984 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.414977683 +0000 UTC m=+2.803146199 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415027 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415036 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415044 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415067 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.415060205 +0000 UTC m=+2.803228841 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415044 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415096 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.415089316 +0000 UTC m=+2.803257962 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415185 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415201 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415208 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: E0216 17:24:04.415231 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.41522387 +0000 UTC m=+2.803392486 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.417803 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:04.430511 master-0 kubenswrapper[4652]: I0216 17:24:04.422359 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r87zw\" (UniqueName: \"kubernetes.io/projected/5a939dd0-fc27-4d47-b81b-96e13e4bbca9-kube-api-access-r87zw\") pod \"cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz\" (UID: \"5a939dd0-fc27-4d47-b81b-96e13e4bbca9\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" Feb 16 17:24:04.436600 master-0 kubenswrapper[4652]: E0216 17:24:04.432171 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:04.436600 master-0 kubenswrapper[4652]: E0216 17:24:04.432197 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.436600 master-0 kubenswrapper[4652]: E0216 17:24:04.432209 4652 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.436600 master-0 kubenswrapper[4652]: E0216 17:24:04.432311 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.932293633 +0000 UTC m=+2.320462139 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.453437 master-0 kubenswrapper[4652]: I0216 17:24:04.453366 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpjv7\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-kube-api-access-vpjv7\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:04.477888 master-0 kubenswrapper[4652]: I0216 17:24:04.477751 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl5w2\" (UniqueName: \"kubernetes.io/projected/2d96ccdc-0b09-437d-bfca-1958af5d9953-kube-api-access-zl5w2\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:04.499642 master-0 kubenswrapper[4652]: E0216 17:24:04.499541 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.499642 master-0 kubenswrapper[4652]: E0216 17:24:04.499575 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.499642 master-0 kubenswrapper[4652]: E0216 17:24:04.499588 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.499802 master-0 kubenswrapper[4652]: E0216 17:24:04.499652 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:04.999629801 +0000 UTC m=+2.387798317 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.516787 master-0 kubenswrapper[4652]: I0216 17:24:04.516139 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:04.516787 master-0 kubenswrapper[4652]: I0216 17:24:04.516702 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:04.516787 master-0 kubenswrapper[4652]: E0216 17:24:04.516411 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.516787 master-0 kubenswrapper[4652]: E0216 17:24:04.516741 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.516787 master-0 kubenswrapper[4652]: E0216 17:24:04.516754 4652 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.516787 master-0 kubenswrapper[4652]: E0216 17:24:04.516790 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.516779326 +0000 UTC m=+2.904947842 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.517131 master-0 kubenswrapper[4652]: E0216 17:24:04.516809 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:04.517131 master-0 kubenswrapper[4652]: E0216 17:24:04.516827 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.517131 master-0 kubenswrapper[4652]: E0216 17:24:04.516839 4652 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.517131 master-0 kubenswrapper[4652]: E0216 17:24:04.516884 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.516872589 +0000 UTC m=+2.905041115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.517131 master-0 kubenswrapper[4652]: I0216 17:24:04.516980 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:04.517419 master-0 kubenswrapper[4652]: E0216 17:24:04.517361 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:04.517419 master-0 kubenswrapper[4652]: E0216 17:24:04.517417 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.517505 master-0 kubenswrapper[4652]: E0216 17:24:04.517437 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.517600 master-0 kubenswrapper[4652]: E0216 17:24:04.517590 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw 
podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.517572097 +0000 UTC m=+2.905740633 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.523699 master-0 kubenswrapper[4652]: I0216 17:24:04.523653 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfk2\" (UniqueName: \"kubernetes.io/projected/a94f9b8e-b020-4aab-8373-6c056ec07464-kube-api-access-8nfk2\") pod \"node-exporter-8256c\" (UID: \"a94f9b8e-b020-4aab-8373-6c056ec07464\") " pod="openshift-monitoring/node-exporter-8256c" Feb 16 17:24:04.549290 master-0 kubenswrapper[4652]: E0216 17:24:04.549202 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:04.549290 master-0 kubenswrapper[4652]: E0216 17:24:04.549276 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.549290 master-0 kubenswrapper[4652]: E0216 17:24:04.549295 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.549512 master-0 kubenswrapper[4652]: E0216 17:24:04.549371 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.049351011 +0000 UTC m=+2.437519547 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.550786 master-0 kubenswrapper[4652]: E0216 17:24:04.550747 4652 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.550841 master-0 kubenswrapper[4652]: E0216 17:24:04.550791 4652 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.550841 master-0 kubenswrapper[4652]: E0216 17:24:04.550809 4652 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.550910 master-0 kubenswrapper[4652]: E0216 17:24:04.550893 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.050870442 +0000 UTC m=+2.439038968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.581045 master-0 kubenswrapper[4652]: I0216 17:24:04.581005 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx92x\" (UniqueName: \"kubernetes.io/projected/648abb6c-9c81-4e5c-b5f1-3b7eb254f743-kube-api-access-sx92x\") pod \"machine-config-daemon-98q6v\" (UID: \"648abb6c-9c81-4e5c-b5f1-3b7eb254f743\") " pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:24:04.594163 master-0 kubenswrapper[4652]: E0216 17:24:04.594129 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:04.594163 master-0 kubenswrapper[4652]: E0216 17:24:04.594159 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.594163 master-0 kubenswrapper[4652]: E0216 17:24:04.594171 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.594410 master-0 kubenswrapper[4652]: E0216 17:24:04.594232 4652 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.094214183 +0000 UTC m=+2.482382699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.618776 master-0 kubenswrapper[4652]: E0216 17:24:04.618726 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:24:04.618776 master-0 kubenswrapper[4652]: E0216 17:24:04.618774 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.618948 master-0 kubenswrapper[4652]: E0216 17:24:04.618791 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.618948 master-0 kubenswrapper[4652]: E0216 17:24:04.618862 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.118838896 +0000 UTC m=+2.507007422 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.621985 master-0 kubenswrapper[4652]: I0216 17:24:04.621958 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:04.622059 master-0 kubenswrapper[4652]: I0216 17:24:04.622011 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:04.622144 master-0 kubenswrapper[4652]: E0216 17:24:04.622124 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:04.622199 master-0 kubenswrapper[4652]: E0216 17:24:04.622146 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.622199 master-0 kubenswrapper[4652]: E0216 17:24:04.622155 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.622319 master-0 kubenswrapper[4652]: E0216 17:24:04.622289 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.622264827 +0000 UTC m=+3.010433343 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.622382 master-0 kubenswrapper[4652]: E0216 17:24:04.622301 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:04.622382 master-0 kubenswrapper[4652]: E0216 17:24:04.622339 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.622382 master-0 kubenswrapper[4652]: E0216 17:24:04.622347 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.622382 master-0 kubenswrapper[4652]: I0216 17:24:04.622364 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:04.622382 master-0 kubenswrapper[4652]: E0216 17:24:04.622371 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.62236302 +0000 UTC m=+3.010531526 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.622538 master-0 kubenswrapper[4652]: E0216 17:24:04.622417 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:04.622538 master-0 kubenswrapper[4652]: E0216 17:24:04.622438 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.622538 master-0 kubenswrapper[4652]: E0216 17:24:04.622447 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.622628 master-0 kubenswrapper[4652]: E0216 17:24:04.622596 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.622585396 +0000 UTC m=+3.010754002 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.693845 master-0 kubenswrapper[4652]: I0216 17:24:04.693776 4652 scope.go:117] "RemoveContainer" containerID="f73a53b162a589e909165b2bc18de01e9864b7a7de56b7c875e6bb88e00666ce" Feb 16 17:24:04.726936 master-0 kubenswrapper[4652]: I0216 17:24:04.724888 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:04.726936 master-0 kubenswrapper[4652]: I0216 17:24:04.725213 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:04.726936 master-0 kubenswrapper[4652]: E0216 17:24:04.725859 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.726936 master-0 kubenswrapper[4652]: E0216 17:24:04.725915 
4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.726936 master-0 kubenswrapper[4652]: E0216 17:24:04.725936 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.726936 master-0 kubenswrapper[4652]: E0216 17:24:04.726004 4652 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:24:04.726936 master-0 kubenswrapper[4652]: E0216 17:24:04.726047 4652 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.726936 master-0 kubenswrapper[4652]: E0216 17:24:04.726074 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.726936 master-0 kubenswrapper[4652]: E0216 17:24:04.726018 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.725993982 +0000 UTC m=+3.114162538 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.729295 master-0 kubenswrapper[4652]: E0216 17:24:04.728606 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.7285484 +0000 UTC m=+3.116716966 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.762309 master-0 kubenswrapper[4652]: E0216 17:24:04.762219 4652 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Feb 16 17:24:04.762309 master-0 kubenswrapper[4652]: E0216 17:24:04.762293 4652 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.762309 master-0 kubenswrapper[4652]: E0216 17:24:04.762317 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7mrkc for pod openshift-authentication/oauth-openshift-64f85b8fc9-n9msn: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.762309 master-0 kubenswrapper[4652]: E0216 17:24:04.762314 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:04.762691 master-0 kubenswrapper[4652]: E0216 17:24:04.762345 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.762691 master-0 kubenswrapper[4652]: E0216 17:24:04.762360 4652 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.762691 master-0 kubenswrapper[4652]: E0216 17:24:04.762398 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.262372198 +0000 UTC m=+2.650540734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7mrkc" (UniqueName: "kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.762691 master-0 kubenswrapper[4652]: E0216 17:24:04.762432 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.262418339 +0000 UTC m=+2.650586875 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.763692 master-0 kubenswrapper[4652]: E0216 17:24:04.763423 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:04.763692 master-0 kubenswrapper[4652]: E0216 17:24:04.763448 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.763692 master-0 kubenswrapper[4652]: E0216 17:24:04.763459 4652 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.763692 master-0 kubenswrapper[4652]: E0216 17:24:04.763502 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.263484507 +0000 UTC m=+2.651653033 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.765201 master-0 kubenswrapper[4652]: E0216 17:24:04.764987 4652 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:04.765201 master-0 kubenswrapper[4652]: E0216 17:24:04.765012 4652 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.765201 master-0 kubenswrapper[4652]: E0216 17:24:04.765024 4652 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.765201 master-0 kubenswrapper[4652]: E0216 17:24:04.765079 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.265066769 +0000 UTC m=+2.653235295 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.767326 master-0 kubenswrapper[4652]: I0216 17:24:04.766686 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69cb\" (UniqueName: \"kubernetes.io/projected/ae20b683-dac8-419e-808a-ddcdb3c564e1-kube-api-access-f69cb\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:04.767326 master-0 kubenswrapper[4652]: I0216 17:24:04.766945 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmj52\" (UniqueName: \"kubernetes.io/projected/c8729b1a-e365-4cf7-8a05-91a9987dabe9-kube-api-access-hmj52\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:04.771447 master-0 kubenswrapper[4652]: I0216 17:24:04.770642 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mwd\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-kube-api-access-b5mwd\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:04.802099 master-0 kubenswrapper[4652]: E0216 17:24:04.802060 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.802099 master-0 kubenswrapper[4652]: E0216 17:24:04.802087 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.802099 master-0 kubenswrapper[4652]: E0216 17:24:04.802099 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.802360 master-0 kubenswrapper[4652]: E0216 17:24:04.802147 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.302131003 +0000 UTC m=+2.690299519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.811519 master-0 kubenswrapper[4652]: I0216 17:24:04.811463 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-l5kbz" event={"ID":"c45ce0e5-c50b-4210-b7bb-82db2b2bc1db","Type":"ContainerStarted","Data":"c714e25857bc1458e5b2239bf506133af8863a9a54f69d236db94e7103c42f91"} Feb 16 17:24:04.813859 master-0 kubenswrapper[4652]: I0216 17:24:04.813814 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-6fcf4c966-6bmf9" event={"ID":"4549ea98-7379-49e1-8452-5efb643137ca","Type":"ContainerStarted","Data":"46258edb4e5778c11ac3dbfc65e5dcbb85fa4d37f135fee3eb9d73b98ad88aaf"} Feb 16 17:24:04.815711 master-0 kubenswrapper[4652]: I0216 17:24:04.815659 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" event={"ID":"f0b1ebd3-1068-4624-9b6d-3e9f45ded76a","Type":"ContainerStarted","Data":"5dd538a7a8b2b6b668185eb3d2b049d0f216c57823a67d1ea32767d9a474a2d9"} Feb 16 17:24:04.822714 master-0 kubenswrapper[4652]: I0216 17:24:04.822671 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-bb7ffbb8d-lzgs9" event={"ID":"ab80e0fb-09dd-4c93-b235-1487024105d2","Type":"ContainerStarted","Data":"9d36c889938be83b9d8ad1a3695ce158d437c9d9a2db353bc823fc0d888cab87"} Feb 16 17:24:04.824569 master-0 kubenswrapper[4652]: I0216 17:24:04.824539 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-vfxj4" event={"ID":"a6fe41b0-1a42-4f07-8220-d9aaa50788ad","Type":"ContainerStarted","Data":"3058c1711cb9655d6e38a2730ef2bc992787bf011d890becd19ebee8f58e3d10"} Feb 16 17:24:04.826187 master-0 kubenswrapper[4652]: I0216 17:24:04.826141 4652 generic.go:334] "Generic (PLEG): container finished" podID="9f9bf4ab-5415-4616-aa36-ea387c699ea9" containerID="d62fb01e4a9c77bd3aa1bc41dbff5c5ff3cf532d574baed6e1aea333b8a9eb3b" exitCode=0 Feb 16 17:24:04.826290 master-0 kubenswrapper[4652]: I0216 17:24:04.826201 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerDied","Data":"d62fb01e4a9c77bd3aa1bc41dbff5c5ff3cf532d574baed6e1aea333b8a9eb3b"} Feb 16 17:24:04.828026 master-0 kubenswrapper[4652]: I0216 17:24:04.827963 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2ws9r" event={"ID":"9c48005e-c4df-4332-87fc-ec028f2c6921","Type":"ContainerStarted","Data":"cb93d3c07e79dc89e2c740b26ef5760ed3fcc03c325b8d5c4d2f875f9c26ff59"} Feb 16 17:24:04.828235 master-0 kubenswrapper[4652]: I0216 17:24:04.828198 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ftld\" (UniqueName: \"kubernetes.io/projected/702322ac-7610-4568-9a68-b6acbd1f0c12-kube-api-access-6ftld\") pod \"machine-approver-8569dd85ff-4vxmz\" (UID: \"702322ac-7610-4568-9a68-b6acbd1f0c12\") " pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" Feb 16 17:24:04.829700 master-0 
kubenswrapper[4652]: I0216 17:24:04.829637 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"08c96670824413510d2001a6f0800b036356ccecb69134798b7dc51c4ca769b9"} Feb 16 17:24:04.829871 master-0 kubenswrapper[4652]: I0216 17:24:04.829718 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-hhcpr" event={"ID":"39387549-c636-4bd4-b463-f6a93810f277","Type":"ContainerStarted","Data":"b491354576b405d4876f40fb31a9f1f520c70579e2638f5ec725b0a2b8cea5f5"} Feb 16 17:24:04.833689 master-0 kubenswrapper[4652]: I0216 17:24:04.831368 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"5881960966daaf04ceec4414519cc07cc722d1b95aa6f19462c28cdead3c5c45"} Feb 16 17:24:04.833827 master-0 kubenswrapper[4652]: I0216 17:24:04.833724 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:04.833827 master-0 kubenswrapper[4652]: I0216 17:24:04.833782 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:04.833918 master-0 kubenswrapper[4652]: I0216 17:24:04.833856 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:04.834030 master-0 kubenswrapper[4652]: I0216 17:24:04.833941 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:04.834030 master-0 kubenswrapper[4652]: I0216 17:24:04.833987 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:04.834172 master-0 kubenswrapper[4652]: E0216 17:24:04.834130 4652 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:24:04.834229 master-0 kubenswrapper[4652]: E0216 17:24:04.834201 4652 projected.go:288] Couldn't get configMap 
openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.834339 master-0 kubenswrapper[4652]: E0216 17:24:04.834230 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.834339 master-0 kubenswrapper[4652]: E0216 17:24:04.834255 4652 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.834339 master-0 kubenswrapper[4652]: E0216 17:24:04.834312 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.834289007 +0000 UTC m=+3.222457523 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.834339 master-0 kubenswrapper[4652]: E0216 17:24:04.834309 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:04.834339 master-0 kubenswrapper[4652]: E0216 17:24:04.834339 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834353 4652 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834410 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.83439201 +0000 UTC m=+3.222560526 (durationBeforeRetry 1s). 
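
Worth noticing in the operations above: the first wave of failures was queued with durationBeforeRetry 500ms, and these are now queued with 1s. The kubelet's nestedpendingoperations layer applies per-volume exponential backoff, doubling the delay after each consecutive failure up to a fixed ceiling (on the order of two minutes in the kubelet sources; the exact constants below are assumptions for illustration). A self-contained sketch of that policy:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff mimics the doubling retry delay visible in the log
    // (500ms after the first failure, 1s after the second, ...).
    // The initial delay and ceiling are assumed values, not ones
    // read out of this kubelet build.
    func backoff(initial, ceiling time.Duration, failures int) time.Duration {
        d := initial
        for i := 1; i < failures; i++ {
            d *= 2
            if d > ceiling {
                return ceiling
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 5; n++ {
            fmt.Printf("failure %d -> retry in %v\n", n, backoff(500*time.Millisecond, 2*time.Minute, n))
        }
    }

The backoff is tracked per volume, which is why some volumes in this log are already at 1s while others are still at 500ms.
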
Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834412 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834449 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834418 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834473 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834487 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834501 4652 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: I0216 17:24:04.834461 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834559 4652 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:24:04.834566 master-0 kubenswrapper[4652]: E0216 17:24:04.834576 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.835328 master-0 kubenswrapper[4652]: E0216 17:24:04.834587 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.835328 master-0 kubenswrapper[4652]: E0216 17:24:04.834559 4652 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.834531034 +0000 UTC m=+3.222699590 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.835328 master-0 kubenswrapper[4652]: E0216 17:24:04.834643 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.834624376 +0000 UTC m=+3.222792892 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.835328 master-0 kubenswrapper[4652]: E0216 17:24:04.834751 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.834720889 +0000 UTC m=+3.222889405 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.835328 master-0 kubenswrapper[4652]: E0216 17:24:04.835002 4652 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.835328 master-0 kubenswrapper[4652]: E0216 17:24:04.835022 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.835328 master-0 kubenswrapper[4652]: E0216 17:24:04.835083 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.835065798 +0000 UTC m=+3.223234314 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.835328 master-0 kubenswrapper[4652]: I0216 17:24:04.835235 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xv2wv" event={"ID":"810a2275-fae5-45df-a3b8-92860451d33b","Type":"ContainerStarted","Data":"cd1dcc6663f82674b6b55e23585951122210f13a603a2fe83780585ba9cdbbde"} Feb 16 17:24:04.835683 master-0 kubenswrapper[4652]: I0216 17:24:04.835350 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:24:04.839727 master-0 kubenswrapper[4652]: E0216 17:24:04.839627 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.839727 master-0 kubenswrapper[4652]: E0216 17:24:04.839669 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.839727 master-0 kubenswrapper[4652]: E0216 17:24:04.839678 4652 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.839852 master-0 kubenswrapper[4652]: E0216 17:24:04.839739 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.339708051 +0000 UTC m=+2.727876567 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.879632 master-0 kubenswrapper[4652]: I0216 17:24:04.879559 4652 request.go:700] Waited for 1.000211869s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/serviceaccounts/default/token Feb 16 17:24:04.886220 master-0 kubenswrapper[4652]: E0216 17:24:04.886125 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.886220 master-0 kubenswrapper[4652]: E0216 17:24:04.886156 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.886220 master-0 kubenswrapper[4652]: E0216 17:24:04.886169 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.886220 master-0 kubenswrapper[4652]: E0216 17:24:04.886230 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.386213786 +0000 UTC m=+2.774382302 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.886960 master-0 kubenswrapper[4652]: E0216 17:24:04.886913 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.886960 master-0 kubenswrapper[4652]: E0216 17:24:04.886960 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.887070 master-0 kubenswrapper[4652]: E0216 17:24:04.886981 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.887107 master-0 kubenswrapper[4652]: E0216 17:24:04.887070 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.387043328 +0000 UTC m=+2.775211874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.894881 master-0 kubenswrapper[4652]: E0216 17:24:04.894820 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:04.894881 master-0 kubenswrapper[4652]: E0216 17:24:04.894864 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.895053 master-0 kubenswrapper[4652]: E0216 17:24:04.894909 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.895053 master-0 kubenswrapper[4652]: E0216 17:24:04.895006 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.394983539 +0000 UTC m=+2.783152065 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.899320 master-0 kubenswrapper[4652]: I0216 17:24:04.899281 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r28x\" (UniqueName: \"kubernetes.io/projected/43f65f23-4ddd-471a-9cb3-b0945382d83c-kube-api-access-8r28x\") pod \"multus-6r7wj\" (UID: \"43f65f23-4ddd-471a-9cb3-b0945382d83c\") " pod="openshift-multus/multus-6r7wj" Feb 16 17:24:04.911448 master-0 kubenswrapper[4652]: E0216 17:24:04.911407 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:04.911448 master-0 kubenswrapper[4652]: E0216 17:24:04.911446 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.911605 master-0 kubenswrapper[4652]: E0216 17:24:04.911461 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.911605 master-0 kubenswrapper[4652]: E0216 17:24:04.911528 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.411508598 +0000 UTC m=+2.799677124 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.932319 master-0 kubenswrapper[4652]: E0216 17:24:04.932275 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:04.932319 master-0 kubenswrapper[4652]: E0216 17:24:04.932310 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.932319 master-0 kubenswrapper[4652]: E0216 17:24:04.932324 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.932613 master-0 kubenswrapper[4652]: E0216 17:24:04.932394 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.432374272 +0000 UTC m=+2.820542788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.945168 master-0 kubenswrapper[4652]: I0216 17:24:04.945058 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:04.945386 master-0 kubenswrapper[4652]: E0216 17:24:04.945190 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:04.945386 master-0 kubenswrapper[4652]: E0216 17:24:04.945210 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.945386 master-0 kubenswrapper[4652]: E0216 17:24:04.945221 4652 projected.go:194] Error preparing data for projected volume kube-api-access-st6bv for pod openshift-console/console-599b567ff7-nrcpr: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.945386 master-0 kubenswrapper[4652]: I0216 17:24:04.945328 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: 
\"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:04.945386 master-0 kubenswrapper[4652]: E0216 17:24:04.945359 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.945341756 +0000 UTC m=+3.333510312 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-st6bv" (UniqueName: "kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.945611 master-0 kubenswrapper[4652]: E0216 17:24:04.945404 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:04.945611 master-0 kubenswrapper[4652]: E0216 17:24:04.945419 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:04.945611 master-0 kubenswrapper[4652]: E0216 17:24:04.945432 4652 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.945611 master-0 kubenswrapper[4652]: E0216 17:24:04.945479 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.945465279 +0000 UTC m=+3.333633885 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:04.965342 master-0 kubenswrapper[4652]: I0216 17:24:04.965271 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6ad958f-25e4-40cb-89ec-5da9cb6395c7-kube-api-access\") pod \"cluster-version-operator-649c4f5445-vt6wb\" (UID: \"b6ad958f-25e4-40cb-89ec-5da9cb6395c7\") " pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" Feb 16 17:24:04.969850 master-0 kubenswrapper[4652]: I0216 17:24:04.969816 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:04.984656 master-0 kubenswrapper[4652]: I0216 17:24:04.984572 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-bound-sa-token\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:05.048760 master-0 kubenswrapper[4652]: I0216 17:24:05.048695 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:05.049012 master-0 kubenswrapper[4652]: E0216 17:24:05.048959 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.049075 master-0 kubenswrapper[4652]: E0216 17:24:05.049023 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.049075 master-0 kubenswrapper[4652]: E0216 17:24:05.049052 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.049212 master-0 kubenswrapper[4652]: E0216 17:24:05.049171 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.049135032 +0000 UTC m=+3.437303598 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.080932 master-0 kubenswrapper[4652]: I0216 17:24:05.080854 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:24:05.082449 master-0 kubenswrapper[4652]: I0216 17:24:05.082363 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 17:24:05.082596 master-0 kubenswrapper[4652]: I0216 17:24:05.082501 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 17:24:05.087509 master-0 kubenswrapper[4652]: E0216 17:24:05.087473 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:24:05.087509 master-0 kubenswrapper[4652]: E0216 17:24:05.087501 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.087509 master-0 kubenswrapper[4652]: E0216 17:24:05.087514 4652 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.088334 master-0 kubenswrapper[4652]: E0216 17:24:05.087585 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:05.587566113 +0000 UTC m=+2.975734629 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.100091 master-0 kubenswrapper[4652]: I0216 17:24:05.099959 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-6r7wj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43f65f23-4ddd-471a-9cb3-b0945382d83c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe683caef773a1963fc13f96afe58892563ea9921db8ac39369e3a9a05ea7181\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8r28x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-multus\"/\"multus-6r7wj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:05.123154 master-0 kubenswrapper[4652]: I0216 17:24:05.123090 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c303189e-adae-4fe2-8dd7-cc9b80f73e66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aacc2698d097e25bf26e35393ef3536f7a240880d0a87f46a2b7ea3c13731d1e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v2s8l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-vwvwx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:05.153391 master-0 kubenswrapper[4652]: I0216 17:24:05.153291 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:05.153581 master-0 kubenswrapper[4652]: E0216 17:24:05.153517 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:05.153581 master-0 kubenswrapper[4652]: E0216 17:24:05.153565 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.153727 master-0 kubenswrapper[4652]: E0216 17:24:05.153585 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.153727 master-0 kubenswrapper[4652]: I0216 17:24:05.153606 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:05.153727 master-0 kubenswrapper[4652]: E0216 17:24:05.153663 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.153638737 +0000 UTC m=+3.541807293 (durationBeforeRetry 1s). 
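
The "Failed to update status for pod" entries are a second, independent symptom of the same cold start. The status manager sends a strategic-merge patch (the $setElementOrder directives fix the ordering of the conditions array) to each pod's status subresource, but on this cluster pod writes pass through the OVN network-node-identity mutating webhook, whose endpoint at 127.0.0.1:9743 is not serving yet, so the API server rejects every patch with an internal error. A trimmed sketch of the same call via client-go (pod name and namespace are taken from the log; the patch is cut down to a single condition):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumes in-cluster credentials
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // A strategic-merge patch against the status subresource, the same
        // shape the status_manager sends.
        patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"ContainersNotReady"}]}}`)

        _, err = client.CoreV1().Pods("openshift-multus").Patch(
            context.Background(), "multus-6r7wj",
            types.StrategicMergePatchType, patch,
            metav1.PatchOptions{}, "status",
        )
        if err != nil {
            // Until the network-node-identity webhook answers on 127.0.0.1:9743,
            // this fails with the same "failed calling webhook" internal error.
            fmt.Println("status patch failed:", err)
        }
    }

The failures are harmless in themselves: the status manager retries, and the patches go through once the webhook pod is serving.
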
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.153960 master-0 kubenswrapper[4652]: I0216 17:24:05.153746 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:05.153960 master-0 kubenswrapper[4652]: E0216 17:24:05.153760 4652 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.153960 master-0 kubenswrapper[4652]: E0216 17:24:05.153785 4652 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.153960 master-0 kubenswrapper[4652]: E0216 17:24:05.153803 4652 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.153960 master-0 kubenswrapper[4652]: E0216 17:24:05.153901 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.153882313 +0000 UTC m=+3.542050839 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.154348 master-0 kubenswrapper[4652]: I0216 17:24:05.153995 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:05.154348 master-0 kubenswrapper[4652]: E0216 17:24:05.154304 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:24:05.154348 master-0 kubenswrapper[4652]: E0216 17:24:05.154323 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.154348 master-0 kubenswrapper[4652]: E0216 17:24:05.154335 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.154609 master-0 kubenswrapper[4652]: E0216 17:24:05.154377 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.154366976 +0000 UTC m=+3.542535502 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.154609 master-0 kubenswrapper[4652]: E0216 17:24:05.154469 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:05.154609 master-0 kubenswrapper[4652]: E0216 17:24:05.154492 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.154609 master-0 kubenswrapper[4652]: E0216 17:24:05.154507 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.154609 master-0 kubenswrapper[4652]: E0216 17:24:05.154581 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.154562152 +0000 UTC m=+3.542730678 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.179903 master-0 kubenswrapper[4652]: I0216 17:24:05.179468 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9f9bf4ab-5415-4616-aa36-ea387c699ea9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xrw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:93bf1697031cce06c4e576aa1ba5d8bda7e91b918627ed1d61f8c89a95a111f0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xrw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xrw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xrw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xrw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xrw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mo
untPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xrw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e5b6b8ee694f3fd3cb9494b50110abbf01839036c632aece1719d091d844fec\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9xrw2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-flr86\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:05.191451 master-0 kubenswrapper[4652]: I0216 17:24:05.191349 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-vfxj4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6fe41b0-1a42-4f07-8220-d9aaa50788ad\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e155421620a4ac28a759265f53059b75308fdd1491caeba6a9a34d2fbeab4954\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8m29g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-dns\"/\"node-resolver-vfxj4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:05.218294 master-0 kubenswrapper[4652]: I0216 17:24:05.217937 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [cluster-node-tuning-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dffbd86bfae06921432678caf184b335bf2fc6ac7ee128f48aee396d57ea55\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"cluster-node-tuning-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/secrets\\\",\\\"name\\\":\\\"node-tuning-operator-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca/\\\",\\\"name\\\":\\\"trusted-ca\\\"},{\\\"mountPath\\\":\\\"/apiserver.local.config/certificates\\\",\\\"name\\\":\\\"apiservice-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2gq8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-cluster-node-tuning-operator\"/\"cluster-node-tuning-operator-ff6c9b66-6j4ts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:05.230731 master-0 kubenswrapper[4652]: I0216 17:24:05.229822 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e623376-9e14-4341-9dcf-7a7c218b6f9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-storage-version-migrator-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e391fce0b2e04f22fc089597db9e0671ba7f8b5b3a709151b5f33dd23b262144\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"kube-storage-version-migrator-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xvwzr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-cd5474998-829l6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:05.263075 master-0 kubenswrapper[4652]: I0216 17:24:05.263017 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:05.263278 master-0 kubenswrapper[4652]: I0216 17:24:05.263108 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:05.263278 master-0 kubenswrapper[4652]: E0216 17:24:05.263146 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:05.263278 master-0 kubenswrapper[4652]: E0216 17:24:05.263177 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.263278 master-0 kubenswrapper[4652]: E0216 17:24:05.263192 4652 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.263278 master-0 kubenswrapper[4652]: E0216 17:24:05.263273 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.263239917 +0000 UTC m=+3.651408473 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.263519 master-0 kubenswrapper[4652]: E0216 17:24:05.263328 4652 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Feb 16 17:24:05.263519 master-0 kubenswrapper[4652]: E0216 17:24:05.263362 4652 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.263519 master-0 kubenswrapper[4652]: E0216 17:24:05.263375 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7mrkc for pod openshift-authentication/oauth-openshift-64f85b8fc9-n9msn: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.263519 master-0 kubenswrapper[4652]: E0216 17:24:05.263440 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.263422962 +0000 UTC m=+3.651591528 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-7mrkc" (UniqueName: "kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.272019 master-0 kubenswrapper[4652]: I0216 17:24:05.271844 4652 status_manager.go:875] "Failed to update status for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T17:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[authentication-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:047699c5a63593f45e9dd6f9fac0fa636ffc012331ee592891bfb08001bdd963\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"authentication-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/configmaps/config\\\",\\\"name\\\":\\\"config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/trusted-ca-bundle\\\",\\\"name\\\":\\\"trusted-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/configmaps/service-ca-bundle\\\",\\\"name\\\":\\\"service-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f42cr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-authentication-operator\"/\"authentication-operator-755d954778-lf4cb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 17:24:05.295267 master-0 kubenswrapper[4652]: I0216 17:24:05.295206 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:05.365914 master-0 kubenswrapper[4652]: I0216 17:24:05.365597 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:05.365914 master-0 kubenswrapper[4652]: I0216 17:24:05.365670 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:05.365914 master-0 kubenswrapper[4652]: E0216 17:24:05.365830 4652 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:05.365914 master-0 kubenswrapper[4652]: E0216 17:24:05.365865 4652 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.365914 master-0 kubenswrapper[4652]: E0216 17:24:05.365879 4652 projected.go:194] Error preparing data for projected volume 
kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.366112 master-0 kubenswrapper[4652]: E0216 17:24:05.365938 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.365916194 +0000 UTC m=+3.754084720 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.367197 master-0 kubenswrapper[4652]: I0216 17:24:05.366308 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:05.367197 master-0 kubenswrapper[4652]: I0216 17:24:05.366427 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367532 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367562 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367576 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367639 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.367620299 +0000 UTC m=+3.755788875 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367736 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367764 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367776 4652 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367831 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.367815804 +0000 UTC m=+3.755984320 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367967 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.367991 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.368003 4652 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.368949 master-0 kubenswrapper[4652]: E0216 17:24:05.368058 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.36804217 +0000 UTC m=+3.756210756 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.468917 master-0 kubenswrapper[4652]: I0216 17:24:05.468809 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:05.468917 master-0 kubenswrapper[4652]: I0216 17:24:05.468863 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.468917 master-0 kubenswrapper[4652]: I0216 17:24:05.468898 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:05.468917 master-0 kubenswrapper[4652]: I0216 17:24:05.468917 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:05.469183 master-0 kubenswrapper[4652]: E0216 17:24:05.468998 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:05.469183 master-0 kubenswrapper[4652]: E0216 17:24:05.469043 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469030681 +0000 UTC m=+4.857199187 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:05.469183 master-0 kubenswrapper[4652]: E0216 17:24:05.469103 4652 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:05.469183 master-0 kubenswrapper[4652]: E0216 17:24:05.469158 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:05.469183 master-0 kubenswrapper[4652]: E0216 17:24:05.469168 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469154164 +0000 UTC m=+4.857322680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:05.469430 master-0 kubenswrapper[4652]: I0216 17:24:05.469187 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:05.469430 master-0 kubenswrapper[4652]: I0216 17:24:05.469218 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:05.469430 master-0 kubenswrapper[4652]: E0216 17:24:05.469222 4652 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:05.469430 master-0 kubenswrapper[4652]: E0216 17:24:05.469281 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: I0216 17:24:05.469439 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: I0216 17:24:05.469460 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: 
\"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: E0216 17:24:05.469470 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: I0216 17:24:05.469480 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: I0216 17:24:05.469498 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: I0216 17:24:05.469515 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: E0216 17:24:05.469517 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: E0216 17:24:05.469531 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469522484 +0000 UTC m=+4.857691000 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: E0216 17:24:05.469548 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469541074 +0000 UTC m=+4.857709590 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: E0216 17:24:05.469562 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469555395 +0000 UTC m=+4.857723911 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: E0216 17:24:05.469578 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: I0216 17:24:05.469583 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: E0216 17:24:05.469597 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469590726 +0000 UTC m=+4.857759242 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: E0216 17:24:05.469610 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469602016 +0000 UTC m=+4.857770532 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: E0216 17:24:05.469621 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:05.469613 master-0 kubenswrapper[4652]: I0216 17:24:05.469631 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469641 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469634127 +0000 UTC m=+4.857802643 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469651 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469646457 +0000 UTC m=+4.857814973 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.469665 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.469686 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469691 4652 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469701 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.469705 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469712 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.469723 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469741 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469731789 +0000 UTC m=+4.857900305 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469762 4652 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.469761 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469770 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.469789 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469795 4652 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469827 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469820332 +0000 UTC m=+4.857988848 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469839 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.469833702 +0000 UTC m=+4.858002218 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.469925 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.469956 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469971 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469980 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.469980 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469977 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.470018 4652 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.470020 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.470043 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470035397 +0000 UTC m=+4.858203913 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.470062 4652 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: I0216 17:24:05.470063 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.470088 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.470093 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470061728 +0000 UTC m=+4.858230274 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.469987 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.470104 master-0 kubenswrapper[4652]: E0216 17:24:05.470123 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470110019 +0000 UTC m=+4.858278575 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470147 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0 podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47013603 +0000 UTC m=+4.858304576 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470171 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470160671 +0000 UTC m=+4.858329217 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470171 4652 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470203 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470229 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470221822 +0000 UTC m=+4.858390338 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470275 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470239373 +0000 UTC m=+4.858407939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470278 4652 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470290 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470321 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470312675 +0000 UTC m=+4.858481281 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.470320 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.470360 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470324 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470373 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470364936 +0000 UTC m=+4.858533552 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"service-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470409 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470421 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470415227 +0000 UTC m=+4.858583743 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470445 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470435758 +0000 UTC m=+4.858604354 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470447 4652 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470466 4652 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470482 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470474829 +0000 UTC m=+4.858643445 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.470410 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470504 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47049789 +0000 UTC m=+4.858666396 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470502 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.470533 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470537 4652 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470563 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470546461 +0000 UTC m=+4.858715017 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470599 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.470619 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470632 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470624383 +0000 UTC m=+4.858792899 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470630 4652 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.470658 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470664 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470655944 +0000 UTC m=+4.858824460 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470697 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470688415 +0000 UTC m=+4.858857051 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470709 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.470738 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470756 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470742916 +0000 UTC m=+4.858911472 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470782 4652 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470810 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470802428 +0000 UTC m=+4.858971014 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"audit" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.470841 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470897 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470929 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470940 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470928551 +0000 UTC m=+4.859097097 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.470960 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.470951892 +0000 UTC m=+4.859120518 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.470900 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.471013 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.471057 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.471091 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.471125 4652 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.471180 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471171737 +0000 UTC m=+4.859340333 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.471199 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.471256 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.471276 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471239009 +0000 UTC m=+4.859407565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.471310 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: I0216 17:24:05.471320 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:05.471293 master-0 kubenswrapper[4652]: E0216 17:24:05.471335 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471328742 +0000 UTC m=+4.859497258 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471369 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471370 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471392 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471386843 +0000 UTC m=+4.859555359 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471391 4652 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471425 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471417754 +0000 UTC m=+4.859586360 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"oauth-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471423 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471468 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471532 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471472 4652 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471492 4652 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471597 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471591049 +0000 UTC m=+4.859759565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471593 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471694 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471509 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471746 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471739933 +0000 UTC m=+4.859908449 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471628 4652 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471742 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471712 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471786 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471769343 +0000 UTC m=+4.859937889 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471799 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471832 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471822685 +0000 UTC m=+4.859991201 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471845 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471839275 +0000 UTC m=+4.860007791 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471855 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471850406 +0000 UTC m=+4.860018912 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471825 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471877 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471903 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.471895277 +0000 UTC m=+4.860063913 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471897 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471932 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471954 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471973 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.471992 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472047 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472072 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472102 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471902 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472172 4652 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.471951 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472217 4652 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472048 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472275 4652 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472094 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472149 4652 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472153 4652 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472334 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472193 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.472187654 +0000 UTC m=+4.860356170 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472382 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.472375179 +0000 UTC m=+4.860543695 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472396 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47239037 +0000 UTC m=+4.860558886 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472415 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472476 4652 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472500 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.472483762 +0000 UTC m=+4.860652318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472540 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.472529384 +0000 UTC m=+4.860697940 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472561 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.472550964 +0000 UTC m=+4.860719520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472593 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.472582445 +0000 UTC m=+4.860750991 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472623 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.472613686 +0000 UTC m=+4.860782232 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472653 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.472643307 +0000 UTC m=+4.860811863 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472703 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472726 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472749 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.472742969 +0000 UTC m=+4.860911475 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472774 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47276825 +0000 UTC m=+4.860936766 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472774 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472814 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472853 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472892 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472861 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472946 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472953 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472973 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.472967455 +0000 UTC m=+3.861135971 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.472931 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472993 4652 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473003 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473027 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473047 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473063 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473069 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473091 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473099 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473087608 +0000 UTC m=+4.861256164 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473119 4652 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473131 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473139 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47313448 +0000 UTC m=+4.861302996 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473165 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473183 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473178171 +0000 UTC m=+4.861346687 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473181 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473208 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473229 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473239 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473287 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473309 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473296834 +0000 UTC m=+4.861465380 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473332 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473341 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473352 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473346725 +0000 UTC m=+4.861515241 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473377 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473394 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473389626 +0000 UTC m=+4.861558142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473395 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473404 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473399447 +0000 UTC m=+4.861567963 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473432 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473453 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473447048 +0000 UTC m=+4.861615564 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472894 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473487 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473502 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473488289 +0000 UTC m=+4.861656845 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473517 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered
Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473525 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.4735148 +0000 UTC m=+4.861683356 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473548 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47353861 +0000 UTC m=+4.861707166 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473561 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473570 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473576 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473595 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.473588982 +0000 UTC m=+3.861757498 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473615 4652 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473633 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473628163 +0000 UTC m=+4.861796679 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473633 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473649 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473657 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473676 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.473671314 +0000 UTC m=+3.861839830 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473694 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473713 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473723 4652 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473731 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473741 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473736426 +0000 UTC m=+4.861904942 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.472926 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: I0216 17:24:05.473696 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:05.475234 master-0 kubenswrapper[4652]: E0216 17:24:05.473761 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473756186 +0000 UTC m=+4.861924702 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.473806 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.473791807 +0000 UTC m=+3.861960353 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.473838 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.473882 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.473942 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.473980 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.473969002 +0000 UTC m=+4.862137548 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474011 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474029 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474036 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474059 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474067 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474058694 +0000 UTC m=+4.862227210 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474128 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474132 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474132 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474161 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474172 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474166607 +0000 UTC m=+4.862335113 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474195 4652 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474212 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474199178 +0000 UTC m=+4.862367734 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474237 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474224379 +0000 UTC m=+4.862392925 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474290 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474336 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474349 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474339422 +0000 UTC m=+4.862508038 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474360 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474377 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474393 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474385123 +0000 UTC m=+4.862553729 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474417 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474461 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474474 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474480 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474497 4652 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474531 4652 configmap.go:193] Couldn't get configMap 
openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474502 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.474495996 +0000 UTC m=+3.862664512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474590 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474635 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474671 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.4746592 +0000 UTC m=+4.862827796 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474707 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474689401 +0000 UTC m=+4.862858047 (durationBeforeRetry 2s). 
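The nestedpendingoperations.go:348 entries above implement per-volume retry throttling: after each failed MountVolume.SetUp the kubelet refuses further attempts until the printed deadline, and durationBeforeRetry grows with consecutive failures, which is why first-failure volumes above show 1s while repeat offenders already show 2s. A toy Go model of that doubling-with-cap backoff; the 500ms seed and 2m cap are illustrative assumptions, not values read from this log:

package main

import (
	"fmt"
	"time"
)

// backoff mimics the per-operation delay behind "durationBeforeRetry".
type backoff struct {
	delay time.Duration // current delay; zero until the first failure
	max   time.Duration // upper bound on the delay
}

// next returns the delay to wait before the following retry,
// doubling on every consecutive failure up to the cap.
func (b *backoff) next() time.Duration {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // assumed initial delay
	} else {
		b.delay *= 2
		if b.delay > b.max {
			b.delay = b.max
		}
	}
	return b.delay
}

func main() {
	b := &backoff{max: 2 * time.Minute} // assumed cap
	for i := 0; i < 6; i++ {
		fmt.Println(b.next()) // 500ms 1s 2s 4s 8s 16s - the 1s/2s values above sit on this curve
	}
}

Each volume operation keeps its own backoff state, which is why different volumes in the same batch of log entries can be at different points on the curve.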
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474724 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474736 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474742 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474758 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474794 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474798 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474784433 +0000 UTC m=+4.862952979 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474760 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474821 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474814564 +0000 UTC m=+4.862983080 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474840 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474841 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474849 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474858 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474853 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474874 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:07.474866456 +0000 UTC m=+4.863034972 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474896 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.474888596 +0000 UTC m=+4.863057122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474916 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474939 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474944 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474959 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474966 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.474959 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474987 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:07.474981589 +0000 UTC m=+4.863150105 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.474970 4652 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475006 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475005 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475037 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47502871 +0000 UTC m=+4.863197226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475072 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475072 4652 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475097 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475083331 +0000 UTC m=+4.863251847 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475129 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475143 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475163 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475186 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475173584 +0000 UTC m=+4.863342140 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475204 4652 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475207 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475211 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475199024 +0000 UTC m=+4.863367570 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475223 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475219 4652 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475232 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475241 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475270 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475262916 +0000 UTC m=+4.863431432 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475298 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475371 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475349088 +0000 UTC m=+4.863517604 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475400 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:07.475388339 +0000 UTC m=+4.863556955 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475421 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47541259 +0000 UTC m=+4.863581216 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475445 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475478 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475486 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475506 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475542 4652 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475565 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. 
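Each kube-api-access-* mount above fails with the same pair of missing objects because those projected service-account volumes bundle several sources, and all of them must resolve before the volume can be built: a bound token plus the kube-root-ca.crt ConfigMap, with OpenShift adding openshift-service-ca.crt. A sketch of such a volume assembled from k8s.io/api types; the token settings are illustrative assumptions, only the volume and ConfigMap names come from the log:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3607) // assumed token lifetime
	vol := corev1.Volume{
		Name: "kube-api-access-qhz6z", // volume name taken from the log above
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					// Bound service-account token.
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					// Kubernetes API server CA - the first "not registered" object above.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					// OpenShift service CA - the second "not registered" object above.
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
				},
			},
		},
	}
	fmt.Printf("%s projects %d sources; one unresolved source fails the whole mount\n",
		vol.Name, len(vol.VolumeSource.Projected.Sources))
}

Because projection is all-or-nothing, a single unregistered ConfigMap is enough to fail the mount and put the volume on the backoff schedule shown earlier.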
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475541 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475580 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475588 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475597 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475605 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475600075 +0000 UTC m=+4.863768581 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475626 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475630 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475695 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475656 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475735 4652 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475716 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475702188 +0000 UTC m=+4.863870734 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475767 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475770 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475762059 +0000 UTC m=+4.863930645 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475794 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47578813 +0000 UTC m=+4.863956646 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475807 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47580219 +0000 UTC m=+4.863970706 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475823 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475831 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475843 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475848 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered
Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475863 4652 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:05.482007 master-0
kubenswrapper[4652]: E0216 17:24:05.475884 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475878472 +0000 UTC m=+4.864046988 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475913 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475926 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475938 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475944 4652 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475950 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475938034 +0000 UTC m=+4.864106580 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.475964 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.475958645 +0000 UTC m=+4.864127151 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.475989 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.476009 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.476031 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.476054 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: I0216 17:24:05.476073 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:05.482007 master-0 kubenswrapper[4652]: E0216 17:24:05.476077 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476095 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476130 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config 
podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476111889 +0000 UTC m=+4.864280445 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476210 4652 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476227 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476273 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476263663 +0000 UTC m=+4.864432179 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476297 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476338 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476325024 +0000 UTC m=+4.864493610 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476363 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476386 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476379866 +0000 UTC m=+4.864548382 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476412 4652 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476431 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476425767 +0000 UTC m=+4.864594273 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476452 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476478 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476470988 +0000 UTC m=+4.864639504 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476531 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476583 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476616 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476637 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476631783 +0000 UTC m=+4.864800299 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476637 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476683 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476674064 +0000 UTC m=+4.864842680 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476633 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476701 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476728 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476749 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476735835 +0000 UTC m=+4.864904391 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476781 4652 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476781 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476830 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476793 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476871 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476893 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476885169 +0000 UTC m=+4.865053685 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476812 4652 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476914 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476923 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47691534 +0000 UTC m=+4.865083856 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476940 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476952 4652 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.476966 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476984 4652 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.476989 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.476977982 +0000 UTC m=+4.865146528 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477013 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477023 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477033 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477027933 +0000 UTC m=+4.865196449 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477071 4652 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477086 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477093 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477119 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477054 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477097 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:07.477090845 +0000 UTC m=+4.865259361 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477036 4652 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477176 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477211 4652 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477223 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477236 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477229568 +0000 UTC m=+4.865398084 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477300 4652 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477312 4652 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477320 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477330 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477310961 +0000 UTC m=+4.865479587 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477358 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477343771 +0000 UTC m=+4.865512377 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477398 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477383592 +0000 UTC m=+4.865552228 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477434 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477419993 +0000 UTC m=+4.865588609 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477483 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477522 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477532 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477551 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477551 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477538117 +0000 UTC m=+4.865706723 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477560 4652 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477584 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477578418 +0000 UTC m=+4.865746934 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477583 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477591 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477613 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477639 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477622089 +0000 UTC m=+4.865790635 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477647 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477651 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477657 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477664 4652 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477674 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47766886 +0000 UTC m=+4.865837376 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477671 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477692 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47768477 +0000 UTC m=+4.865853286 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477710 4652 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477719 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477741 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477763 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477782 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477792 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477802 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477816 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477809794 +0000 UTC m=+4.865978300 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477832 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477847 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477855 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477849145 +0000 UTC m=+4.866017661 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477870 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477863085 +0000 UTC m=+4.866031711 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477887 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477900 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477908 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477919 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477914217 +0000 UTC m=+4.866082733 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477945 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477947 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477964 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477958528 +0000 UTC m=+4.866127044 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.477979 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.477984 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478005 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.477999679 +0000 UTC m=+4.866168195 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.478002 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478021 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.478032 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478042 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47803648 +0000 UTC m=+4.866205196 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.478057 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478071 4652 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.478076 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478103 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478151 4652 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478153 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478189 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478209 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478219 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478089 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478084091 +0000 UTC m=+4.866252607 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478191 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.478266 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478274 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478266656 +0000 UTC m=+4.866435162 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.478290 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478307 4652 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.478309 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478327 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478322417 +0000 UTC m=+4.866490933 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.478346 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478353 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: I0216 17:24:05.478368 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:05.489968 master-0 kubenswrapper[4652]: E0216 17:24:05.478372 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478366538 +0000 UTC m=+4.866535054 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478394 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478389299 +0000 UTC m=+4.866557805 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478401 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478407 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478401079 +0000 UTC m=+4.866569595 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478418 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47841385 +0000 UTC m=+4.866582356 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478431 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47842578 +0000 UTC m=+4.866594296 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478440 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47843646 +0000 UTC m=+4.866604976 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478444 4652 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478455 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478464 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:07.478457231 +0000 UTC m=+4.866625747 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478479 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478510 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478513 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478523 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478517062 +0000 UTC m=+4.866685578 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478531 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478539 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478545 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478560 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478554123 +0000 UTC m=+4.866722639 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478577 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478599 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478627 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478619695 +0000 UTC m=+4.866788291 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478630 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478650 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478644156 +0000 UTC m=+4.866812672 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478600 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478660 4652 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478673 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478682 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478675677 +0000 UTC m=+4.866844183 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478706 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478713 4652 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478731 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478726338 +0000 UTC m=+4.866894854 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478730 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478750 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478745879 +0000 UTC m=+4.866914395 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478760 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478782 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478776679 +0000 UTC m=+4.866945195 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478784 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478804 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47879952 +0000 UTC m=+4.866968036 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478809 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478829 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478823421 +0000 UTC m=+4.866991937 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478835 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478858 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478853071 +0000 UTC m=+4.867021587 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478859 4652 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.478880 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.478875502 +0000 UTC m=+4.867044018 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478896 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478919 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478940 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478958 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.478989 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:05.497052 master-0 
kubenswrapper[4652]: E0216 17:24:05.479121 4652 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479170 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479203 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.47918215 +0000 UTC m=+4.867350736 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.479133 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479205 4652 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479229 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.479216491 +0000 UTC m=+4.867385117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479234 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479271 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.479264842 +0000 UTC m=+4.867433358 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479277 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479287 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.479281493 +0000 UTC m=+4.867450009 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479324 4652 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.479335 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479351 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.479343094 +0000 UTC m=+4.867511680 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479357 4652 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479413 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.479391066 +0000 UTC m=+4.867559612 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.479499 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479542 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.479527659 +0000 UTC m=+4.867696225 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.479571 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.479622 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479627 4652 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479686 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479702 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.479669 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " 
pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479733 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.479713374 +0000 UTC m=+4.867881930 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479748 4652 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479768 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.479752065 +0000 UTC m=+4.867920691 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.479802 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.479784236 +0000 UTC m=+4.867952852 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.479848 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.479912 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.479972 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480031 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480090 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480149 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480237 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480323 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480362 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480400 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480452 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480490 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480527 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: I0216 17:24:05.480565 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480687 4652 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480730 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.480718261 +0000 UTC m=+4.868886817 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480813 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480823 4652 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480851 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.480841254 +0000 UTC m=+4.869009760 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480866 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.480859895 +0000 UTC m=+4.869028411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480883 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480911 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.480901076 +0000 UTC m=+4.869069592 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480921 4652 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480929 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.480923076 +0000 UTC m=+4.869091582 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480938 4652 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480954 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480974 4652 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480974 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.480945 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.480940167 +0000 UTC m=+4.869108683 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481004 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481011 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.481005969 +0000 UTC m=+4.869174475 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481028 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.481020639 +0000 UTC m=+4.869189155 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481040 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.481034199 +0000 UTC m=+4.869202715 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481055 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.48104774 +0000 UTC m=+4.869216256 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481066 4652 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481068 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.48106133 +0000 UTC m=+4.869229846 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481130 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.481119972 +0000 UTC m=+4.869288578 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481156 4652 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481197 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.481186173 +0000 UTC m=+4.869354759 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481240 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:05.497052 master-0 kubenswrapper[4652]: E0216 17:24:05.481295 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:07.481285126 +0000 UTC m=+4.869453712 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:05.504111 master-0 kubenswrapper[4652]: E0216 17:24:05.481346 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:05.504111 master-0 kubenswrapper[4652]: E0216 17:24:05.481375 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.481368828 +0000 UTC m=+4.869537344 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:05.582815 master-0 kubenswrapper[4652]: I0216 17:24:05.582737 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: E0216 17:24:05.582910 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: E0216 17:24:05.582929 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: E0216 17:24:05.582939 4652 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: I0216 17:24:05.582960 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: E0216 17:24:05.582981 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:07.582968826 +0000 UTC m=+4.971137342 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: E0216 17:24:05.583036 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: E0216 17:24:05.583050 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: E0216 17:24:05.583060 4652 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: I0216 17:24:05.583070 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:05.583087 master-0 kubenswrapper[4652]: E0216 17:24:05.583091 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.583080529 +0000 UTC m=+4.971249045 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.583914 master-0 kubenswrapper[4652]: E0216 17:24:05.583131 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:05.583914 master-0 kubenswrapper[4652]: E0216 17:24:05.583140 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.583914 master-0 kubenswrapper[4652]: E0216 17:24:05.583148 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.583914 master-0 kubenswrapper[4652]: E0216 17:24:05.583167 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.583161161 +0000 UTC m=+4.971329677 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.686695 master-0 kubenswrapper[4652]: I0216 17:24:05.686627 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:05.686861 master-0 kubenswrapper[4652]: E0216 17:24:05.686813 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:05.686861 master-0 kubenswrapper[4652]: E0216 17:24:05.686841 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.686861 master-0 kubenswrapper[4652]: E0216 17:24:05.686853 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.687195 master-0 kubenswrapper[4652]: E0216 17:24:05.686900 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:07.686884635 +0000 UTC m=+5.075053151 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.687741 master-0 kubenswrapper[4652]: I0216 17:24:05.686965 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:05.687741 master-0 kubenswrapper[4652]: I0216 17:24:05.687389 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:05.687741 master-0 kubenswrapper[4652]: E0216 17:24:05.687133 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:05.687741 master-0 kubenswrapper[4652]: E0216 17:24:05.687675 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.687741 master-0 kubenswrapper[4652]: E0216 17:24:05.687699 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.688001 master-0 kubenswrapper[4652]: E0216 17:24:05.687744 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.687736218 +0000 UTC m=+5.075904734 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.688847 master-0 kubenswrapper[4652]: E0216 17:24:05.688055 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:05.688847 master-0 kubenswrapper[4652]: E0216 17:24:05.688083 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.688847 master-0 kubenswrapper[4652]: E0216 17:24:05.688099 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.688847 master-0 kubenswrapper[4652]: E0216 17:24:05.688158 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.688141309 +0000 UTC m=+5.076309865 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.689375 master-0 kubenswrapper[4652]: I0216 17:24:05.689043 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:05.690143 master-0 kubenswrapper[4652]: E0216 17:24:05.689348 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:24:05.690143 master-0 kubenswrapper[4652]: E0216 17:24:05.690065 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.690143 master-0 kubenswrapper[4652]: E0216 17:24:05.690079 4652 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 
17:24:05.690143 master-0 kubenswrapper[4652]: E0216 17:24:05.690122 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:06.690111871 +0000 UTC m=+4.078280387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.745861 master-0 kubenswrapper[4652]: I0216 17:24:05.745410 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: E0216 17:24:05.745901 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.745938 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.745974 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.745982 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746006 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746307 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746326 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746351 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746371 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746387 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746406 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746423 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746441 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746460 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746485 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746518 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746562 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746586 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746609 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.746626 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: E0216 17:24:05.747125 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747151 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747180 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747181 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747231 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747263 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747339 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747364 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747364 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747372 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747385 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747395 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747406 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747422 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747425 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747440 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747447 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747404 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747440 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747461 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747433 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747460 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747512 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747460 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: E0216 17:24:05.747546 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747454 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747521 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747475 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747579 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747586 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747203 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747601 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747606 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747232 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747620 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747626 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747622 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747627 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: E0216 17:24:05.747676 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747702 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747719 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747718 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747733 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747757 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747738 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: I0216 17:24:05.747778 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:05.747723 master-0 kubenswrapper[4652]: E0216 17:24:05.747761 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: I0216 17:24:05.747329 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.747994 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: I0216 17:24:05.748066 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748135 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748214 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748306 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748363 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748568 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748654 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748713 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748767 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748838 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.748938 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749035 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749111 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749175 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749235 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749350 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749412 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749469 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749524 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749653 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749723 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.749878 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.750002 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.750078 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.750176 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.750327 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:24:05.750512 master-0 kubenswrapper[4652]: E0216 17:24:05.750496 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.750616 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.750686 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.750744 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.750907 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.750963 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751020 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751086 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751155 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751212 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751282 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751339 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751385 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751453 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751534 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:24:05.751605 master-0 kubenswrapper[4652]: E0216 17:24:05.751594 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.751648 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.751693 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.751735 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.751800 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.751844 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.751886 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.751938 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.751996 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.752043 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:24:05.752160 master-0 kubenswrapper[4652]: E0216 17:24:05.752110 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:24:05.752576 master-0 kubenswrapper[4652]: E0216 17:24:05.752195 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:24:05.752576 master-0 kubenswrapper[4652]: E0216 17:24:05.752279 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:24:05.752576 master-0 kubenswrapper[4652]: E0216 17:24:05.752327 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:24:05.752576 master-0 kubenswrapper[4652]: E0216 17:24:05.752373 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:24:05.752576 master-0 kubenswrapper[4652]: E0216 17:24:05.752427 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:24:05.752576 master-0 kubenswrapper[4652]: E0216 17:24:05.752490 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:24:05.752576 master-0 kubenswrapper[4652]: E0216 17:24:05.752534 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:24:05.752576 master-0 kubenswrapper[4652]: E0216 17:24:05.752576 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:24:05.753278 master-0 kubenswrapper[4652]: I0216 17:24:05.753227 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:05.753781 master-0 kubenswrapper[4652]: E0216 17:24:05.753629 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:24:05.792575 master-0 kubenswrapper[4652]: I0216 17:24:05.792520 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:05.792862 master-0 kubenswrapper[4652]: E0216 17:24:05.792716 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.792862 master-0 kubenswrapper[4652]: E0216 17:24:05.792849 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.792862 master-0 kubenswrapper[4652]: E0216 17:24:05.792859 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.794123 master-0 kubenswrapper[4652]: I0216 17:24:05.794097 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:05.794478 master-0 kubenswrapper[4652]: E0216 17:24:05.794172 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.794109582 +0000 UTC m=+5.182278098 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.794478 master-0 kubenswrapper[4652]: E0216 17:24:05.794192 4652 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:24:05.794478 master-0 kubenswrapper[4652]: E0216 17:24:05.794208 4652 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.794478 master-0 kubenswrapper[4652]: E0216 17:24:05.794218 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.794478 master-0 kubenswrapper[4652]: E0216 17:24:05.794296 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.794284557 +0000 UTC m=+5.182453073 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.841173 master-0 kubenswrapper[4652]: I0216 17:24:05.841130 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerStarted","Data":"8f2944d1d0aec996be70de5e76d2929053504567d7565a28b6340e22dcf2f9f4"} Feb 16 17:24:05.841173 master-0 kubenswrapper[4652]: I0216 17:24:05.841169 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-8256c" event={"ID":"a94f9b8e-b020-4aab-8373-6c056ec07464","Type":"ContainerStarted","Data":"367e624caa8d2d2f749744d48a8e9409a4413904802dd7955c55d517aef13669"} Feb 16 17:24:05.843128 master-0 kubenswrapper[4652]: I0216 17:24:05.842633 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6r7wj" event={"ID":"43f65f23-4ddd-471a-9cb3-b0945382d83c","Type":"ContainerStarted","Data":"12d7b3694646d56567eca7d91755e9f20d337f030fc98de33a83a0325e5d714b"} Feb 16 17:24:05.845056 master-0 kubenswrapper[4652]: I0216 17:24:05.844566 4652 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="5881960966daaf04ceec4414519cc07cc722d1b95aa6f19462c28cdead3c5c45" exitCode=0 Feb 16 17:24:05.845056 master-0 kubenswrapper[4652]: I0216 17:24:05.844603 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" 
event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"5881960966daaf04ceec4414519cc07cc722d1b95aa6f19462c28cdead3c5c45"} Feb 16 17:24:05.846837 master-0 kubenswrapper[4652]: I0216 17:24:05.846791 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"c10bf35f014a3280dc7aaee3934cc96e72b8738183e88a20b73e44eb7ab27de8"} Feb 16 17:24:05.848670 master-0 kubenswrapper[4652]: I0216 17:24:05.848203 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"34c6e72e51adafca60b0cb9eb6bc13fc494e68d27a52222018f3908d241df117"} Feb 16 17:24:05.848670 master-0 kubenswrapper[4652]: I0216 17:24:05.848224 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"7a806df4a1ab3d8ddc1709861bef241db63b40b9de97b10935afa8be90be8bd3"} Feb 16 17:24:05.848670 master-0 kubenswrapper[4652]: I0216 17:24:05.848233 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-6fb8ffcd9b-8hlrz" event={"ID":"5a939dd0-fc27-4d47-b81b-96e13e4bbca9","Type":"ContainerStarted","Data":"ee92028dd47f4c99851ef137b9abe8182271c27282e5dbd4e71da86d8de702fd"} Feb 16 17:24:05.856438 master-0 kubenswrapper[4652]: I0216 17:24:05.856381 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"6cecafebb45d039f14c8f3d96b4b79a656530a60ef10f08074d586e7599a3da6"} Feb 16 17:24:05.861883 master-0 kubenswrapper[4652]: I0216 17:24:05.861839 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"a250361cc064a601b3add50879e35562f8dd63da970683440d79d61fbd3ac6a0"} Feb 16 17:24:05.861883 master-0 kubenswrapper[4652]: I0216 17:24:05.861867 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"bdff7aeb337a767dce4b6b5e597c85634e7545ff652bd54b2e1a7b0a4f7db179"} Feb 16 17:24:05.861883 master-0 kubenswrapper[4652]: I0216 17:24:05.861878 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"d344f4cc6237b9fcee1bc177e6cda69fcce55c09aca80c68c3f1f20be6ae2fff"} Feb 16 17:24:05.861883 master-0 kubenswrapper[4652]: I0216 17:24:05.861887 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"d2862f9f1aed820ae89943f06be568049eeef901027558d2a04135f1f506b984"} Feb 16 17:24:05.862164 master-0 kubenswrapper[4652]: I0216 17:24:05.861895 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" 
event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"628f402e33fb8a3f339c2798ebf7b851a452ecc65f4947e85f798768a3189eb4"} Feb 16 17:24:05.863351 master-0 kubenswrapper[4652]: I0216 17:24:05.863330 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:24:05.863557 master-0 kubenswrapper[4652]: I0216 17:24:05.863327 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-649c4f5445-vt6wb" event={"ID":"b6ad958f-25e4-40cb-89ec-5da9cb6395c7","Type":"ContainerStarted","Data":"f12fbce99708f4008dc34ab8e21bf1c87846b98eab8d8b8a3bed899f12098f55"} Feb 16 17:24:05.901290 master-0 kubenswrapper[4652]: I0216 17:24:05.899663 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:05.901290 master-0 kubenswrapper[4652]: I0216 17:24:05.899725 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:05.901290 master-0 kubenswrapper[4652]: I0216 17:24:05.899768 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:05.901290 master-0 kubenswrapper[4652]: I0216 17:24:05.899837 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:05.901290 master-0 kubenswrapper[4652]: I0216 17:24:05.899885 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:05.901290 master-0 kubenswrapper[4652]: I0216 17:24:05.899913 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:05.901613 master-0 kubenswrapper[4652]: E0216 17:24:05.901528 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 
17:24:05.901613 master-0 kubenswrapper[4652]: E0216 17:24:05.901547 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.901613 master-0 kubenswrapper[4652]: E0216 17:24:05.901559 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.901613 master-0 kubenswrapper[4652]: E0216 17:24:05.901598 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.901582996 +0000 UTC m=+5.289751572 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.901734 master-0 kubenswrapper[4652]: E0216 17:24:05.901654 4652 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:24:05.901734 master-0 kubenswrapper[4652]: E0216 17:24:05.901667 4652 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.901734 master-0 kubenswrapper[4652]: E0216 17:24:05.901676 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.901734 master-0 kubenswrapper[4652]: E0216 17:24:05.901703 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.901695199 +0000 UTC m=+5.289863805 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.901856 master-0 kubenswrapper[4652]: E0216 17:24:05.901753 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:24:05.901856 master-0 kubenswrapper[4652]: E0216 17:24:05.901766 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.901856 master-0 kubenswrapper[4652]: E0216 17:24:05.901776 4652 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.901856 master-0 kubenswrapper[4652]: E0216 17:24:05.901800 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.901792162 +0000 UTC m=+5.289960768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.901856 master-0 kubenswrapper[4652]: E0216 17:24:05.901847 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:05.901856 master-0 kubenswrapper[4652]: E0216 17:24:05.901860 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.902027 master-0 kubenswrapper[4652]: E0216 17:24:05.901870 4652 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.902027 master-0 kubenswrapper[4652]: E0216 17:24:05.901894 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.901886594 +0000 UTC m=+5.290055110 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.902027 master-0 kubenswrapper[4652]: E0216 17:24:05.901943 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:05.902027 master-0 kubenswrapper[4652]: E0216 17:24:05.901954 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.902027 master-0 kubenswrapper[4652]: E0216 17:24:05.901963 4652 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.902027 master-0 kubenswrapper[4652]: E0216 17:24:05.901987 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.901979887 +0000 UTC m=+5.290148403 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.902027 master-0 kubenswrapper[4652]: E0216 17:24:05.902032 4652 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:24:05.902228 master-0 kubenswrapper[4652]: E0216 17:24:05.902044 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:05.902228 master-0 kubenswrapper[4652]: E0216 17:24:05.902052 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:05.902228 master-0 kubenswrapper[4652]: E0216 17:24:05.902076 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:07.902068529 +0000 UTC m=+5.290237045 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: I0216 17:24:06.003048 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: E0216 17:24:06.003323 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: E0216 17:24:06.003391 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: E0216 17:24:06.003411 4652 projected.go:194] Error preparing data for projected volume kube-api-access-st6bv for pod openshift-console/console-599b567ff7-nrcpr: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: E0216 17:24:06.003602 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.003576844 +0000 UTC m=+5.391745390 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-st6bv" (UniqueName: "kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: I0216 17:24:06.003748 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: E0216 17:24:06.004385 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: E0216 17:24:06.004410 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: E0216 17:24:06.004425 4652 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.007566 master-0 kubenswrapper[4652]: E0216 17:24:06.004488 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.004473198 +0000 UTC m=+5.392641754 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.087772 master-0 kubenswrapper[4652]: I0216 17:24:06.087734 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:06.087772 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:06.087772 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:06.087772 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:06.087987 master-0 kubenswrapper[4652]: I0216 17:24:06.087780 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:06.110543 master-0 kubenswrapper[4652]: I0216 17:24:06.110493 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:06.110753 master-0 kubenswrapper[4652]: E0216 17:24:06.110700 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:06.110753 master-0 kubenswrapper[4652]: E0216 17:24:06.110724 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.110753 master-0 kubenswrapper[4652]: E0216 17:24:06.110739 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.110906 master-0 kubenswrapper[4652]: E0216 17:24:06.110789 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.110771591 +0000 UTC m=+5.498940197 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.212593 master-0 kubenswrapper[4652]: I0216 17:24:06.212538 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:06.212792 master-0 kubenswrapper[4652]: I0216 17:24:06.212614 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:06.212837 master-0 kubenswrapper[4652]: E0216 17:24:06.212809 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:06.212873 master-0 kubenswrapper[4652]: E0216 17:24:06.212835 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.212873 master-0 kubenswrapper[4652]: E0216 17:24:06.212868 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.212965 master-0 kubenswrapper[4652]: E0216 17:24:06.212937 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.212914883 +0000 UTC m=+5.601083439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.213399 master-0 kubenswrapper[4652]: E0216 17:24:06.212938 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:24:06.213456 master-0 kubenswrapper[4652]: E0216 17:24:06.213406 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.213456 master-0 kubenswrapper[4652]: E0216 17:24:06.213420 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.213516 master-0 kubenswrapper[4652]: E0216 17:24:06.213479 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.213463938 +0000 UTC m=+5.601632584 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.215399 master-0 kubenswrapper[4652]: I0216 17:24:06.215377 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:06.215468 master-0 kubenswrapper[4652]: I0216 17:24:06.215406 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:06.215524 master-0 kubenswrapper[4652]: E0216 17:24:06.215500 4652 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:06.215524 master-0 kubenswrapper[4652]: E0216 17:24:06.215510 4652 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.215524 master-0 kubenswrapper[4652]: E0216 17:24:06.215516 4652 projected.go:194] Error preparing data for 
projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.215617 master-0 kubenswrapper[4652]: E0216 17:24:06.215544 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:06.215617 master-0 kubenswrapper[4652]: E0216 17:24:06.215564 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.215617 master-0 kubenswrapper[4652]: E0216 17:24:06.215575 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.215617 master-0 kubenswrapper[4652]: E0216 17:24:06.215551 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.215543923 +0000 UTC m=+5.603712439 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.215768 master-0 kubenswrapper[4652]: E0216 17:24:06.215633 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.215618745 +0000 UTC m=+5.603787331 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.318170 master-0 kubenswrapper[4652]: I0216 17:24:06.318036 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:06.318170 master-0 kubenswrapper[4652]: I0216 17:24:06.318105 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:06.318431 master-0 kubenswrapper[4652]: E0216 17:24:06.318267 4652 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Feb 16 17:24:06.318431 master-0 kubenswrapper[4652]: E0216 17:24:06.318286 4652 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.318431 master-0 kubenswrapper[4652]: E0216 17:24:06.318287 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:06.318431 master-0 kubenswrapper[4652]: E0216 17:24:06.318321 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.318431 master-0 kubenswrapper[4652]: E0216 17:24:06.318332 4652 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.318431 master-0 kubenswrapper[4652]: E0216 17:24:06.318300 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7mrkc for pod openshift-authentication/oauth-openshift-64f85b8fc9-n9msn: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.318431 master-0 kubenswrapper[4652]: E0216 17:24:06.318390 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.318374303 +0000 UTC m=+5.706542819 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.318683 master-0 kubenswrapper[4652]: E0216 17:24:06.318445 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.318423415 +0000 UTC m=+5.706591991 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-7mrkc" (UniqueName: "kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.424333 master-0 kubenswrapper[4652]: I0216 17:24:06.424216 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:06.424598 master-0 kubenswrapper[4652]: E0216 17:24:06.424470 4652 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:06.424598 master-0 kubenswrapper[4652]: E0216 17:24:06.424528 4652 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.424598 master-0 kubenswrapper[4652]: E0216 17:24:06.424547 4652 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.425159 master-0 kubenswrapper[4652]: I0216 17:24:06.425075 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:06.425723 master-0 kubenswrapper[4652]: I0216 17:24:06.425647 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:06.425810 master-0 kubenswrapper[4652]: E0216 17:24:06.425762 4652 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.425730294 +0000 UTC m=+5.813898840 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.425920 master-0 kubenswrapper[4652]: E0216 17:24:06.425873 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:06.425920 master-0 kubenswrapper[4652]: E0216 17:24:06.425910 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.426072 master-0 kubenswrapper[4652]: E0216 17:24:06.425926 4652 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.426072 master-0 kubenswrapper[4652]: E0216 17:24:06.425989 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.42596568 +0000 UTC m=+5.814134226 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.426072 master-0 kubenswrapper[4652]: I0216 17:24:06.425920 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:06.426072 master-0 kubenswrapper[4652]: E0216 17:24:06.426049 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:06.426072 master-0 kubenswrapper[4652]: E0216 17:24:06.426075 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.426428 master-0 kubenswrapper[4652]: E0216 17:24:06.426082 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:06.426428 master-0 kubenswrapper[4652]: E0216 17:24:06.426094 4652 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.426428 master-0 kubenswrapper[4652]: E0216 17:24:06.426122 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.426428 master-0 kubenswrapper[4652]: E0216 17:24:06.426149 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.426428 master-0 kubenswrapper[4652]: E0216 17:24:06.426235 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.426207397 +0000 UTC m=+5.814375953 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.427929 master-0 kubenswrapper[4652]: E0216 17:24:06.427881 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.42785408 +0000 UTC m=+5.816022636 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.529496 master-0 kubenswrapper[4652]: I0216 17:24:06.529420 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:06.529716 master-0 kubenswrapper[4652]: E0216 17:24:06.529627 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:06.529716 master-0 kubenswrapper[4652]: E0216 17:24:06.529655 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.529716 master-0 kubenswrapper[4652]: E0216 17:24:06.529666 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.529716 master-0 kubenswrapper[4652]: E0216 17:24:06.529719 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.529702955 +0000 UTC m=+5.917871471 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.530011 master-0 kubenswrapper[4652]: I0216 17:24:06.529851 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:06.530011 master-0 kubenswrapper[4652]: I0216 17:24:06.529957 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:06.530138 master-0 kubenswrapper[4652]: E0216 17:24:06.530057 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:06.530138 master-0 kubenswrapper[4652]: E0216 17:24:06.530080 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.530138 master-0 kubenswrapper[4652]: E0216 17:24:06.530094 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.530138 master-0 kubenswrapper[4652]: E0216 17:24:06.530135 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:06.530474 master-0 kubenswrapper[4652]: E0216 17:24:06.530152 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.530474 master-0 kubenswrapper[4652]: I0216 17:24:06.530151 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:06.530474 master-0 kubenswrapper[4652]: E0216 17:24:06.530162 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object 
"openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.530474 master-0 kubenswrapper[4652]: E0216 17:24:06.530200 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:06.530474 master-0 kubenswrapper[4652]: E0216 17:24:06.530182 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.530168287 +0000 UTC m=+5.918337003 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.530474 master-0 kubenswrapper[4652]: E0216 17:24:06.530233 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.530474 master-0 kubenswrapper[4652]: E0216 17:24:06.530316 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.530474 master-0 kubenswrapper[4652]: E0216 17:24:06.530455 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.530432454 +0000 UTC m=+5.918601110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.530474 master-0 kubenswrapper[4652]: E0216 17:24:06.530484 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.530477985 +0000 UTC m=+5.918646501 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.532729 master-0 kubenswrapper[4652]: I0216 17:24:06.532681 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:06.532864 master-0 kubenswrapper[4652]: E0216 17:24:06.532835 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:06.532864 master-0 kubenswrapper[4652]: E0216 17:24:06.532858 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.532988 master-0 kubenswrapper[4652]: E0216 17:24:06.532872 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.532988 master-0 kubenswrapper[4652]: E0216 17:24:06.532934 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.53291693 +0000 UTC m=+5.921085446 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.739894 master-0 kubenswrapper[4652]: I0216 17:24:06.739847 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:06.740318 master-0 kubenswrapper[4652]: E0216 17:24:06.740055 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:24:06.740405 master-0 kubenswrapper[4652]: E0216 17:24:06.740335 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:06.740405 master-0 kubenswrapper[4652]: E0216 17:24:06.740366 4652 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.740875 master-0 kubenswrapper[4652]: E0216 17:24:06.740834 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:08.74080016 +0000 UTC m=+6.128968676 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:06.870531 master-0 kubenswrapper[4652]: I0216 17:24:06.870488 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"825ea71ec0b63f04fd295ed80f3c877478c1878de165b09435584fc35e56f0b2"} Feb 16 17:24:06.872536 master-0 kubenswrapper[4652]: I0216 17:24:06.872472 4652 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="a201428d7d9bb67ba2dc81efba942acc3e39b97fff167a2223393757dd249417" exitCode=0 Feb 16 17:24:06.872697 master-0 kubenswrapper[4652]: I0216 17:24:06.872561 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"a201428d7d9bb67ba2dc81efba942acc3e39b97fff167a2223393757dd249417"} Feb 16 17:24:06.873907 master-0 kubenswrapper[4652]: I0216 17:24:06.873852 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/6.log" Feb 16 17:24:06.874709 master-0 kubenswrapper[4652]: I0216 17:24:06.874646 4652 generic.go:334] "Generic (PLEG): container finished" podID="702322ac-7610-4568-9a68-b6acbd1f0c12" containerID="f75975b9b63160b4c431de4bb563ca9301a34de5ce610dce1d3a3bd1522eead5" exitCode=255 Feb 16 17:24:06.874810 master-0 kubenswrapper[4652]: I0216 17:24:06.874765 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerDied","Data":"f75975b9b63160b4c431de4bb563ca9301a34de5ce610dce1d3a3bd1522eead5"} Feb 16 17:24:06.875990 master-0 kubenswrapper[4652]: I0216 17:24:06.875951 4652 scope.go:117] "RemoveContainer" containerID="f75975b9b63160b4c431de4bb563ca9301a34de5ce610dce1d3a3bd1522eead5" Feb 16 17:24:07.081619 master-0 kubenswrapper[4652]: I0216 17:24:07.081577 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:07.081619 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:07.081619 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:07.081619 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:07.081824 master-0 kubenswrapper[4652]: I0216 17:24:07.081629 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:07.237031 master-0 kubenswrapper[4652]: I0216 17:24:07.236543 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:24:07.477382 master-0 kubenswrapper[4652]: I0216 17:24:07.477298 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:07.477382 master-0 kubenswrapper[4652]: I0216 17:24:07.477394 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:07.477773 master-0 kubenswrapper[4652]: E0216 17:24:07.477546 4652 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:07.477773 master-0 kubenswrapper[4652]: E0216 17:24:07.477663 4652 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:07.477773 master-0 kubenswrapper[4652]: E0216 17:24:07.477705 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.477684946 +0000 UTC m=+8.865853562 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:07.477773 master-0 kubenswrapper[4652]: E0216 17:24:07.477723 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.477716217 +0000 UTC m=+8.865884733 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:07.477773 master-0 kubenswrapper[4652]: I0216 17:24:07.477713 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: E0216 17:24:07.477779 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: E0216 17:24:07.477865 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.477842661 +0000 UTC m=+8.866011207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: I0216 17:24:07.477904 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: I0216 17:24:07.477947 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: E0216 17:24:07.478093 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: I0216 17:24:07.478229 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: E0216 17:24:07.478312 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: I0216 17:24:07.478366 4652 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: E0216 17:24:07.478381 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:07.478358 master-0 kubenswrapper[4652]: E0216 17:24:07.478384 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.478368935 +0000 UTC m=+8.866537481 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478423 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.478410206 +0000 UTC m=+8.866578752 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: I0216 17:24:07.478457 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: I0216 17:24:07.478505 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478513 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478579 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.478525469 +0000 UTC m=+8.866694015 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478615 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.478597151 +0000 UTC m=+8.866765837 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478642 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478649 4652 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478703 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.478680443 +0000 UTC m=+8.866848969 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478731 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.478720314 +0000 UTC m=+8.866888840 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: I0216 17:24:07.478772 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: I0216 17:24:07.478813 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: I0216 17:24:07.478845 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: I0216 17:24:07.478877 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478936 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478964 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478991 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.478975141 +0000 UTC m=+8.867143697 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.479020 4652 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.478939 4652 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.479047 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.479067 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.479064 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.479052713 +0000 UTC m=+8.867221269 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: I0216 17:24:07.479039 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.479098 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.479087444 +0000 UTC m=+8.867256000 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.479122 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.479111914 +0000 UTC m=+8.867280470 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: I0216 17:24:07.479165 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: I0216 17:24:07.479212 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.479217 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:24:07.479207 master-0 kubenswrapper[4652]: E0216 17:24:07.479290 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.479300 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479320 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.479344 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479397 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479407 4652 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.479379781 +0000 UTC m=+8.867548327 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479467 4652 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479499 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.479473884 +0000 UTC m=+8.867642470 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479551 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.479523385 +0000 UTC m=+8.867692021 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479568 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479611 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.479600377 +0000 UTC m=+8.867768903 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479749 4652 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479784 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479834 4652 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479848 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.479826323 +0000 UTC m=+8.867994869 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.479782 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.479881 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.479868134 +0000 UTC m=+8.868036680 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.479909 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.479944 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.479975 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.480020 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.480047 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.480074 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480124 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.480127 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480138 4652 
secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480158 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0 podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.480148682 +0000 UTC m=+8.868317208 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480191 4652 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480217 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.480194673 +0000 UTC m=+8.868363319 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480285 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.480235644 +0000 UTC m=+8.868404310 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"service-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480333 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480435 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.480381408 +0000 UTC m=+8.868549964 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480502 4652 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.480585 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480620 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480688 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.480666766 +0000 UTC m=+8.868835372 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480731 4652 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.480777 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480782 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480826 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.480801709 +0000 UTC m=+8.868970265 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480859 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.48084401 +0000 UTC m=+8.869012556 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480890 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.480878261 +0000 UTC m=+8.869046807 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480914 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.480953 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.480995 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.480970764 +0000 UTC m=+8.869139470 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481021 4652 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481052 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481064 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.481052606 +0000 UTC m=+8.869221162 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"audit" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481135 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481167 4652 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481298 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.481243211 +0000 UTC m=+8.869411767 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481307 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481354 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481372 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.481355024 +0000 UTC m=+8.869523670 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481442 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481507 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.481486907 +0000 UTC m=+8.869655543 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481572 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481642 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481659 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481697 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.481682453 +0000 UTC m=+8.869850999 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481738 4652 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481757 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481786 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.481771115 +0000 UTC m=+8.869939661 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481819 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481821 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481868 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481911 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481936 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481949 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481972 4652 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: I0216 17:24:07.481988 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.481998 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.481985201 +0000 UTC m=+8.870153747 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.482043 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482023962 +0000 UTC m=+8.870192628 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.482064 4652 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.482076 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482061393 +0000 UTC m=+8.870230069 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.482093 4652 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.482112 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482099744 +0000 UTC m=+8.870268290 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:07.481952 master-0 kubenswrapper[4652]: E0216 17:24:07.482125 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482139 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482125074 +0000 UTC m=+8.870293620 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.482169 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482178 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482161785 +0000 UTC m=+8.870330451 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.482243 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.482324 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482289 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.482363 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482385 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482376251 +0000 UTC m=+8.870544757 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.482423 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482442 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.482459 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482482 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482469303 +0000 UTC m=+8.870637849 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482505 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.482513 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482342 4652 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482603 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482530 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482524005 +0000 UTC m=+8.870692521 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482642 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482628188 +0000 UTC m=+8.870796734 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482643 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482663 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.482652658 +0000 UTC m=+8.870821204 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482702 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482687039 +0000 UTC m=+8.870855595 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.482727 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482812 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482868 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.482853774 +0000 UTC m=+8.871022330 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.482863 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.482934 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483004 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483023 4652 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483046 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483057 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.483049499 +0000 UTC m=+8.871218015 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483055 4652 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483086 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483114 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.48310072 +0000 UTC m=+8.871269266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483143 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.483132681 +0000 UTC m=+8.871301237 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483182 4652 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483183 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483204 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483245 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483304 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.483242844 +0000 UTC m=+8.871411400 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483360 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.483345957 +0000 UTC m=+8.871514513 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483362 4652 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483424 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483472 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483512 4652 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483538 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.483517251 +0000 UTC m=+8.871685887 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483512 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483564 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483607 4652 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483571 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.483559472 +0000 UTC m=+8.871728008 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483655 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.483642305 +0000 UTC m=+8.871810851 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483689 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483771 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.483757748 +0000 UTC m=+8.871926294 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483899 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483950 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.483992 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.483952 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.484054 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.484040805 +0000 UTC m=+8.872209351 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.484088 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.484293 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.484321 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.48423871 +0000 UTC m=+8.872407266 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.484387 4652 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.484418 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.484344123 +0000 UTC m=+8.872512769 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.484531 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.484663 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.484829 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.484920 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.484904308 +0000 UTC m=+8.873072824 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.484951 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485019 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485049 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.485042662 +0000 UTC m=+8.873211178 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485059 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485147 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485191 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.485167055 +0000 UTC m=+8.873335661 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485227 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485229 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.485211916 +0000 UTC m=+8.873380592 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.485332 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485345 4652 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485368 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.48535385 +0000 UTC m=+8.873522396 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.485448 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.485500 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485515 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.485505834 +0000 UTC m=+8.873674350 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485547 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.485567 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485594 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.485582206 +0000 UTC m=+8.873750752 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485634 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485636 4652 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485660 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.485654758 +0000 UTC m=+8.873823274 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.485698 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485762 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.485775 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485800 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.485784652 +0000 UTC m=+8.873953198 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485828 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.485816332 +0000 UTC m=+8.873984878 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485859 4652 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.485863 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485898 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.485887644 +0000 UTC m=+8.874056170 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.485922 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485928 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.485968 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.485999 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.485990257 +0000 UTC m=+8.874158783 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486018 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486036 4652 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486064 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.486050629 +0000 UTC m=+8.874219185 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486087 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.486074679 +0000 UTC m=+8.874243235 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.486121 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.486166 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.486206 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486287 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.486297 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486308 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486351 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486325 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.486315746 +0000 UTC m=+8.874484392 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486372 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.486462 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486503 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.48649184 +0000 UTC m=+8.874660366 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486523 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.486515721 +0000 UTC m=+8.874684247 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: E0216 17:24:07.486538 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.486531121 +0000 UTC m=+8.874699647 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:07.489314 master-0 kubenswrapper[4652]: I0216 17:24:07.486591 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486593 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.486622 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486661 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.486641254 +0000 UTC m=+8.874809900 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486700 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486716 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.486711 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.486782 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486728 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.486845 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486863 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.48685037 +0000 UTC m=+8.875018996 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486767 4652 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486816 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486913 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.486912 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486925 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.486982 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487013 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.486967 4652 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487023 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.487012954 +0000 UTC m=+8.875181470 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487109 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487105 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.487084546 +0000 UTC m=+8.875253122 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487133 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.487167 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487179 4652 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487213 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.487198059 +0000 UTC m=+8.875366605 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487104 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487242 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.48723049 +0000 UTC m=+8.875399046 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487373 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.487425 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487461 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.487438735 +0000 UTC m=+8.875607351 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487509 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487557 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.487542908 +0000 UTC m=+8.875711454 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487588 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.487575529 +0000 UTC m=+8.875744085 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487610 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.48759977 +0000 UTC m=+8.875768316 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.487508 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.487670 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.487713 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487728 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 
16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.487754 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487805 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.487781804 +0000 UTC m=+8.875950360 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487847 4652 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487889 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.487876027 +0000 UTC m=+8.876044573 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487888 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.487920 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487925 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488008 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488025 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.487937 4652 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488064 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.487983 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488083 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488050 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.488040551 +0000 UTC m=+8.876209077 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488136 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488165 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.488142864 +0000 UTC m=+8.876311460 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488201 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.488180635 +0000 UTC m=+8.876349321 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488235 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.488217696 +0000 UTC m=+8.876386332 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.488324 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.488398 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488524 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.488576 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488588 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.488568565 +0000 UTC m=+8.876737191 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488524 4652 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488670 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488682 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.488667518 +0000 UTC m=+8.876836064 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.488767 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488814 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.488790481 +0000 UTC m=+8.876959027 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.488871 4652 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.488937 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489001 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489032 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.489013867 +0000 UTC m=+8.877182413 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.489301 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489342 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.489327266 +0000 UTC m=+8.877495812 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.489382 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.489432 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489384 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489476 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489493 4652 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489534 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.489522271 +0000 UTC m=+8.877690827 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489592 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.489624 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.489680 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489762 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489772 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.489694075 +0000 UTC m=+8.877862731 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489783 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489803 4652 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489802 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489779 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: I0216 17:24:07.489834 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489848 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.489832569 +0000 UTC m=+8.878001125 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489896 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.4898856 +0000 UTC m=+8.878054126 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489919 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.489912011 +0000 UTC m=+8.878080537 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.489940 4652 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:07.500759 master-0 kubenswrapper[4652]: E0216 17:24:07.490029 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.490009594 +0000 UTC m=+8.878178200 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:07.595279 master-0 kubenswrapper[4652]: I0216 17:24:07.595154 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:07.595512 master-0 kubenswrapper[4652]: I0216 17:24:07.595305 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:07.595512 master-0 kubenswrapper[4652]: E0216 17:24:07.595384 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:07.595512 master-0 kubenswrapper[4652]: E0216 17:24:07.595439 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.595423563 +0000 UTC m=+8.983592089 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:07.595512 master-0 kubenswrapper[4652]: I0216 17:24:07.595379 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:07.595761 master-0 kubenswrapper[4652]: E0216 17:24:07.595453 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:07.595761 master-0 kubenswrapper[4652]: I0216 17:24:07.595588 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:07.595761 master-0 kubenswrapper[4652]: E0216 17:24:07.595487 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:07.595761 master-0 kubenswrapper[4652]: E0216 17:24:07.595709 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.595677909 +0000 UTC m=+8.983846465 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:07.595761 master-0 kubenswrapper[4652]: E0216 17:24:07.595726 4652 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.595761 master-0 kubenswrapper[4652]: E0216 17:24:07.595748 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: I0216 17:24:07.595772 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: E0216 17:24:07.595804 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.595785532 +0000 UTC m=+8.983954338 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: I0216 17:24:07.595852 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: E0216 17:24:07.595885 4652 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: E0216 17:24:07.595917 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: I0216 17:24:07.595925 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: E0216 17:24:07.595999 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: E0216 17:24:07.595981 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.595957277 +0000 UTC m=+8.984125853 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: E0216 17:24:07.596061 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.596043619 +0000 UTC m=+8.984212195 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:07.596081 master-0 kubenswrapper[4652]: E0216 17:24:07.596079 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: E0216 17:24:07.596103 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.596095421 +0000 UTC m=+8.984264047 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: E0216 17:24:07.596151 4652 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: I0216 17:24:07.596088 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: E0216 17:24:07.596153 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.596131242 +0000 UTC m=+8.984299798 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: E0216 17:24:07.596200 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.596192023 +0000 UTC m=+8.984360529 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: I0216 17:24:07.596241 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: I0216 17:24:07.596361 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: I0216 17:24:07.596427 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: I0216 17:24:07.596498 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: I0216 17:24:07.596572 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:07.596675 master-0 kubenswrapper[4652]: I0216 17:24:07.596636 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.596432 4652 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: I0216 17:24:07.596703 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: 
\"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.596526 4652 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: I0216 17:24:07.596772 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.596817 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.596793739 +0000 UTC m=+8.984962305 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: I0216 17:24:07.596912 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.596923 4652 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.596974 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.596626 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.597014 4652 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.597032 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.596978 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 
17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.597040 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597019445 +0000 UTC m=+8.985188001 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.596693 4652 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: I0216 17:24:07.597003 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.596940 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.597091 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597068826 +0000 UTC m=+8.985237382 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: I0216 17:24:07.597139 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.597170 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597161559 +0000 UTC m=+8.985330075 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.597183 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597177309 +0000 UTC m=+8.985345825 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:07.597205 master-0 kubenswrapper[4652]: E0216 17:24:07.597205 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.59719241 +0000 UTC m=+8.985360926 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597227 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.59721835 +0000 UTC m=+8.985386936 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597236 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597302 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597329 4652 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597274 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597238461 +0000 UTC m=+8.985407077 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-config" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597417 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597396055 +0000 UTC m=+8.985564621 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597464 4652 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: I0216 17:24:07.597467 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: I0216 17:24:07.597538 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597483 4652 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597585 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597603 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597618 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597521 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597629 4652 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597541 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.597531679 +0000 UTC m=+8.985700195 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: I0216 17:24:07.597686 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: I0216 17:24:07.597732 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597763 4652 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: I0216 17:24:07.597780 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597789 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597781765 +0000 UTC m=+8.985950281 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597835 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597814546 +0000 UTC m=+8.985983112 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597865 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597870 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597854007 +0000 UTC m=+8.986022583 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: I0216 17:24:07.597904 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597917 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597899588 +0000 UTC m=+8.986068154 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597936 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597949 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597931049 +0000 UTC m=+8.986099605 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.597979 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: I0216 17:24:07.598000 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.598020 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.597995251 +0000 UTC m=+8.986163767 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: I0216 17:24:07.598073 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.598117 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: I0216 17:24:07.598141 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:07.598204 master-0 kubenswrapper[4652]: E0216 17:24:07.598199 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.598174346 +0000 UTC m=+8.986342902 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598306 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598308 4652 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.598297 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598336 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.598315709 +0000 UTC m=+8.986484305 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598374 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.598356351 +0000 UTC m=+8.986524907 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.598424 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598452 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598457 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.598545 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598573 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.598537735 +0000 UTC m=+8.986706291 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598607 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598610 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.598592057 +0000 UTC m=+8.986760613 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598625 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598640 4652 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598668 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.598633698 +0000 UTC m=+8.986802304 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.598721 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598746 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.598772 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598793 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.598784402 +0000 UTC m=+8.986952918 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.598837 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.598875 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598906 4652 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598967 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598975 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.598954996 +0000 UTC m=+8.987123562 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.598998 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.598988927 +0000 UTC m=+8.987157523 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.598908 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599028 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599065 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.599057299 +0000 UTC m=+8.987225945 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599059 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599102 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.59909305 +0000 UTC m=+8.987261676 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.599073 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599172 4652 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599175 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.599150542 +0000 UTC m=+8.987319098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.599325 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.599395 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599423 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.599461 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599535 4652 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.599495871 +0000 UTC m=+8.987664427 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599581 4652 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.599622 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599649 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.599627024 +0000 UTC m=+8.987795600 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599686 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.599666525 +0000 UTC m=+8.987835181 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.599739 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599780 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.599805 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599850 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.59982791 +0000 UTC m=+8.987996496 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.599899 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599940 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599967 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.599967 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.600036 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599977 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.599989 4652 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.600106 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.600150 4652 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.600022 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object 
"openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.600228 4652 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.600243 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.600283 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.600057 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.600067 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: E0216 17:24:07.600110 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.600100597 +0000 UTC m=+8.988269113 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:07.600381 master-0 kubenswrapper[4652]: I0216 17:24:07.600447 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.600521 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.600584 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.600653 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600667 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.600648991 +0000 UTC m=+8.988817517 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600709 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600733 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.600704513 +0000 UTC m=+8.988873079 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600769 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.600753454 +0000 UTC m=+8.988922060 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600791 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600800 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.600784915 +0000 UTC m=+8.988953511 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600907 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.600887708 +0000 UTC m=+8.989056324 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600928 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600944 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.600925549 +0000 UTC m=+8.989094165 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.600990 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601089 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601156 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601220 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601340 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601407 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601422 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.601404662 +0000 UTC m=+8.989573218 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601447 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.601436282 +0000 UTC m=+8.989604908 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601478 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601496 4652 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601520 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601559 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.601539225 +0000 UTC m=+8.989707781 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601577 4652 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601613 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.601602037 +0000 UTC m=+8.989770633 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601605 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601653 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601681 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601700 4652 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601735 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601765 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.601745231 +0000 UTC m=+8.989913797 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601791 4652 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601813 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601827 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.601816592 +0000 UTC m=+8.989985128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601860 4652 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.600803 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601889 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601894 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601913 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.601905485 +0000 UTC m=+8.990074011 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601956 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.601960 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.601980 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.601973167 +0000 UTC m=+8.990141693 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602034 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602044 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602060 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.602051999 +0000 UTC m=+8.990220525 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.602081 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602106 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.60208504 +0000 UTC m=+8.990253606 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602131 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602152 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.602145951 +0000 UTC m=+8.990314477 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.602155 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602178 4652 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602199 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.602192842 +0000 UTC m=+8.990361368 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.602219 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.602274 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602310 4652 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.602331 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602378 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.602357737 +0000 UTC m=+8.990526303 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602410 4652 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602431 4652 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602438 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.602430179 +0000 UTC m=+8.990598705 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602519 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.60247533 +0000 UTC m=+8.990643846 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602526 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602553 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602535 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.602528561 +0000 UTC m=+8.990697067 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.602427 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602616 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.602600943 +0000 UTC m=+8.990769569 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602644 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.602634364 +0000 UTC m=+8.990802970 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602662 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602750 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.602725737 +0000 UTC m=+8.990894333 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602797 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602847 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.602832919 +0000 UTC m=+8.991001445 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.602962 4652 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603038 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.603015684 +0000 UTC m=+8.991184250 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603201 4652 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603262 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.60323388 +0000 UTC m=+8.991402416 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603309 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603320 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603341 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.603333363 +0000 UTC m=+8.991501959 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603385 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.603371984 +0000 UTC m=+8.991540580 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603391 4652 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603420 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.603413205 +0000 UTC m=+8.991581821 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603457 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603479 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.603472996 +0000 UTC m=+8.991641612 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.603486 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.603573 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.603621 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603751 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: I0216 17:24:07.603800 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:07.603980 master-0 kubenswrapper[4652]: E0216 17:24:07.603812 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.603793295 +0000 UTC m=+8.991961851 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: E0216 17:24:07.604181 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: E0216 17:24:07.604210 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:11.604203466 +0000 UTC m=+8.992371982 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: E0216 17:24:07.603869 4652 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: I0216 17:24:07.604134 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: E0216 17:24:07.604052 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: E0216 17:24:07.604325 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.604300938 +0000 UTC m=+8.992469494 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: I0216 17:24:07.604421 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: E0216 17:24:07.604460 4652 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: E0216 17:24:07.604492 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.604484923 +0000 UTC m=+8.992653439 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:07.608133 master-0 kubenswrapper[4652]: E0216 17:24:07.604521 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.604499924 +0000 UTC m=+8.992668590 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:07.707742 master-0 kubenswrapper[4652]: I0216 17:24:07.707703 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:07.707936 master-0 kubenswrapper[4652]: E0216 17:24:07.707879 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:07.707936 master-0 kubenswrapper[4652]: E0216 17:24:07.707898 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.707936 master-0 kubenswrapper[4652]: E0216 17:24:07.707929 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.708059 master-0 kubenswrapper[4652]: E0216 17:24:07.707980 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.707964921 +0000 UTC m=+9.096133437 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.708059 master-0 kubenswrapper[4652]: I0216 17:24:07.708029 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:07.708134 master-0 kubenswrapper[4652]: I0216 17:24:07.708118 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:07.708218 master-0 kubenswrapper[4652]: E0216 17:24:07.708196 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:07.708295 master-0 kubenswrapper[4652]: E0216 17:24:07.708217 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:07.708295 master-0 kubenswrapper[4652]: E0216 17:24:07.708275 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.708295 master-0 kubenswrapper[4652]: E0216 17:24:07.708287 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.708419 master-0 kubenswrapper[4652]: E0216 17:24:07.708220 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.708419 master-0 kubenswrapper[4652]: E0216 17:24:07.708328 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.708419 master-0 kubenswrapper[4652]: E0216 17:24:07.708338 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.70830764 +0000 UTC m=+9.096476156 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.708419 master-0 kubenswrapper[4652]: E0216 17:24:07.708365 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.708355691 +0000 UTC m=+9.096524207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.745145 master-0 kubenswrapper[4652]: I0216 17:24:07.745011 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:07.745145 master-0 kubenswrapper[4652]: I0216 17:24:07.745011 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:07.745145 master-0 kubenswrapper[4652]: I0216 17:24:07.745048 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:07.745145 master-0 kubenswrapper[4652]: I0216 17:24:07.745087 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:07.745145 master-0 kubenswrapper[4652]: I0216 17:24:07.745091 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:07.745145 master-0 kubenswrapper[4652]: I0216 17:24:07.745089 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745265 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745284 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: E0216 17:24:07.745269 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745289 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745306 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745327 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745338 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745356 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745349 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745372 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745402 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745411 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745444 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745468 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745476 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: E0216 17:24:07.745475 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745501 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745507 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745510 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745525 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745526 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745565 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745572 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745569 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745577 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745616 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745626 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745638 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745577 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745653 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745610 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745627 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745585 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745571 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745585 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745610 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: E0216 17:24:07.745778 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745792 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745813 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745832 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745855 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745868 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745873 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745886 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745895 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745864 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745903 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745913 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745931 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745947 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745906 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745908 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745896 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745860 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745924 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745924 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.745895 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: E0216 17:24:07.746063 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.746131 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.746148 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.746158 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.746175 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.746132 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: I0216 17:24:07.746195 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:07.746199 master-0 kubenswrapper[4652]: E0216 17:24:07.746239 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.746362 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.746485 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.746595 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.746691 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.746839 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.746941 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.747015 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.747098 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.747172 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.747267 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.747402 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.747593 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.747711 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.747825 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.748056 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.748212 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.748335 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.748479 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.748541 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.748643 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.748763 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.748897 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.748973 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.749077 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.749173 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.749321 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.749400 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.749482 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.749572 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:24:07.749754 master-0 kubenswrapper[4652]: E0216 17:24:07.749699 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.749844 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.749915 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.750026 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.750273 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.750193 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.750362 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.750477 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.750575 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.750708 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.750788 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.750947 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.751038 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.751185 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.751298 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.751400 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.751735 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:24:07.751848 master-0 kubenswrapper[4652]: E0216 17:24:07.751808 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:24:07.752914 master-0 kubenswrapper[4652]: E0216 17:24:07.751935 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:24:07.752914 master-0 kubenswrapper[4652]: E0216 17:24:07.752031 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:24:07.752914 master-0 kubenswrapper[4652]: E0216 17:24:07.752160 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:24:07.752914 master-0 kubenswrapper[4652]: E0216 17:24:07.752349 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:24:07.752914 master-0 kubenswrapper[4652]: E0216 17:24:07.752496 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:24:07.752914 master-0 kubenswrapper[4652]: E0216 17:24:07.752634 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:24:07.752914 master-0 kubenswrapper[4652]: E0216 17:24:07.752760 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:24:07.752914 master-0 kubenswrapper[4652]: E0216 17:24:07.752861 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:24:07.753393 master-0 kubenswrapper[4652]: E0216 17:24:07.753002 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:24:07.753393 master-0 kubenswrapper[4652]: E0216 17:24:07.753144 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:24:07.753393 master-0 kubenswrapper[4652]: E0216 17:24:07.753348 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:24:07.753596 master-0 kubenswrapper[4652]: E0216 17:24:07.753523 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:24:07.753685 master-0 kubenswrapper[4652]: E0216 17:24:07.753653 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:24:07.753785 master-0 kubenswrapper[4652]: E0216 17:24:07.753753 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:24:07.759189 master-0 kubenswrapper[4652]: I0216 17:24:07.759153 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:07.759303 master-0 kubenswrapper[4652]: I0216 17:24:07.759290 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:24:07.765596 master-0 kubenswrapper[4652]: I0216 17:24:07.765544 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: I0216 17:24:07.814896 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: I0216 17:24:07.815039 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: E0216 17:24:07.815451 4652 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not registered Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: E0216 17:24:07.815469 4652 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: E0216 17:24:07.815480 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: E0216 17:24:07.815524 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.815510757 +0000 UTC m=+9.203679273 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: E0216 17:24:07.815589 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: E0216 17:24:07.815599 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: E0216 17:24:07.815608 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.816276 master-0 kubenswrapper[4652]: E0216 17:24:07.815634 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.81562716 +0000 UTC m=+9.203795676 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.881154 master-0 kubenswrapper[4652]: I0216 17:24:07.881083 4652 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="64747be02f9fa0ca7a7dcd16841ccbd10e763b2bb88c32dc012d8ed46e8ff613" exitCode=0 Feb 16 17:24:07.881154 master-0 kubenswrapper[4652]: I0216 17:24:07.881157 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"64747be02f9fa0ca7a7dcd16841ccbd10e763b2bb88c32dc012d8ed46e8ff613"} Feb 16 17:24:07.883163 master-0 kubenswrapper[4652]: I0216 17:24:07.883119 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/7.log" Feb 16 17:24:07.883798 master-0 kubenswrapper[4652]: I0216 17:24:07.883764 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/6.log" Feb 16 17:24:07.884423 master-0 kubenswrapper[4652]: I0216 17:24:07.884370 4652 generic.go:334] "Generic (PLEG): container finished" podID="702322ac-7610-4568-9a68-b6acbd1f0c12" 
containerID="19be8681c0a46d475538415f31af270ce62d3e7bda1a682c75b5e072b00f3769" exitCode=255 Feb 16 17:24:07.884525 master-0 kubenswrapper[4652]: I0216 17:24:07.884456 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerDied","Data":"19be8681c0a46d475538415f31af270ce62d3e7bda1a682c75b5e072b00f3769"} Feb 16 17:24:07.884596 master-0 kubenswrapper[4652]: I0216 17:24:07.884555 4652 scope.go:117] "RemoveContainer" containerID="f75975b9b63160b4c431de4bb563ca9301a34de5ce610dce1d3a3bd1522eead5" Feb 16 17:24:07.884868 master-0 kubenswrapper[4652]: I0216 17:24:07.884828 4652 scope.go:117] "RemoveContainer" containerID="19be8681c0a46d475538415f31af270ce62d3e7bda1a682c75b5e072b00f3769" Feb 16 17:24:07.885656 master-0 kubenswrapper[4652]: E0216 17:24:07.885052 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-approver-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=machine-approver-controller pod=machine-approver-8569dd85ff-4vxmz_openshift-cluster-machine-approver(702322ac-7610-4568-9a68-b6acbd1f0c12)\"" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" podUID="702322ac-7610-4568-9a68-b6acbd1f0c12" Feb 16 17:24:07.901909 master-0 kubenswrapper[4652]: E0216 17:24:07.900468 4652 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 17:24:07.919044 master-0 kubenswrapper[4652]: I0216 17:24:07.918992 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:07.919581 master-0 kubenswrapper[4652]: I0216 17:24:07.919516 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:07.919716 master-0 kubenswrapper[4652]: E0216 17:24:07.919687 4652 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:24:07.919716 master-0 kubenswrapper[4652]: E0216 17:24:07.919713 4652 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.919842 master-0 kubenswrapper[4652]: E0216 17:24:07.919724 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.919842 master-0 kubenswrapper[4652]: E0216 17:24:07.919761 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object 
"openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:07.919842 master-0 kubenswrapper[4652]: I0216 17:24:07.919691 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:07.919967 master-0 kubenswrapper[4652]: E0216 17:24:07.919821 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.919967 master-0 kubenswrapper[4652]: E0216 17:24:07.919906 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:24:07.919967 master-0 kubenswrapper[4652]: E0216 17:24:07.919964 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.920100 master-0 kubenswrapper[4652]: E0216 17:24:07.919968 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920100 master-0 kubenswrapper[4652]: E0216 17:24:07.919987 4652 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920100 master-0 kubenswrapper[4652]: E0216 17:24:07.919790 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.919776955 +0000 UTC m=+9.307945471 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920275 master-0 kubenswrapper[4652]: E0216 17:24:07.920136 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.920080803 +0000 UTC m=+9.308249389 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920339 master-0 kubenswrapper[4652]: E0216 17:24:07.920303 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.920276719 +0000 UTC m=+9.308445235 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920434 master-0 kubenswrapper[4652]: I0216 17:24:07.920392 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:07.920492 master-0 kubenswrapper[4652]: I0216 17:24:07.920471 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:07.920543 master-0 kubenswrapper[4652]: E0216 17:24:07.920504 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:07.920543 master-0 kubenswrapper[4652]: E0216 17:24:07.920519 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.920543 master-0 kubenswrapper[4652]: I0216 17:24:07.920515 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:07.920543 master-0 kubenswrapper[4652]: E0216 17:24:07.920530 4652 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920709 master-0 
kubenswrapper[4652]: E0216 17:24:07.920564 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.920557496 +0000 UTC m=+9.308726012 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920709 master-0 kubenswrapper[4652]: E0216 17:24:07.920665 4652 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:24:07.920709 master-0 kubenswrapper[4652]: E0216 17:24:07.920681 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.920709 master-0 kubenswrapper[4652]: E0216 17:24:07.920693 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920875 master-0 kubenswrapper[4652]: E0216 17:24:07.920731 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:11.92071831 +0000 UTC m=+9.308886836 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920875 master-0 kubenswrapper[4652]: E0216 17:24:07.920864 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:07.920965 master-0 kubenswrapper[4652]: E0216 17:24:07.920886 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:07.920965 master-0 kubenswrapper[4652]: E0216 17:24:07.920898 4652 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:07.920965 master-0 kubenswrapper[4652]: E0216 17:24:07.920963 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. 
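The kube-api-access-* volumes in the MountVolume entries above are projected service-account volumes: the kubelet assembles each one from the pod's service-account token plus the kube-root-ca.crt ConfigMap and, on OpenShift, the openshift-service-ca.crt ConfigMap from the pod's own namespace. The repeated `object "<ns>"/"kube-root-ca.crt" not registered` errors come from the kubelet's ConfigMap cache, which has not yet registered those objects for the pods being started; during node bootstrap, before the API server and network are fully up, this is expected and clears on a later sync. A minimal client-go sketch to confirm the source ConfigMaps exist once the cluster is reachable; the kubeconfig path is an assumption, and openshift-catalogd is just one of the namespaces named above:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the environment at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // The two ConfigMaps a kube-api-access volume projects from the
        // pod's namespace. kube-root-ca.crt is published into every
        // namespace by kube-controller-manager; openshift-service-ca.crt
        // by the service-ca operator.
        for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            _, err := cs.CoreV1().ConfigMaps("openshift-catalogd").Get(context.TODO(), name, metav1.GetOptions{})
            fmt.Printf("%s: err=%v\n", name, err)
        }
    }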
No retries permitted until 2026-02-16 17:24:11.920937176 +0000 UTC m=+9.309105692 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.025052 master-0 kubenswrapper[4652]: I0216 17:24:08.025005 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:08.025202 master-0 kubenswrapper[4652]: E0216 17:24:08.025184 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:08.025237 master-0 kubenswrapper[4652]: E0216 17:24:08.025207 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.025237 master-0 kubenswrapper[4652]: E0216 17:24:08.025219 4652 projected.go:194] Error preparing data for projected volume kube-api-access-st6bv for pod openshift-console/console-599b567ff7-nrcpr: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.025314 master-0 kubenswrapper[4652]: E0216 17:24:08.025295 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.025273797 +0000 UTC m=+9.413442353 (durationBeforeRetry 4s). 
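Each failed mount is parked by the volume manager with an exponential per-operation backoff. The retry line encodes the same deadline twice, as wall-clock time and as `m=+9.41…`, the monotonic offset in seconds since the kubelet process started, with the delay itself in `durationBeforeRetry 4s`. The observed 4s is consistent with a delay that doubles on each failure from a sub-second start; the initial value and cap in this sketch are illustrative assumptions, since only the 4s step is visible in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Doubling backoff of the kind nestedpendingoperations applies to
        // a failing MountVolume operation. The 500ms start and 2m cap are
        // assumed; under them, the logged 4s is the fourth failure.
        delay := 500 * time.Millisecond
        maxDelay := 2 * time.Minute
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }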
Error: MountVolume.SetUp failed for volume "kube-api-access-st6bv" (UniqueName: "kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.025495 master-0 kubenswrapper[4652]: I0216 17:24:08.025465 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:08.025690 master-0 kubenswrapper[4652]: E0216 17:24:08.025665 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:08.025727 master-0 kubenswrapper[4652]: E0216 17:24:08.025694 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.025727 master-0 kubenswrapper[4652]: E0216 17:24:08.025709 4652 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.025869 master-0 kubenswrapper[4652]: E0216 17:24:08.025851 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.025833471 +0000 UTC m=+9.414001997 (durationBeforeRetry 4s). 
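A note on reading these entries: each line carries two timestamps, the journald receive time (`Feb 16 17:24:08.025052`) and the klog header the kubelet wrote itself (`E0216 17:24:08.025295 4652 nestedpendingoperations.go:348]`). The header packs the severity (I, W, E, or F), the date as MMDD, the wall-clock time, the PID, and the emitting source file and line. A small sketch that pulls those fields apart; the regular expression assumes exactly the spacing seen in this log:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Header copied from an entry above; the message is truncated.
        line := `E0216 17:24:08.025295 4652 nestedpendingoperations.go:348] Operation for ...`
        re := regexp.MustCompile(`^([IWEF])(\d{4}) (\S+)\s+(\d+) ([\w.]+:\d+)\]`)
        if m := re.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s\n",
                m[1], m[2], m[3], m[4], m[5])
        }
    }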
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.082217 master-0 kubenswrapper[4652]: I0216 17:24:08.082145 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:08.082217 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:08.082217 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:08.082217 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:08.082648 master-0 kubenswrapper[4652]: I0216 17:24:08.082219 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:08.128647 master-0 kubenswrapper[4652]: I0216 17:24:08.128555 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Feb 16 17:24:08.130283 master-0 kubenswrapper[4652]: I0216 17:24:08.130172 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:08.130450 master-0 kubenswrapper[4652]: E0216 17:24:08.130416 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:08.130532 master-0 kubenswrapper[4652]: E0216 17:24:08.130453 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.130532 master-0 kubenswrapper[4652]: E0216 17:24:08.130472 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.130688 master-0 kubenswrapper[4652]: E0216 17:24:08.130659 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.130634984 +0000 UTC m=+9.518803530 (durationBeforeRetry 4s). 
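The router-default startup probe above fails in the standard healthz format: one `[+]`/`[-]` line per registered check with reasons withheld, a closing `healthz check failed`, and HTTP status 500, which the kubelet prober records as a failure along with the start of the response body. A toy handler that produces the same shape; the check names mirror the log, but the pass/fail values and the port are invented:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // Check results are hard-coded; note a Go map ranges in random
        // order, whereas a real healthz endpoint prints deterministically.
        checks := map[string]bool{
            "backend-http":    false,
            "has-synced":      false,
            "process-running": true,
        }
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for name, ok := range checks {
                if ok {
                    body += fmt.Sprintf("[+]%s ok\n", name)
                } else {
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", name)
                    failed = true
                }
            }
            if failed {
                body += "healthz check failed\n"
                w.WriteHeader(http.StatusInternalServerError) // the "statuscode: 500" above
            }
            fmt.Fprint(w, body)
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Because this is a startup probe, the kubelet keeps retrying without restarting the container; the adjacent etcd-master-0 entries show the other outcome, a startup probe flipping from "unhealthy" to "started" within about 14ms once its check passes.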
Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.142540 master-0 kubenswrapper[4652]: I0216 17:24:08.142482 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Feb 16 17:24:08.232908 master-0 kubenswrapper[4652]: I0216 17:24:08.232822 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:08.233181 master-0 kubenswrapper[4652]: E0216 17:24:08.233057 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:08.233181 master-0 kubenswrapper[4652]: E0216 17:24:08.233117 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.233181 master-0 kubenswrapper[4652]: E0216 17:24:08.233138 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.233181 master-0 kubenswrapper[4652]: I0216 17:24:08.233141 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:08.233497 master-0 kubenswrapper[4652]: E0216 17:24:08.233219 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.233196288 +0000 UTC m=+9.621364834 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.233497 master-0 kubenswrapper[4652]: E0216 17:24:08.233359 4652 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:08.233497 master-0 kubenswrapper[4652]: E0216 17:24:08.233394 4652 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.233497 master-0 kubenswrapper[4652]: E0216 17:24:08.233415 4652 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.233497 master-0 kubenswrapper[4652]: I0216 17:24:08.233452 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:08.233824 master-0 kubenswrapper[4652]: E0216 17:24:08.233490 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.233464965 +0000 UTC m=+9.621633521 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.233824 master-0 kubenswrapper[4652]: E0216 17:24:08.233579 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:24:08.233824 master-0 kubenswrapper[4652]: E0216 17:24:08.233598 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.233824 master-0 kubenswrapper[4652]: E0216 17:24:08.233613 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.233824 master-0 kubenswrapper[4652]: I0216 17:24:08.233679 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:08.233824 master-0 kubenswrapper[4652]: E0216 17:24:08.233752 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.233725222 +0000 UTC m=+9.621893778 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.233824 master-0 kubenswrapper[4652]: E0216 17:24:08.233782 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:08.233824 master-0 kubenswrapper[4652]: E0216 17:24:08.233805 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.233824 master-0 kubenswrapper[4652]: E0216 17:24:08.233819 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.234504 master-0 kubenswrapper[4652]: E0216 17:24:08.233861 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.233847485 +0000 UTC m=+9.622016031 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.342238 master-0 kubenswrapper[4652]: I0216 17:24:08.341965 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:08.342643 master-0 kubenswrapper[4652]: E0216 17:24:08.342271 4652 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Feb 16 17:24:08.342643 master-0 kubenswrapper[4652]: E0216 17:24:08.342316 4652 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.342643 master-0 kubenswrapper[4652]: E0216 17:24:08.342337 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7mrkc for pod openshift-authentication/oauth-openshift-64f85b8fc9-n9msn: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.342643 master-0 kubenswrapper[4652]: E0216 17:24:08.342416 4652 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.342395757 +0000 UTC m=+9.730564293 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-7mrkc" (UniqueName: "kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.342643 master-0 kubenswrapper[4652]: I0216 17:24:08.342470 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:08.343148 master-0 kubenswrapper[4652]: E0216 17:24:08.342743 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:08.343148 master-0 kubenswrapper[4652]: E0216 17:24:08.342773 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.343148 master-0 kubenswrapper[4652]: E0216 17:24:08.342793 4652 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.343148 master-0 kubenswrapper[4652]: E0216 17:24:08.342954 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.342942372 +0000 UTC m=+9.731110898 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.447024 master-0 kubenswrapper[4652]: I0216 17:24:08.446884 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:08.447297 master-0 kubenswrapper[4652]: E0216 17:24:08.447165 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:08.447297 master-0 kubenswrapper[4652]: E0216 17:24:08.447211 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.447297 master-0 kubenswrapper[4652]: E0216 17:24:08.447233 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.447405 master-0 kubenswrapper[4652]: I0216 17:24:08.447290 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:08.447440 master-0 kubenswrapper[4652]: E0216 17:24:08.447399 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.447369705 +0000 UTC m=+9.835538261 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.447484 master-0 kubenswrapper[4652]: E0216 17:24:08.447438 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:08.447484 master-0 kubenswrapper[4652]: E0216 17:24:08.447465 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.447484 master-0 kubenswrapper[4652]: E0216 17:24:08.447483 4652 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.447611 master-0 kubenswrapper[4652]: E0216 17:24:08.447547 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.447521629 +0000 UTC m=+9.835690185 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.451693 master-0 kubenswrapper[4652]: I0216 17:24:08.451639 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:08.451875 master-0 kubenswrapper[4652]: I0216 17:24:08.451839 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:08.452036 master-0 kubenswrapper[4652]: E0216 17:24:08.452000 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:08.452083 master-0 kubenswrapper[4652]: E0216 17:24:08.452038 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.452083 master-0 kubenswrapper[4652]: E0216 17:24:08.452057 4652 
projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.452225 master-0 kubenswrapper[4652]: E0216 17:24:08.452120 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.45210075 +0000 UTC m=+9.840269306 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.452225 master-0 kubenswrapper[4652]: E0216 17:24:08.451889 4652 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:08.452225 master-0 kubenswrapper[4652]: E0216 17:24:08.452168 4652 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.452225 master-0 kubenswrapper[4652]: E0216 17:24:08.452183 4652 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.452506 master-0 kubenswrapper[4652]: E0216 17:24:08.452231 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.452217083 +0000 UTC m=+9.840385639 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.555090 master-0 kubenswrapper[4652]: I0216 17:24:08.554977 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:08.555364 master-0 kubenswrapper[4652]: I0216 17:24:08.555178 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:08.555364 master-0 kubenswrapper[4652]: I0216 17:24:08.555242 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:08.555364 master-0 kubenswrapper[4652]: I0216 17:24:08.555287 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:08.555364 master-0 kubenswrapper[4652]: I0216 17:24:08.555365 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:08.555892 master-0 kubenswrapper[4652]: E0216 17:24:08.555833 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:08.555892 master-0 kubenswrapper[4652]: E0216 17:24:08.555858 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.555892 master-0 kubenswrapper[4652]: E0216 17:24:08.555870 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object 
"openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.556147 master-0 kubenswrapper[4652]: E0216 17:24:08.555915 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.555902617 +0000 UTC m=+9.944071133 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.556147 master-0 kubenswrapper[4652]: E0216 17:24:08.556038 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:08.556147 master-0 kubenswrapper[4652]: E0216 17:24:08.556090 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.556147 master-0 kubenswrapper[4652]: E0216 17:24:08.556111 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.556458 master-0 kubenswrapper[4652]: E0216 17:24:08.556187 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.556159773 +0000 UTC m=+9.944328319 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.556458 master-0 kubenswrapper[4652]: E0216 17:24:08.556199 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:08.556458 master-0 kubenswrapper[4652]: E0216 17:24:08.556234 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.556458 master-0 kubenswrapper[4652]: E0216 17:24:08.556288 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.556458 master-0 kubenswrapper[4652]: E0216 17:24:08.556395 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:08.556458 master-0 kubenswrapper[4652]: E0216 17:24:08.556417 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.556458 master-0 kubenswrapper[4652]: E0216 17:24:08.556432 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.556915 master-0 kubenswrapper[4652]: E0216 17:24:08.556481 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.556460191 +0000 UTC m=+9.944628737 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.556915 master-0 kubenswrapper[4652]: E0216 17:24:08.556591 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:08.556915 master-0 kubenswrapper[4652]: E0216 17:24:08.556694 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.556915 master-0 kubenswrapper[4652]: E0216 17:24:08.556711 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.556915 master-0 kubenswrapper[4652]: E0216 17:24:08.556647 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.556631286 +0000 UTC m=+9.944799832 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.557543 master-0 kubenswrapper[4652]: E0216 17:24:08.557492 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.557462178 +0000 UTC m=+9.945630734 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.762659 master-0 kubenswrapper[4652]: I0216 17:24:08.762574 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:08.762944 master-0 kubenswrapper[4652]: E0216 17:24:08.762852 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:24:08.762944 master-0 kubenswrapper[4652]: E0216 17:24:08.762897 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:08.762944 master-0 kubenswrapper[4652]: E0216 17:24:08.762914 4652 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.763152 master-0 kubenswrapper[4652]: E0216 17:24:08.763002 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:12.762976925 +0000 UTC m=+10.151145541 (durationBeforeRetry 4s). 
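The machine-approver entries at 17:24:07.884 above (and again at 17:24:08.891 just below) show the crash-restart path: PLEG reports ContainerDied with exitCode=255, the previous dead container is pruned (RemoveContainer), and the next restart is deferred with CrashLoopBackOff, currently "back-off 10s". Upstream kubelet defaults double that delay on every subsequent crash up to a five-minute cap; treat the constants in this sketch as assumptions, since only the 10s step is confirmed by the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Restart-delay growth for a crash-looping container under the
        // assumed defaults: 10s base, doubling, capped at 5 minutes.
        delay := 10 * time.Second
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, delay)
            delay *= 2
            if delay > 5*time.Minute {
                delay = 5 * time.Minute
            }
        }
    }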
Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:08.890236 master-0 kubenswrapper[4652]: I0216 17:24:08.890190 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/7.log" Feb 16 17:24:08.891516 master-0 kubenswrapper[4652]: I0216 17:24:08.891328 4652 scope.go:117] "RemoveContainer" containerID="19be8681c0a46d475538415f31af270ce62d3e7bda1a682c75b5e072b00f3769" Feb 16 17:24:08.891598 master-0 kubenswrapper[4652]: E0216 17:24:08.891527 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-approver-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=machine-approver-controller pod=machine-approver-8569dd85ff-4vxmz_openshift-cluster-machine-approver(702322ac-7610-4568-9a68-b6acbd1f0c12)\"" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" podUID="702322ac-7610-4568-9a68-b6acbd1f0c12" Feb 16 17:24:09.056165 master-0 kubenswrapper[4652]: I0216 17:24:09.056109 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Feb 16 17:24:09.083154 master-0 kubenswrapper[4652]: I0216 17:24:09.083098 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:09.083154 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:09.083154 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:09.083154 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:09.083502 master-0 kubenswrapper[4652]: I0216 17:24:09.083182 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:09.197353 master-0 kubenswrapper[4652]: I0216 17:24:09.197307 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:09.202449 master-0 kubenswrapper[4652]: I0216 17:24:09.202421 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:09.745178 master-0 kubenswrapper[4652]: I0216 17:24:09.745102 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:09.745533 master-0 kubenswrapper[4652]: E0216 17:24:09.745410 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
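Every "Container runtime network not ready" and "network is not ready" error in this section traces back to one condition: nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, which is expected while the cluster network operator's own pods are still being scheduled. Conceptually the runtime just watches that directory for a usable config file, as in this stdlib sketch; the directory comes from the log, and the extension list follows libcni's conventions:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // NetworkReady stays false until a config file appears here.
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("NetworkReady=false:", err)
            return
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("found CNI config:", e.Name())
                return
            }
        }
        fmt.Println("NetworkReady=false: no CNI configuration file in", dir)
    }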
pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:24:09.745660 master-0 kubenswrapper[4652]: I0216 17:24:09.745637 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:09.745784 master-0 kubenswrapper[4652]: I0216 17:24:09.745661 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:09.745784 master-0 kubenswrapper[4652]: I0216 17:24:09.745720 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745775 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745826 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745844 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745845 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745877 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745784 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745734 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745916 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745922 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745857 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745856 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: I0216 17:24:09.745672 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:09.745945 master-0 kubenswrapper[4652]: E0216 17:24:09.745842 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745967 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745989 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745908 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746044 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746058 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745934 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746114 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745943 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745958 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746165 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745850 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746191 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745975 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: E0216 17:24:09.746190 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746232 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746239 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746287 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746150 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746408 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745672 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746461 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746503 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746511 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746235 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746540 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746548 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746290 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746569 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746583 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746597 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746313 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746346 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746356 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746359 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: E0216 17:24:09.746691 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746373 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746375 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746382 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745880 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746419 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746215 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746480 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745906 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746506 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746545 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745945 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745916 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: E0216 17:24:09.746882 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746295 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746310 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.745967 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746325 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: E0216 17:24:09.746489 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:24:09.746943 master-0 kubenswrapper[4652]: I0216 17:24:09.746547 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.747046 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.747172 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.747354 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.747543 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.747756 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.747826 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.747970 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.748101 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.748240 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.748584 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.748659 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.748865 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.749086 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.749233 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.749450 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.749638 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.749805 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.749927 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.750039 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.750185 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.750317 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.750428 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.750547 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.750663 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.750756 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.750980 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.751169 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:24:09.751519 master-0 kubenswrapper[4652]: E0216 17:24:09.751390 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.751633 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.751760 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.751920 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.752125 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.752332 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.752467 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.752667 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.752804 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.752920 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.753029 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.753137 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:24:09.753846 master-0 kubenswrapper[4652]: E0216 17:24:09.753826 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.754096 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.754202 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.754341 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.754482 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.754587 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.754677 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.754782 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.754935 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.755101 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.755181 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.755405 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.755495 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.755612 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:24:09.755796 master-0 kubenswrapper[4652]: E0216 17:24:09.755760 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:24:09.756836 master-0 kubenswrapper[4652]: E0216 17:24:09.755818 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:24:09.756836 master-0 kubenswrapper[4652]: E0216 17:24:09.755981 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:24:09.756836 master-0 kubenswrapper[4652]: E0216 17:24:09.756131 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:24:09.756836 master-0 kubenswrapper[4652]: E0216 17:24:09.756330 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:24:09.756836 master-0 kubenswrapper[4652]: E0216 17:24:09.756392 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:24:09.756836 master-0 kubenswrapper[4652]: E0216 17:24:09.756482 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:24:09.897450 master-0 kubenswrapper[4652]: I0216 17:24:09.897312 4652 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="52e86c3061fde579411dfde313159dd26166bd780d57a6b18822b49652accbe2" exitCode=0 Feb 16 17:24:09.897450 master-0 kubenswrapper[4652]: I0216 17:24:09.897449 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"52e86c3061fde579411dfde313159dd26166bd780d57a6b18822b49652accbe2"} Feb 16 17:24:09.902757 master-0 kubenswrapper[4652]: I0216 17:24:09.902635 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"1177c0dc9aeae863ced472551b34358e762dcbc0a2024f5afffab79bf3ec6e90"} Feb 16 17:24:09.911276 master-0 kubenswrapper[4652]: I0216 17:24:09.911197 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:10.082625 master-0 kubenswrapper[4652]: I0216 17:24:10.082533 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:10.082625 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:10.082625 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:10.082625 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:10.082894 master-0 kubenswrapper[4652]: I0216 17:24:10.082686 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:10.314232 master-0 kubenswrapper[4652]: I0216 17:24:10.314152 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:10.318791 master-0 kubenswrapper[4652]: I0216 17:24:10.318744 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:10.908487 master-0 kubenswrapper[4652]: I0216 17:24:10.908423 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"65bfe95ef481524aa4f6ee3acb1220cfe5f051cf396aceda0db2f0191c2009f8"} Feb 16 17:24:10.909884 master-0 kubenswrapper[4652]: I0216 17:24:10.909834 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-czzz2" event={"ID":"b3fa6ac1-781f-446c-b6b4-18bdb7723c23","Type":"ContainerStarted","Data":"8f935be76793ef1cb7d5998eb69159eaecab475987bb7deee5bffd19c6ef7c00"} Feb 16 17:24:10.918612 master-0 kubenswrapper[4652]: I0216 17:24:10.918557 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:24:11.083275 master-0 kubenswrapper[4652]: I0216 
17:24:11.083107 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:11.083275 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:11.083275 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:11.083275 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:11.083275 master-0 kubenswrapper[4652]: I0216 17:24:11.083216 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:11.484947 master-0 kubenswrapper[4652]: I0216 17:24:11.484895 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:11.485050 master-0 kubenswrapper[4652]: E0216 17:24:11.485021 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered
Feb 16 17:24:11.485126 master-0 kubenswrapper[4652]: E0216 17:24:11.485097 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485076704 +0000 UTC m=+16.873245220 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-session" not registered
Feb 16 17:24:11.485126 master-0 kubenswrapper[4652]: I0216 17:24:11.485073 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: E0216 17:24:11.485144 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered
Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: E0216 17:24:11.485190 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485176926 +0000 UTC m=+16.873345442 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: I0216 17:24:11.485147 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: E0216 17:24:11.485200 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: E0216 17:24:11.485233 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485223217 +0000 UTC m=+16.873391733 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: I0216 17:24:11.485228 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: I0216 17:24:11.485287 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: E0216 17:24:11.485304 4652 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: I0216 17:24:11.485336 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:11.485406 master-0 kubenswrapper[4652]: E0216 17:24:11.485354 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert 
podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485348571 +0000 UTC m=+16.873517077 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered
Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485423 4652 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485443 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: I0216 17:24:11.485459 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485482 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485468624 +0000 UTC m=+16.873637150 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered
Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: I0216 17:24:11.485507 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485512 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered
Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485548 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485540056 +0000 UTC m=+16.873708582 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485585 4652 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485645 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485630658 +0000 UTC m=+16.873799174 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: I0216 17:24:11.485680 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: I0216 17:24:11.485710 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485730 4652 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485757 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485749441 +0000 UTC m=+16.873917957 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: I0216 17:24:11.485774 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485780 4652 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: I0216 17:24:11.485795 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485803 4652 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485803 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485797403 +0000 UTC m=+16.873965919 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485836 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485847 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485837094 +0000 UTC m=+16.874005620 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485862 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485853174 +0000 UTC m=+16.874021810 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485880 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: I0216 17:24:11.485908 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:11.485896 master-0 kubenswrapper[4652]: E0216 17:24:11.485921 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485909936 +0000 UTC m=+16.874078552 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.485952 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.485983 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.485975047 +0000 UTC m=+16.874143653 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.485952 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486003 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486024 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486036 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486027099 +0000 UTC m=+16.874195695 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486060 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486086 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486100 4652 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486115 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486142 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486131292 +0000 UTC m=+16.874299898 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486213 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486286 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486308 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486292216 +0000 UTC m=+16.874460762 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486343 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486356 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486359 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486313 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486389 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486381668 +0000 UTC m=+16.874550184 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486388 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486404 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486397839 +0000 UTC m=+16.874566345 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486420 4652 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486428 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486440 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.48643306 +0000 UTC m=+16.874601576 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486491 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486511 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486499901 +0000 UTC m=+16.874668447 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486533 4652 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486488 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486535 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486525632 +0000 UTC m=+16.874694178 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486512 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486577 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486571163 +0000 UTC m=+16.874739679 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486575 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486610 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486599594 +0000 UTC m=+16.874768140 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486614 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486656 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486646395 +0000 UTC m=+16.874814951 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486688 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486730 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486772 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486805 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486808 4652 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486831 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.48682452 +0000 UTC m=+16.874993036 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486838 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486844 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.48683854 +0000 UTC m=+16.875007056 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486878 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486900 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486924 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486933 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486946 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486939273 +0000 UTC m=+16.875107789 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.486963 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486981 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.486966184 +0000 UTC m=+16.875134730 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.487009 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.487030 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487024375 +0000 UTC m=+16.875192891 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.487035 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.487050 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.486981 4652 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.487080 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487074337 +0000 UTC m=+16.875242853 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.487076 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.487094 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487087657 +0000 UTC m=+16.875256173 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.487134 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.487143 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.487160 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.487167 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: E0216 17:24:11.487181 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487170129 +0000 UTC m=+16.875338685 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:11.487127 master-0 kubenswrapper[4652]: I0216 17:24:11.487212 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.487287 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487219 4652 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487371 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487358374 +0000 UTC m=+16.875526990 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487388 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487379955 +0000 UTC m=+16.875548601 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487418 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487465 4652 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487475 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487458967 +0000 UTC m=+16.875627523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487235 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487499 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487503 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487487258 +0000 UTC m=+16.875655864 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487322 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487525 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.487422 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487535 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487511 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487659 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487649052 +0000 UTC m=+16.875817668 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.487867 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.487845037 +0000 UTC m=+16.876013593 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.489602 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.489668 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.489727 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.489764 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489730 4652 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489805 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.489797629 +0000 UTC m=+16.877966145 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489751 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.489822 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489773 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.489858 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489867 4652 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489898 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489870 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.489861311 +0000 UTC m=+16.878029827 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489923 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489940 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.489934573 +0000 UTC m=+16.878103089 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.489974 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.489993 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.489988174 +0000 UTC m=+16.878156690 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490005 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490000754 +0000 UTC m=+16.878169270 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490017 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490012315 +0000 UTC m=+16.878180831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490030 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490064 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490057026 +0000 UTC m=+16.878225542 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490087 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490119 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490149 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490163 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490188 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490179809 +0000 UTC m=+16.878348325 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490202 4652 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490210 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490230 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.49022464 +0000 UTC m=+16.878393156 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490267 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490284 4652 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490291 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490190 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490314 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490305192 +0000 UTC m=+16.878473788 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490331 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490324073 +0000 UTC m=+16.878492719 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490347 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490356 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490366 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490376 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490393 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490385334 +0000 UTC m=+16.878553850 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490496 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490541 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490524768 +0000 UTC m=+16.878693314 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490558 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490593 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.49058575 +0000 UTC m=+16.878754266 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490616 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490640 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490658 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490663 4652 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490678 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490689 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls 
podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490683492 +0000 UTC m=+16.878852008 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490706 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490732 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490751 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490767 4652 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: I0216 17:24:11.490785 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490787 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490809 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490811 4652 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:24:11.490753 master-0 kubenswrapper[4652]: E0216 17:24:11.490831 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490868 4652 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object 
"openshift-console"/"service-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490832 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490824706 +0000 UTC m=+16.878993222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490873 4652 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490900 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490886628 +0000 UTC m=+16.879055184 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490900 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490922 4652 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490924 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.490912419 +0000 UTC m=+16.879080975 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490959 4652 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.490966 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490977 4652 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.490986 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.49097814 +0000 UTC m=+16.879146736 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491009 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491038 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491065 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491081 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491068933 +0000 UTC m=+16.879237449 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491095 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491064 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491106 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491100 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491094873 +0000 UTC m=+16.879263389 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491142 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491134314 +0000 UTC m=+16.879302930 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"service-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491157 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491149695 +0000 UTC m=+16.879318311 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491176 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0 podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491167125 +0000 UTC m=+16.879335741 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491196 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491241 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491269 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491260678 +0000 UTC m=+16.879429194 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491284 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491279518 +0000 UTC m=+16.879448034 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491302 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491320 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491334 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491351 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.4913433 +0000 UTC m=+16.879511866 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491363 4652 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491360 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491393 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491380851 +0000 UTC m=+16.879549467 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"audit" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491411 4652 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491429 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491453 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491421392 +0000 UTC m=+16.879589958 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491496 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491479934 +0000 UTC m=+16.879648510 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491372 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491527 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491513434 +0000 UTC m=+16.879682010 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491654 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491757 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491834 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491847 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491900 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.491942 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491980 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.491967836 +0000 UTC m=+16.880136352 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.491996 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:19.491990127 +0000 UTC m=+16.880158643 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492019 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492050 4652 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492053 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492087 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492074 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492119 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.49209562 +0000 UTC m=+16.880264186 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492129 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492163 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.492152841 +0000 UTC m=+16.880321437 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492166 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492188 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.492179132 +0000 UTC m=+16.880347648 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492203 4652 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492243 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492294 4652 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492317 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.492293185 +0000 UTC m=+16.880461741 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492357 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.492340896 +0000 UTC m=+16.880509512 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492405 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492556 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.492547432 +0000 UTC m=+16.880715948 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492551 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492564 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492599 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492625 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492644 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.492638014 +0000 UTC m=+16.880806530 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492643 4652 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492687 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492714 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.492694946 +0000 UTC m=+16.880863502 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492727 4652 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492746 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.492729827 +0000 UTC m=+16.880898383 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.492780 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.492761748 +0000 UTC m=+16.880930314 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492839 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: I0216 17:24:11.492903 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.493009 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.493053 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.493092 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.493069886 +0000 UTC m=+16.881238442 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:11.495148 master-0 kubenswrapper[4652]: E0216 17:24:11.493127 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.493107327 +0000 UTC m=+16.881275883 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:11.595654 master-0 kubenswrapper[4652]: I0216 17:24:11.595617 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:11.595734 master-0 kubenswrapper[4652]: I0216 17:24:11.595700 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:11.595893 master-0 kubenswrapper[4652]: I0216 17:24:11.595862 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:11.595941 master-0 kubenswrapper[4652]: I0216 17:24:11.595907 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:11.596043 master-0 kubenswrapper[4652]: E0216 17:24:11.596014 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:11.596043 master-0 kubenswrapper[4652]: E0216 17:24:11.596026 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:11.596159 master-0 kubenswrapper[4652]: E0216 17:24:11.596074 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.59605523 +0000 UTC m=+16.984223756 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:11.596159 master-0 kubenswrapper[4652]: E0216 17:24:11.596100 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.596081151 +0000 UTC m=+16.984249707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:11.596159 master-0 kubenswrapper[4652]: E0216 17:24:11.596098 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:11.596159 master-0 kubenswrapper[4652]: E0216 17:24:11.596149 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.596440 master-0 kubenswrapper[4652]: E0216 17:24:11.596172 4652 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.596440 master-0 kubenswrapper[4652]: E0216 17:24:11.596188 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:11.596440 master-0 kubenswrapper[4652]: E0216 17:24:11.596224 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:11.596440 master-0 kubenswrapper[4652]: I0216 17:24:11.596145 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.596440 master-0 kubenswrapper[4652]: E0216 17:24:11.596294 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.596226175 +0000 UTC m=+16.984394731 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.596440 master-0 kubenswrapper[4652]: I0216 17:24:11.596386 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:11.596440 master-0 kubenswrapper[4652]: E0216 17:24:11.596408 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.596338128 +0000 UTC m=+16.984506704 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: I0216 17:24:11.596456 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: E0216 17:24:11.596464 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.596447901 +0000 UTC m=+16.984616457 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: E0216 17:24:11.596546 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: E0216 17:24:11.596582 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: I0216 17:24:11.596599 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: E0216 17:24:11.596632 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: I0216 17:24:11.596646 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: E0216 17:24:11.596665 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: E0216 17:24:11.596588 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: E0216 17:24:11.596727 4652 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:11.596732 master-0 kubenswrapper[4652]: E0216 17:24:11.596728 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.596747 4652 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.596714158 +0000 UTC m=+16.984882714 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.596781 4652 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.596782 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.596768649 +0000 UTC m=+16.984937205 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: I0216 17:24:11.596827 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.596864 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: I0216 17:24:11.596882 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.596899 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.596886452 +0000 UTC m=+16.985055048 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.596982 4652 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: I0216 17:24:11.596987 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: I0216 17:24:11.597034 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.597071 4652 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: I0216 17:24:11.597076 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.597117 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.597100258 +0000 UTC m=+16.985268814 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.597141 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.597130549 +0000 UTC m=+16.985299095 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.597159 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: I0216 17:24:11.597176 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.597204 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.59718561 +0000 UTC m=+16.985354126 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: I0216 17:24:11.597229 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.597269 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:11.597279 master-0 kubenswrapper[4652]: E0216 17:24:11.597286 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.597269443 +0000 UTC m=+16.985437969 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597314 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597326 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.597308034 +0000 UTC m=+16.985476570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597340 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.597363 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.597407 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597373 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597494 4652 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597502 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597512 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object 
"openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597443 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.597419567 +0000 UTC m=+16.985588113 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597564 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.59755153 +0000 UTC m=+16.985720056 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597587 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.597575301 +0000 UTC m=+16.985743927 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.597631 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597669 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.597654123 +0000 UTC m=+16.985822779 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597684 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.597677163 +0000 UTC m=+16.985845689 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597747 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597777 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597795 4652 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.597800 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.597832 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597866 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.597842128 +0000 UTC m=+16.986010704 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597893 4652 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.597928 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.59791689 +0000 UTC m=+16.986085506 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.597923 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.597983 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598008 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.598025 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.598081 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.598130 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598142 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598166 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.598183 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598204 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598278 4652 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598302 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598213 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.598199297 +0000 UTC m=+16.986367833 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598312 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598353 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.598326271 +0000 UTC m=+16.986494837 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598214 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: I0216 17:24:11.598400 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598428 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.598408863 +0000 UTC m=+16.986577499 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598458 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.598440064 +0000 UTC m=+16.986608760 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598489 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.598472815 +0000 UTC m=+16.986641451 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598529 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:19.598511326 +0000 UTC m=+16.986679942 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598555 4652 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598614 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.598603 master-0 kubenswrapper[4652]: E0216 17:24:11.598557 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.598543276 +0000 UTC m=+16.986711892 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.598702 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.598778 4652 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.598786 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.598841 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.598818654 +0000 UTC m=+16.986987210 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.598866 4652 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.598871 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.598859375 +0000 UTC m=+16.987027921 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.598927 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.598984 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599026 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.598988 4652 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599071 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599078 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert 
podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.5990652 +0000 UTC m=+16.987233726 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599104 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599127 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599151 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599138662 +0000 UTC m=+16.987307318 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599165 4652 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599180 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599184 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599203 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599206 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599196244 +0000 UTC m=+16.987364770 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599235 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599273 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599234965 +0000 UTC m=+16.987403511 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599283 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599310 4652 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599315 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599322 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599309227 +0000 UTC m=+16.987477873 (durationBeforeRetry 8s). 
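A large share of the failing volumes here are kube-api-access-* projected volumes. Each one combines several sources into a single mount: the pod's bound service account token plus, as the bracketed error lists show, the kube-root-ca.crt configmap and (on OpenShift) the openshift-service-ca.crt configmap from the pod's namespace. That is why one pair of unregistered configmaps per namespace is enough to block the API-access volume of every pod in that namespace at once. Since the "not registered" errors are about the kubelet's local cache rather than the API server, the objects themselves almost certainly still exist; if there were any doubt, a client-go lookup settles it. The following is a hypothetical diagnostic sketch run from outside the node, assuming a standard kubeconfig, with the namespace and configmap names taken from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Diagnostic sketch (not part of the kubelet): confirm that the two
        // configmaps every kube-api-access-* projected volume depends on
        // exist in one of the namespaces from the log. Assumes a reachable
        // kubeconfig at the default location.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        ns := "openshift-network-diagnostics"
        for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
            _, err := client.CoreV1().ConfigMaps(ns).Get(context.TODO(), name, metav1.GetOptions{})
            fmt.Printf("%s/%s: err=%v\n", ns, name, err)
        }
    }

A nil err for both names confirms the failures are purely a kubelet-side registration lag, which is the expected transient state this soon after a restart.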
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599108 4652 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599367 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599355248 +0000 UTC m=+16.987523804 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599390 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599379629 +0000 UTC m=+16.987548185 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599407 4652 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599421 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599428 4652 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599460 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599466 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config 
podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599452811 +0000 UTC m=+16.987621337 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-config" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599499 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599489202 +0000 UTC m=+16.987657758 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599525 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599546 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599560 4652 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599574 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599599 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599586664 +0000 UTC m=+16.987755300 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599652 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599657 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599704 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599693507 +0000 UTC m=+16.987862143 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599729 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599747 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599761 4652 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.599799 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.599787169 +0000 UTC m=+16.987955705 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599836 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599872 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599910 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599950 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.599984 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.600019 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.600058 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 
17:24:11.600199 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.600273 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.600313 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.600287683 +0000 UTC m=+16.988456239 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.600413 4652 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.600452 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.600432917 +0000 UTC m=+16.988601563 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: I0216 17:24:11.600409 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.600487 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.600472278 +0000 UTC m=+16.988640834 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:11.600675 master-0 kubenswrapper[4652]: E0216 17:24:11.600525 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.600525 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.600574 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.600568 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.600611 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.600652 4652 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.600690 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.600936 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.600758215 +0000 UTC m=+16.988926741 (durationBeforeRetry 8s). 
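Each failed mount is parked by nestedpendingoperations with "No retries permitted until ...", and every entry in this burst shows durationBeforeRetry 8s. The volume manager backs off exponentially per operation, so a uniform 8s is consistent with each mount being on roughly its fifth consecutive failure since the restart. A short Go sketch of that progression follows; the initial value and cap mirror what upstream Kubernetes uses in its exponentialbackoff package (about 500ms, doubling per failure, capped near 2m2s), but treat the exact constants as assumptions for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Sketch of the exponential backoff the volume manager applies
        // between failed MountVolume attempts. Constants are assumed from
        // upstream Kubernetes' exponentialbackoff package, not read from
        // this cluster.
        const (
            initialDurationBeforeRetry = 500 * time.Millisecond
            maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second
        )
        d := initialDurationBeforeRetry
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("failure %d -> wait %v before retry\n", attempt, d)
            d *= 2
            if d > maxDurationBeforeRetry {
                d = maxDurationBeforeRetry
            }
        }
        // failure 5 prints "wait 8s", matching durationBeforeRetry 8s above.
    }

The backoff is tracked per volume, which is why every operation in this window lands on the same 17:24:19 retry deadline (now plus 8s) even though they fail milliseconds apart.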
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.601033 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601153 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.601226 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601297 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.601266839 +0000 UTC m=+16.989435385 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601335 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.60132302 +0000 UTC m=+16.989491566 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601351 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601357 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.601347471 +0000 UTC m=+16.989516017 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601370 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601384 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.601374292 +0000 UTC m=+16.989542848 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601386 4652 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601409 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.601396142 +0000 UTC m=+16.989564688 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601433 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.601419673 +0000 UTC m=+16.989588269 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601458 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:19.601447664 +0000 UTC m=+16.989616340 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.601508 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.601550 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.601589 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601651 4652 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.601659 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601684 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601659 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601710 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.60169598 +0000 UTC m=+16.989864526 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601747 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.601736311 +0000 UTC m=+16.989904857 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.601798 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601847 4652 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.601868 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601886 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.601876975 +0000 UTC m=+16.990045491 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601914 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601941 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:19.601925756 +0000 UTC m=+16.990094312 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.601958 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.601988 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602023 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.602009118 +0000 UTC m=+16.990177714 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602045 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602111 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.602094721 +0000 UTC m=+16.990263267 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602123 4652 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602164 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.602152032 +0000 UTC m=+16.990320638 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.602216 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602227 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.602205754 +0000 UTC m=+16.990374300 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.602296 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.602340 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.602381 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.602423 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602570 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602589 4652 projected.go:288] Couldn't get configMap 
openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602602 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602643 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.602629755 +0000 UTC m=+16.990798351 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602697 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602769 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.602748988 +0000 UTC m=+16.990917564 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602806 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602858 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602876 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.602857841 +0000 UTC m=+16.991026397 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.602901 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.602889032 +0000 UTC m=+16.991057658 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.602935 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.602975 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.603019 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603047 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.603090 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603119 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.603088317 +0000 UTC m=+16.991256903 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603136 4652 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603193 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.60317943 +0000 UTC m=+16.991347986 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603192 4652 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603242 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.603229811 +0000 UTC m=+16.991398457 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.603316 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603333 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603358 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.603358 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603377 4652 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603424 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.603411196 +0000 UTC m=+16.991579752 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603472 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603500 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603519 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603533 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603548 4652 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603535 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.603515938 +0000 UTC m=+16.991684574 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603588 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.60357614 +0000 UTC m=+16.991744786 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603611 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:19.603600921 +0000 UTC m=+16.991769557 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.603400 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.603675 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.603716 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.603754 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603781 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603804 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.603796936 +0000 UTC m=+16.991965452 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603880 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.603790 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603914 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603884 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.603937 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.603923159 +0000 UTC m=+16.992091715 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.604042 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.604094 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604081913 +0000 UTC m=+16.992250589 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.604114 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604106494 +0000 UTC m=+16.992275150 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.604089 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.604166 4652 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.604202 4652 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.604166 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.604224 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604210937 +0000 UTC m=+16.992379553 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.604166 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: E0216 17:24:11.604287 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604276639 +0000 UTC m=+16.992445155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:11.604221 master-0 kubenswrapper[4652]: I0216 17:24:11.604400 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604434 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604469 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604460233 +0000 UTC m=+16.992628849 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604468 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604490 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604482114 +0000 UTC m=+16.992650790 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604515 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604532 4652 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604547 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604577 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604561856 +0000 UTC m=+16.992730402 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604585 4652 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604615 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604606847 +0000 UTC m=+16.992775483 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604638 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604693 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604694 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604738 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604729271 +0000 UTC m=+16.992897917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604774 4652 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604795 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604825 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604817323 +0000 UTC m=+16.992985979 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.604841 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.604832993 +0000 UTC m=+16.993001619 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604874 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604904 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604932 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604957 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.604982 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.605011 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.605036 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.605064 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605074 4652 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605104 4652 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: I0216 17:24:11.605083 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605144 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.605124321 +0000 UTC m=+16.993292917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605243 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.605228224 +0000 UTC m=+16.993396840 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605145 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605308 4652 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605344 4652 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605313 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.605299766 +0000 UTC m=+16.993468302 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605199 4652 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605415 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.605392908 +0000 UTC m=+16.993561504 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605455 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.605438149 +0000 UTC m=+16.993606785 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605486 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.60547294 +0000 UTC m=+16.993641596 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605517 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605561 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605593 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.605572913 +0000 UTC m=+16.993741469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605620 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605623 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.605609724 +0000 UTC m=+16.993778570 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kube-rbac-proxy" not registered
Feb 16 17:24:11.609586 master-0 kubenswrapper[4652]: E0216 17:24:11.605654 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.605642445 +0000 UTC m=+16.993810991 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered
Feb 16 17:24:11.708801 master-0 kubenswrapper[4652]: I0216 17:24:11.708618 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:11.709035 master-0 kubenswrapper[4652]: E0216 17:24:11.708865 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/kube-root-ca.crt: object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered
Feb 16 17:24:11.709035 master-0 kubenswrapper[4652]: E0216 17:24:11.708908 4652 projected.go:288] Couldn't get configMap openshift-oauth-apiserver/openshift-service-ca.crt: object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered
Feb 16 17:24:11.709035 master-0 kubenswrapper[4652]: E0216 17:24:11.708928 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5v65g for pod openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc: [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:11.709035 master-0 kubenswrapper[4652]: E0216 17:24:11.708991 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.708972899 +0000 UTC m=+17.097141455 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-5v65g" (UniqueName: "kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : [object "openshift-oauth-apiserver"/"kube-root-ca.crt" not registered, object "openshift-oauth-apiserver"/"openshift-service-ca.crt" not registered]
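Every failure above follows the same pattern: the volume plumbing starts (reconciler_common.go), the lookup of the backing secret or configmap fails with "not registered" because the freshly restarted kubelet (PID 4652, versus 4155 at first boot) has not yet synced those objects into its volume manager, and nestedpendingoperations.go schedules the next attempt with a per-volume exponential backoff, here already at 8s. Below is a minimal Go sketch of that backoff bookkeeping; the 500ms initial delay and 2-minute cap are assumptions for illustration, not values taken from this log, and the backoff type is hypothetical rather than kubelet source.

package main

import (
	"fmt"
	"time"
)

// backoff mirrors, in miniature, the per-volume record that
// nestedpendingoperations keeps between MountVolume.SetUp attempts.
type backoff struct {
	lastError time.Time
	delay     time.Duration
}

const (
	initialDelay = 500 * time.Millisecond // assumed initial delay
	maxDelay     = 2 * time.Minute        // assumed cap
)

// fail records a failed attempt and doubles the wait, producing the
// "No retries permitted until ... (durationBeforeRetry 8s)" behavior.
func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = initialDelay
	} else if b.delay *= 2; b.delay > maxDelay {
		b.delay = maxDelay
	}
	b.lastError = now
}

// ready reports whether a retry is permitted at time now.
func (b *backoff) ready(now time.Time) bool {
	return now.Sub(b.lastError) >= b.delay
}

func main() {
	var b backoff
	now := time.Now()
	for attempt := 1; attempt <= 5; attempt++ {
		b.fail(now)
		fmt.Printf("attempt %d failed; durationBeforeRetry %s\n", attempt, b.delay)
		now = now.Add(b.delay) // pretend the full wait elapsed
	}
	fmt.Println("retry permitted now:", b.ready(now))
}

Under those assumed constants the doubling sequence runs 500ms, 1s, 2s, 4s, 8s, which suggests these volumes have already failed several times in the roughly 17 seconds since this kubelet started (m=+16.99).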
Feb 16 17:24:11.709374 master-0 kubenswrapper[4652]: I0216 17:24:11.709058 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:24:11.709374 master-0 kubenswrapper[4652]: I0216 17:24:11.709293 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:11.709636 master-0 kubenswrapper[4652]: I0216 17:24:11.709584 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:24:11.709727 master-0 kubenswrapper[4652]: I0216 17:24:11.709646 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:24:11.709727 master-0 kubenswrapper[4652]: I0216 17:24:11.709690 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:24:11.709863 master-0 kubenswrapper[4652]: I0216 17:24:11.709732 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:11.709863 master-0 kubenswrapper[4652]: I0216 17:24:11.709771 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:24:11.709863 master-0 kubenswrapper[4652]: I0216 17:24:11.709812 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:24:11.710153 master-0 kubenswrapper[4652]: E0216 17:24:11.710106 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:24:11.710288 master-0 kubenswrapper[4652]: E0216 17:24:11.710165 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.71015089 +0000 UTC m=+17.098319446 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered
Feb 16 17:24:11.710288 master-0 kubenswrapper[4652]: E0216 17:24:11.710189 4652 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered
Feb 16 17:24:11.710288 master-0 kubenswrapper[4652]: E0216 17:24:11.710218 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Feb 16 17:24:11.710524 master-0 kubenswrapper[4652]: E0216 17:24:11.710289 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.710244943 +0000 UTC m=+17.098413489 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered
Feb 16 17:24:11.710524 master-0 kubenswrapper[4652]: E0216 17:24:11.710328 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.710317815 +0000 UTC m=+17.098486361 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered
Feb 16 17:24:11.710524 master-0 kubenswrapper[4652]: E0216 17:24:11.710113 4652 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Feb 16 17:24:11.710524 master-0 kubenswrapper[4652]: E0216 17:24:11.710369 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.710359176 +0000 UTC m=+17.098527732 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered
Feb 16 17:24:11.710524 master-0 kubenswrapper[4652]: E0216 17:24:11.710394 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered
Feb 16 17:24:11.710524 master-0 kubenswrapper[4652]: E0216 17:24:11.710422 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered
Feb 16 17:24:11.710524 master-0 kubenswrapper[4652]: E0216 17:24:11.710443 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p5rwv for pod openshift-marketplace/redhat-marketplace-4kd66: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:11.710524 master-0 kubenswrapper[4652]: E0216 17:24:11.710498 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv podName:0393fe12-2533-4c9c-a8e4-a58003c88f36 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.710479449 +0000 UTC m=+17.098648005 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5rwv" (UniqueName: "kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv") pod "redhat-marketplace-4kd66" (UID: "0393fe12-2533-4c9c-a8e4-a58003c88f36") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:11.711142 master-0 kubenswrapper[4652]: E0216 17:24:11.710547 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered
Feb 16 17:24:11.711142 master-0 kubenswrapper[4652]: E0216 17:24:11.710569 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered
Feb 16 17:24:11.711142 master-0 kubenswrapper[4652]: E0216 17:24:11.710584 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xtk9h for pod openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:11.711142 master-0 kubenswrapper[4652]: E0216 17:24:11.710622 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.710610182 +0000 UTC m=+17.098778728 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-xtk9h" (UniqueName: "kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered]
Feb 16 17:24:11.711142 master-0 kubenswrapper[4652]: E0216 17:24:11.710667 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Feb 16 17:24:11.711142 master-0 kubenswrapper[4652]: E0216 17:24:11.710713 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.710699415 +0000 UTC m=+17.098867961 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered
Feb 16 17:24:11.711142 master-0 kubenswrapper[4652]: E0216 17:24:11.710753 4652 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 16 17:24:11.711142 master-0 kubenswrapper[4652]: E0216 17:24:11.710789 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.710778277 +0000 UTC m=+17.098946823 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : object "openshift-multus"/"metrics-daemon-secret" not registered
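The kube-api-access-* mounts failing here are projected volumes that combine the pod's service-account token with the kube-root-ca.crt and openshift-service-ca.crt configmaps, which is why a single mount reports two unregistered objects at once (the paired projected.go:288 errors). A sketch of that volume's shape using the k8s.io/api/core/v1 types follows; the volume name is reused from the log, while the token path, expiry, and key names are illustrative assumptions, and the program needs the k8s.io/api module on the build path.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func int32p(i int32) *int32 { return &i }
func int64p(i int64) *int64 { return &i }

func main() {
	// The assumed shape of a kube-api-access-* projected volume; the
	// values are illustrative, not read back from this cluster.
	vol := corev1.Volume{
		Name: "kube-api-access-5v65g",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: int32p(420),
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						ExpirationSeconds: int64p(3607),
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "openshift-service-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "service-ca.crt", Path: "service-ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}

Because the configmap sources are resolved at mount time, the volume cannot be set up until both configmaps are known to the kubelet, so every pod in an affected namespace fails the same way until the objects are registered.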
Feb 16 17:24:11.745725 master-0 kubenswrapper[4652]: I0216 17:24:11.745598 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:24:11.745725 master-0 kubenswrapper[4652]: I0216 17:24:11.745691 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745753 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745868 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: E0216 17:24:11.745852 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745882 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745932 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745898 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745965 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745928 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745975 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745615 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746088 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: E0216 17:24:11.746122 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746157 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746176 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746187 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746202 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746225 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746242 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746237 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746306 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746329 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746165 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746350 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746371 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746161 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746199 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746416 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746173 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.745666 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: E0216 17:24:11.746503 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746116 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746315 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746538 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746569 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746612 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746147 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746651 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746662 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746665 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746689 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746384 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746394 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746290 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746772 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746311 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746320 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746545 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746332 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746885 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: E0216 17:24:11.746869 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746918 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746583 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746587 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746361 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746644 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746362 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746216 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746669 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746381 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:24:11.746952 master-0 kubenswrapper[4652]: I0216 17:24:11.746713 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: I0216 17:24:11.746285 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: I0216 17:24:11.746444 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: I0216 17:24:11.746888 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: I0216 17:24:11.746559 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.747108 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: I0216 17:24:11.746907 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: I0216 17:24:11.746925 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: I0216 17:24:11.746608 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: I0216 17:24:11.746192 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.747310 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.747435 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.747567 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d"
Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.747619 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa"
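At 17:24:11.745 the log shifts from volume setup to pod sandboxes: for every workload pod the kubelet finds no live sandbox after the restart and must create one, but sandbox creation is gated on the runtime's NetworkReady condition, so each pod that is not host-network immediately fails its sync with the same "network is not ready" error. The Go sketch below condenses those two decisions; podStatus and syncPod are hypothetical names, and the assumption that host-network pods bypass the gate (which is why the static control-plane pods keep running) is mine, not stated in the log.

package main

import (
	"errors"
	"fmt"
)

type podStatus struct {
	name        string
	hostNetwork bool
	sandboxIDs  []string // live sandboxes known to the CRI runtime
}

// errNetworkNotReady stands in for the runtime status condition
// NetworkReady=false reported while no CNI configuration exists.
var errNetworkNotReady = errors.New(
	"network is not ready: container runtime network not ready: NetworkReady=false")

func syncPod(p podStatus, networkReady bool) error {
	if len(p.sandboxIDs) == 0 {
		fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", p.name)
		if !p.hostNetwork && !networkReady {
			return errNetworkNotReady // surfaces as "Error syncing pod, skipping"
		}
		// otherwise a new sandbox would be created via the CRI RunPodSandbox call
	}
	return nil
}

func main() {
	pods := []podStatus{
		{name: "openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"},
		{name: "openshift-monitoring/prometheus-k8s-0"},
	}
	for _, p := range pods {
		if err := syncPod(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping err=%q pod=%q\n", err, p.name)
		}
	}
}

With networkReady false, every pod in the slice takes the error path, which is exactly the storm of pod_workers.go:1301 entries that follows.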
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.747667 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.747712 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.747828 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.747923 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.748030 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.748193 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.748366 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.748456 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.748568 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.748763 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.748867 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.748940 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.749038 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.749141 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.749239 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.749372 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.749522 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.749647 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.749711 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.749853 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.749987 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.750062 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.750225 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.750340 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.750452 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.750583 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.750719 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:24:11.751142 master-0 kubenswrapper[4652]: E0216 17:24:11.751100 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.751389 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.751503 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.751672 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.751760 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.751836 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.751953 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.752063 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.752141 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.752358 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.752496 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.752589 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.752686 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.752751 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.752833 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.753070 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.753177 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.753287 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.753420 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.753530 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.753669 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:24:11.754107 master-0 kubenswrapper[4652]: E0216 17:24:11.753911 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:24:11.756085 master-0 kubenswrapper[4652]: E0216 17:24:11.754327 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:24:11.756085 master-0 kubenswrapper[4652]: E0216 17:24:11.754435 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:24:11.756085 master-0 kubenswrapper[4652]: E0216 17:24:11.754620 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:24:11.756085 master-0 kubenswrapper[4652]: E0216 17:24:11.754746 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:24:11.756085 master-0 kubenswrapper[4652]: E0216 17:24:11.754898 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:24:11.756085 master-0 kubenswrapper[4652]: E0216 17:24:11.755012 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:24:11.756085 master-0 kubenswrapper[4652]: E0216 17:24:11.755162 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:24:11.756085 master-0 kubenswrapper[4652]: E0216 17:24:11.755338 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:24:11.917173 master-0 kubenswrapper[4652]: I0216 17:24:11.917098 4652 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="65bfe95ef481524aa4f6ee3acb1220cfe5f051cf396aceda0db2f0191c2009f8" exitCode=0 Feb 16 17:24:11.918653 master-0 kubenswrapper[4652]: I0216 17:24:11.917207 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"65bfe95ef481524aa4f6ee3acb1220cfe5f051cf396aceda0db2f0191c2009f8"} Feb 16 17:24:11.921391 master-0 kubenswrapper[4652]: I0216 17:24:11.921348 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:11.921576 master-0 kubenswrapper[4652]: I0216 17:24:11.921545 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:11.922009 master-0 kubenswrapper[4652]: E0216 17:24:11.921896 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.922009 master-0 kubenswrapper[4652]: E0216 17:24:11.921933 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.922009 master-0 kubenswrapper[4652]: I0216 17:24:11.921908 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:11.922009 master-0 kubenswrapper[4652]: I0216 17:24:11.921949 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" event={"ID":"9f9bf4ab-5415-4616-aa36-ea387c699ea9","Type":"ContainerStarted","Data":"b8c96ad701097c97eb2b351add0f0757ecf8c0a703ab5235ac1642b182f6deaa"} Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: I0216 17:24:11.922023 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.921954 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rxbdv for pod openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: I0216 17:24:11.922072 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922134 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv podName:80d3b238-70c3-4e71-96a1-99405352033f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.922111808 +0000 UTC m=+17.310280394 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rxbdv" (UniqueName: "kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv") pod "csi-snapshot-controller-74b6595c6d-pfzq2" (UID: "80d3b238-70c3-4e71-96a1-99405352033f") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.921998 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922159 4652 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: object "openshift-service-ca"/"kube-root-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922174 4652 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: object "openshift-service-ca"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922174 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922181 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: I0216 17:24:11.922184 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922193 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qwh24 for pod openshift-marketplace/community-operators-7w4km: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: I0216 17:24:11.922276 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922328 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24 podName:cc9a20f4-255a-4312-8f43-174a28c06340 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.922311843 +0000 UTC m=+17.310480499 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qwh24" (UniqueName: "kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24") pod "community-operators-7w4km" (UID: "cc9a20f4-255a-4312-8f43-174a28c06340") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922185 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922345 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: I0216 17:24:11.922354 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922365 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922386 4652 projected.go:194] Error preparing data for projected volume kube-api-access-tbq2b for pod openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922047 4652 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: object "openshift-insights"/"kube-root-ca.crt" not 
registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922417 4652 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: object "openshift-insights"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922425 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922270 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922418 4652 projected.go:288] Couldn't get configMap openshift-catalogd/kube-root-ca.crt: object "openshift-catalogd"/"kube-root-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922446 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922452 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7p9ld for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922446 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922470 4652 projected.go:194] Error preparing data for projected volume kube-api-access-25g7f for pod openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922378 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.922369165 +0000 UTC m=+17.310537831 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : [object "openshift-service-ca"/"kube-root-ca.crt" not registered, object "openshift-service-ca"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922503 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.922495238 +0000 UTC m=+17.310663754 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tbq2b" (UniqueName: "kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922594 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.922581081 +0000 UTC m=+17.310749657 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : [object "openshift-insights"/"kube-root-ca.crt" not registered, object "openshift-insights"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922613 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.922606491 +0000 UTC m=+17.310775147 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-7p9ld" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"kube-root-ca.crt" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922627 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.922621342 +0000 UTC m=+17.310789988 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-25g7f" (UniqueName: "kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922868 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.922935 4652 projected.go:194] Error preparing data for projected volume kube-api-access-bs597 for pod openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6: [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:11.923145 master-0 kubenswrapper[4652]: E0216 17:24:11.923058 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597 podName:62fc29f4-557f-4a75-8b78-6ca425c81b81 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:19.923023152 +0000 UTC m=+17.311191708 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bs597" (UniqueName: "kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597") pod "migrator-5bd989df77-gcfg6" (UID: "62fc29f4-557f-4a75-8b78-6ca425c81b81") : [object "openshift-kube-storage-version-migrator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.030657 master-0 kubenswrapper[4652]: I0216 17:24:12.030357 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:12.030657 master-0 kubenswrapper[4652]: E0216 17:24:12.030449 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:12.030657 master-0 kubenswrapper[4652]: E0216 17:24:12.030662 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.030907 master-0 kubenswrapper[4652]: E0216 17:24:12.030710 4652 projected.go:194] Error preparing data for projected volume kube-api-access-st6bv for pod openshift-console/console-599b567ff7-nrcpr: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered]
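The MountVolume.SetUp failures above are the same startup race seen from the storage side: each kube-api-access-* volume is a projected volume bundling the ServiceAccount token with the kube-root-ca.crt and openshift-service-ca.crt configMaps, and the kubelet has not yet observed those configMaps for the affected namespaces ("object ... not registered"), so preparing the volume fails. nestedpendingoperations.go then applies per-volume exponential backoff: "No retries permitted until ... (durationBeforeRetry 8s)" means the next attempt is eight seconds out. A doubling backoff reproduces that shape; only the 8s step is actually in the log, so the 500ms starting point and the roughly two-minute cap below are illustrative assumptions, not the kubelet's code:

```go
// Minimal sketch of the retry shape visible above, not the kubelet's actual
// nestedpendingoperations implementation: each failed MountVolume attempt
// doubles the wait before the next retry is permitted.
package main

import (
	"fmt"
	"time"
)

func main() {
	wait := 500 * time.Millisecond            // assumed starting point
	maxWait := 2*time.Minute + 2*time.Second  // assumed cap
	now := time.Now()
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %v)\n",
			attempt, now.Add(wait).Format(time.RFC3339), wait)
		now = now.Add(wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}
```

Under those assumptions an 8s durationBeforeRetry corresponds to the fifth consecutive failure of the same mount, which matches how early in kubelet uptime (m=+17s) these entries appear.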
Feb 16 17:24:12.030907 master-0 kubenswrapper[4652]: E0216 17:24:12.030791 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.030771903 +0000 UTC m=+17.418940419 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-st6bv" (UniqueName: "kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.031341 master-0 kubenswrapper[4652]: I0216 17:24:12.031313 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:12.031512 master-0 kubenswrapper[4652]: E0216 17:24:12.031486 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:12.031577 master-0 kubenswrapper[4652]: E0216 17:24:12.031551 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.031577 master-0 kubenswrapper[4652]: E0216 17:24:12.031564 4652 projected.go:194] Error preparing data for projected volume kube-api-access-djfsw for pod openshift-marketplace/redhat-operators-lnzfx: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.031646 master-0 kubenswrapper[4652]: E0216 17:24:12.031631 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw podName:822e1750-652e-4ceb-8fea-b2c1c905b0f1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.031620516 +0000 UTC m=+17.419789032 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-djfsw" (UniqueName: "kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw") pod "redhat-operators-lnzfx" (UID: "822e1750-652e-4ceb-8fea-b2c1c905b0f1") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.085565 master-0 kubenswrapper[4652]: I0216 17:24:12.085515 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:12.085565 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:12.085565 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:12.085565 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:12.085931 master-0 kubenswrapper[4652]: I0216 17:24:12.085584 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:12.122270 master-0 kubenswrapper[4652]: I0216 17:24:12.121656 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:12.138483 master-0 kubenswrapper[4652]: I0216 17:24:12.136747 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:12.138483 master-0 kubenswrapper[4652]: E0216 17:24:12.136962 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/kube-root-ca.crt: object "openshift-etcd-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:12.138483 master-0 kubenswrapper[4652]: E0216 17:24:12.136986 4652 projected.go:288] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.138483 master-0 kubenswrapper[4652]: E0216 17:24:12.136996 4652 projected.go:194] Error preparing data for projected volume kube-api-access-rjd5j for pod openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8: [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered]
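Separately from the volume noise, the router's startup probe is failing: the probe GET returned 500, and the start of the response body itemizes which healthz sub-checks failed ([-]backend-http, [-]has-synced) versus passed ([+]process-running), which again points at the router waiting on the cluster network rather than at the router binary itself. A minimal sketch of this kind of HTTP probe follows; the URL is a placeholder, since the real target comes from the pod spec, which this log does not show (the kubelet counts any status in the 200-399 range as success):

```go
// Minimal sketch, not the kubelet prober: issue the same style of HTTP health
// check the router startup probe above is failing, and print the probe-like
// verdict including the start of the response body.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:1936/healthz") // placeholder endpoint
	if err != nil {
		fmt.Println("probe error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 1024)) // keep only the start of the body
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Printf("probe ok (%d)\n", resp.StatusCode)
	} else {
		fmt.Printf("HTTP probe failed with statuscode: %d\nstart-of-body=%s\n", resp.StatusCode, body)
	}
}
```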
Feb 16 17:24:12.138483 master-0 kubenswrapper[4652]: E0216 17:24:12.137042 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.137027845 +0000 UTC m=+17.525196361 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-rjd5j" (UniqueName: "kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : [object "openshift-etcd-operator"/"kube-root-ca.crt" not registered, object "openshift-etcd-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.144457 master-0 kubenswrapper[4652]: I0216 17:24:12.144426 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:12.243103 master-0 kubenswrapper[4652]: I0216 17:24:12.243032 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:12.243312 master-0 kubenswrapper[4652]: I0216 17:24:12.243130 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:12.243312 master-0 kubenswrapper[4652]: E0216 17:24:12.243285 4652 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: object "openshift-console-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:12.243373 master-0 kubenswrapper[4652]: E0216 17:24:12.243329 4652 projected.go:288] Couldn't get configMap openshift-console-operator/openshift-service-ca.crt: object "openshift-console-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.243373 master-0 kubenswrapper[4652]: E0216 17:24:12.243343 4652 projected.go:194] Error preparing data for projected volume kube-api-access-sbrtz for pod openshift-console-operator/console-operator-7777d5cc66-64vhv: [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.243462 master-0 kubenswrapper[4652]: E0216 17:24:12.243383 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:12.243462 master-0 kubenswrapper[4652]: E0216 17:24:12.243410 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.243462 master-0 kubenswrapper[4652]: E0216 17:24:12.243417 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.243382309 +0000 UTC m=+17.631550825 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-sbrtz" (UniqueName: "kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : [object "openshift-console-operator"/"kube-root-ca.crt" not registered, object "openshift-console-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.243462 master-0 kubenswrapper[4652]: E0216 17:24:12.243424 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hh2cd for pod openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.243462 master-0 kubenswrapper[4652]: I0216 17:24:12.243455 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:12.243635 master-0 kubenswrapper[4652]: E0216 17:24:12.243490 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.243470771 +0000 UTC m=+17.631639297 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-hh2cd" (UniqueName: "kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.243635 master-0 kubenswrapper[4652]: E0216 17:24:12.243538 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:12.243635 master-0 kubenswrapper[4652]: E0216 17:24:12.243567 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.243635 master-0 kubenswrapper[4652]: E0216 17:24:12.243578 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dzpnw for pod openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.243635 master-0 kubenswrapper[4652]: E0216 17:24:12.243603 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.243596654 +0000 UTC m=+17.631765170 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-dzpnw" (UniqueName: "kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.243635 master-0 kubenswrapper[4652]: I0216 17:24:12.243563 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:12.243635 master-0 kubenswrapper[4652]: E0216 17:24:12.243621 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: object "openshift-ingress-canary"/"kube-root-ca.crt" not registered Feb 16 17:24:12.243635 master-0 kubenswrapper[4652]: E0216 17:24:12.243636 4652 projected.go:288] Couldn't get configMap openshift-ingress-canary/openshift-service-ca.crt: object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.243901 master-0 kubenswrapper[4652]: E0216 17:24:12.243646 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6fmhb for pod openshift-ingress-canary/ingress-canary-qqvg4: [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.243901 master-0 kubenswrapper[4652]: E0216 17:24:12.243681 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.243672346 +0000 UTC m=+17.631840872 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fmhb" (UniqueName: "kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : [object "openshift-ingress-canary"/"kube-root-ca.crt" not registered, object "openshift-ingress-canary"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.348181 master-0 kubenswrapper[4652]: I0216 17:24:12.348050 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:12.348388 master-0 kubenswrapper[4652]: I0216 17:24:12.348362 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:12.348437 master-0 kubenswrapper[4652]: E0216 17:24:12.348393 4652 projected.go:288] Couldn't get configMap openshift-authentication/kube-root-ca.crt: object "openshift-authentication"/"kube-root-ca.crt" not registered Feb 16 17:24:12.348437 master-0 kubenswrapper[4652]: E0216 17:24:12.348431 4652 projected.go:288] Couldn't get configMap openshift-authentication/openshift-service-ca.crt: object "openshift-authentication"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.348506 master-0 kubenswrapper[4652]: E0216 17:24:12.348453 4652 projected.go:194] Error preparing data for projected volume kube-api-access-7mrkc for pod openshift-authentication/oauth-openshift-64f85b8fc9-n9msn: [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.348537 master-0 kubenswrapper[4652]: E0216 17:24:12.348521 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.34849672 +0000 UTC m=+17.736665266 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7mrkc" (UniqueName: "kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : [object "openshift-authentication"/"kube-root-ca.crt" not registered, object "openshift-authentication"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.348658 master-0 kubenswrapper[4652]: E0216 17:24:12.348611 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/kube-root-ca.crt: object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:12.348717 master-0 kubenswrapper[4652]: E0216 17:24:12.348656 4652 projected.go:288] Couldn't get configMap openshift-route-controller-manager/openshift-service-ca.crt: object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.348717 master-0 kubenswrapper[4652]: E0216 17:24:12.348682 4652 projected.go:194] Error preparing data for projected volume kube-api-access-wzlnz for pod openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl: [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.348908 master-0 kubenswrapper[4652]: E0216 17:24:12.348871 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.348850889 +0000 UTC m=+17.737019435 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-wzlnz" (UniqueName: "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : [object "openshift-route-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-route-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.452300 master-0 kubenswrapper[4652]: I0216 17:24:12.452135 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:12.452621 master-0 kubenswrapper[4652]: E0216 17:24:12.452363 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:12.452621 master-0 kubenswrapper[4652]: E0216 17:24:12.452406 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.452621 master-0 kubenswrapper[4652]: E0216 17:24:12.452425 4652 projected.go:194] Error preparing data for projected volume kube-api-access-kx9vc for pod openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.452621 master-0 kubenswrapper[4652]: I0216 17:24:12.452428 4652 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:12.452621 master-0 kubenswrapper[4652]: E0216 17:24:12.452491 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.452465571 +0000 UTC m=+17.840634147 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-kx9vc" (UniqueName: "kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.452621 master-0 kubenswrapper[4652]: E0216 17:24:12.452569 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:12.452621 master-0 kubenswrapper[4652]: E0216 17:24:12.452592 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/openshift-service-ca.crt: object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.452621 master-0 kubenswrapper[4652]: E0216 17:24:12.452605 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.453503 master-0 kubenswrapper[4652]: E0216 17:24:12.452661 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.452644565 +0000 UTC m=+17.840813151 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : [object "openshift-cluster-samples-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-samples-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.453503 master-0 kubenswrapper[4652]: I0216 17:24:12.452600 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:12.453503 master-0 kubenswrapper[4652]: E0216 17:24:12.452703 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/kube-root-ca.crt: object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:12.453503 master-0 kubenswrapper[4652]: E0216 17:24:12.452723 4652 projected.go:288] Couldn't get configMap openshift-service-ca-operator/openshift-service-ca.crt: object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.453503 master-0 kubenswrapper[4652]: E0216 17:24:12.452740 4652 projected.go:194] Error preparing data for projected volume kube-api-access-r9bv7 for pod openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5: [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.453503 master-0 kubenswrapper[4652]: E0216 17:24:12.452904 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7 podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.452883252 +0000 UTC m=+17.841051798 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r9bv7" (UniqueName: "kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : [object "openshift-service-ca-operator"/"kube-root-ca.crt" not registered, object "openshift-service-ca-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.456361 master-0 kubenswrapper[4652]: I0216 17:24:12.456292 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:12.456550 master-0 kubenswrapper[4652]: E0216 17:24:12.456473 4652 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: object "openshift-apiserver"/"kube-root-ca.crt" not registered Feb 16 17:24:12.456550 master-0 kubenswrapper[4652]: E0216 17:24:12.456497 4652 projected.go:288] Couldn't get configMap openshift-apiserver/openshift-service-ca.crt: object "openshift-apiserver"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.456550 master-0 kubenswrapper[4652]: E0216 17:24:12.456507 4652 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.456868 master-0 kubenswrapper[4652]: E0216 17:24:12.456589 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.45657471 +0000 UTC m=+17.844743306 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : [object "openshift-apiserver"/"kube-root-ca.crt" not registered, object "openshift-apiserver"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.558734 master-0 kubenswrapper[4652]: I0216 17:24:12.558343 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:12.559013 master-0 kubenswrapper[4652]: E0216 17:24:12.558529 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:12.559013 master-0 kubenswrapper[4652]: E0216 17:24:12.558810 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/openshift-service-ca.crt: object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.559013 master-0 kubenswrapper[4652]: E0216 17:24:12.558825 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.559013 master-0 kubenswrapper[4652]: E0216 17:24:12.558965 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.558946318 +0000 UTC m=+17.947114874 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : [object "openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" not registered, object "openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.561222 master-0 kubenswrapper[4652]: I0216 17:24:12.561162 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:12.561492 master-0 kubenswrapper[4652]: E0216 17:24:12.561417 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/kube-root-ca.crt: object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:12.561492 master-0 kubenswrapper[4652]: E0216 17:24:12.561447 4652 projected.go:288] Couldn't get configMap openshift-cluster-olm-operator/openshift-service-ca.crt: object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.561492 master-0 kubenswrapper[4652]: E0216 17:24:12.561460 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2dxw9 for pod openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv: [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561515 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9 podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.561498796 +0000 UTC m=+17.949667312 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2dxw9" (UniqueName: "kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : [object "openshift-cluster-olm-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-olm-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: I0216 17:24:12.561545 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: I0216 17:24:12.561608 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: I0216 17:24:12.561636 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561720 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/kube-root-ca.crt: object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561737 4652 projected.go:288] Couldn't get configMap openshift-apiserver-operator/openshift-service-ca.crt: object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561748 4652 projected.go:194] Error preparing data for projected volume kube-api-access-5dpp2 for pod openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v: [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561784 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2 podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.561772683 +0000 UTC m=+17.949941249 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5dpp2" (UniqueName: "kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : [object "openshift-apiserver-operator"/"kube-root-ca.crt" not registered, object "openshift-apiserver-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561791 4652 projected.go:288] Couldn't get configMap openshift-marketplace/kube-root-ca.crt: object "openshift-marketplace"/"kube-root-ca.crt" not registered Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561811 4652 projected.go:288] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: object "openshift-marketplace"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561821 4652 projected.go:194] Error preparing data for projected volume kube-api-access-qhz6z for pod openshift-marketplace/certified-operators-z69zq: [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561863 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z podName:f3beb7bf-922f-425d-8a19-fd407a7153a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.561849575 +0000 UTC m=+17.950018161 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qhz6z" (UniqueName: "kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z") pod "certified-operators-z69zq" (UID: "f3beb7bf-922f-425d-8a19-fd407a7153a8") : [object "openshift-marketplace"/"kube-root-ca.crt" not registered, object "openshift-marketplace"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561884 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: object "openshift-controller-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561913 4652 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: object "openshift-controller-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561924 4652 projected.go:194] Error preparing data for projected volume kube-api-access-2cjmj for pod openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd: [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.562312 master-0 kubenswrapper[4652]: E0216 17:24:12.561983 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.561967798 +0000 UTC m=+17.950136314 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2cjmj" (UniqueName: "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : [object "openshift-controller-manager"/"kube-root-ca.crt" not registered, object "openshift-controller-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.773231 master-0 kubenswrapper[4652]: I0216 17:24:12.768631 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:12.773231 master-0 kubenswrapper[4652]: E0216 17:24:12.771014 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/kube-root-ca.crt: object "openshift-operator-controller"/"kube-root-ca.crt" not registered Feb 16 17:24:12.773231 master-0 kubenswrapper[4652]: E0216 17:24:12.771050 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:12.773231 master-0 kubenswrapper[4652]: E0216 17:24:12.771073 4652 projected.go:194] Error preparing data for projected volume kube-api-access-w4wht for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.773231 master-0 kubenswrapper[4652]: E0216 17:24:12.771150 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:20.771126131 +0000 UTC m=+18.159294687 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-w4wht" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"kube-root-ca.crt" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:12.901964 master-0 kubenswrapper[4652]: E0216 17:24:12.901911 4652 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 17:24:12.932191 master-0 kubenswrapper[4652]: I0216 17:24:12.931576 4652 generic.go:334] "Generic (PLEG): container finished" podID="ab5760f1-b2e0-4138-9383-e4827154ac50" containerID="1f8ffc7feb7cf28296b8e3c0bddd87e7aad86596899633f221734779c005443a" exitCode=0
Feb 16 17:24:12.932191 master-0 kubenswrapper[4652]: I0216 17:24:12.931737 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:24:12.932191 master-0 kubenswrapper[4652]: I0216 17:24:12.931749 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:24:12.934706 master-0 kubenswrapper[4652]: I0216 17:24:12.933379 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerDied","Data":"1f8ffc7feb7cf28296b8e3c0bddd87e7aad86596899633f221734779c005443a"}
Feb 16 17:24:13.084850 master-0 kubenswrapper[4652]: I0216 17:24:13.084365 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:13.084850 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:13.084850 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:13.084850 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:13.084850 master-0 kubenswrapper[4652]: I0216 17:24:13.084787 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:13.414389 master-0 kubenswrapper[4652]: I0216 17:24:13.414300 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady"
Feb 16 17:24:13.414389 master-0 kubenswrapper[4652]: I0216 17:24:13.414339 4652 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T17:24:13Z","lastTransitionTime":"2026-02-16T17:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 17:24:13.745472 master-0 kubenswrapper[4652]: I0216 17:24:13.745412 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp"
Feb 16 17:24:13.745472 master-0 kubenswrapper[4652]: I0216 17:24:13.745459 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:24:13.745472 master-0 kubenswrapper[4652]: I0216 17:24:13.745475 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745436 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745422 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745564 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: E0216 17:24:13.745558 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745592 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745412 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745614 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745631 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745641 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745648 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745635 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745685 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745741 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: E0216 17:24:13.745739 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745741 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745768 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745755 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745791 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:24:13.745781 master-0 kubenswrapper[4652]: I0216 17:24:13.745772 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.745802 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.745810 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.745826 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.745839 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.745965 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.745977 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.745989 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: E0216 17:24:13.745982 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746018 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746031 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746009 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.745991 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746021 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.745996 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746056 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746075 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746011 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746093 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746059 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746105 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746035 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746063 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746120 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746150 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746083 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: E0216 17:24:13.746166 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746096 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746105 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746060 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746069 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746078 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746043 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746263 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746278 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746289 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746295 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746235 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746172 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746087 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746218 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746294 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: E0216 17:24:13.746377 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746401 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746414 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746432 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746441 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746453 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746460 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: E0216 17:24:13.746532 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: I0216 17:24:13.746546 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: E0216 17:24:13.746616 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: E0216 17:24:13.746724 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845"
Feb 16 17:24:13.746945 master-0 kubenswrapper[4652]: E0216 17:24:13.746930 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747109 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747168 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747271 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747342 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747404 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747481 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747557 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747609 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747683 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747755 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747846 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747901 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.747979 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.748043 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.748112 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.748272 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.748344 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.748436 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.748507 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.748540 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.748622 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3"
Feb 16 17:24:13.748748 master-0 kubenswrapper[4652]: E0216 17:24:13.748684 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc"
Feb 16 17:24:13.749394 master-0 kubenswrapper[4652]: E0216 17:24:13.748815 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f"
Feb 16 17:24:13.749394 master-0 kubenswrapper[4652]: E0216 17:24:13.748992 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46"
Feb 16 17:24:13.749394 master-0 kubenswrapper[4652]: E0216 17:24:13.749048 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec"
Feb 16 17:24:13.749394 master-0 kubenswrapper[4652]: E0216 17:24:13.749219 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838"
Feb 16 17:24:13.749394 master-0 kubenswrapper[4652]: E0216 17:24:13.749361 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8"
Feb 16 17:24:13.749542 master-0 kubenswrapper[4652]: E0216 17:24:13.749440 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66"
Feb 16 17:24:13.749542 master-0 kubenswrapper[4652]: E0216 17:24:13.749502 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40"
Feb 16 17:24:13.749707 master-0 kubenswrapper[4652]: E0216 17:24:13.749657 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708"
Feb 16 17:24:13.749859 master-0 kubenswrapper[4652]: E0216 17:24:13.749818 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa"
Feb 16 17:24:13.750049 master-0 kubenswrapper[4652]: E0216 17:24:13.750008 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8"
Feb 16 17:24:13.750114 master-0 kubenswrapper[4652]: E0216 17:24:13.750089 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605"
Feb 16 17:24:13.750174 master-0 kubenswrapper[4652]: E0216 17:24:13.750148 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022"
Feb 16 17:24:13.750421 master-0 kubenswrapper[4652]: E0216 17:24:13.750373 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4"
Feb 16 17:24:13.750503 master-0 kubenswrapper[4652]: E0216 17:24:13.750482 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e"
Feb 16 17:24:13.750544 master-0 kubenswrapper[4652]: E0216 17:24:13.750524 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c"
Feb 16 17:24:13.750603 master-0 kubenswrapper[4652]: E0216 17:24:13.750580 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40"
Feb 16 17:24:13.750754 master-0 kubenswrapper[4652]: E0216 17:24:13.750726 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8"
Feb 16 17:24:13.750856 master-0 kubenswrapper[4652]: E0216 17:24:13.750834 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a"
Feb 16 17:24:13.750894 master-0 kubenswrapper[4652]: E0216 17:24:13.750882 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788"
Feb 16 17:24:13.750963 master-0 kubenswrapper[4652]: E0216 17:24:13.750945 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81"
Feb 16 17:24:13.751149 master-0 kubenswrapper[4652]: E0216 17:24:13.751117 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953"
Feb 16 17:24:13.751210 master-0 kubenswrapper[4652]: E0216 17:24:13.751188 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340"
Feb 16 17:24:13.751323 master-0 kubenswrapper[4652]: E0216 17:24:13.751302 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41"
Feb 16 17:24:13.751425 master-0 kubenswrapper[4652]: E0216 17:24:13.751400 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd"
Feb 16 17:24:13.751550 master-0 kubenswrapper[4652]: E0216 17:24:13.751525 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c"
Feb 16 17:24:13.751605 master-0 kubenswrapper[4652]: E0216 17:24:13.751580 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15"
Feb 16 17:24:13.751686 master-0 kubenswrapper[4652]: E0216 17:24:13.751666 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765"
Feb 16 17:24:13.751887 master-0 kubenswrapper[4652]: E0216 17:24:13.751848 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a"
Feb 16 17:24:13.752026 master-0 kubenswrapper[4652]: E0216 17:24:13.751988 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4"
Feb 16 17:24:13.752164 master-0 kubenswrapper[4652]: E0216 17:24:13.752130 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52"
Feb 16 17:24:13.752356 master-0 kubenswrapper[4652]: E0216 17:24:13.752321 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9"
Feb 16 17:24:13.752536 master-0 kubenswrapper[4652]: E0216 17:24:13.752499 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03"
Feb 16 17:24:13.752649 master-0 kubenswrapper[4652]: E0216 17:24:13.752613 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df"
Feb 16 17:24:13.752776 master-0 kubenswrapper[4652]: E0216 17:24:13.752744 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b"
Feb 16 17:24:13.752868 master-0 kubenswrapper[4652]: E0216 17:24:13.752843 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302"
Feb 16 17:24:13.904033 master-0 kubenswrapper[4652]: I0216 17:24:13.903959 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:13.936534 master-0 kubenswrapper[4652]: I0216 17:24:13.936476 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86"
Feb 16 17:24:13.938420 master-0 kubenswrapper[4652]: I0216 17:24:13.938345 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rjdlk" event={"ID":"ab5760f1-b2e0-4138-9383-e4827154ac50","Type":"ContainerStarted","Data":"c77c0e40089c8742249c349be518d95213f26da9adbd9700516776396cb66670"}
Feb 16 17:24:13.938420 master-0 kubenswrapper[4652]: I0216 17:24:13.938407 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:24:14.084489 master-0 kubenswrapper[4652]: I0216 17:24:14.084325 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:14.084489 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:14.084489 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:14.084489 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:14.084489 master-0 kubenswrapper[4652]: I0216 17:24:14.084430 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:14.943282 master-0 kubenswrapper[4652]: I0216 17:24:14.943157 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:24:15.083130 master-0 kubenswrapper[4652]: I0216 17:24:15.083063 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:15.083130 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:15.083130 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:15.083130 master-0 kubenswrapper[4652]: healthz check
failed Feb 16 17:24:15.083407 master-0 kubenswrapper[4652]: I0216 17:24:15.083151 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:15.744832 master-0 kubenswrapper[4652]: I0216 17:24:15.744758 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:15.744832 master-0 kubenswrapper[4652]: I0216 17:24:15.744788 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:15.744832 master-0 kubenswrapper[4652]: I0216 17:24:15.744828 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.744869 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.744889 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745003 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: E0216 17:24:15.745011 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745039 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745059 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745010 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745069 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745100 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745083 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745097 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745136 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745084 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745071 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745162 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745305 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745321 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745310 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745346 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745354 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745382 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745386 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: E0216 17:24:15.745308 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745389 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745368 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745442 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745426 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745397 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745402 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745519 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745535 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745559 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: E0216 17:24:15.745480 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745574 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745574 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745610 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745588 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:15.745552 master-0 kubenswrapper[4652]: I0216 17:24:15.745600 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745609 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745534 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745586 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745638 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745638 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745785 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: E0216 17:24:15.745795 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745803 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745804 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745842 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745823 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745862 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745833 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745853 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745843 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745899 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745884 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745914 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745936 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745894 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745938 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.745934 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: E0216 17:24:15.746808 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: E0216 17:24:15.747452 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.746891 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.746980 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.747010 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.747210 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: E0216 17:24:15.747648 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.747272 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: I0216 17:24:15.746933 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: E0216 17:24:15.747356 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:24:15.748281 master-0 kubenswrapper[4652]: E0216 17:24:15.747853 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:24:15.753927 master-0 kubenswrapper[4652]: E0216 17:24:15.753863 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:24:15.753927 master-0 kubenswrapper[4652]: E0216 17:24:15.753895 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.753923 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.753953 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.753953 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.753959 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.753966 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754029 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754041 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754066 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.753905 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754087 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754103 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754107 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754006 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.753928 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754020 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754149 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754158 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754164 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754186 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754194 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.753959 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.753986 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:24:15.754174 master-0 kubenswrapper[4652]: E0216 17:24:15.754078 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.753977 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754093 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.753992 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.753934 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754122 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.753997 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: I0216 17:24:15.754363 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754393 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754158 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754164 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754039 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754022 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754180 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754192 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754180 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754057 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754082 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754134 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754020 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754743 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.754898 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.755673 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.755785 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.755813 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.755925 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.755949 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.755998 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.756061 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.756136 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.756220 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.756437 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.756471 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:24:15.756749 master-0 kubenswrapper[4652]: E0216 17:24:15.756536 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:24:16.083908 master-0 kubenswrapper[4652]: I0216 17:24:16.083736 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:16.083908 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:16.083908 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:16.083908 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:16.083908 master-0 kubenswrapper[4652]: I0216 17:24:16.083830 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:17.082033 master-0 kubenswrapper[4652]: I0216 17:24:17.081978 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:17.082033 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:17.082033 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:17.082033 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:17.082406 master-0 kubenswrapper[4652]: I0216 17:24:17.082093 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:17.745642 master-0 kubenswrapper[4652]: I0216 17:24:17.745580 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745646 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745672 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745753 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745765 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: E0216 17:24:17.745754 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" podUID="5a275679-b7b6-4c28-b389-94cd2b014d6c" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745796 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745819 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745791 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745825 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745907 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745919 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: E0216 17:24:17.745923 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-dns/dns-default-qcgxx" podUID="2d96ccdc-0b09-437d-bfca-1958af5d9953" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745932 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745978 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745995 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745979 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746005 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746016 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746022 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746028 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746047 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746140 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746165 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746173 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746184 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: E0216 17:24:17.746181 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" podUID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746209 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746231 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746239 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746272 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746282 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746274 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.745589 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746296 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746307 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746319 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746327 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746331 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746341 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746346 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746359 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746370 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746380 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746370 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746399 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746414 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:17.746357 master-0 kubenswrapper[4652]: I0216 17:24:17.746420 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.746494 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" podUID="4488757c-f0fd-48fa-a3f9-6373b0bcafe4" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746518 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746520 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746536 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746547 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746562 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746567 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746588 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746590 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746602 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746614 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746573 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746577 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746620 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746649 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746625 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746638 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746703 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746735 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.746696 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-marketplace-4kd66" podUID="0393fe12-2533-4c9c-a8e4-a58003c88f36" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746719 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.746786 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" podUID="edbaac23-11f0-4bc7-a7ce-b593c774c0fa" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746772 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: I0216 17:24:17.746805 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.746841 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" podUID="9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.746920 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" podUID="48801344-a48a-493e-aea4-19d998d0b708" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.747118 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-ingress-canary/ingress-canary-qqvg4" podUID="1363cb7b-62cc-497b-af6f-4d5e0eb7f174" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.747369 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" podUID="06067627-6ccf-4cc8-bd20-dabdd776bb46" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.747464 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/downloads-dcd7b7d95-dhhfh" podUID="08a90dc5-b0d8-4aad-a002-736492b6c1a9" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.747575 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" podUID="8e623376-9e14-4341-9dcf-7a7c218b6f9f" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.747684 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.747784 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" podUID="c2511146-1d04-4ecd-a28e-79662ef7b9d3" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.747896 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-279g6" podUID="ad805251-19d0-4d2f-b741-7d11158f1f03" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.748018 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" podUID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.748076 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" podUID="ba37ef0e-373c-4ccc-b082-668630399765" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.748310 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" podUID="8e90be63-ff6c-4e9e-8b9e-1ad9cf941845" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.748482 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" podUID="9609a4f3-b947-47af-a685-baae26c50fa3" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.748661 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" Feb 16 17:24:17.748753 master-0 kubenswrapper[4652]: E0216 17:24:17.748715 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" podUID="737fcc7d-d850-4352-9f17-383c85d5bc28" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.748811 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.748938 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" podUID="7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.749053 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" podUID="544c6815-81d7-422a-9e4a-5fcbfabe8da8" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.749149 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" podUID="80d3b238-70c3-4e71-96a1-99405352033f" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.749307 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" podUID="970d4376-f299-412c-a8ee-90aa980c689e" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.749407 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" podUID="2d1636c0-f34d-444c-822d-77f1d203ddc4" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.749578 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" podUID="fe8e8e5d-cebb-4361-b765-5ff737f5e838" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.749900 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/prometheus-k8s-0" podUID="b04ee64e-5e83-499c-812d-749b2b6824c6" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.749972 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" podUID="e10d0b0c-4c2a-45b3-8d69-3070d566b97d" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.750075 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" podUID="62220aa5-4065-472c-8a17-c0a58942ab8a" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.750130 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" podUID="442600dc-09b2-4fee-9f89-777296b2ee40" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.750188 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.750291 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" podUID="404c402a-705f-4352-b9df-b89562070d9c" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.750429 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" podUID="6f44170a-3c1c-4944-b971-251f75a51fc3" Feb 16 17:24:17.750489 master-0 kubenswrapper[4652]: E0216 17:24:17.750470 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" podUID="62fc29f4-557f-4a75-8b78-6ca425c81b81" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.750537 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" podUID="0ff68421-1741-41c1-93d5-5c722dfd295e" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.750608 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-vwvwx" podUID="c303189e-adae-4fe2-8dd7-cc9b80f73e66" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.750709 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-z69zq" podUID="f3beb7bf-922f-425d-8a19-fd407a7153a8" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.750750 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" podUID="642e5115-b7f2-4561-bc6b-1a74b6d891c4" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.750809 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" podUID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.750889 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" podUID="5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.751090 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/alertmanager-main-0" podUID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.751159 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" podUID="d020c902-2adb-4919-8dd9-0c2109830580" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.751214 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" podUID="54fba066-0e9e-49f6-8a86-34d5b4b660df" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.751302 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" podUID="6b3e071c-1c62-489b-91c1-aef0d197f40b" Feb 16 17:24:17.751438 master-0 kubenswrapper[4652]: E0216 17:24:17.751376 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" podUID="29402454-a920-471e-895e-764235d16eb4" Feb 16 17:24:17.751873 master-0 kubenswrapper[4652]: E0216 17:24:17.751450 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" podUID="ee84198d-6357-4429-a90c-455c3850a788" Feb 16 17:24:17.751873 master-0 kubenswrapper[4652]: E0216 17:24:17.751529 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" Feb 16 17:24:17.751873 master-0 kubenswrapper[4652]: E0216 17:24:17.751630 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" podUID="55d635cd-1f0d-4086-96f2-9f3524f3f18c" Feb 16 17:24:17.751873 master-0 kubenswrapper[4652]: E0216 17:24:17.751686 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" podUID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" Feb 16 17:24:17.751873 master-0 kubenswrapper[4652]: E0216 17:24:17.751852 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" Feb 16 17:24:17.752087 master-0 kubenswrapper[4652]: E0216 17:24:17.751973 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" podUID="d1524fc1-d157-435a-8bf8-7e877c45909d" Feb 16 17:24:17.752087 master-0 kubenswrapper[4652]: E0216 17:24:17.752025 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" podUID="18e9a9d3-9b18-4c19-9558-f33c68101922" Feb 16 17:24:17.752232 master-0 kubenswrapper[4652]: E0216 17:24:17.752188 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" Feb 16 17:24:17.752374 master-0 kubenswrapper[4652]: E0216 17:24:17.752348 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" podUID="4e51bba5-0ebe-4e55-a588-38b71548c605" Feb 16 17:24:17.752636 master-0 kubenswrapper[4652]: E0216 17:24:17.752580 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/community-operators-7w4km" podUID="cc9a20f4-255a-4312-8f43-174a28c06340" Feb 16 17:24:17.752699 master-0 kubenswrapper[4652]: E0216 17:24:17.752634 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" podUID="d9859457-f0d1-4754-a6c5-cf05d5abf447" Feb 16 17:24:17.752768 master-0 kubenswrapper[4652]: E0216 17:24:17.752729 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" podUID="eaf7edff-0a89-4ac0-b9dd-511e098b5434" Feb 16 17:24:17.753023 master-0 kubenswrapper[4652]: E0216 17:24:17.752967 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" podUID="ae20b683-dac8-419e-808a-ddcdb3c564e1" Feb 16 17:24:17.753080 master-0 kubenswrapper[4652]: E0216 17:24:17.753052 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" podUID="c8729b1a-e365-4cf7-8a05-91a9987dabe9" Feb 16 17:24:17.753129 master-0 kubenswrapper[4652]: E0216 17:24:17.753110 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" podUID="5192fa49-d81c-47ce-b2ab-f90996cc0bd5" Feb 16 17:24:17.753272 master-0 kubenswrapper[4652]: E0216 17:24:17.753234 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" Feb 16 17:24:17.753337 master-0 kubenswrapper[4652]: E0216 17:24:17.753320 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" Feb 16 17:24:17.753588 master-0 kubenswrapper[4652]: E0216 17:24:17.753544 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" podUID="f3c7d762-e2fe-49ca-ade5-3982d91ec2a2" Feb 16 17:24:17.753785 master-0 kubenswrapper[4652]: E0216 17:24:17.753748 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" podUID="54f29618-42c2-4270-9af7-7d82852d7cec" Feb 16 17:24:18.082472 master-0 kubenswrapper[4652]: I0216 17:24:18.082340 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:18.082472 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:18.082472 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:18.082472 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:18.082472 master-0 kubenswrapper[4652]: I0216 17:24:18.082416 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:19.083225 master-0 kubenswrapper[4652]: I0216 17:24:19.083142 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:19.083225 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:19.083225 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:19.083225 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:19.083225 master-0 kubenswrapper[4652]: I0216 17:24:19.083216 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:19.583768 master-0 kubenswrapper[4652]: I0216 17:24:19.583730 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:19.583985 master-0 kubenswrapper[4652]: I0216 17:24:19.583776 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:19.583985 master-0 kubenswrapper[4652]: I0216 17:24:19.583814 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:19.583985 master-0 kubenswrapper[4652]: E0216 17:24:19.583902 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:19.584120 master-0 kubenswrapper[4652]: E0216 17:24:19.584003 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.583969101 +0000 UTC m=+32.972137617 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"authentication-operator-config" not registered Feb 16 17:24:19.584120 master-0 kubenswrapper[4652]: I0216 17:24:19.584075 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.584213 master-0 kubenswrapper[4652]: E0216 17:24:19.584125 4652 secret.go:189] Couldn't get secret openshift-authentication-operator/serving-cert: object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:19.584213 master-0 kubenswrapper[4652]: E0216 17:24:19.584176 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-operator-config: object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:19.584213 master-0 kubenswrapper[4652]: E0216 17:24:19.584183 4652 secret.go:189] Couldn't get secret openshift-console/console-oauth-config: object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:19.584213 master-0 kubenswrapper[4652]: E0216 17:24:19.584213 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.584198927 +0000 UTC m=+32.972367533 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-config" not registered Feb 16 17:24:19.584437 master-0 kubenswrapper[4652]: I0216 17:24:19.584139 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:19.584437 master-0 kubenswrapper[4652]: E0216 17:24:19.584234 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-trusted-ca-bundle: object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:19.584437 master-0 kubenswrapper[4652]: E0216 17:24:19.584242 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.584224058 +0000 UTC m=+32.972392624 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-oauth-config" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-oauth-config" not registered Feb 16 17:24:19.584437 master-0 kubenswrapper[4652]: E0216 17:24:19.584314 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.58430288 +0000 UTC m=+32.972471466 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-trusted-ca-bundle" not registered Feb 16 17:24:19.584437 master-0 kubenswrapper[4652]: E0216 17:24:19.584343 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.584332611 +0000 UTC m=+32.972501127 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"serving-cert" not registered Feb 16 17:24:19.584437 master-0 kubenswrapper[4652]: I0216 17:24:19.584395 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: I0216 17:24:19.584462 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: E0216 17:24:19.584506 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/etcd-serving-ca: object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: E0216 17:24:19.584522 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-tls: object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: E0216 17:24:19.584544 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.584534466 +0000 UTC m=+32.972703052 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: E0216 17:24:19.584564 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.584555547 +0000 UTC m=+32.972724133 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "telemeter-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-tls" not registered Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: I0216 17:24:19.584589 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: I0216 17:24:19.584617 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: I0216 17:24:19.584640 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: E0216 17:24:19.584649 4652 configmap.go:193] Couldn't get configMap openshift-apiserver-operator/openshift-apiserver-operator-config: object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: I0216 17:24:19.584664 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.584679 master-0 kubenswrapper[4652]: E0216 17:24:19.584679 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.58467085 +0000 UTC m=+32.972839456 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-config" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: I0216 17:24:19.584704 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.584727 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s: object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: I0216 17:24:19.584732 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.584751 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.584745332 +0000 UTC m=+32.972913848 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.584807 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.584843 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.584835114 +0000 UTC m=+32.973003630 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"trusted-ca-bundle" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.584919 4652 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: I0216 17:24:19.584972 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.585022 4652 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.585049 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585018699 +0000 UTC m=+32.973187215 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cco-trusted-ca" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.585061 4652 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.585072 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.585077 4652 projected.go:288] Couldn't get configMap openshift-catalogd/openshift-service-ca.crt: object "openshift-catalogd"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.585096 master-0 kubenswrapper[4652]: E0216 17:24:19.585092 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr: [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585075 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.58506499 +0000 UTC m=+32.973233506 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default-metrics-tls" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: I0216 17:24:19.585140 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585168 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585158483 +0000 UTC m=+32.973327189 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585186 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585176823 +0000 UTC m=+32.973345549 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : [object "openshift-catalogd"/"catalogd-trusted-ca-bundle" not registered, object "openshift-catalogd"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585206 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/operator-controller-trusted-ca-bundle: object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585221 4652 projected.go:288] Couldn't get configMap openshift-operator-controller/openshift-service-ca.crt: object "openshift-operator-controller"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585235 4652 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b: [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: I0216 17:24:19.585288 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: I0216 17:24:19.585323 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585328 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-serving-cert: object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: I0216 17:24:19.585354 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585360 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585350488 +0000 UTC m=+32.973519204 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-serving-cert" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-serving-cert" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585389 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs podName:54f29618-42c2-4270-9af7-7d82852d7cec nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585378849 +0000 UTC m=+32.973547365 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs") pod "operator-controller-controller-manager-85c9b89969-lj58b" (UID: "54f29618-42c2-4270-9af7-7d82852d7cec") : [object "openshift-operator-controller"/"operator-controller-trusted-ca-bundle" not registered, object "openshift-operator-controller"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585460 4652 projected.go:263] Couldn't get secret openshift-monitoring/prometheus-k8s-tls-assets-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585480 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/prometheus-k8s-0: object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585512 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585504942 +0000 UTC m=+32.973673458 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls-assets-0" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585515 4652 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: object "openshift-dns"/"dns-default" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: I0216 17:24:19.585579 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585596 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume podName:2d96ccdc-0b09-437d-bfca-1958af5d9953 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585572784 +0000 UTC m=+32.973741310 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume") pod "dns-default-qcgxx" (UID: "2d96ccdc-0b09-437d-bfca-1958af5d9953") : object "openshift-dns"/"dns-default" not registered Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: I0216 17:24:19.585622 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: I0216 17:24:19.585658 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:19.585675 master-0 kubenswrapper[4652]: E0216 17:24:19.585682 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-provider-selection: object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.585683 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585714 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585706207 +0000 UTC m=+32.973874713 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-provider-selection" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-provider-selection" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.585768 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585797 4652 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585805 4652 configmap.go:193] Couldn't get configMap openshift-ingress-operator/trusted-ca: object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.585819 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585799 4652 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585873 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-service-ca: object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585835 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585824711 +0000 UTC m=+32.973993397 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-cabundle" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585832 4652 configmap.go:193] Couldn't get configMap openshift-console/service-ca: object "openshift-console"/"service-ca" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585901 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585894672 +0000 UTC m=+32.974063188 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-service-ca" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-service-ca" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585925 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585915143 +0000 UTC m=+32.974083869 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"service-ca" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.585946 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.585938164 +0000 UTC m=+32.974106910 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"trusted-ca" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.585968 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.585995 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586010 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586001555 +0000 UTC m=+32.974170071 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : object "openshift-ingress-operator"/"metrics-tls" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586024 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-trusted-ca-bundle-8i12ta5c71j38: object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586044 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.586057 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586064 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586054177 +0000 UTC m=+32.974222893 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586095 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0 podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586086908 +0000 UTC m=+32.974255414 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-rulefiles-0" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.586111 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586121 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.586143 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586153 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586144519 +0000 UTC m=+32.974313205 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586183 4652 configmap.go:193] Couldn't get configMap openshift-authentication/audit: object "openshift-authentication"/"audit" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.586191 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586223 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586213431 +0000 UTC m=+32.974381977 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"audit" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586271 4652 secret.go:189] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.586283 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586290 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-sidecar-tls: object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586328 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-1: object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586306 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586296793 +0000 UTC m=+32.974465309 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586354 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586344844 +0000 UTC m=+32.974513550 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-thanos-sidecar-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586373 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586364275 +0000 UTC m=+32.974533001 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"audit-1" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.586415 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.586461 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586532 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: E0216 17:24:19.586567 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.58655757 +0000 UTC m=+32.974726266 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-serving-ca" not registered Feb 16 17:24:19.586586 master-0 kubenswrapper[4652]: I0216 17:24:19.586597 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.586644 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586533 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.586670 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " 
pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.586698 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586721 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586712704 +0000 UTC m=+32.974881400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-images" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586735 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586763 4652 configmap.go:193] Couldn't get configMap openshift-console/oauth-serving-cert: object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586766 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.586773 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586794 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586786996 +0000 UTC m=+32.974955512 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "oauth-serving-cert" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"oauth-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586815 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586803487 +0000 UTC m=+32.974972163 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586821 4652 configmap.go:193] Couldn't get configMap openshift-cluster-node-tuning-operator/trusted-ca: object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.586841 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586852 4652 secret.go:189] Couldn't get secret openshift-apiserver/etcd-client: object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586857 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.586846228 +0000 UTC m=+32.975014754 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"trusted-ca" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586906 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-rules: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.586960 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.58694822 +0000 UTC m=+32.975116806 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-rules" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.586984 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587014 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587042 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587069 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587103 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587075574 +0000 UTC m=+32.975244170 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"etcd-client" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587113 4652 secret.go:189] Couldn't get secret openshift-console/console-serving-cert: object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587134 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587122795 +0000 UTC m=+32.975291551 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"kube-rbac-proxy" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587148 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-metric: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587156 4652 secret.go:189] Couldn't get secret openshift-apiserver/encryption-config-1: object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587181 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587172166 +0000 UTC m=+32.975340882 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-metric" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587177 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587199 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587191937 +0000 UTC m=+32.975360643 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"encryption-config-1" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587213 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/etcd-client: object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587231 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587259 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587232438 +0000 UTC m=+32.975401034 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"etcd-client" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587233 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587279 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587270579 +0000 UTC m=+32.975439315 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-serving-cert" (UniqueName: "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587298 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587289669 +0000 UTC m=+32.975458406 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587323 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587330 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587373 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587375 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587388 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587372592 +0000 UTC m=+32.975541298 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : object "openshift-operator-lifecycle-manager"/"packageserver-service-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587409 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587397652 +0000 UTC m=+32.975566178 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-session" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587435 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager-operator/openshift-controller-manager-operator-config: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587472 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587463524 +0000 UTC m=+32.975632050 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587474 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587435 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587506 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587497855 +0000 UTC m=+32.975666381 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-tls" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587529 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587559 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587579 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587598 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587618 4652 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587629 4652 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587651 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587642689 +0000 UTC m=+32.975811215 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587670 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587660389 +0000 UTC m=+32.975828925 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587672 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587696 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587745 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert podName:62220aa5-4065-472c-8a17-c0a58942ab8a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.58769709 +0000 UTC m=+32.975865616 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert") pod "olm-operator-6b56bd877c-p7k2k" (UID: "62220aa5-4065-472c-8a17-c0a58942ab8a") : object "openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587747 4652 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587769 4652 projected.go:263] Couldn't get secret openshift-monitoring/alertmanager-main-tls-assets-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587783 4652 projected.go:194] Error preparing data for projected volume tls-assets for pod openshift-monitoring/alertmanager-main-0: object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587787 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587777342 +0000 UTC m=+32.975946028 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"config" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587771 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587812 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587803523 +0000 UTC m=+32.975972229 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls-assets-0" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587815 4652 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587836 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587843 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert podName:1363cb7b-62cc-497b-af6f-4d5e0eb7f174 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587836244 +0000 UTC m=+32.976004770 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert") pod "ingress-canary-qqvg4" (UID: "1363cb7b-62cc-497b-af6f-4d5e0eb7f174") : object "openshift-ingress-canary"/"canary-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587886 4652 secret.go:189] Couldn't get secret openshift-config-operator/config-operator-serving-cert: object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.587913 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.587906106 +0000 UTC m=+32.976074632 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : object "openshift-config-operator"/"config-operator-serving-cert" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.587970 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.588015 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.588044 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588037939 +0000 UTC m=+32.976206455 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-cliconfig" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.588086 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.588152 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-router-certs: object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: E0216 17:24:19.588188 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588179243 +0000 UTC m=+32.976347769 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-router-certs" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-router-certs" not registered Feb 16 17:24:19.588163 master-0 kubenswrapper[4652]: I0216 17:24:19.588226 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588303 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588331 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/config: object "openshift-apiserver"/"config" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588378 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588415 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588436 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.58842847 +0000 UTC m=+32.976596986 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"config" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588452 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588476 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client-kube-rbac-proxy-config: object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588457 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588512 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588493931 +0000 UTC m=+32.976662627 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : object "openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588523 4652 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588537 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588563 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588553043 +0000 UTC m=+32.976721569 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : object "openshift-service-ca"/"signing-key" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588588 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588575314 +0000 UTC m=+32.976743840 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588590 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/pprof-cert: object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588608 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588626 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588616215 +0000 UTC m=+32.976784731 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "profile-collector-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"pprof-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588647 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588636105 +0000 UTC m=+32.976804641 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588682 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588714 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588743 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588775 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588782 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-web: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588785 4652 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588810 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588819 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588826 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" 
failed. No retries permitted until 2026-02-16 17:24:35.58881621 +0000 UTC m=+32.976984906 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588852 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588843801 +0000 UTC m=+32.977012337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588861 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-grpc-tls-4vdvea1506oin: object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588869 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588861881 +0000 UTC m=+32.977030417 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"mco-proxy-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588870 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588893 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588883522 +0000 UTC m=+32.977052118 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.588908 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.588899412 +0000 UTC m=+32.977068128 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588932 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.588962 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589014 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589064 4652 configmap.go:193] Couldn't get configMap openshift-console/trusted-ca-bundle: object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589101 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589091957 +0000 UTC m=+32.977260493 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"trusted-ca-bundle" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589120 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589165 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589171 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589193 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589207 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.58919821 +0000 UTC m=+32.977366916 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589218 4652 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-trusted-ca-bundle: object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589266 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589242681 +0000 UTC m=+32.977411217 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589282 4652 secret.go:189] Couldn't get secret openshift-monitoring/monitoring-plugin-cert: object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589314 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert podName:54fba066-0e9e-49f6-8a86-34d5b4b660df nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589305853 +0000 UTC m=+32.977474369 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "monitoring-plugin-cert" (UniqueName: "kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert") pod "monitoring-plugin-555857f695-nlrnr" (UID: "54fba066-0e9e-49f6-8a86-34d5b4b660df") : object "openshift-monitoring"/"monitoring-plugin-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589311 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589324 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/alertmanager-trusted-ca-bundle: object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589363 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589386 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589379345 +0000 UTC m=+32.977547861 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-trusted-ca-bundle" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589388 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589349 4652 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589428 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config podName:55d635cd-1f0d-4086-96f2-9f3524f3f18c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589418526 +0000 UTC m=+32.977587252 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7cc9598d54-8j5rk" (UID: "55d635cd-1f0d-4086-96f2-9f3524f3f18c") : object "openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589430 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589446 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589439047 +0000 UTC m=+32.977607783 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"serving-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589426 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589450 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemeter-client-serving-certs-ca-bundle: object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589469 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589459337 +0000 UTC m=+32.977627863 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589521 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589560 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589588 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589587 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589614 4652 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.58958834 +0000 UTC m=+32.977756976 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589631 4652 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589643 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589636032 +0000 UTC m=+32.977804548 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589669 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/kube-root-ca.crt: object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589674 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589685 4652 projected.go:288] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589695 4652 projected.go:194] Error preparing data for projected volume kube-api-access-dptnc for pod openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9: [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589710 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589722 4652 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589713684 +0000 UTC m=+32.977882380 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-dptnc" (UniqueName: "kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : [object "openshift-controller-manager-operator"/"kube-root-ca.crt" not registered, object "openshift-controller-manager-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589738 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589729724 +0000 UTC m=+32.977898320 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"serving-cert" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589752 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/kube-root-ca.crt: object "openshift-dns-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589754 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589763 4652 projected.go:288] Couldn't get configMap openshift-dns-operator/openshift-service-ca.crt: object "openshift-dns-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589776 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t4gl5 for pod openshift-dns-operator/dns-operator-86b8869b79-nhxlp: [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589786 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589779246 +0000 UTC m=+32.977947762 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"client-ca" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: I0216 17:24:19.589784 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589799 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5 podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589792466 +0000 UTC m=+32.977960982 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-t4gl5" (UniqueName: "kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : [object "openshift-dns-operator"/"kube-root-ca.crt" not registered, object "openshift-dns-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589826 4652 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:19.591625 master-0 kubenswrapper[4652]: E0216 17:24:19.589927 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls podName:d9859457-f0d1-4754-a6c5-cf05d5abf447 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.589917529 +0000 UTC m=+32.978086235 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls") pod "dns-operator-86b8869b79-nhxlp" (UID: "d9859457-f0d1-4754-a6c5-cf05d5abf447") : object "openshift-dns-operator"/"metrics-tls" not registered Feb 16 17:24:19.692534 master-0 kubenswrapper[4652]: I0216 17:24:19.692468 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:19.692732 master-0 kubenswrapper[4652]: I0216 17:24:19.692560 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:19.692858 master-0 kubenswrapper[4652]: E0216 17:24:19.692798 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-tls: object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:19.692905 master-0 kubenswrapper[4652]: E0216 17:24:19.692859 4652 configmap.go:193] Couldn't get configMap openshift-console/console-config: object "openshift-console"/"console-config" not registered Feb 16 17:24:19.692947 master-0 kubenswrapper[4652]: E0216 17:24:19.692935 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config podName:ed3d89d0-bc00-482e-a656-7fdf4646ab0a nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.692910464 +0000 UTC m=+33.081079020 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "console-config" (UniqueName: "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config") pod "console-599b567ff7-nrcpr" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a") : object "openshift-console"/"console-config" not registered Feb 16 17:24:19.692987 master-0 kubenswrapper[4652]: E0216 17:24:19.692963 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.692951285 +0000 UTC m=+33.081119841 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-thanos-querier-tls" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-tls" not registered Feb 16 17:24:19.693062 master-0 kubenswrapper[4652]: I0216 17:24:19.693009 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:19.693117 master-0 kubenswrapper[4652]: I0216 17:24:19.693056 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:19.693117 master-0 kubenswrapper[4652]: I0216 17:24:19.693097 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:19.693204 master-0 kubenswrapper[4652]: I0216 17:24:19.693139 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.693270 master-0 kubenswrapper[4652]: I0216 17:24:19.693199 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:19.693270 master-0 kubenswrapper[4652]: E0216 17:24:19.693142 4652 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:19.693361 master-0 kubenswrapper[4652]: I0216 17:24:19.693241 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:19.693361 master-0 kubenswrapper[4652]: E0216 17:24:19.693215 4652 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:19.693460 master-0 
kubenswrapper[4652]: E0216 17:24:19.693360 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.693314865 +0000 UTC m=+33.081483381 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" not registered Feb 16 17:24:19.693460 master-0 kubenswrapper[4652]: E0216 17:24:19.693398 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert podName:6f44170a-3c1c-4944-b971-251f75a51fc3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.693380856 +0000 UTC m=+33.081549372 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert") pod "networking-console-plugin-bd6d6f87f-jhjct" (UID: "6f44170a-3c1c-4944-b971-251f75a51fc3") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 17:24:19.693460 master-0 kubenswrapper[4652]: E0216 17:24:19.693412 4652 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:19.693563 master-0 kubenswrapper[4652]: I0216 17:24:19.693453 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:19.693563 master-0 kubenswrapper[4652]: E0216 17:24:19.693467 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs podName:0d980a9a-2574-41b9-b970-0718cd97c8cd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.693455028 +0000 UTC m=+33.081623794 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs") pod "multus-admission-controller-6d678b8d67-5n9cl" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd") : object "openshift-multus"/"multus-admission-controller-secret" not registered Feb 16 17:24:19.693563 master-0 kubenswrapper[4652]: E0216 17:24:19.693335 4652 projected.go:288] Couldn't get configMap openshift-kube-controller-manager-operator/kube-root-ca.crt: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.693563 master-0 kubenswrapper[4652]: E0216 17:24:19.693554 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k: object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.693674 master-0 kubenswrapper[4652]: E0216 17:24:19.693612 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access podName:442600dc-09b2-4fee-9f89-777296b2ee40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.693593892 +0000 UTC m=+33.081762438 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access") pod "kube-controller-manager-operator-78ff47c7c5-txr5k" (UID: "442600dc-09b2-4fee-9f89-777296b2ee40") : object "openshift-kube-controller-manager-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.693674 master-0 kubenswrapper[4652]: E0216 17:24:19.693345 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/trusted-ca-bundle: object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:19.693736 master-0 kubenswrapper[4652]: E0216 17:24:19.693688 4652 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:19.693811 master-0 kubenswrapper[4652]: E0216 17:24:19.693698 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.693685405 +0000 UTC m=+33.081853921 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:19.693876 master-0 kubenswrapper[4652]: E0216 17:24:19.693223 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-grpc-tls-6nhmo5tgfmegb: object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:19.693916 master-0 kubenswrapper[4652]: I0216 17:24:19.693866 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:19.693975 master-0 kubenswrapper[4652]: E0216 17:24:19.693916 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.69389519 +0000 UTC m=+33.082063916 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"image-registry-operator-tls" not registered Feb 16 17:24:19.693975 master-0 kubenswrapper[4652]: E0216 17:24:19.693943 4652 projected.go:288] Couldn't get configMap openshift-config-operator/kube-root-ca.crt: object "openshift-config-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.693975 master-0 kubenswrapper[4652]: E0216 17:24:19.693952 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.693939141 +0000 UTC m=+33.082107937 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-grpc-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" not registered Feb 16 17:24:19.693975 master-0 kubenswrapper[4652]: E0216 17:24:19.693959 4652 projected.go:288] Couldn't get configMap openshift-config-operator/openshift-service-ca.crt: object "openshift-config-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.693975 master-0 kubenswrapper[4652]: E0216 17:24:19.693982 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xr8t6 for pod openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8: [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.694238 master-0 kubenswrapper[4652]: I0216 17:24:19.694042 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:19.694238 master-0 kubenswrapper[4652]: E0216 17:24:19.694063 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/console-operator-config: object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:19.694238 master-0 kubenswrapper[4652]: I0216 17:24:19.694109 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:19.694238 master-0 kubenswrapper[4652]: E0216 17:24:19.694152 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6 podName:e69d8c51-e2a6-4f61-9c26-072784f6cf40 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.694136817 +0000 UTC m=+33.082305333 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-xr8t6" (UniqueName: "kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6") pod "openshift-config-operator-7c6bdb986f-v8dr8" (UID: "e69d8c51-e2a6-4f61-9c26-072784f6cf40") : [object "openshift-config-operator"/"kube-root-ca.crt" not registered, object "openshift-config-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.694238 master-0 kubenswrapper[4652]: E0216 17:24:19.694167 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.694161387 +0000 UTC m=+33.082329903 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"console-operator-config" not registered Feb 16 17:24:19.694412 master-0 kubenswrapper[4652]: I0216 17:24:19.694278 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:19.694412 master-0 kubenswrapper[4652]: E0216 17:24:19.694312 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:19.694412 master-0 kubenswrapper[4652]: E0216 17:24:19.694356 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.694412 master-0 kubenswrapper[4652]: E0216 17:24:19.694379 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-service-ca-bundle: object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:19.694412 master-0 kubenswrapper[4652]: E0216 17:24:19.694385 4652 projected.go:194] Error preparing data for projected volume kube-api-access-57xvt for pod openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.694412 master-0 kubenswrapper[4652]: E0216 17:24:19.694413 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.694401724 +0000 UTC m=+33.082570460 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etcd-service-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-service-ca-bundle" not registered Feb 16 17:24:19.694587 master-0 kubenswrapper[4652]: I0216 17:24:19.694320 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:19.694587 master-0 kubenswrapper[4652]: E0216 17:24:19.694387 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/kube-root-ca.crt: object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.694587 master-0 kubenswrapper[4652]: E0216 17:24:19.694486 4652 projected.go:288] Couldn't get configMap openshift-cloud-credential-operator/openshift-service-ca.crt: object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.695059 master-0 kubenswrapper[4652]: E0216 17:24:19.695015 4652 projected.go:194] Error preparing data for projected volume kube-api-access-zdxgd for pod openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq: [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.702317 master-0 kubenswrapper[4652]: E0216 17:24:19.694478 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt podName:e73ee493-de15-44c2-bd51-e12fcbb27a15 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.694444315 +0000 UTC m=+33.082613001 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-57xvt" (UniqueName: "kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt") pod "packageserver-6d5d8c8c95-kzfjw" (UID: "e73ee493-de15-44c2-bd51-e12fcbb27a15") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.702317 master-0 kubenswrapper[4652]: I0216 17:24:19.701984 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:19.702317 master-0 kubenswrapper[4652]: E0216 17:24:19.702084 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.702012186 +0000 UTC m=+33.090180732 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zdxgd" (UniqueName: "kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : [object "openshift-cloud-credential-operator"/"kube-root-ca.crt" not registered, object "openshift-cloud-credential-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.702317 master-0 kubenswrapper[4652]: I0216 17:24:19.702175 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:19.702317 master-0 kubenswrapper[4652]: E0216 17:24:19.702261 4652 configmap.go:193] Couldn't get configMap openshift-kube-apiserver-operator/kube-apiserver-operator-config: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:19.702317 master-0 kubenswrapper[4652]: I0216 17:24:19.702282 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:19.702317 master-0 kubenswrapper[4652]: E0216 17:24:19.702327 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.702307604 +0000 UTC m=+33.090476120 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" not registered Feb 16 17:24:19.702721 master-0 kubenswrapper[4652]: I0216 17:24:19.702365 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:19.702721 master-0 kubenswrapper[4652]: I0216 17:24:19.702406 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.702721 master-0 kubenswrapper[4652]: I0216 17:24:19.702451 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:19.702721 master-0 kubenswrapper[4652]: E0216 17:24:19.702456 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:19.702721 master-0 kubenswrapper[4652]: I0216 17:24:19.702487 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:19.702721 master-0 kubenswrapper[4652]: E0216 17:24:19.702523 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.702507559 +0000 UTC m=+33.090676085 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-webhook-server-cert" not registered Feb 16 17:24:19.702721 master-0 kubenswrapper[4652]: E0216 17:24:19.702660 4652 secret.go:189] Couldn't get secret openshift-monitoring/telemeter-client: object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:19.702721 master-0 kubenswrapper[4652]: E0216 17:24:19.702678 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:19.702961 master-0 kubenswrapper[4652]: E0216 17:24:19.702747 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.702723785 +0000 UTC m=+33.090892341 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-telemeter-client" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"telemeter-client" not registered Feb 16 17:24:19.702961 master-0 kubenswrapper[4652]: E0216 17:24:19.702778 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.702763446 +0000 UTC m=+33.090931992 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" not registered Feb 16 17:24:19.702961 master-0 kubenswrapper[4652]: E0216 17:24:19.702888 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-login: object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:19.702961 master-0 kubenswrapper[4652]: I0216 17:24:19.702938 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:19.703080 master-0 kubenswrapper[4652]: I0216 17:24:19.702976 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:19.703179 master-0 kubenswrapper[4652]: I0216 17:24:19.703113 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:19.703179 master-0 kubenswrapper[4652]: E0216 17:24:19.703137 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:19.703265 master-0 kubenswrapper[4652]: E0216 17:24:19.703184 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:19.703265 master-0 kubenswrapper[4652]: E0216 17:24:19.703216 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.703179687 +0000 UTC m=+33.091348243 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"trusted-ca-bundle" not registered Feb 16 17:24:19.703334 master-0 kubenswrapper[4652]: E0216 17:24:19.703276 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:35.703232828 +0000 UTC m=+33.091401384 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-login" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-login" not registered Feb 16 17:24:19.703378 master-0 kubenswrapper[4652]: E0216 17:24:19.703347 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.703410 master-0 kubenswrapper[4652]: I0216 17:24:19.703368 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:19.703453 master-0 kubenswrapper[4652]: E0216 17:24:19.703372 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.703453 master-0 kubenswrapper[4652]: E0216 17:24:19.703426 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-client: object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:19.703510 master-0 kubenswrapper[4652]: E0216 17:24:19.703442 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.703407703 +0000 UTC m=+33.091576239 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"baremetal-kube-rbac-proxy" not registered Feb 16 17:24:19.703510 master-0 kubenswrapper[4652]: E0216 17:24:19.703443 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: object "openshift-controller-manager"/"config" not registered Feb 16 17:24:19.703582 master-0 kubenswrapper[4652]: I0216 17:24:19.703509 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:19.703582 master-0 kubenswrapper[4652]: E0216 17:24:19.703444 4652 projected.go:194] Error preparing data for projected volume kube-api-access-pmbll for pod openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.703646 master-0 kubenswrapper[4652]: E0216 17:24:19.703531 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.703503035 +0000 UTC m=+33.091671581 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-client" not registered Feb 16 17:24:19.703646 master-0 kubenswrapper[4652]: E0216 17:24:19.703639 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.703602678 +0000 UTC m=+33.091771214 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"config" not registered Feb 16 17:24:19.703718 master-0 kubenswrapper[4652]: E0216 17:24:19.703675 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.70366087 +0000 UTC m=+33.091829396 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pmbll" (UniqueName: "kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.703752 master-0 kubenswrapper[4652]: E0216 17:24:19.703706 4652 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:19.703752 master-0 kubenswrapper[4652]: I0216 17:24:19.703731 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:19.703825 master-0 kubenswrapper[4652]: E0216 17:24:19.703769 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.703753282 +0000 UTC m=+33.091921818 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"serving-cert" not registered Feb 16 17:24:19.703858 master-0 kubenswrapper[4652]: I0216 17:24:19.703815 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:19.703934 master-0 kubenswrapper[4652]: E0216 17:24:19.703894 4652 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:19.703934 master-0 kubenswrapper[4652]: I0216 17:24:19.703910 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.704186 master-0 kubenswrapper[4652]: I0216 17:24:19.703969 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.704260 master-0 kubenswrapper[4652]: E0216 17:24:19.703973 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:35.703955407 +0000 UTC m=+33.092123943 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"config" not registered Feb 16 17:24:19.704260 master-0 kubenswrapper[4652]: E0216 17:24:19.704179 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:19.704369 master-0 kubenswrapper[4652]: E0216 17:24:19.704037 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:19.704415 master-0 kubenswrapper[4652]: E0216 17:24:19.704361 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.704331137 +0000 UTC m=+33.092499703 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:19.704415 master-0 kubenswrapper[4652]: E0216 17:24:19.704395 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-web-config: object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:19.704493 master-0 kubenswrapper[4652]: E0216 17:24:19.704428 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config podName:2d1636c0-f34d-444c-822d-77f1d203ddc4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.704408449 +0000 UTC m=+33.092576975 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-7485d645b8-zxxwd" (UID: "2d1636c0-f34d-444c-822d-77f1d203ddc4") : object "openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" not registered Feb 16 17:24:19.704493 master-0 kubenswrapper[4652]: E0216 17:24:19.704444 4652 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:19.704575 master-0 kubenswrapper[4652]: E0216 17:24:19.704504 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.704450971 +0000 UTC m=+33.092619507 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-web-config" not registered Feb 16 17:24:19.704621 master-0 kubenswrapper[4652]: E0216 17:24:19.704573 4652 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:19.704657 master-0 kubenswrapper[4652]: E0216 17:24:19.704617 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert podName:5a275679-b7b6-4c28-b389-94cd2b014d6c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.704580204 +0000 UTC m=+33.092748730 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-75b869db96-twmsp" (UID: "5a275679-b7b6-4c28-b389-94cd2b014d6c") : object "openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" not registered Feb 16 17:24:19.704768 master-0 kubenswrapper[4652]: I0216 17:24:19.704575 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:19.705527 master-0 kubenswrapper[4652]: E0216 17:24:19.705493 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls podName:642e5115-b7f2-4561-bc6b-1a74b6d891c4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.704959994 +0000 UTC m=+33.093128530 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-d8bf84b88-m66tx" (UID: "642e5115-b7f2-4561-bc6b-1a74b6d891c4") : object "openshift-machine-api"/"control-plane-machine-set-operator-tls" not registered Feb 16 17:24:19.705596 master-0 kubenswrapper[4652]: I0216 17:24:19.705537 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:19.705596 master-0 kubenswrapper[4652]: I0216 17:24:19.705575 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:19.705675 master-0 kubenswrapper[4652]: I0216 17:24:19.705610 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:19.705675 master-0 kubenswrapper[4652]: I0216 17:24:19.705646 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:19.705761 master-0 kubenswrapper[4652]: I0216 17:24:19.705679 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:19.705761 master-0 kubenswrapper[4652]: I0216 17:24:19.705709 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:19.705761 master-0 kubenswrapper[4652]: I0216 17:24:19.705739 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:19.705872 master-0 kubenswrapper[4652]: E0216 
17:24:19.705808 4652 configmap.go:193] Couldn't get configMap openshift-etcd-operator/etcd-ca-bundle: object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:19.705913 master-0 kubenswrapper[4652]: E0216 17:24:19.705883 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:19.705960 master-0 kubenswrapper[4652]: E0216 17:24:19.705928 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/telemetry-config: object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:19.706001 master-0 kubenswrapper[4652]: E0216 17:24:19.705994 4652 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:19.706107 master-0 kubenswrapper[4652]: E0216 17:24:19.706078 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: object "openshift-ingress-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.706107 master-0 kubenswrapper[4652]: E0216 17:24:19.706099 4652 projected.go:288] Couldn't get configMap openshift-ingress-operator/openshift-service-ca.crt: object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.706189 master-0 kubenswrapper[4652]: E0216 17:24:19.706110 4652 projected.go:194] Error preparing data for projected volume kube-api-access-t24jh for pod openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d: [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.706189 master-0 kubenswrapper[4652]: E0216 17:24:19.706168 4652 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:19.706293 master-0 kubenswrapper[4652]: E0216 17:24:19.706240 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:19.706340 master-0 kubenswrapper[4652]: I0216 17:24:19.706296 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:19.706340 master-0 kubenswrapper[4652]: I0216 17:24:19.706333 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:19.706423 master-0 kubenswrapper[4652]: I0216 17:24:19.706365 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 
17:24:19.706423 master-0 kubenswrapper[4652]: I0216 17:24:19.706399 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:19.706511 master-0 kubenswrapper[4652]: I0216 17:24:19.706429 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.706557 master-0 kubenswrapper[4652]: E0216 17:24:19.706521 4652 projected.go:288] Couldn't get configMap openshift-console/kube-root-ca.crt: object "openshift-console"/"kube-root-ca.crt" not registered Feb 16 17:24:19.706557 master-0 kubenswrapper[4652]: E0216 17:24:19.706536 4652 projected.go:288] Couldn't get configMap openshift-console/openshift-service-ca.crt: object "openshift-console"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.706557 master-0 kubenswrapper[4652]: E0216 17:24:19.706552 4652 projected.go:194] Error preparing data for projected volume kube-api-access-p6xfw for pod openshift-console/downloads-dcd7b7d95-dhhfh: [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.706670 master-0 kubenswrapper[4652]: E0216 17:24:19.706605 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw podName:08a90dc5-b0d8-4aad-a002-736492b6c1a9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.706588737 +0000 UTC m=+33.094757263 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-p6xfw" (UniqueName: "kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw") pod "downloads-dcd7b7d95-dhhfh" (UID: "08a90dc5-b0d8-4aad-a002-736492b6c1a9") : [object "openshift-console"/"kube-root-ca.crt" not registered, object "openshift-console"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.706721 master-0 kubenswrapper[4652]: E0216 17:24:19.706674 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.706646299 +0000 UTC m=+33.094814825 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etcd-ca" (UniqueName: "kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-ca-bundle" not registered Feb 16 17:24:19.706721 master-0 kubenswrapper[4652]: E0216 17:24:19.706694 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.70668558 +0000 UTC m=+33.094854106 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"openshift-global-ca" not registered Feb 16 17:24:19.706721 master-0 kubenswrapper[4652]: E0216 17:24:19.706714 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.70670489 +0000 UTC m=+33.094873416 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"cluster-monitoring-operator-tls" not registered Feb 16 17:24:19.706845 master-0 kubenswrapper[4652]: E0216 17:24:19.706732 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.706723271 +0000 UTC m=+33.094891837 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-operator-metrics" not registered Feb 16 17:24:19.706845 master-0 kubenswrapper[4652]: E0216 17:24:19.706804 4652 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:19.706845 master-0 kubenswrapper[4652]: E0216 17:24:19.706841 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs podName:8e90be63-ff6c-4e9e-8b9e-1ad9cf941845 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.706830924 +0000 UTC m=+33.094999460 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs") pod "catalogd-controller-manager-67bc7c997f-mn6cr" (UID: "8e90be63-ff6c-4e9e-8b9e-1ad9cf941845") : object "openshift-catalogd"/"catalogserver-cert" not registered Feb 16 17:24:19.706970 master-0 kubenswrapper[4652]: E0216 17:24:19.706885 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:19.706970 master-0 kubenswrapper[4652]: E0216 17:24:19.706921 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.706909546 +0000 UTC m=+33.095078082 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kubelet-serving-ca-bundle" not registered Feb 16 17:24:19.707057 master-0 kubenswrapper[4652]: E0216 17:24:19.706986 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:19.707057 master-0 kubenswrapper[4652]: E0216 17:24:19.707024 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.707012168 +0000 UTC m=+33.095180695 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy" not registered Feb 16 17:24:19.707127 master-0 kubenswrapper[4652]: E0216 17:24:19.707084 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-thanos-prometheus-http-client-file: object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:19.707127 master-0 kubenswrapper[4652]: E0216 17:24:19.707119 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.707107911 +0000 UTC m=+33.095276447 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" not registered Feb 16 17:24:19.707184 master-0 kubenswrapper[4652]: E0216 17:24:19.707151 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config podName:e10d0b0c-4c2a-45b3-8d69-3070d566b97d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.707140712 +0000 UTC m=+33.095309238 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "telemetry-config" (UniqueName: "kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config") pod "cluster-monitoring-operator-756d64c8c4-ln4wm" (UID: "e10d0b0c-4c2a-45b3-8d69-3070d566b97d") : object "openshift-monitoring"/"telemetry-config" not registered Feb 16 17:24:19.707220 master-0 kubenswrapper[4652]: E0216 17:24:19.707191 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh podName:9609a4f3-b947-47af-a685-baae26c50fa3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.707180723 +0000 UTC m=+33.095349249 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-t24jh" (UniqueName: "kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh") pod "ingress-operator-c588d8cb4-wjr7d" (UID: "9609a4f3-b947-47af-a685-baae26c50fa3") : [object "openshift-ingress-operator"/"kube-root-ca.crt" not registered, object "openshift-ingress-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.707220 master-0 kubenswrapper[4652]: E0216 17:24:19.707207 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.707311 master-0 kubenswrapper[4652]: E0216 17:24:19.707222 4652 projected.go:194] Error preparing data for projected volume kube-api-access-v2s8l for pod openshift-network-diagnostics/network-check-target-vwvwx: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.707311 master-0 kubenswrapper[4652]: E0216 17:24:19.707281 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l podName:c303189e-adae-4fe2-8dd7-cc9b80f73e66 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.707245585 +0000 UTC m=+33.095414121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-v2s8l" (UniqueName: "kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l") pod "network-check-target-vwvwx" (UID: "c303189e-adae-4fe2-8dd7-cc9b80f73e66") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.707311 master-0 kubenswrapper[4652]: I0216 17:24:19.707298 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:19.707425 master-0 kubenswrapper[4652]: I0216 17:24:19.707345 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:19.707425 master-0 kubenswrapper[4652]: I0216 17:24:19.707397 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:19.707502 master-0 kubenswrapper[4652]: I0216 17:24:19.707475 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:19.707533 master-0 
kubenswrapper[4652]: I0216 17:24:19.707517 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:19.707591 master-0 kubenswrapper[4652]: I0216 17:24:19.707555 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.707710 master-0 kubenswrapper[4652]: E0216 17:24:19.707683 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-tls: object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:19.707752 master-0 kubenswrapper[4652]: E0216 17:24:19.707744 4652 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:19.707822 master-0 kubenswrapper[4652]: E0216 17:24:19.707798 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:19.707880 master-0 kubenswrapper[4652]: E0216 17:24:19.707864 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-generated: object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:19.707936 master-0 kubenswrapper[4652]: E0216 17:24:19.707921 4652 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:19.707979 master-0 kubenswrapper[4652]: E0216 17:24:19.707973 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-tls: object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:19.708043 master-0 kubenswrapper[4652]: I0216 17:24:19.708025 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.708079 master-0 kubenswrapper[4652]: I0216 17:24:19.708066 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:19.708147 master-0 kubenswrapper[4652]: I0216 17:24:19.708100 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: 
\"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:19.708180 master-0 kubenswrapper[4652]: I0216 17:24:19.708149 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:19.708302 master-0 kubenswrapper[4652]: E0216 17:24:19.708284 4652 secret.go:189] Couldn't get secret openshift-cluster-olm-operator/cluster-olm-operator-serving-cert: object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:19.708424 master-0 kubenswrapper[4652]: E0216 17:24:19.708391 4652 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:19.708465 master-0 kubenswrapper[4652]: E0216 17:24:19.708446 4652 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:19.708509 master-0 kubenswrapper[4652]: I0216 17:24:19.708491 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:19.708542 master-0 kubenswrapper[4652]: I0216 17:24:19.708529 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:19.708628 master-0 kubenswrapper[4652]: E0216 17:24:19.708614 4652 configmap.go:193] Couldn't get configMap openshift-image-registry/trusted-ca: object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:19.708664 master-0 kubenswrapper[4652]: E0216 17:24:19.708648 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca podName:5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.708637962 +0000 UTC m=+33.096806488 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca") pod "cluster-image-registry-operator-96c8c64b8-zwwnk" (UID: "5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd") : object "openshift-image-registry"/"trusted-ca" not registered Feb 16 17:24:19.708727 master-0 kubenswrapper[4652]: E0216 17:24:19.708709 4652 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:19.708823 master-0 kubenswrapper[4652]: I0216 17:24:19.708781 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.708877 master-0 kubenswrapper[4652]: I0216 17:24:19.708849 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:19.708911 master-0 kubenswrapper[4652]: I0216 17:24:19.708893 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:19.708968 master-0 kubenswrapper[4652]: E0216 17:24:19.708944 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert podName:4e51bba5-0ebe-4e55-a588-38b71548c605 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.708900979 +0000 UTC m=+33.097069715 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-olm-operator-serving-cert" (UniqueName: "kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert") pod "cluster-olm-operator-55b69c6c48-7chjv" (UID: "4e51bba5-0ebe-4e55-a588-38b71548c605") : object "openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" not registered Feb 16 17:24:19.709009 master-0 kubenswrapper[4652]: E0216 17:24:19.708995 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls podName:5192fa49-d81c-47ce-b2ab-f90996cc0bd5 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.708972841 +0000 UTC m=+33.097141547 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls") pod "cluster-node-tuning-operator-ff6c9b66-6j4ts" (UID: "5192fa49-d81c-47ce-b2ab-f90996cc0bd5") : object "openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" not registered Feb 16 17:24:19.709042 master-0 kubenswrapper[4652]: E0216 17:24:19.709018 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca podName:74b2561b-933b-4c58-a63a-7a8c671d0ae9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.709006881 +0000 UTC m=+33.097175637 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca") pod "marketplace-operator-6cc5b65c6b-s4gp2" (UID: "74b2561b-933b-4c58-a63a-7a8c671d0ae9") : object "openshift-marketplace"/"marketplace-trusted-ca" not registered Feb 16 17:24:19.709042 master-0 kubenswrapper[4652]: E0216 17:24:19.709040 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.709027712 +0000 UTC m=+33.097196438 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"serving-cert" not registered Feb 16 17:24:19.709102 master-0 kubenswrapper[4652]: E0216 17:24:19.709058 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.709049073 +0000 UTC m=+33.097217789 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-generated" not registered Feb 16 17:24:19.709102 master-0 kubenswrapper[4652]: E0216 17:24:19.709071 4652 secret.go:189] Couldn't get secret openshift-monitoring/kube-rbac-proxy: object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:19.709102 master-0 kubenswrapper[4652]: E0216 17:24:19.708357 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-ocp-branding-template: object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:19.709189 master-0 kubenswrapper[4652]: E0216 17:24:19.709143 4652 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:19.709281 master-0 kubenswrapper[4652]: E0216 17:24:19.709078 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert podName:7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:24:35.709067953 +0000 UTC m=+33.097236679 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-595c8f9ff-b9nvq" (UID: "7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4") : object "openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" not registered Feb 16 17:24:19.709318 master-0 kubenswrapper[4652]: E0216 17:24:19.709242 4652 secret.go:189] Couldn't get secret openshift-apiserver-operator/openshift-apiserver-operator-serving-cert: object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:19.709318 master-0 kubenswrapper[4652]: E0216 17:24:19.709306 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.709283979 +0000 UTC m=+33.097452635 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-tls" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-tls" not registered Feb 16 17:24:19.709386 master-0 kubenswrapper[4652]: E0216 17:24:19.709330 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config podName:29402454-a920-471e-895e-764235d16eb4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.70931813 +0000 UTC m=+33.097486656 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config") pod "service-ca-operator-5dc4688546-pl7r5" (UID: "29402454-a920-471e-895e-764235d16eb4") : object "openshift-service-ca-operator"/"service-ca-operator-config" not registered Feb 16 17:24:19.709386 master-0 kubenswrapper[4652]: E0216 17:24:19.709354 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.70934358 +0000 UTC m=+33.097512306 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"service-ca-bundle" not registered Feb 16 17:24:19.709386 master-0 kubenswrapper[4652]: E0216 17:24:19.709382 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.709371621 +0000 UTC m=+33.097540367 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-main-tls" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-tls" not registered Feb 16 17:24:19.709479 master-0 kubenswrapper[4652]: I0216 17:24:19.709428 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:19.709518 master-0 kubenswrapper[4652]: E0216 17:24:19.709484 4652 secret.go:189] Couldn't get secret openshift-monitoring/federate-client-certs: object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:19.709560 master-0 kubenswrapper[4652]: E0216 17:24:19.709534 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls podName:06067627-6ccf-4cc8-bd20-dabdd776bb46 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.709519505 +0000 UTC m=+33.097688191 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "federate-client-tls" (UniqueName: "kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls") pod "telemeter-client-6bbd87b65b-mt2mz" (UID: "06067627-6ccf-4cc8-bd20-dabdd776bb46") : object "openshift-monitoring"/"federate-client-certs" not registered Feb 16 17:24:19.709560 master-0 kubenswrapper[4652]: E0216 17:24:19.709535 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:19.709648 master-0 kubenswrapper[4652]: I0216 17:24:19.709486 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:19.709978 master-0 kubenswrapper[4652]: E0216 17:24:19.709946 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.709545186 +0000 UTC m=+33.097713872 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"kube-rbac-proxy" not registered Feb 16 17:24:19.710038 master-0 kubenswrapper[4652]: E0216 17:24:19.709993 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca podName:e1a7c783-2e23-4284-b648-147984cf1022 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.709978007 +0000 UTC m=+33.098146703 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca") pod "controller-manager-7fc9897cf8-9rjwd" (UID: "e1a7c783-2e23-4284-b648-147984cf1022") : object "openshift-controller-manager"/"client-ca" not registered Feb 16 17:24:19.710038 master-0 kubenswrapper[4652]: E0216 17:24:19.710021 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert podName:737fcc7d-d850-4352-9f17-383c85d5bc28 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.710010758 +0000 UTC m=+33.098179474 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert") pod "openshift-apiserver-operator-6d4655d9cf-qhn9v" (UID: "737fcc7d-d850-4352-9f17-383c85d5bc28") : object "openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" not registered Feb 16 17:24:19.710118 master-0 kubenswrapper[4652]: E0216 17:24:19.710049 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.710037839 +0000 UTC m=+33.098206565 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-audit-profiles" not registered Feb 16 17:24:19.710118 master-0 kubenswrapper[4652]: I0216 17:24:19.710095 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:19.710185 master-0 kubenswrapper[4652]: I0216 17:24:19.710146 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:19.710216 master-0 kubenswrapper[4652]: E0216 17:24:19.710164 4652 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:19.710298 master-0 kubenswrapper[4652]: I0216 17:24:19.710194 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:19.710298 master-0 kubenswrapper[4652]: E0216 17:24:19.710276 4652 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.710192053 +0000 UTC m=+33.098360569 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-ocp-branding-template" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-system-ocp-branding-template" not registered Feb 16 17:24:19.710369 master-0 kubenswrapper[4652]: E0216 17:24:19.710295 4652 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:19.710369 master-0 kubenswrapper[4652]: E0216 17:24:19.710341 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.710326446 +0000 UTC m=+33.098494972 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : object "openshift-machine-api"/"machine-api-operator-tls" not registered Feb 16 17:24:19.710369 master-0 kubenswrapper[4652]: E0216 17:24:19.710368 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.710356887 +0000 UTC m=+33.098525623 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"openshift-insights-serving-cert" not registered Feb 16 17:24:19.710499 master-0 kubenswrapper[4652]: I0216 17:24:19.710462 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.710535 master-0 kubenswrapper[4652]: I0216 17:24:19.710517 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:19.710622 master-0 kubenswrapper[4652]: E0216 17:24:19.710595 4652 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/audit-1: object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:19.710659 master-0 kubenswrapper[4652]: I0216 17:24:19.710634 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:19.710690 master-0 kubenswrapper[4652]: E0216 17:24:19.710649 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.710634815 +0000 UTC m=+33.098803361 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit-policies" (UniqueName: "kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"audit-1" not registered Feb 16 17:24:19.710743 master-0 kubenswrapper[4652]: I0216 17:24:19.710720 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:19.710779 master-0 kubenswrapper[4652]: E0216 17:24:19.710742 4652 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:19.710820 master-0 kubenswrapper[4652]: I0216 17:24:19.710777 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:19.710820 master-0 kubenswrapper[4652]: E0216 17:24:19.710786 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images podName:4488757c-f0fd-48fa-a3f9-6373b0bcafe4 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.710774058 +0000 UTC m=+33.098942784 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images") pod "cluster-baremetal-operator-7bc947fc7d-4j7pn" (UID: "4488757c-f0fd-48fa-a3f9-6373b0bcafe4") : object "openshift-machine-api"/"cluster-baremetal-operator-images" not registered Feb 16 17:24:19.710820 master-0 kubenswrapper[4652]: E0216 17:24:19.710601 4652 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-user-template-error: object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:19.710933 master-0 kubenswrapper[4652]: E0216 17:24:19.710852 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error podName:2be9d55c-a4ec-48cd-93d2-0a1dced745a8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.71084015 +0000 UTC m=+33.099008686 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-user-template-error" (UniqueName: "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error") pod "oauth-openshift-64f85b8fc9-n9msn" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8") : object "openshift-authentication"/"v4-0-config-user-template-error" not registered Feb 16 17:24:19.711595 master-0 kubenswrapper[4652]: E0216 17:24:19.711559 4652 secret.go:189] Couldn't get secret openshift-etcd-operator/etcd-operator-serving-cert: object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:19.711660 master-0 kubenswrapper[4652]: E0216 17:24:19.711650 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert podName:6b3e071c-1c62-489b-91c1-aef0d197f40b nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.711637211 +0000 UTC m=+33.099805717 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert") pod "etcd-operator-67bf55ccdd-cppj8" (UID: "6b3e071c-1c62-489b-91c1-aef0d197f40b") : object "openshift-etcd-operator"/"etcd-operator-serving-cert" not registered Feb 16 17:24:19.711739 master-0 kubenswrapper[4652]: E0216 17:24:19.711693 4652 secret.go:189] Couldn't get secret openshift-kube-apiserver-operator/kube-apiserver-operator-serving-cert: object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:19.711905 master-0 kubenswrapper[4652]: E0216 17:24:19.711870 4652 secret.go:189] Couldn't get secret openshift-controller-manager-operator/openshift-controller-manager-operator-serving-cert: object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:19.712028 master-0 kubenswrapper[4652]: I0216 17:24:19.711995 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:19.712077 master-0 kubenswrapper[4652]: I0216 17:24:19.712046 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:19.712128 master-0 kubenswrapper[4652]: E0216 17:24:19.712091 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy-web: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:19.712128 master-0 kubenswrapper[4652]: E0216 17:24:19.712117 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.712092613 +0000 UTC m=+33.100261129 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" not registered Feb 16 17:24:19.712221 master-0 kubenswrapper[4652]: E0216 17:24:19.712137 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert podName:edbaac23-11f0-4bc7-a7ce-b593c774c0fa nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.712131904 +0000 UTC m=+33.100300420 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert") pod "openshift-controller-manager-operator-5f5f84757d-ktmm9" (UID: "edbaac23-11f0-4bc7-a7ce-b593c774c0fa") : object "openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" not registered Feb 16 17:24:19.712221 master-0 kubenswrapper[4652]: E0216 17:24:19.712146 4652 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:19.712221 master-0 kubenswrapper[4652]: E0216 17:24:19.712177 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.712169185 +0000 UTC m=+33.100337701 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : object "openshift-insights"/"trusted-ca-bundle" not registered Feb 16 17:24:19.712402 master-0 kubenswrapper[4652]: E0216 17:24:19.712333 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.712321149 +0000 UTC m=+33.100489666 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" not registered Feb 16 17:24:19.713844 master-0 kubenswrapper[4652]: I0216 17:24:19.713808 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:19.713965 master-0 kubenswrapper[4652]: E0216 17:24:19.713929 4652 secret.go:189] Couldn't get secret openshift-monitoring/thanos-querier-kube-rbac-proxy-metrics: object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:19.714021 master-0 kubenswrapper[4652]: E0216 17:24:19.714006 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics podName:fe8e8e5d-cebb-4361-b765-5ff737f5e838 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.713989344 +0000 UTC m=+33.102157870 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" (UniqueName: "kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics") pod "thanos-querier-64bf6cdbbc-tpd6h" (UID: "fe8e8e5d-cebb-4361-b765-5ff737f5e838") : object "openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" not registered Feb 16 17:24:19.714107 master-0 kubenswrapper[4652]: I0216 17:24:19.714070 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:19.714146 master-0 kubenswrapper[4652]: I0216 17:24:19.714130 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:19.714230 master-0 kubenswrapper[4652]: I0216 17:24:19.714199 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.714320 master-0 kubenswrapper[4652]: I0216 17:24:19.714282 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: 
\"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:19.714360 master-0 kubenswrapper[4652]: I0216 17:24:19.714329 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:19.714390 master-0 kubenswrapper[4652]: I0216 17:24:19.714374 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:19.714442 master-0 kubenswrapper[4652]: I0216 17:24:19.714418 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:19.714493 master-0 kubenswrapper[4652]: I0216 17:24:19.714471 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:19.714533 master-0 kubenswrapper[4652]: I0216 17:24:19.714516 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:19.714584 master-0 kubenswrapper[4652]: I0216 17:24:19.714563 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.714628 master-0 kubenswrapper[4652]: I0216 17:24:19.714608 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:19.714678 master-0 kubenswrapper[4652]: I0216 17:24:19.714656 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: 
\"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:19.714735 master-0 kubenswrapper[4652]: I0216 17:24:19.714713 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:19.714786 master-0 kubenswrapper[4652]: I0216 17:24:19.714767 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:19.714821 master-0 kubenswrapper[4652]: E0216 17:24:19.714781 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/kube-root-ca.crt: object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered Feb 16 17:24:19.714821 master-0 kubenswrapper[4652]: E0216 17:24:19.714802 4652 projected.go:288] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.714821 master-0 kubenswrapper[4652]: E0216 17:24:19.714815 4652 projected.go:194] Error preparing data for projected volume kube-api-access-6bbcf for pod openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6: [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.714821 master-0 kubenswrapper[4652]: I0216 17:24:19.714815 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:19.714929 master-0 kubenswrapper[4652]: E0216 17:24:19.714852 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf podName:18e9a9d3-9b18-4c19-9558-f33c68101922 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.714841996 +0000 UTC m=+33.103010502 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6bbcf" (UniqueName: "kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf") pod "package-server-manager-5c696dbdcd-qrrc6" (UID: "18e9a9d3-9b18-4c19-9558-f33c68101922") : [object "openshift-operator-lifecycle-manager"/"kube-root-ca.crt" not registered, object "openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.714929 master-0 kubenswrapper[4652]: E0216 17:24:19.714778 4652 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:19.714929 master-0 kubenswrapper[4652]: E0216 17:24:19.714891 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert podName:188e42e5-9f9c-42af-ba15-5548c4fa4b52 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.714885778 +0000 UTC m=+33.103054504 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert") pod "catalog-operator-588944557d-5drhs" (UID: "188e42e5-9f9c-42af-ba15-5548c4fa4b52") : object "openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" not registered Feb 16 17:24:19.715020 master-0 kubenswrapper[4652]: E0216 17:24:19.714945 4652 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:19.715020 master-0 kubenswrapper[4652]: E0216 17:24:19.714966 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.71496087 +0000 UTC m=+33.103129386 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : object "openshift-kube-storage-version-migrator-operator"/"serving-cert" not registered Feb 16 17:24:19.715020 master-0 kubenswrapper[4652]: I0216 17:24:19.714992 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:19.715020 master-0 kubenswrapper[4652]: I0216 17:24:19.715017 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:19.715130 master-0 kubenswrapper[4652]: E0216 17:24:19.715025 4652 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:19.715130 master-0 kubenswrapper[4652]: E0216 17:24:19.715086 4652 secret.go:189] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:19.715130 master-0 kubenswrapper[4652]: E0216 17:24:19.715108 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config podName:7390ccc6-dfbe-4f51-960c-7628f49bffb7 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.715103553 +0000 UTC m=+33.103272069 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config") pod "apiserver-66788cb45c-dp9bc" (UID: "7390ccc6-dfbe-4f51-960c-7628f49bffb7") : object "openshift-oauth-apiserver"/"encryption-config-1" not registered Feb 16 17:24:19.715130 master-0 kubenswrapper[4652]: I0216 17:24:19.715039 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:19.715265 master-0 kubenswrapper[4652]: E0216 17:24:19.715123 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.715115764 +0000 UTC m=+33.103284280 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : object "openshift-cluster-samples-operator"/"samples-operator-tls" not registered Feb 16 17:24:19.715265 master-0 kubenswrapper[4652]: E0216 17:24:19.715104 4652 configmap.go:193] Couldn't get configMap openshift-monitoring/serving-certs-ca-bundle: object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:19.715265 master-0 kubenswrapper[4652]: I0216 17:24:19.715220 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:19.715366 master-0 kubenswrapper[4652]: E0216 17:24:19.715232 4652 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:19.715366 master-0 kubenswrapper[4652]: I0216 17:24:19.715317 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:19.715422 master-0 kubenswrapper[4652]: E0216 17:24:19.715340 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.715317819 +0000 UTC m=+33.103486495 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "configmap-serving-certs-ca-bundle" (UniqueName: "kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"serving-certs-ca-bundle" not registered Feb 16 17:24:19.715422 master-0 kubenswrapper[4652]: E0216 17:24:19.715387 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/kube-root-ca.crt: object "openshift-authentication-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.715422 master-0 kubenswrapper[4652]: E0216 17:24:19.715399 4652 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:19.715422 master-0 kubenswrapper[4652]: E0216 17:24:19.715412 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images podName:f3c7d762-e2fe-49ca-ade5-3982d91ec2a2 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.715396641 +0000 UTC m=+33.103565367 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images") pod "machine-config-operator-84976bb859-rsnqc" (UID: "f3c7d762-e2fe-49ca-ade5-3982d91ec2a2") : object "openshift-machine-config-operator"/"machine-config-operator-images" not registered Feb 16 17:24:19.715539 master-0 kubenswrapper[4652]: E0216 17:24:19.715287 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/kube-root-ca.crt: object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.715539 master-0 kubenswrapper[4652]: I0216 17:24:19.715445 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:19.715539 master-0 kubenswrapper[4652]: E0216 17:24:19.715461 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert podName:78be97a3-18d1-4962-804f-372974dc8ccc nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.715435492 +0000 UTC m=+33.103604048 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert") pod "route-controller-manager-dcdb76cc6-5rcvl" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc") : object "openshift-route-controller-manager"/"serving-cert" not registered Feb 16 17:24:19.715539 master-0 kubenswrapper[4652]: E0216 17:24:19.715356 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-main-web-config: object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:19.715539 master-0 kubenswrapper[4652]: E0216 17:24:19.715484 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-3enh2b6fkpcog: object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:19.715539 master-0 kubenswrapper[4652]: I0216 17:24:19.715497 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:19.715539 master-0 kubenswrapper[4652]: E0216 17:24:19.715513 4652 secret.go:189] Couldn't get secret openshift-monitoring/alertmanager-kube-rbac-proxy: object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:19.715539 master-0 kubenswrapper[4652]: E0216 17:24:19.715489 4652 secret.go:189] Couldn't get secret openshift-console-operator/serving-cert: object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:19.715539 master-0 kubenswrapper[4652]: E0216 17:24:19.715516 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.715503824 +0000 UTC m=+33.103672380 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-main-web-config" not registered Feb 16 17:24:19.715781 master-0 kubenswrapper[4652]: E0216 17:24:19.715464 4652 projected.go:288] Couldn't get configMap openshift-cluster-storage-operator/openshift-service-ca.crt: object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.715781 master-0 kubenswrapper[4652]: E0216 17:24:19.715590 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hqstc for pod openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf: [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.715781 master-0 kubenswrapper[4652]: E0216 17:24:19.715293 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:19.715781 master-0 kubenswrapper[4652]: E0216 17:24:19.715633 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 17:24:19.715781 master-0 kubenswrapper[4652]: E0216 17:24:19.715662 4652 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.715781 master-0 kubenswrapper[4652]: E0216 17:24:19.715685 4652 projected.go:194] Error preparing data for projected volume kube-api-access-n6rwz for pod openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.715781 master-0 kubenswrapper[4652]: E0216 17:24:19.715159 4652 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:19.715781 master-0 kubenswrapper[4652]: E0216 17:24:19.715740 4652 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:19.715781 master-0 kubenswrapper[4652]: E0216 17:24:19.715330 4652 configmap.go:193] Couldn't get configMap openshift-console-operator/trusted-ca: object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:19.716099 master-0 kubenswrapper[4652]: E0216 17:24:19.715441 4652 projected.go:288] Couldn't get configMap openshift-authentication-operator/openshift-service-ca.crt: object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.716139 master-0 kubenswrapper[4652]: E0216 17:24:19.716106 4652 projected.go:194] Error preparing data for projected volume kube-api-access-f42cr for pod openshift-authentication-operator/authentication-operator-755d954778-lf4cb: [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 
16 17:24:19.716357 master-0 kubenswrapper[4652]: E0216 17:24:19.715566 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.715554695 +0000 UTC m=+33.103723431 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-server-3enh2b6fkpcog" not registered Feb 16 17:24:19.716415 master-0 kubenswrapper[4652]: E0216 17:24:19.716397 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy podName:2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716369547 +0000 UTC m=+33.104538163 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-alertmanager-kube-rbac-proxy" (UniqueName: "kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy") pod "alertmanager-main-0" (UID: "2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e") : object "openshift-monitoring"/"alertmanager-kube-rbac-proxy" not registered Feb 16 17:24:19.716450 master-0 kubenswrapper[4652]: E0216 17:24:19.715217 4652 projected.go:288] Couldn't get configMap openshift-machine-api/kube-root-ca.crt: object "openshift-machine-api"/"kube-root-ca.crt" not registered Feb 16 17:24:19.716486 master-0 kubenswrapper[4652]: E0216 17:24:19.716473 4652 projected.go:288] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: object "openshift-machine-api"/"openshift-service-ca.crt" not registered Feb 16 17:24:19.716517 master-0 kubenswrapper[4652]: E0216 17:24:19.716497 4652 projected.go:194] Error preparing data for projected volume kube-api-access-vkqml for pod openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx: [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.716567 master-0 kubenswrapper[4652]: E0216 17:24:19.715828 4652 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-k8s-kube-rbac-proxy-web: object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:19.716608 master-0 kubenswrapper[4652]: E0216 17:24:19.715855 4652 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:19.716638 master-0 kubenswrapper[4652]: E0216 17:24:19.716428 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716416198 +0000 UTC m=+33.104584964 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"serving-cert" not registered Feb 16 17:24:19.716675 master-0 kubenswrapper[4652]: E0216 17:24:19.716662 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc podName:970d4376-f299-412c-a8ee-90aa980c689e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716639664 +0000 UTC m=+33.104808220 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hqstc" (UniqueName: "kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc") pod "csi-snapshot-controller-operator-7b87b97578-q55rf" (UID: "970d4376-f299-412c-a8ee-90aa980c689e") : [object "openshift-cluster-storage-operator"/"kube-root-ca.crt" not registered, object "openshift-cluster-storage-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.716711 master-0 kubenswrapper[4652]: E0216 17:24:19.716689 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates podName:544c6815-81d7-422a-9e4a-5fcbfabe8da8 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716675825 +0000 UTC m=+33.104844381 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates") pod "prometheus-operator-admission-webhook-695b766898-h94zg" (UID: "544c6815-81d7-422a-9e4a-5fcbfabe8da8") : object "openshift-monitoring"/"prometheus-operator-admission-webhook-tls" not registered Feb 16 17:24:19.716744 master-0 kubenswrapper[4652]: E0216 17:24:19.716713 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz podName:0ff68421-1741-41c1-93d5-5c722dfd295e nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716701286 +0000 UTC m=+33.104869842 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-n6rwz" (UniqueName: "kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz") pod "network-check-source-7d8f4c8c66-qjq9w" (UID: "0ff68421-1741-41c1-93d5-5c722dfd295e") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.716780 master-0 kubenswrapper[4652]: E0216 17:24:19.716746 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs podName:ba37ef0e-373c-4ccc-b082-668630399765 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716735847 +0000 UTC m=+33.104904403 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs") pod "metrics-server-745bd8d89b-qr4zh" (UID: "ba37ef0e-373c-4ccc-b082-668630399765") : object "openshift-monitoring"/"metrics-client-certs" not registered Feb 16 17:24:19.716780 master-0 kubenswrapper[4652]: E0216 17:24:19.716769 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config podName:ae20b683-dac8-419e-808a-ddcdb3c564e1 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716758237 +0000 UTC m=+33.104926793 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-546cc7d765-94nfl" (UID: "ae20b683-dac8-419e-808a-ddcdb3c564e1") : object "openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" not registered Feb 16 17:24:19.716850 master-0 kubenswrapper[4652]: I0216 17:24:19.716811 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:19.716885 master-0 kubenswrapper[4652]: E0216 17:24:19.716854 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca podName:0517b180-00ee-47fe-a8e7-36a3931b7e72 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.71684297 +0000 UTC m=+33.105011526 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca") pod "console-operator-7777d5cc66-64vhv" (UID: "0517b180-00ee-47fe-a8e7-36a3931b7e72") : object "openshift-console-operator"/"trusted-ca" not registered Feb 16 17:24:19.716885 master-0 kubenswrapper[4652]: E0216 17:24:19.716876 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.71686573 +0000 UTC m=+33.105034286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-f42cr" (UniqueName: "kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : [object "openshift-authentication-operator"/"kube-root-ca.crt" not registered, object "openshift-authentication-operator"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.716950 master-0 kubenswrapper[4652]: E0216 17:24:19.716898 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml podName:404c402a-705f-4352-b9df-b89562070d9c nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716887891 +0000 UTC m=+33.105056447 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vkqml" (UniqueName: "kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml") pod "machine-api-operator-bd7dd5c46-92rqx" (UID: "404c402a-705f-4352-b9df-b89562070d9c") : [object "openshift-machine-api"/"kube-root-ca.crt" not registered, object "openshift-machine-api"/"openshift-service-ca.crt" not registered] Feb 16 17:24:19.716950 master-0 kubenswrapper[4652]: E0216 17:24:19.716919 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web podName:b04ee64e-5e83-499c-812d-749b2b6824c6 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716908981 +0000 UTC m=+33.105077537 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "secret-prometheus-k8s-kube-rbac-proxy-web" (UniqueName: "kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web") pod "prometheus-k8s-0" (UID: "b04ee64e-5e83-499c-812d-749b2b6824c6") : object "openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" not registered Feb 16 17:24:19.716950 master-0 kubenswrapper[4652]: E0216 17:24:19.716919 4652 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:19.717044 master-0 kubenswrapper[4652]: E0216 17:24:19.716941 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716931242 +0000 UTC m=+33.105099798 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : object "openshift-apiserver"/"image-import-ca" not registered Feb 16 17:24:19.717044 master-0 kubenswrapper[4652]: I0216 17:24:19.716982 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:19.717044 master-0 kubenswrapper[4652]: E0216 17:24:19.716992 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls podName:c8729b1a-e365-4cf7-8a05-91a9987dabe9 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.716975853 +0000 UTC m=+33.105144549 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls") pod "machine-config-controller-686c884b4d-ksx48" (UID: "c8729b1a-e365-4cf7-8a05-91a9987dabe9") : object "openshift-machine-config-operator"/"mcc-proxy-tls" not registered Feb 16 17:24:19.717133 master-0 kubenswrapper[4652]: E0216 17:24:19.717082 4652 projected.go:288] Couldn't get configMap openshift-kube-scheduler-operator/kube-root-ca.crt: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.717133 master-0 kubenswrapper[4652]: E0216 17:24:19.717103 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf: object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.717192 master-0 kubenswrapper[4652]: I0216 17:24:19.717164 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:19.717265 master-0 kubenswrapper[4652]: E0216 17:24:19.717230 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access podName:eaf7edff-0a89-4ac0-b9dd-511e098b5434 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.717210579 +0000 UTC m=+33.105379175 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access") pod "openshift-kube-scheduler-operator-7485d55966-sgmpf" (UID: "eaf7edff-0a89-4ac0-b9dd-511e098b5434") : object "openshift-kube-scheduler-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.717309 master-0 kubenswrapper[4652]: E0216 17:24:19.717281 4652 configmap.go:193] Couldn't get configMap openshift-authentication-operator/service-ca-bundle: object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:19.717343 master-0 kubenswrapper[4652]: I0216 17:24:19.717318 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:19.717373 master-0 kubenswrapper[4652]: E0216 17:24:19.717334 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle podName:9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.717319452 +0000 UTC m=+33.105488008 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle") pod "authentication-operator-755d954778-lf4cb" (UID: "9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41") : object "openshift-authentication-operator"/"service-ca-bundle" not registered Feb 16 17:24:19.717451 master-0 kubenswrapper[4652]: E0216 17:24:19.717429 4652 projected.go:288] Couldn't get configMap openshift-kube-apiserver-operator/kube-root-ca.crt: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.717482 master-0 kubenswrapper[4652]: E0216 17:24:19.717457 4652 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv: object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.717515 master-0 kubenswrapper[4652]: I0216 17:24:19.717487 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:19.717633 master-0 kubenswrapper[4652]: E0216 17:24:19.717499 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access podName:d020c902-2adb-4919-8dd9-0c2109830580 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.717484537 +0000 UTC m=+33.105653093 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access") pod "kube-apiserver-operator-54984b6678-gp8gv" (UID: "d020c902-2adb-4919-8dd9-0c2109830580") : object "openshift-kube-apiserver-operator"/"kube-root-ca.crt" not registered Feb 16 17:24:19.717633 master-0 kubenswrapper[4652]: E0216 17:24:19.717578 4652 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:19.717698 master-0 kubenswrapper[4652]: E0216 17:24:19.717675 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert podName:ee84198d-6357-4429-a90c-455c3850a788 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:35.717663821 +0000 UTC m=+33.105832337 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert") pod "cluster-autoscaler-operator-67fd9768b5-zcwwd" (UID: "ee84198d-6357-4429-a90c-455c3850a788") : object "openshift-machine-api"/"cluster-autoscaler-operator-cert" not registered Feb 16 17:24:19.745142 master-0 kubenswrapper[4652]: I0216 17:24:19.745072 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:19.745142 master-0 kubenswrapper[4652]: I0216 17:24:19.745119 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:19.745142 master-0 kubenswrapper[4652]: I0216 17:24:19.745071 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:19.745501 master-0 kubenswrapper[4652]: I0216 17:24:19.745191 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:19.745501 master-0 kubenswrapper[4652]: I0216 17:24:19.745275 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:19.745501 master-0 kubenswrapper[4652]: I0216 17:24:19.745326 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:19.745501 master-0 kubenswrapper[4652]: I0216 17:24:19.745339 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:19.745501 master-0 kubenswrapper[4652]: I0216 17:24:19.745400 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:19.745501 master-0 kubenswrapper[4652]: I0216 17:24:19.745419 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:19.745501 master-0 kubenswrapper[4652]: I0216 17:24:19.745428 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:19.745501 master-0 kubenswrapper[4652]: I0216 17:24:19.745472 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:19.745501 master-0 kubenswrapper[4652]: I0216 17:24:19.745485 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:19.745840 master-0 kubenswrapper[4652]: I0216 17:24:19.745082 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:19.745840 master-0 kubenswrapper[4652]: I0216 17:24:19.745670 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:19.745840 master-0 kubenswrapper[4652]: I0216 17:24:19.745707 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:19.745840 master-0 kubenswrapper[4652]: I0216 17:24:19.745741 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:19.745840 master-0 kubenswrapper[4652]: I0216 17:24:19.745771 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:19.745840 master-0 kubenswrapper[4652]: I0216 17:24:19.745800 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:19.745840 master-0 kubenswrapper[4652]: I0216 17:24:19.745828 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:19.746108 master-0 kubenswrapper[4652]: I0216 17:24:19.745857 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:19.746108 master-0 kubenswrapper[4652]: I0216 17:24:19.745902 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:19.746108 master-0 kubenswrapper[4652]: I0216 17:24:19.745945 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:19.746108 master-0 kubenswrapper[4652]: I0216 17:24:19.745986 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:19.746108 master-0 kubenswrapper[4652]: I0216 17:24:19.746006 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:19.746108 master-0 kubenswrapper[4652]: I0216 17:24:19.746045 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:19.746108 master-0 kubenswrapper[4652]: I0216 17:24:19.746084 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:19.746405 master-0 kubenswrapper[4652]: I0216 17:24:19.746148 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" Feb 16 17:24:19.746405 master-0 kubenswrapper[4652]: I0216 17:24:19.746187 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:19.746405 master-0 kubenswrapper[4652]: I0216 17:24:19.746202 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:19.746405 master-0 kubenswrapper[4652]: I0216 17:24:19.746225 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:19.746405 master-0 kubenswrapper[4652]: I0216 17:24:19.746291 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:19.746405 master-0 kubenswrapper[4652]: I0216 17:24:19.746308 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:19.746405 master-0 kubenswrapper[4652]: I0216 17:24:19.746335 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:19.746405 master-0 kubenswrapper[4652]: I0216 17:24:19.746387 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:19.746405 master-0 kubenswrapper[4652]: I0216 17:24:19.746398 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746428 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746394 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746443 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746503 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746542 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746548 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746570 4652 scope.go:117] "RemoveContainer" containerID="19be8681c0a46d475538415f31af270ce62d3e7bda1a682c75b5e072b00f3769" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746606 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746663 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746682 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746692 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746716 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:19.746746 master-0 kubenswrapper[4652]: I0216 17:24:19.746749 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.746780 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.746902 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.746921 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.746930 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.746963 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.746961 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747035 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747071 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747082 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747098 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747113 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747194 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747222 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747265 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747381 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747489 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747556 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:19.750086 master-0 kubenswrapper[4652]: I0216 17:24:19.747861 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:19.754898 master-0 kubenswrapper[4652]: I0216 17:24:19.753658 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:19.766480 master-0 kubenswrapper[4652]: I0216 17:24:19.766441 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 17:24:19.766717 master-0 kubenswrapper[4652]: I0216 17:24:19.766549 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-sk6rc" Feb 16 17:24:19.767240 master-0 kubenswrapper[4652]: I0216 17:24:19.766549 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 17:24:19.767326 master-0 kubenswrapper[4652]: I0216 17:24:19.766746 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 16 17:24:19.767326 master-0 kubenswrapper[4652]: I0216 17:24:19.766806 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 17:24:19.767451 master-0 kubenswrapper[4652]: I0216 17:24:19.766834 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 17:24:19.767451 master-0 kubenswrapper[4652]: I0216 17:24:19.766936 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-lcpkn" Feb 16 17:24:19.767635 master-0 kubenswrapper[4652]: I0216 17:24:19.767590 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 16 17:24:19.767635 master-0 kubenswrapper[4652]: I0216 17:24:19.767625 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 17:24:19.767866 master-0 kubenswrapper[4652]: I0216 17:24:19.767844 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 17:24:19.767925 master-0 kubenswrapper[4652]: I0216 17:24:19.767855 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s" Feb 16 17:24:19.768932 master-0 kubenswrapper[4652]: I0216 17:24:19.768289 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l" Feb 16 17:24:19.769038 master-0 kubenswrapper[4652]: I0216 17:24:19.768329 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 17:24:19.769038 master-0 kubenswrapper[4652]: I0216 17:24:19.768349 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 17:24:19.769106 master-0 kubenswrapper[4652]: I0216 17:24:19.768393 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 17:24:19.769154 
master-0 kubenswrapper[4652]: I0216 17:24:19.768418 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 17:24:19.769224 master-0 kubenswrapper[4652]: I0216 17:24:19.768473 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 17:24:19.769287 master-0 kubenswrapper[4652]: I0216 17:24:19.768483 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 17:24:19.769368 master-0 kubenswrapper[4652]: I0216 17:24:19.768514 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 17:24:19.769459 master-0 kubenswrapper[4652]: I0216 17:24:19.768538 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 17:24:19.769549 master-0 kubenswrapper[4652]: I0216 17:24:19.768555 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 16 17:24:19.769653 master-0 kubenswrapper[4652]: I0216 17:24:19.768593 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m" Feb 16 17:24:19.769735 master-0 kubenswrapper[4652]: I0216 17:24:19.768640 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 16 17:24:19.769735 master-0 kubenswrapper[4652]: I0216 17:24:19.768642 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 16 17:24:19.769799 master-0 kubenswrapper[4652]: I0216 17:24:19.768706 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 17:24:19.770053 master-0 kubenswrapper[4652]: I0216 17:24:19.770025 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 16 17:24:19.776813 master-0 kubenswrapper[4652]: I0216 17:24:19.776757 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 17:24:19.777235 master-0 kubenswrapper[4652]: I0216 17:24:19.777184 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 17:24:19.777395 master-0 kubenswrapper[4652]: I0216 17:24:19.777234 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 16 17:24:19.777784 master-0 kubenswrapper[4652]: I0216 17:24:19.777541 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 17:24:19.777784 master-0 kubenswrapper[4652]: I0216 17:24:19.777560 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 17:24:19.777784 master-0 kubenswrapper[4652]: I0216 17:24:19.777658 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk" Feb 16 17:24:19.777784 master-0 kubenswrapper[4652]: I0216 17:24:19.777668 4652 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 17:24:19.777784 master-0 kubenswrapper[4652]: I0216 17:24:19.777729 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 17:24:19.777784 master-0 kubenswrapper[4652]: I0216 17:24:19.777771 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 17:24:19.778028 master-0 kubenswrapper[4652]: I0216 17:24:19.777861 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 17:24:19.778028 master-0 kubenswrapper[4652]: I0216 17:24:19.777871 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 17:24:19.778028 master-0 kubenswrapper[4652]: I0216 17:24:19.777895 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 17:24:19.778028 master-0 kubenswrapper[4652]: I0216 17:24:19.777984 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 17:24:19.778186 master-0 kubenswrapper[4652]: I0216 17:24:19.778131 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 16 17:24:19.778186 master-0 kubenswrapper[4652]: I0216 17:24:19.778179 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-3enh2b6fkpcog" Feb 16 17:24:19.778284 master-0 kubenswrapper[4652]: I0216 17:24:19.777988 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 16 17:24:19.778408 master-0 kubenswrapper[4652]: I0216 17:24:19.778365 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 16 17:24:19.778565 master-0 kubenswrapper[4652]: I0216 17:24:19.778491 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 17:24:19.778629 master-0 kubenswrapper[4652]: I0216 17:24:19.778610 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 17:24:19.778683 master-0 kubenswrapper[4652]: I0216 17:24:19.778650 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 17:24:19.778733 master-0 kubenswrapper[4652]: I0216 17:24:19.778709 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 17:24:19.778779 master-0 kubenswrapper[4652]: I0216 17:24:19.778741 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 17:24:19.778909 master-0 kubenswrapper[4652]: I0216 17:24:19.778873 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 17:24:19.778909 master-0 kubenswrapper[4652]: I0216 17:24:19.778902 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 17:24:19.779010 master-0 kubenswrapper[4652]: I0216 17:24:19.778956 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 17:24:19.779688 master-0 
kubenswrapper[4652]: I0216 17:24:19.779060 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 17:24:19.779688 master-0 kubenswrapper[4652]: I0216 17:24:19.779073 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 17:24:19.779688 master-0 kubenswrapper[4652]: I0216 17:24:19.779092 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 16 17:24:19.779688 master-0 kubenswrapper[4652]: I0216 17:24:19.779106 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 16 17:24:19.779688 master-0 kubenswrapper[4652]: I0216 17:24:19.779282 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 16 17:24:19.779688 master-0 kubenswrapper[4652]: I0216 17:24:19.779359 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 17:24:19.779688 master-0 kubenswrapper[4652]: I0216 17:24:19.779434 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 17:24:19.779688 master-0 kubenswrapper[4652]: I0216 17:24:19.779363 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 17:24:19.780118 master-0 kubenswrapper[4652]: I0216 17:24:19.780089 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 17:24:19.780456 master-0 kubenswrapper[4652]: I0216 17:24:19.780387 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.780612 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.780633 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.780668 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nkhdh" Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.780694 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.780795 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.780866 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.780887 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.780935 4652 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781007 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781014 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.780947 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781039 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781051 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781126 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781136 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781207 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781215 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781241 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 17:24:19.781314 master-0 kubenswrapper[4652]: I0216 17:24:19.781289 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781216 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781387 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781403 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781458 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781407 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781532 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781583 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781616 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781691 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781733 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781754 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 17:24:19.781853 master-0 kubenswrapper[4652]: I0216 17:24:19.781830 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 16 17:24:19.782376 master-0 kubenswrapper[4652]: I0216 17:24:19.782284 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 16 17:24:19.790360 master-0 kubenswrapper[4652]: I0216 17:24:19.790304 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 16 17:24:19.812820 master-0 kubenswrapper[4652]: I0216 17:24:19.812177 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 17:24:19.813970 master-0 kubenswrapper[4652]: I0216 17:24:19.813940 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 16 17:24:19.819042 master-0 kubenswrapper[4652]: I0216 17:24:19.818919 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6"
Feb 16 17:24:19.820517 master-0 kubenswrapper[4652]: I0216 17:24:19.820304 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmzhq"
Feb 16 17:24:19.828111 master-0 kubenswrapper[4652]: I0216 17:24:19.828075 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 17:24:19.828417 master-0 kubenswrapper[4652]: I0216 17:24:19.828385 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 16 17:24:19.828599 master-0 kubenswrapper[4652]: I0216 17:24:19.828583 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 17:24:19.828906 master-0 kubenswrapper[4652]: I0216 17:24:19.828890 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 16 17:24:19.829309 master-0 kubenswrapper[4652]: I0216 17:24:19.829237 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 16 17:24:19.832045 master-0 kubenswrapper[4652]: I0216 17:24:19.831988 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:19.832212 master-0 kubenswrapper[4652]: I0216 17:24:19.832110 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:24:19.832343 master-0 kubenswrapper[4652]: I0216 17:24:19.832270 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:19.833661 master-0 kubenswrapper[4652]: I0216 17:24:19.833613 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982"
Feb 16 17:24:19.834211 master-0 kubenswrapper[4652]: I0216 17:24:19.834115 4652 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 16 17:24:19.834457 master-0 kubenswrapper[4652]: I0216 17:24:19.834429 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 17:24:19.842382 master-0 kubenswrapper[4652]: I0216 17:24:19.842289 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v65g\" (UniqueName: \"kubernetes.io/projected/7390ccc6-dfbe-4f51-960c-7628f49bffb7-kube-api-access-5v65g\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc"
Feb 16 17:24:19.842382 master-0 kubenswrapper[4652]: I0216 17:24:19.842347 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtk9h\" (UniqueName: \"kubernetes.io/projected/62220aa5-4065-472c-8a17-c0a58942ab8a-kube-api-access-xtk9h\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:19.843519 master-0 kubenswrapper[4652]: I0216 17:24:19.843470 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5rwv\" (UniqueName: \"kubernetes.io/projected/0393fe12-2533-4c9c-a8e4-a58003c88f36-kube-api-access-p5rwv\") pod \"redhat-marketplace-4kd66\" (UID: \"0393fe12-2533-4c9c-a8e4-a58003c88f36\") " pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:24:19.848003 master-0 kubenswrapper[4652]: I0216 17:24:19.847950 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 17:24:19.869051 master-0 kubenswrapper[4652]: I0216 17:24:19.869003 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 16 17:24:19.887858 master-0 kubenswrapper[4652]: I0216 17:24:19.887822 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 17:24:19.906695 master-0 kubenswrapper[4652]: I0216 17:24:19.906647 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 17:24:19.927295 master-0 kubenswrapper[4652]: I0216 17:24:19.927262 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 17:24:19.934875 master-0 kubenswrapper[4652]: I0216 17:24:19.934817 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"
Feb 16 17:24:19.935078 master-0 kubenswrapper[4652]: I0216 17:24:19.935048 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5"
Feb 16 17:24:19.935321 master-0 kubenswrapper[4652]: I0216 17:24:19.935293 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:24:19.935375 master-0 kubenswrapper[4652]: I0216 17:24:19.935337 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:24:19.935375 master-0 kubenswrapper[4652]: I0216 17:24:19.935367 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:24:19.935442 master-0 kubenswrapper[4652]: I0216 17:24:19.935394 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:24:19.935442 master-0 kubenswrapper[4652]: I0216 17:24:19.935414 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:24:19.935442 master-0 kubenswrapper[4652]: I0216 17:24:19.935432 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:19.938501 master-0 kubenswrapper[4652]: I0216 17:24:19.938459 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwh24\" (UniqueName: \"kubernetes.io/projected/cc9a20f4-255a-4312-8f43-174a28c06340-kube-api-access-qwh24\") pod \"community-operators-7w4km\" (UID: \"cc9a20f4-255a-4312-8f43-174a28c06340\") " pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:24:19.938940 master-0 kubenswrapper[4652]: I0216 17:24:19.938903 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbq2b\" (UniqueName: \"kubernetes.io/projected/ee84198d-6357-4429-a90c-455c3850a788-kube-api-access-tbq2b\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd"
Feb 16 17:24:19.939137 master-0 kubenswrapper[4652]: I0216 17:24:19.939105 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p9ld\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-kube-api-access-7p9ld\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:19.939175 master-0 kubenswrapper[4652]: I0216 17:24:19.939145 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs597\" (UniqueName: \"kubernetes.io/projected/62fc29f4-557f-4a75-8b78-6ca425c81b81-kube-api-access-bs597\") pod \"migrator-5bd989df77-gcfg6\" (UID: \"62fc29f4-557f-4a75-8b78-6ca425c81b81\") " pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:24:19.940779 master-0 kubenswrapper[4652]: I0216 17:24:19.940745 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25g7f\" (UniqueName: \"kubernetes.io/projected/188e42e5-9f9c-42af-ba15-5548c4fa4b52-kube-api-access-25g7f\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs"
Feb 16 17:24:19.941006 master-0 kubenswrapper[4652]: I0216 17:24:19.940979 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbdv\" (UniqueName: \"kubernetes.io/projected/80d3b238-70c3-4e71-96a1-99405352033f-kube-api-access-rxbdv\") pod \"csi-snapshot-controller-74b6595c6d-pfzq2\" (UID: \"80d3b238-70c3-4e71-96a1-99405352033f\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"
Feb 16 17:24:19.947010 master-0 kubenswrapper[4652]: I0216 17:24:19.946964 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 17:24:19.967311 master-0 kubenswrapper[4652]: I0216 17:24:19.967271 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 17:24:19.987237 master-0 kubenswrapper[4652]: I0216 17:24:19.987184 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 17:24:20.007035 master-0 kubenswrapper[4652]: I0216 17:24:20.007005 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 17:24:20.027866 master-0 kubenswrapper[4652]: I0216 17:24:20.027837 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-94r9k"
Feb 16 17:24:20.040420 master-0 kubenswrapper[4652]: I0216 17:24:20.039528 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4kd66"
Feb 16 17:24:20.042123 master-0 kubenswrapper[4652]: I0216 17:24:20.041192 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:20.042365 master-0 kubenswrapper[4652]: I0216 17:24:20.042336 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:24:20.047195 master-0 kubenswrapper[4652]: I0216 17:24:20.046138 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djfsw\" (UniqueName: \"kubernetes.io/projected/822e1750-652e-4ceb-8fea-b2c1c905b0f1-kube-api-access-djfsw\") pod \"redhat-operators-lnzfx\" (UID: \"822e1750-652e-4ceb-8fea-b2c1c905b0f1\") " pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:24:20.047752 master-0 kubenswrapper[4652]: I0216 17:24:20.047722 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 16 17:24:20.048222 master-0 kubenswrapper[4652]: I0216 17:24:20.048192 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:20.067192 master-0 kubenswrapper[4652]: I0216 17:24:20.067155 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Feb 16 17:24:20.082655 master-0 kubenswrapper[4652]: I0216 17:24:20.082210 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:20.082655 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:20.082655 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:20.082655 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:20.082655 master-0 kubenswrapper[4652]: I0216 17:24:20.082276 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:20.089077 master-0 kubenswrapper[4652]: I0216 17:24:20.088738 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 16 17:24:20.109568 master-0 kubenswrapper[4652]: I0216 17:24:20.109493 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8"
Feb 16 17:24:20.128317 master-0 kubenswrapper[4652]: I0216 17:24:20.128036 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 17:24:20.148024 master-0 kubenswrapper[4652]: I0216 17:24:20.147808 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 17:24:20.153757 master-0 kubenswrapper[4652]: I0216 17:24:20.153718 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:24:20.158547 master-0 kubenswrapper[4652]: I0216 17:24:20.158480 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjd5j\" (UniqueName: \"kubernetes.io/projected/6b3e071c-1c62-489b-91c1-aef0d197f40b-kube-api-access-rjd5j\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:24:20.167805 master-0 kubenswrapper[4652]: I0216 17:24:20.167751 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 17:24:20.176145 master-0 kubenswrapper[4652]: I0216 17:24:20.176100 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6"
Feb 16 17:24:20.187762 master-0 kubenswrapper[4652]: I0216 17:24:20.187730 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 16 17:24:20.207343 master-0 kubenswrapper[4652]: I0216 17:24:20.207294 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 17:24:20.227465 master-0 kubenswrapper[4652]: I0216 17:24:20.227430 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 16 17:24:20.255673 master-0 kubenswrapper[4652]: I0216 17:24:20.255633 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 16 17:24:20.258085 master-0 kubenswrapper[4652]: I0216 17:24:20.258045 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:24:20.258178 master-0 kubenswrapper[4652]: I0216 17:24:20.258145 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:24:20.258214 master-0 kubenswrapper[4652]: I0216 17:24:20.258202 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:24:20.258279 master-0 kubenswrapper[4652]: I0216 17:24:20.258267 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:24:20.258342 master-0 kubenswrapper[4652]: W0216 17:24:20.258301 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0393fe12_2533_4c9c_a8e4_a58003c88f36.slice/crio-9150ec5c92e59e33d34178cf21691200b58f7460713295cd74b621cfef76f34d WatchSource:0}: Error finding container 9150ec5c92e59e33d34178cf21691200b58f7460713295cd74b621cfef76f34d: Status 404 returned error can't find the container with id 9150ec5c92e59e33d34178cf21691200b58f7460713295cd74b621cfef76f34d
Feb 16 17:24:20.265753 master-0 kubenswrapper[4652]: I0216 17:24:20.263544 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbrtz\" (UniqueName: \"kubernetes.io/projected/0517b180-00ee-47fe-a8e7-36a3931b7e72-kube-api-access-sbrtz\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv"
Feb 16 17:24:20.265753 master-0 kubenswrapper[4652]: I0216 17:24:20.263759 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh2cd\" (UniqueName: \"kubernetes.io/projected/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-kube-api-access-hh2cd\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn"
Feb 16 17:24:20.268042 master-0 kubenswrapper[4652]: I0216 17:24:20.267672 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 17:24:20.274108 master-0 kubenswrapper[4652]: I0216 17:24:20.274073 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzpnw\" (UniqueName: \"kubernetes.io/projected/642e5115-b7f2-4561-bc6b-1a74b6d891c4-kube-api-access-dzpnw\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx"
Feb 16 17:24:20.288314 master-0 kubenswrapper[4652]: I0216 17:24:20.288277 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 17:24:20.307091 master-0 kubenswrapper[4652]: I0216 17:24:20.307044 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 16 17:24:20.327450 master-0 kubenswrapper[4652]: I0216 17:24:20.327423 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 17:24:20.341966 master-0 kubenswrapper[4652]: W0216 17:24:20.341880 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62fc29f4_557f_4a75_8b78_6ca425c81b81.slice/crio-5cd999d268cd906fbe9d2f9fdf2f05cdf56fe14f6d096e03728ce467f6885bb4 WatchSource:0}: Error finding container 5cd999d268cd906fbe9d2f9fdf2f05cdf56fe14f6d096e03728ce467f6885bb4: Status 404 returned error can't find the container with id 5cd999d268cd906fbe9d2f9fdf2f05cdf56fe14f6d096e03728ce467f6885bb4
Feb 16 17:24:20.346873 master-0 kubenswrapper[4652]: I0216 17:24:20.346841 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 17:24:20.361032 master-0 kubenswrapper[4652]: I0216 17:24:20.361005 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:24:20.361172 master-0 kubenswrapper[4652]: I0216 17:24:20.361159 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:20.364956 master-0 kubenswrapper[4652]: I0216 17:24:20.364926 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:24:20.365281 master-0 kubenswrapper[4652]: I0216 17:24:20.365193 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:20.374490 master-0 kubenswrapper[4652]: I0216 17:24:20.374463 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 17:24:20.387569 master-0 kubenswrapper[4652]: I0216 17:24:20.387164 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 17:24:20.407781 master-0 kubenswrapper[4652]: I0216 17:24:20.407740 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 17:24:20.427798 master-0 kubenswrapper[4652]: I0216 17:24:20.427749 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 17:24:20.447564 master-0 kubenswrapper[4652]: I0216 17:24:20.447507 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 16 17:24:20.465896 master-0 kubenswrapper[4652]: I0216 17:24:20.465408 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:20.465896 master-0 kubenswrapper[4652]: I0216 17:24:20.465536 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:24:20.465896 master-0 kubenswrapper[4652]: I0216 17:24:20.465748 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9"
Feb 16 17:24:20.465896 master-0 kubenswrapper[4652]: I0216 17:24:20.465810 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:24:20.467410 master-0 kubenswrapper[4652]: I0216 17:24:20.467369 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw"
Feb 16 17:24:20.468851 master-0 kubenswrapper[4652]: I0216 17:24:20.468816 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9bv7\" (UniqueName: \"kubernetes.io/projected/29402454-a920-471e-895e-764235d16eb4-kube-api-access-r9bv7\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5"
Feb 16 17:24:20.469397 master-0 kubenswrapper[4652]: I0216 17:24:20.469352 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx9vc\" (UniqueName: \"kubernetes.io/projected/74b2561b-933b-4c58-a63a-7a8c671d0ae9-kube-api-access-kx9vc\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2"
Feb 16 17:24:20.487299 master-0 kubenswrapper[4652]: I0216 17:24:20.487102 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 17:24:20.508324 master-0 kubenswrapper[4652]: I0216 17:24:20.508266 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 17:24:20.526884 master-0 kubenswrapper[4652]: I0216 17:24:20.526828 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 17:24:20.547540 master-0 kubenswrapper[4652]: I0216 17:24:20.547497 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 17:24:20.567495 master-0 kubenswrapper[4652]: I0216 17:24:20.567452 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Feb 16 17:24:20.570818 master-0 kubenswrapper[4652]: I0216 17:24:20.570751 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:24:20.571174 master-0 kubenswrapper[4652]: I0216 17:24:20.571127 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:24:20.571326 master-0 kubenswrapper[4652]: I0216 17:24:20.571284 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:24:20.571398 master-0 kubenswrapper[4652]: I0216 17:24:20.571361 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"
Feb 16 17:24:20.571675 master-0 kubenswrapper[4652]: I0216 17:24:20.571635 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6"
Feb 16 17:24:20.574572 master-0 kubenswrapper[4652]: I0216 17:24:20.574553 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhz6z\" (UniqueName: \"kubernetes.io/projected/f3beb7bf-922f-425d-8a19-fd407a7153a8-kube-api-access-qhz6z\") pod \"certified-operators-z69zq\" (UID: \"f3beb7bf-922f-425d-8a19-fd407a7153a8\") " pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:24:20.575348 master-0 kubenswrapper[4652]: I0216 17:24:20.575310 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dpp2\" (UniqueName: \"kubernetes.io/projected/737fcc7d-d850-4352-9f17-383c85d5bc28-kube-api-access-5dpp2\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:24:20.587352 master-0 kubenswrapper[4652]: I0216 17:24:20.587306 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4"
Feb 16 17:24:20.608363 master-0 kubenswrapper[4652]: I0216 17:24:20.608323 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 16 17:24:20.627421 master-0 kubenswrapper[4652]: I0216 17:24:20.627358 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 16 17:24:20.647187 master-0 kubenswrapper[4652]: I0216 17:24:20.647138 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 17:24:20.667660 master-0 kubenswrapper[4652]: I0216 17:24:20.667611 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 17:24:20.692904 master-0 kubenswrapper[4652]: I0216 17:24:20.692857 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 17:24:20.708065 master-0 kubenswrapper[4652]: I0216 17:24:20.707446 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 17:24:20.727165 master-0 kubenswrapper[4652]: I0216 17:24:20.727138 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 17:24:20.731733 master-0 kubenswrapper[4652]: I0216 17:24:20.731661 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fmhb\" (UniqueName: \"kubernetes.io/projected/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-kube-api-access-6fmhb\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4"
Feb 16 17:24:20.747459 master-0 kubenswrapper[4652]: I0216 17:24:20.747436 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 16 17:24:20.766221 master-0 kubenswrapper[4652]: I0216 17:24:20.766184 4652 request.go:700] Waited for 1.006771253s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&limit=500&resourceVersion=0
Feb 16 17:24:20.767649 master-0 kubenswrapper[4652]: I0216 17:24:20.767622 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 17:24:20.783003 master-0 kubenswrapper[4652]: I0216 17:24:20.782944 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:24:20.786812 master-0 kubenswrapper[4652]: I0216 17:24:20.786751 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wht\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-kube-api-access-w4wht\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:24:20.787574 master-0 kubenswrapper[4652]: I0216 17:24:20.787526 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 16 17:24:20.807880 master-0 kubenswrapper[4652]: I0216 17:24:20.807842 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 17:24:20.819328 master-0 kubenswrapper[4652]: E0216 17:24:20.819292 4652 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition
Feb 16 17:24:20.819756 master-0 kubenswrapper[4652]: E0216 17:24:20.819369 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs podName:ad805251-19d0-4d2f-b741-7d11158f1f03 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:36.819351163 +0000 UTC m=+34.207519669 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs") pod "network-metrics-daemon-279g6" (UID: "ad805251-19d0-4d2f-b741-7d11158f1f03") : failed to sync secret cache: timed out waiting for the condition
Feb 16 17:24:20.826752 master-0 kubenswrapper[4652]: I0216 17:24:20.826711 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 17:24:20.834782 master-0 kubenswrapper[4652]: I0216 17:24:20.834676 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxw9\" (UniqueName: \"kubernetes.io/projected/4e51bba5-0ebe-4e55-a588-38b71548c605-kube-api-access-2dxw9\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv"
Feb 16 17:24:20.847748 master-0 kubenswrapper[4652]: I0216 17:24:20.847703 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 17:24:20.868038 master-0 kubenswrapper[4652]: I0216 17:24:20.868006 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 16 17:24:20.884151 master-0 kubenswrapper[4652]: I0216 17:24:20.884116 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2"
Feb 16 17:24:20.887987 master-0 kubenswrapper[4652]: I0216 17:24:20.887649 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 16 17:24:20.907283 master-0 kubenswrapper[4652]: I0216 17:24:20.907185 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 16 17:24:20.927670 master-0 kubenswrapper[4652]: I0216 17:24:20.927518 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb"
Feb 16 17:24:20.935356 master-0 kubenswrapper[4652]: E0216 17:24:20.935309 4652 projected.go:288] Couldn't get configMap openshift-insights/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:24:20.936009 master-0 kubenswrapper[4652]: E0216 17:24:20.935980 4652 projected.go:288] Couldn't get configMap openshift-service-ca/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:24:20.947767 master-0 kubenswrapper[4652]: I0216 17:24:20.947372 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 16 17:24:20.967646 master-0 kubenswrapper[4652]: I0216 17:24:20.967610 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 16 17:24:20.988293 master-0 kubenswrapper[4652]: I0216 17:24:20.988006 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Feb 16 17:24:21.006863 master-0 kubenswrapper[4652]: I0216 17:24:21.006834 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 16 17:24:21.007126 master-0 kubenswrapper[4652]: I0216 17:24:21.007097 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"506dbc1cfdf52cf8c7059dd20ced79175c7256b2cfbe9f487e38d31cd6503f85"}
Feb 16 17:24:21.007192 master-0 kubenswrapper[4652]: I0216 17:24:21.007140 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"3dd23045740da49e6333bc1111155824f90baa10dfed02d2dd261ec71c57fcc0"}
Feb 16 17:24:21.007192 master-0 kubenswrapper[4652]: I0216 17:24:21.007155 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-5bd989df77-gcfg6" event={"ID":"62fc29f4-557f-4a75-8b78-6ca425c81b81","Type":"ContainerStarted","Data":"5cd999d268cd906fbe9d2f9fdf2f05cdf56fe14f6d096e03728ce467f6885bb4"}
Feb 16 17:24:21.009518 master-0 kubenswrapper[4652]: I0216 17:24:21.009480 4652 generic.go:334] "Generic (PLEG): container finished" podID="0393fe12-2533-4c9c-a8e4-a58003c88f36" containerID="47c0ad01088000ea52fb4116c51eaf83dfcc26849ea7c6b42288202c4b943db1" exitCode=0
Feb 16 17:24:21.009824 master-0 kubenswrapper[4652]: I0216 17:24:21.009595 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerDied","Data":"47c0ad01088000ea52fb4116c51eaf83dfcc26849ea7c6b42288202c4b943db1"}
Feb 16 17:24:21.009864 master-0 kubenswrapper[4652]: I0216 17:24:21.009850 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerStarted","Data":"9150ec5c92e59e33d34178cf21691200b58f7460713295cd74b621cfef76f34d"}
Feb 16 17:24:21.011905 master-0 kubenswrapper[4652]: I0216 17:24:21.011881 4652 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 17:24:21.012445 master-0 kubenswrapper[4652]: I0216 17:24:21.012420 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-8569dd85ff-4vxmz_702322ac-7610-4568-9a68-b6acbd1f0c12/machine-approver-controller/7.log"
Feb 16 17:24:21.013070 master-0 kubenswrapper[4652]: I0216 17:24:21.013030 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-8569dd85ff-4vxmz" event={"ID":"702322ac-7610-4568-9a68-b6acbd1f0c12","Type":"ContainerStarted","Data":"6b79ebd81e96fe94af841e19982ef839de1783e10b5f15c4bd042bf4b678fa19"}
Feb 16 17:24:21.029900 master-0 kubenswrapper[4652]: I0216 17:24:21.029865 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-bstss"
Feb 16 17:24:21.046668 master-0 kubenswrapper[4652]: W0216 17:24:21.046208 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80d3b238_70c3_4e71_96a1_99405352033f.slice/crio-a48065e18c2456e58d2cf7ddd691e4ff2ab057e7457e28f0c451e96590943c25 WatchSource:0}: Error finding container a48065e18c2456e58d2cf7ddd691e4ff2ab057e7457e28f0c451e96590943c25: Status 404 returned error can't find the container with id a48065e18c2456e58d2cf7ddd691e4ff2ab057e7457e28f0c451e96590943c25
*v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 16 17:24:21.071323 master-0 kubenswrapper[4652]: I0216 17:24:21.071236 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 17:24:21.084413 master-0 kubenswrapper[4652]: I0216 17:24:21.084360 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:21.084413 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:21.084413 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:21.084413 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:21.084619 master-0 kubenswrapper[4652]: I0216 17:24:21.084414 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:21.094154 master-0 kubenswrapper[4652]: I0216 17:24:21.094130 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 17:24:21.108606 master-0 kubenswrapper[4652]: I0216 17:24:21.108574 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 17:24:21.127349 master-0 kubenswrapper[4652]: I0216 17:24:21.127313 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 17:24:21.147554 master-0 kubenswrapper[4652]: I0216 17:24:21.147508 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:24:21.167429 master-0 kubenswrapper[4652]: I0216 17:24:21.167386 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 17:24:21.187439 master-0 kubenswrapper[4652]: I0216 17:24:21.187406 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 17:24:21.207096 master-0 kubenswrapper[4652]: I0216 17:24:21.207068 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 17:24:21.227140 master-0 kubenswrapper[4652]: I0216 17:24:21.227104 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 17:24:21.247115 master-0 kubenswrapper[4652]: I0216 17:24:21.247066 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-wxz7g" Feb 16 17:24:21.267360 master-0 kubenswrapper[4652]: I0216 17:24:21.267316 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 17:24:21.287482 master-0 kubenswrapper[4652]: I0216 17:24:21.287453 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 17:24:21.307134 master-0 kubenswrapper[4652]: I0216 17:24:21.307089 4652 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 17:24:21.327174 master-0 kubenswrapper[4652]: I0216 17:24:21.327124 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 17:24:21.347359 master-0 kubenswrapper[4652]: I0216 17:24:21.347312 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 17:24:21.367533 master-0 kubenswrapper[4652]: I0216 17:24:21.367419 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj" Feb 16 17:24:21.387614 master-0 kubenswrapper[4652]: I0216 17:24:21.387569 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 17:24:21.407501 master-0 kubenswrapper[4652]: I0216 17:24:21.407456 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:24:21.426889 master-0 kubenswrapper[4652]: I0216 17:24:21.426857 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 17:24:21.453972 master-0 kubenswrapper[4652]: I0216 17:24:21.453936 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 17:24:21.466650 master-0 kubenswrapper[4652]: E0216 17:24:21.466548 4652 projected.go:288] Couldn't get configMap openshift-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:24:21.466650 master-0 kubenswrapper[4652]: E0216 17:24:21.466577 4652 projected.go:288] Couldn't get configMap openshift-cluster-samples-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:24:21.467756 master-0 kubenswrapper[4652]: I0216 17:24:21.467730 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 17:24:21.488062 master-0 kubenswrapper[4652]: I0216 17:24:21.487992 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:24:21.508455 master-0 kubenswrapper[4652]: I0216 17:24:21.508391 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn" Feb 16 17:24:21.527306 master-0 kubenswrapper[4652]: I0216 17:24:21.527205 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:24:21.555666 master-0 kubenswrapper[4652]: I0216 17:24:21.555599 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:24:21.567134 master-0 kubenswrapper[4652]: I0216 17:24:21.567093 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:24:21.572329 master-0 kubenswrapper[4652]: E0216 17:24:21.572306 4652 projected.go:288] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:24:21.586548 master-0 kubenswrapper[4652]: I0216 17:24:21.586514 4652 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:24:21.597322 master-0 kubenswrapper[4652]: I0216 17:24:21.597291 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:21.607300 master-0 kubenswrapper[4652]: I0216 17:24:21.607280 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:24:21.626876 master-0 kubenswrapper[4652]: I0216 17:24:21.626734 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 17:24:21.637170 master-0 kubenswrapper[4652]: E0216 17:24:21.637123 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nrzjr for pod openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:24:21.637373 master-0 kubenswrapper[4652]: E0216 17:24:21.637242 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr podName:d1524fc1-d157-435a-8bf8-7e877c45909d nodeName:}" failed. No retries permitted until 2026-02-16 17:24:37.63721262 +0000 UTC m=+35.025381176 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nrzjr" (UniqueName: "kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr") pod "cluster-samples-operator-f8cbff74c-spxm9" (UID: "d1524fc1-d157-435a-8bf8-7e877c45909d") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:24:21.647773 master-0 kubenswrapper[4652]: I0216 17:24:21.647712 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 17:24:21.668149 master-0 kubenswrapper[4652]: I0216 17:24:21.668089 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 17:24:21.687501 master-0 kubenswrapper[4652]: I0216 17:24:21.686971 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 17:24:21.708268 master-0 kubenswrapper[4652]: I0216 17:24:21.708195 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb" Feb 16 17:24:21.728198 master-0 kubenswrapper[4652]: I0216 17:24:21.728141 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 17:24:21.748416 master-0 kubenswrapper[4652]: I0216 17:24:21.748353 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 17:24:21.773887 master-0 kubenswrapper[4652]: I0216 17:24:21.773812 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 17:24:21.785658 master-0 kubenswrapper[4652]: I0216 17:24:21.785554 4652 request.go:700] Waited for 2.022851223s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&limit=500&resourceVersion=0 Feb 16 17:24:21.787410 master-0 kubenswrapper[4652]: I0216 17:24:21.787364 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 17:24:21.807346 master-0 kubenswrapper[4652]: I0216 17:24:21.807301 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 17:24:21.826773 master-0 kubenswrapper[4652]: I0216 17:24:21.826721 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 17:24:21.848629 master-0 kubenswrapper[4652]: I0216 17:24:21.848600 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 17:24:21.876315 master-0 kubenswrapper[4652]: I0216 17:24:21.876273 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 17:24:21.888095 master-0 kubenswrapper[4652]: I0216 17:24:21.887831 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 17:24:21.907675 master-0 kubenswrapper[4652]: I0216 17:24:21.906830 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 17:24:21.907675 master-0 kubenswrapper[4652]: E0216 17:24:21.906977 4652 projected.go:194] Error preparing data for projected volume kube-api-access-fhcw6 for pod openshift-apiserver/apiserver-fc4bf7f79-tqnlw: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:24:21.907675 master-0 kubenswrapper[4652]: E0216 17:24:21.907023 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6 podName:dce85b5e-6e92-4e0e-bee7-07b1a3634302 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:37.907008354 +0000 UTC m=+35.295176870 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-fhcw6" (UniqueName: "kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6") pod "apiserver-fc4bf7f79-tqnlw" (UID: "dce85b5e-6e92-4e0e-bee7-07b1a3634302") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:24:21.927841 master-0 kubenswrapper[4652]: I0216 17:24:21.927465 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 17:24:21.935884 master-0 kubenswrapper[4652]: E0216 17:24:21.935848 4652 projected.go:288] Couldn't get configMap openshift-insights/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:24:21.935884 master-0 kubenswrapper[4652]: E0216 17:24:21.935884 4652 projected.go:194] Error preparing data for projected volume kube-api-access-hnshv for pod openshift-insights/insights-operator-cb4f7b4cf-6qrw5: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:24:21.936058 master-0 kubenswrapper[4652]: E0216 17:24:21.935955 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:37.935933662 +0000 UTC m=+35.324102238 (durationBeforeRetry 16s). 
Feb 16 17:24:21.936058 master-0 kubenswrapper[4652]: E0216 17:24:21.935955 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv podName:c2511146-1d04-4ecd-a28e-79662ef7b9d3 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:37.935933662 +0000 UTC m=+35.324102238 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-hnshv" (UniqueName: "kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv") pod "insights-operator-cb4f7b4cf-6qrw5" (UID: "c2511146-1d04-4ecd-a28e-79662ef7b9d3") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:24:21.937309 master-0 kubenswrapper[4652]: E0216 17:24:21.937272 4652 projected.go:288] Couldn't get configMap openshift-service-ca/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:24:21.937363 master-0 kubenswrapper[4652]: E0216 17:24:21.937309 4652 projected.go:194] Error preparing data for projected volume kube-api-access-nqfds for pod openshift-service-ca/service-ca-676cd8b9b5-cp9rb: failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:24:21.937394 master-0 kubenswrapper[4652]: E0216 17:24:21.937379 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds podName:48801344-a48a-493e-aea4-19d998d0b708 nodeName:}" failed. No retries permitted until 2026-02-16 17:24:37.93736061 +0000 UTC m=+35.325529216 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-nqfds" (UniqueName: "kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds") pod "service-ca-676cd8b9b5-cp9rb" (UID: "48801344-a48a-493e-aea4-19d998d0b708") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:24:21.947175 master-0 kubenswrapper[4652]: I0216 17:24:21.947121 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-b4rnj"
Feb 16 17:24:21.967503 master-0 kubenswrapper[4652]: I0216 17:24:21.967452 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 16 17:24:21.987039 master-0 kubenswrapper[4652]: I0216 17:24:21.987012 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 16 17:24:22.012967 master-0 kubenswrapper[4652]: I0216 17:24:22.012510 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl"
Feb 16 17:24:22.013374 master-0 kubenswrapper[4652]: I0216 17:24:22.013322 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z69zq"
Feb 16 17:24:22.018585 master-0 kubenswrapper[4652]: I0216 17:24:22.018541 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerStarted","Data":"192bce0819a8e3ef735e7432920ceeaf3409685ff9f6bffc8c696e373c264acc"}
Feb 16 17:24:22.021498 master-0 kubenswrapper[4652]: I0216 17:24:22.021011 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerStarted","Data":"4eb02e8608a9aa5d8b4329e07b7f818caff0b804dedd2417c34db1a3230353b3"}
Feb 16 17:24:22.021498 master-0 kubenswrapper[4652]: I0216 17:24:22.021056 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-74b6595c6d-pfzq2" event={"ID":"80d3b238-70c3-4e71-96a1-99405352033f","Type":"ContainerStarted","Data":"a48065e18c2456e58d2cf7ddd691e4ff2ab057e7457e28f0c451e96590943c25"}
Feb 16 17:24:22.027058 master-0 kubenswrapper[4652]: I0216 17:24:22.027009 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 16 17:24:22.047195 master-0 kubenswrapper[4652]: I0216 17:24:22.046965 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 17:24:22.068138 master-0 kubenswrapper[4652]: I0216 17:24:22.068071 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 17:24:22.084128 master-0 kubenswrapper[4652]: I0216 17:24:22.083995 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:22.084128 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:22.084128 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:22.084128 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:22.084128 master-0 kubenswrapper[4652]: I0216 17:24:22.084038 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:22.088764 master-0 kubenswrapper[4652]: I0216 17:24:22.088724 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 16 17:24:22.107886 master-0 kubenswrapper[4652]: I0216 17:24:22.107817 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 16 17:24:22.127809 master-0 kubenswrapper[4652]: I0216 17:24:22.127758 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 17:24:22.149824 master-0 kubenswrapper[4652]: I0216 17:24:22.149722 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 17:24:22.167830 master-0 kubenswrapper[4652]: I0216 17:24:22.167781 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
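The `Caches populated for *v1.ConfigMap/...Secret` lines and the `failed to sync configmap cache: timed out waiting for the condition` mount errors are two sides of one mechanism: the kubelet keeps informer caches for the Secrets and ConfigMaps its pods reference, and a projected volume (such as a `kube-api-access-*` service-account token) cannot be set up until the relevant cache has synced. A rough, self-contained sketch of waiting on a ConfigMap cache with a deadline using client-go informers; the in-cluster config, namespace, and 16s timeout are illustrative assumptions, not the kubelet's internals:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Assumption: running in-cluster with RBAC to list/watch ConfigMaps.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Factory scoped to one namespace, loosely mirroring the kubelet's
	// per-namespace object caches (namespace name is illustrative).
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute, informers.WithNamespace("openshift-apiserver"))
	cmInformer := factory.Core().V1().ConfigMaps().Informer()

	// Bound the sync; 16s echoes the retry window in the log but is an
	// arbitrary choice here.
	ctx, cancel := context.WithTimeout(context.Background(), 16*time.Second)
	defer cancel()

	factory.Start(ctx.Done())
	if !cache.WaitForCacheSync(ctx.Done(), cmInformer.HasSynced) {
		// The analogue of the errors above: the watch never caught up
		// before the deadline.
		fmt.Println("failed to sync configmap cache: timed out waiting for the condition")
		return
	}
	fmt.Println("Caches populated for *v1.ConfigMap")
}
```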
Feb 16 17:24:22.194139 master-0 kubenswrapper[4652]: I0216 17:24:22.194091 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 16 17:24:22.208953 master-0 kubenswrapper[4652]: I0216 17:24:22.208911 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 16 17:24:22.227498 master-0 kubenswrapper[4652]: I0216 17:24:22.227431 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 16 17:24:22.263275 master-0 kubenswrapper[4652]: I0216 17:24:22.251572 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Feb 16 17:24:22.263275 master-0 kubenswrapper[4652]: I0216 17:24:22.251747 4652 kubelet_pods.go:1000] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/redhat-operators-lnzfx" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Feb 16 17:24:22.263275 master-0 kubenswrapper[4652]: I0216 17:24:22.251801 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lnzfx"
Feb 16 17:24:22.270636 master-0 kubenswrapper[4652]: I0216 17:24:22.270585 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 16 17:24:22.287572 master-0 kubenswrapper[4652]: I0216 17:24:22.287531 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 16 17:24:22.307347 master-0 kubenswrapper[4652]: I0216 17:24:22.307295 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 16 17:24:22.318220 master-0 kubenswrapper[4652]: I0216 17:24:22.318171 4652 kubelet_pods.go:1000] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/community-operators-7w4km" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Feb 16 17:24:22.318440 master-0 kubenswrapper[4652]: I0216 17:24:22.318298 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7w4km"
Feb 16 17:24:22.327987 master-0 kubenswrapper[4652]: I0216 17:24:22.327933 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 16 17:24:22.347335 master-0 kubenswrapper[4652]: I0216 17:24:22.347284 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rzjlw"
Feb 16 17:24:22.366926 master-0 kubenswrapper[4652]: I0216 17:24:22.366888 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 16 17:24:22.387341 master-0 kubenswrapper[4652]: I0216 17:24:22.387309 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 16 17:24:22.394657 master-0 kubenswrapper[4652]: W0216 17:24:22.394609 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3beb7bf_922f_425d_8a19_fd407a7153a8.slice/crio-68c6a683399a0bcf7c673783f3752d8335c281a2ef01dacee94d8a902e837ebc WatchSource:0}: Error finding container 68c6a683399a0bcf7c673783f3752d8335c281a2ef01dacee94d8a902e837ebc: Status 404 returned error can't find the container with id 68c6a683399a0bcf7c673783f3752d8335c281a2ef01dacee94d8a902e837ebc
Feb 16 17:24:22.407358 master-0 kubenswrapper[4652]: I0216 17:24:22.407320 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 16 17:24:22.428419 master-0 kubenswrapper[4652]: I0216 17:24:22.428374 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84"
Feb 16 17:24:22.448196 master-0 kubenswrapper[4652]: I0216 17:24:22.447887 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 17:24:22.467989 master-0 kubenswrapper[4652]: I0216 17:24:22.467954 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 17:24:22.487445 master-0 kubenswrapper[4652]: I0216 17:24:22.487408 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 17:24:22.492823 master-0 kubenswrapper[4652]: E0216 17:24:22.492777 4652 projected.go:194] Error preparing data for projected volume kube-api-access-xvwzr for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6: failed to sync configmap cache: timed out waiting for the condition
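Each failed SetUp is requeued with a doubling delay, which is why the `nestedpendingoperations.go` records here all report `durationBeforeRetry 16s` roughly 35s after kubelet start (500ms → 1s → 2s → 4s → 8s → 16s across six failures). A stdlib sketch of that capped exponential backoff; the 500ms initial value and 2m2s cap match my reading of the upstream kubelet defaults but should be treated as assumptions:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed constants: kubelet's per-operation backoff starts around
	// 500ms, doubles on each failure, and caps near 2m2s.
	const (
		initialDelay = 500 * time.Millisecond
		maxDelay     = 2*time.Minute + 2*time.Second
	)
	delay := initialDelay
	for attempt := 1; attempt <= 10; attempt++ {
		// Attempt 6 prints 16s, matching durationBeforeRetry in the log.
		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

The cap keeps a persistently failing mount from backing off indefinitely while still sparing the API server and mount path from tight retry loops.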
Feb 16 17:24:22.493022 master-0 kubenswrapper[4652]: E0216 17:24:22.492852 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr podName:8e623376-9e14-4341-9dcf-7a7c218b6f9f nodeName:}" failed. No retries permitted until 2026-02-16 17:24:38.492834009 +0000 UTC m=+35.881002515 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-xvwzr" (UniqueName: "kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr") pod "kube-storage-version-migrator-operator-cd5474998-829l6" (UID: "8e623376-9e14-4341-9dcf-7a7c218b6f9f") : failed to sync configmap cache: timed out waiting for the condition
Feb 16 17:24:22.507665 master-0 kubenswrapper[4652]: I0216 17:24:22.507636 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 17:24:22.527066 master-0 kubenswrapper[4652]: I0216 17:24:22.527027 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 16 17:24:22.547628 master-0 kubenswrapper[4652]: I0216 17:24:22.547606 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw"
Feb 16 17:24:22.567191 master-0 kubenswrapper[4652]: I0216 17:24:22.567169 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 16 17:24:22.587089 master-0 kubenswrapper[4652]: I0216 17:24:22.587073 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Feb 16 17:24:22.608428 master-0 kubenswrapper[4652]: I0216 17:24:22.608330 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 16 17:24:22.627035 master-0 kubenswrapper[4652]: I0216 17:24:22.627001 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin"
Feb 16 17:24:22.627622 master-0 kubenswrapper[4652]: W0216 17:24:22.627594 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod822e1750_652e_4ceb_8fea_b2c1c905b0f1.slice/crio-3802156a378faaaf6273478b07a6a6c2f9b2439f75af9bf19b9d0513b5a4eca7 WatchSource:0}: Error finding container 3802156a378faaaf6273478b07a6a6c2f9b2439f75af9bf19b9d0513b5a4eca7: Status 404 returned error can't find the container with id 3802156a378faaaf6273478b07a6a6c2f9b2439f75af9bf19b9d0513b5a4eca7
Feb 16 17:24:22.648105 master-0 kubenswrapper[4652]: I0216 17:24:22.648079 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 16 17:24:22.667676 master-0 kubenswrapper[4652]: I0216 17:24:22.667596 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 16 17:24:22.682162 master-0 kubenswrapper[4652]: W0216 17:24:22.682110 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc9a20f4_255a_4312_8f43_174a28c06340.slice/crio-88a5a83d7b753108885c13b4535c60e57ad4883968504e6d578eb1fe2eb45677 WatchSource:0}: Error finding container 88a5a83d7b753108885c13b4535c60e57ad4883968504e6d578eb1fe2eb45677: Status 404 returned error can't find the container with id 88a5a83d7b753108885c13b4535c60e57ad4883968504e6d578eb1fe2eb45677
Feb 16 17:24:22.687162 master-0 kubenswrapper[4652]: I0216 17:24:22.687137 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6858s"
Feb 16 17:24:22.715391 master-0 kubenswrapper[4652]: I0216 17:24:22.714700 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 16 17:24:23.027615 master-0 kubenswrapper[4652]: I0216 17:24:23.027553 4652 generic.go:334] "Generic (PLEG): container finished" podID="f3beb7bf-922f-425d-8a19-fd407a7153a8" containerID="b530955f2fd6da97e91158214044b5e0c532d2247fdbc50670bdcbfaec06c865" exitCode=0
Feb 16 17:24:23.027906 master-0 kubenswrapper[4652]: I0216 17:24:23.027623 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerDied","Data":"b530955f2fd6da97e91158214044b5e0c532d2247fdbc50670bdcbfaec06c865"}
Feb 16 17:24:23.027906 master-0 kubenswrapper[4652]: I0216 17:24:23.027721 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"68c6a683399a0bcf7c673783f3752d8335c281a2ef01dacee94d8a902e837ebc"}
Feb 16 17:24:23.029800 master-0 kubenswrapper[4652]: I0216 17:24:23.029559 4652 generic.go:334] "Generic (PLEG): container finished" podID="cc9a20f4-255a-4312-8f43-174a28c06340" containerID="5cf2a56b82b6e323310745e18f0a4874831382569e85d2eb062d0da55372d34d" exitCode=0
Feb 16 17:24:23.029800 master-0 kubenswrapper[4652]: I0216 17:24:23.029622 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerDied","Data":"5cf2a56b82b6e323310745e18f0a4874831382569e85d2eb062d0da55372d34d"}
Feb 16 17:24:23.029800 master-0 kubenswrapper[4652]: I0216 17:24:23.029645 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerStarted","Data":"88a5a83d7b753108885c13b4535c60e57ad4883968504e6d578eb1fe2eb45677"}
Feb 16 17:24:23.031199 master-0 kubenswrapper[4652]: I0216 17:24:23.031172 4652 generic.go:334] "Generic (PLEG): container finished" podID="0393fe12-2533-4c9c-a8e4-a58003c88f36" containerID="192bce0819a8e3ef735e7432920ceeaf3409685ff9f6bffc8c696e373c264acc" exitCode=0
Feb 16 17:24:23.031384 master-0 kubenswrapper[4652]: I0216 17:24:23.031232 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerDied","Data":"192bce0819a8e3ef735e7432920ceeaf3409685ff9f6bffc8c696e373c264acc"}
Feb 16 17:24:23.032725 master-0 kubenswrapper[4652]: I0216 17:24:23.032687 4652 generic.go:334] "Generic (PLEG): container finished" podID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerID="72cfa1b389d403ab72b02216274ff4230439528e448b41fad4f864af707f65d0" exitCode=0
Feb 16 17:24:23.032793 master-0 kubenswrapper[4652]: I0216 17:24:23.032724 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerDied","Data":"72cfa1b389d403ab72b02216274ff4230439528e448b41fad4f864af707f65d0"}
Feb 16 17:24:23.032793 master-0 kubenswrapper[4652]: I0216 17:24:23.032751 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerStarted","Data":"3802156a378faaaf6273478b07a6a6c2f9b2439f75af9bf19b9d0513b5a4eca7"}
Feb 16 17:24:23.083103 master-0 kubenswrapper[4652]: I0216 17:24:23.082924 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:23.083103 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:23.083103 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:23.083103 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:23.083103 master-0 kubenswrapper[4652]: I0216 17:24:23.083013 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:23.592137 master-0 kubenswrapper[4652]: I0216 17:24:23.592081 4652 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Feb 16 17:24:24.039376 master-0 kubenswrapper[4652]: I0216 17:24:24.039241 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerStarted","Data":"fc4a189ed97846fc245759006c660c2d7501f58f500a4b066e795aeb0d10008a"}
Feb 16 17:24:24.040890 master-0 kubenswrapper[4652]: I0216 17:24:24.040861 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"3a7a5f0fe222a3d6c22888efe09495d3c05226feec7a1b489195ed2e1cfc9beb"}
Feb 16 17:24:24.044239 master-0 kubenswrapper[4652]: I0216 17:24:24.044196 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4kd66" event={"ID":"0393fe12-2533-4c9c-a8e4-a58003c88f36","Type":"ContainerStarted","Data":"2be5b3e38beeb3939111bda8e303a8d0ccb7239aa6442b3c1a7465eef966474a"}
Feb 16 17:24:24.084797 master-0 kubenswrapper[4652]: I0216 17:24:24.084734 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:24.084797 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:24.084797 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:24.084797 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:24.084797 master-0 kubenswrapper[4652]: I0216 17:24:24.084785 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:25.049885 master-0 kubenswrapper[4652]: I0216 17:24:25.049472 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerStarted","Data":"9848f9658d4c53082a0c3c870391c6ddfc59fddfdfd7ea8850f0d1c3f3f624c2"}
Feb 16 17:24:25.051296 master-0 kubenswrapper[4652]: I0216 17:24:25.051000 4652 generic.go:334] "Generic (PLEG): container finished" podID="f3beb7bf-922f-425d-8a19-fd407a7153a8" containerID="3a7a5f0fe222a3d6c22888efe09495d3c05226feec7a1b489195ed2e1cfc9beb" exitCode=0
Feb 16 17:24:25.051296 master-0 kubenswrapper[4652]: I0216 17:24:25.051034 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerDied","Data":"3a7a5f0fe222a3d6c22888efe09495d3c05226feec7a1b489195ed2e1cfc9beb"}
Feb 16 17:24:25.082414 master-0 kubenswrapper[4652]: I0216 17:24:25.082361 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:25.082414 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:25.082414 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:25.082414 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:25.082414 master-0 kubenswrapper[4652]: I0216 17:24:25.082409 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:26.057640 master-0 kubenswrapper[4652]: I0216 17:24:26.057564 4652 generic.go:334] "Generic (PLEG): container finished" podID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerID="fc4a189ed97846fc245759006c660c2d7501f58f500a4b066e795aeb0d10008a" exitCode=0
Feb 16 17:24:26.057640 master-0 kubenswrapper[4652]: I0216 17:24:26.057643 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerDied","Data":"fc4a189ed97846fc245759006c660c2d7501f58f500a4b066e795aeb0d10008a"}
Feb 16 17:24:26.062088 master-0 kubenswrapper[4652]: I0216 17:24:26.062043 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z69zq" event={"ID":"f3beb7bf-922f-425d-8a19-fd407a7153a8","Type":"ContainerStarted","Data":"a9c25b32013951b921f10090fc2667160205aa4097c3d982571e84d90bb810dc"}
Feb 16 17:24:26.064628 master-0 kubenswrapper[4652]: I0216 17:24:26.064562 4652 generic.go:334] "Generic (PLEG): container finished" podID="cc9a20f4-255a-4312-8f43-174a28c06340" containerID="9848f9658d4c53082a0c3c870391c6ddfc59fddfdfd7ea8850f0d1c3f3f624c2" exitCode=0
Feb 16 17:24:26.064628 master-0 kubenswrapper[4652]: I0216 17:24:26.064612 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerDied","Data":"9848f9658d4c53082a0c3c870391c6ddfc59fddfdfd7ea8850f0d1c3f3f624c2"}
Feb 16 17:24:26.082833 master-0 kubenswrapper[4652]: I0216 17:24:26.082518 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:26.082833 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:26.082833 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:26.082833 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:26.082833 master-0 kubenswrapper[4652]: I0216 17:24:26.082827 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:27.070108 master-0 kubenswrapper[4652]: I0216 17:24:27.070065 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lnzfx" event={"ID":"822e1750-652e-4ceb-8fea-b2c1c905b0f1","Type":"ContainerStarted","Data":"bfa080e09f87215cd5db6fd760e3d2e5212c52487cff64c18ce82a6b1067bc20"}
Feb 16 17:24:27.072625 master-0 kubenswrapper[4652]: I0216 17:24:27.072583 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7w4km" event={"ID":"cc9a20f4-255a-4312-8f43-174a28c06340","Type":"ContainerStarted","Data":"167a86509608364aabc48782ca387177de500bc50e030a975d5835ef557df8af"}
Feb 16 17:24:27.082758 master-0 kubenswrapper[4652]: I0216 17:24:27.082698 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:27.082758 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:27.082758 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:27.082758 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:27.083052 master-0 kubenswrapper[4652]: I0216 17:24:27.082784 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:27.243109 master-0 kubenswrapper[4652]: I0216 17:24:27.243050 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Feb 16 17:24:28.082991 master-0 kubenswrapper[4652]: I0216 17:24:28.082934 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:28.082991 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:28.082991 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:28.082991 master-0 kubenswrapper[4652]: healthz check failed
Feb 16 17:24:28.084121 master-0 kubenswrapper[4652]: I0216 17:24:28.084079 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 17:24:29.082311 master-0 kubenswrapper[4652]: I0216 17:24:29.082263 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 17:24:29.082311 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld
Feb 16 17:24:29.082311 master-0 kubenswrapper[4652]: [+]process-running ok
Feb 16 17:24:29.082311 master-0 kubenswrapper[4652]: healthz check failed
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:30.020558 master-0 kubenswrapper[4652]: I0216 17:24:30.020352 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:30.021269 master-0 kubenswrapper[4652]: I0216 17:24:30.020620 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:24:30.040234 master-0 kubenswrapper[4652]: I0216 17:24:30.040178 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:30.040499 master-0 kubenswrapper[4652]: I0216 17:24:30.040231 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:30.047867 master-0 kubenswrapper[4652]: I0216 17:24:30.047832 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-flr86" Feb 16 17:24:30.083451 master-0 kubenswrapper[4652]: I0216 17:24:30.083289 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:30.083451 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:30.083451 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:30.083451 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:30.083796 master-0 kubenswrapper[4652]: I0216 17:24:30.083485 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:30.131220 master-0 kubenswrapper[4652]: I0216 17:24:30.131012 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:30.187428 master-0 kubenswrapper[4652]: I0216 17:24:30.187388 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4kd66" Feb 16 17:24:31.081989 master-0 kubenswrapper[4652]: I0216 17:24:31.081920 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:31.081989 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:31.081989 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:31.081989 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:31.081989 master-0 kubenswrapper[4652]: I0216 17:24:31.081985 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:32.031113 master-0 kubenswrapper[4652]: I0216 17:24:32.030667 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:32.031113 master-0 kubenswrapper[4652]: I0216 17:24:32.030727 4652 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:32.077101 master-0 kubenswrapper[4652]: I0216 17:24:32.077061 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:32.082455 master-0 kubenswrapper[4652]: I0216 17:24:32.082413 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:32.082455 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:32.082455 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:32.082455 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:32.083435 master-0 kubenswrapper[4652]: I0216 17:24:32.083371 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:32.141306 master-0 kubenswrapper[4652]: I0216 17:24:32.141243 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z69zq" Feb 16 17:24:32.252909 master-0 kubenswrapper[4652]: I0216 17:24:32.252771 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:32.252909 master-0 kubenswrapper[4652]: I0216 17:24:32.252831 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:32.318947 master-0 kubenswrapper[4652]: I0216 17:24:32.318867 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:32.320591 master-0 kubenswrapper[4652]: I0216 17:24:32.320555 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:32.367814 master-0 kubenswrapper[4652]: I0216 17:24:32.367761 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:33.083640 master-0 kubenswrapper[4652]: I0216 17:24:33.083564 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:33.083640 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:33.083640 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:33.083640 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:33.084689 master-0 kubenswrapper[4652]: I0216 17:24:33.083649 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:33.140484 master-0 kubenswrapper[4652]: I0216 17:24:33.140389 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7w4km" Feb 16 17:24:33.295802 master-0 kubenswrapper[4652]: I0216 17:24:33.295705 4652 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lnzfx" podUID="822e1750-652e-4ceb-8fea-b2c1c905b0f1" containerName="registry-server" probeResult="failure" output=< Feb 16 17:24:33.295802 master-0 kubenswrapper[4652]: timeout: failed to connect service ":50051" within 1s Feb 16 17:24:33.295802 master-0 kubenswrapper[4652]: > Feb 16 17:24:34.082862 master-0 kubenswrapper[4652]: I0216 17:24:34.082780 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:34.082862 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:34.082862 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:34.082862 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:34.082862 master-0 kubenswrapper[4652]: I0216 17:24:34.082847 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:35.082755 master-0 kubenswrapper[4652]: I0216 17:24:35.082678 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:35.082755 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:35.082755 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:35.082755 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:35.082755 master-0 kubenswrapper[4652]: I0216 17:24:35.082744 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:35.584375 master-0 kubenswrapper[4652]: I0216 17:24:35.584302 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.584375 master-0 kubenswrapper[4652]: I0216 17:24:35.584361 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.584375 master-0 kubenswrapper[4652]: I0216 17:24:35.584382 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:35.584672 master-0 kubenswrapper[4652]: I0216 17:24:35.584400 4652 reconciler_common.go:218] "operationExecutor.MountVolume 
Feb 16 17:24:35.584672 master-0 kubenswrapper[4652]: I0216 17:24:35.584400 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:24:35.584757 master-0 kubenswrapper[4652]: I0216 17:24:35.584654 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:24:35.584922 master-0 kubenswrapper[4652]: I0216 17:24:35.584832 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:35.585068 master-0 kubenswrapper[4652]: I0216 17:24:35.585025 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:35.585118 master-0 kubenswrapper[4652]: I0216 17:24:35.585093 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:35.585173 master-0 kubenswrapper[4652]: I0216 17:24:35.585146 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:24:35.585225 master-0 kubenswrapper[4652]: I0216 17:24:35.585200 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v"
Feb 16 17:24:35.585861 master-0 kubenswrapper[4652]: I0216 17:24:35.585286 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq"
Feb 16 17:24:35.585861 master-0 kubenswrapper[4652]: I0216 17:24:35.585419 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:35.585861 master-0 kubenswrapper[4652]: I0216 17:24:35.585516 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b"
Feb 16 17:24:35.585861 master-0 kubenswrapper[4652]: I0216 17:24:35.585606 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:35.585861 master-0 kubenswrapper[4652]: I0216 17:24:35.585695 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:24:35.585861 master-0 kubenswrapper[4652]: I0216 17:24:35.585782 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:35.586156 master-0 kubenswrapper[4652]: I0216 17:24:35.585870 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:35.586156 master-0 kubenswrapper[4652]: I0216 17:24:35.585920 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:35.586156 master-0 kubenswrapper[4652]: I0216 17:24:35.586010 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:24:35.586156 master-0 kubenswrapper[4652]: I0216 17:24:35.586100 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:35.586377 master-0 kubenswrapper[4652]: I0216 17:24:35.586236 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:24:35.586746 master-0 kubenswrapper[4652]: I0216 17:24:35.586700 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:24:35.586916 master-0 kubenswrapper[4652]: I0216 17:24:35.586834 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:24:35.587006 master-0 kubenswrapper[4652]: I0216 17:24:35.586937 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:35.587189 master-0 kubenswrapper[4652]: I0216 17:24:35.587110 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:35.587303 master-0 kubenswrapper[4652]: I0216 17:24:35.587268 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:35.587398 master-0 kubenswrapper[4652]: I0216 17:24:35.587370 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:35.587513 master-0 kubenswrapper[4652]: I0216 17:24:35.587468 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:35.587589 master-0 kubenswrapper[4652]: I0216 17:24:35.587563 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw"
Feb 16 17:24:35.587690 master-0 kubenswrapper[4652]: I0216 17:24:35.587661 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:35.587789 master-0 kubenswrapper[4652]: I0216 17:24:35.587760 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:24:35.588882 master-0 kubenswrapper[4652]: I0216 17:24:35.588775 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr"
Feb 16 17:24:35.589206 master-0 kubenswrapper[4652]: I0216 17:24:35.589157 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:35.590230 master-0 kubenswrapper[4652]: I0216 17:24:35.590186 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:35.590987 master-0 kubenswrapper[4652]: I0216 17:24:35.590953 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9609a4f3-b947-47af-a685-baae26c50fa3-metrics-tls\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d"
Feb 16 17:24:35.592030 master-0 kubenswrapper[4652]: I0216 17:24:35.591843 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/442600dc-09b2-4fee-9f89-777296b2ee40-serving-cert\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k"
Feb 16 17:24:35.592030 master-0 kubenswrapper[4652]: I0216 17:24:35.591918 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/2d96ccdc-0b09-437d-bfca-1958af5d9953-metrics-tls\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:24:35.592030 master-0 kubenswrapper[4652]: I0216 17:24:35.591936 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-profile-collector-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k"
Feb 16 17:24:35.592905 master-0 kubenswrapper[4652]: I0216 17:24:35.592864 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:35.592975 master-0 kubenswrapper[4652]: I0216 17:24:35.592931 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/b04ee64e-5e83-499c-812d-749b2b6824c6-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:35.594227 master-0 kubenswrapper[4652]: I0216 17:24:35.594189 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-serving-cert\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:24:35.594643 master-0 kubenswrapper[4652]: I0216 17:24:35.594614 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:35.596022 master-0 kubenswrapper[4652]: I0216 17:24:35.595964 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:35.596490 master-0 kubenswrapper[4652]: I0216 17:24:35.596445 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-ca-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr"
Feb 16 17:24:35.597074 master-0 kubenswrapper[4652]: I0216 17:24:35.596733 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-config\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb"
Feb 16 17:24:35.597074 master-0 kubenswrapper[4652]: I0216 17:24:35.596801 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-config\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8"
Feb 16 17:24:35.597074 master-0 kubenswrapper[4652]: I0216 17:24:35.596814 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
Feb 16 17:24:35.597074 master-0 kubenswrapper[4652]: I0216 17:24:35.596820 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk"
Feb 16 17:24:35.597074 master-0 kubenswrapper[4652]: I0216 17:24:35.596853 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/48801344-a48a-493e-aea4-19d998d0b708-signing-cabundle\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb"
Feb 16 17:24:35.597859 master-0 kubenswrapper[4652]: I0216 17:24:35.597818 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-telemeter-trusted-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz"
Feb 16 17:24:35.598157 master-0 kubenswrapper[4652]: I0216 17:24:35.598111 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0"
Feb 16 17:24:35.598864 master-0 kubenswrapper[4652]: I0216 17:24:35.598831 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d96ccdc-0b09-437d-bfca-1958af5d9953-config-volume\") pod \"dns-default-qcgxx\" (UID: \"2d96ccdc-0b09-437d-bfca-1958af5d9953\") " pod="openshift-dns/dns-default-qcgxx"
Feb 16 17:24:35.598864 master-0 kubenswrapper[4652]: I0216 17:24:35.598844 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.599043 master-0 kubenswrapper[4652]: I0216 17:24:35.598892 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-audit\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.599190 master-0 kubenswrapper[4652]: I0216 17:24:35.599154 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/737fcc7d-d850-4352-9f17-383c85d5bc28-config\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:35.599669 master-0 kubenswrapper[4652]: I0216 17:24:35.599633 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/54f29618-42c2-4270-9af7-7d82852d7cec-ca-certs\") pod \"operator-controller-controller-manager-85c9b89969-lj58b\" (UID: \"54f29618-42c2-4270-9af7-7d82852d7cec\") " pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:35.600778 master-0 kubenswrapper[4652]: I0216 17:24:35.600167 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cco-trusted-ca\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:35.600778 master-0 kubenswrapper[4652]: I0216 17:24:35.600663 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9609a4f3-b947-47af-a685-baae26c50fa3-trusted-ca\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:35.600954 master-0 kubenswrapper[4652]: I0216 17:24:35.600916 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-serving-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.602407 master-0 kubenswrapper[4652]: I0216 17:24:35.602376 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-trusted-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:35.609953 master-0 kubenswrapper[4652]: I0216 17:24:35.609905 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:35.689526 master-0 kubenswrapper[4652]: I0216 17:24:35.689468 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:35.689526 master-0 kubenswrapper[4652]: I0216 17:24:35.689523 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:35.689743 master-0 kubenswrapper[4652]: I0216 17:24:35.689714 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:35.689794 master-0 kubenswrapper[4652]: I0216 17:24:35.689764 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.689826 master-0 kubenswrapper[4652]: I0216 17:24:35.689794 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.690460 master-0 kubenswrapper[4652]: I0216 17:24:35.690009 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:35.690460 master-0 kubenswrapper[4652]: I0216 17:24:35.690363 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.690460 master-0 kubenswrapper[4652]: I0216 17:24:35.690415 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " 
pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:35.690460 master-0 kubenswrapper[4652]: I0216 17:24:35.690441 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:35.690640 master-0 kubenswrapper[4652]: I0216 17:24:35.690468 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:35.690640 master-0 kubenswrapper[4652]: I0216 17:24:35.690490 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:35.690640 master-0 kubenswrapper[4652]: I0216 17:24:35.690512 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.690640 master-0 kubenswrapper[4652]: I0216 17:24:35.690532 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:35.690640 master-0 kubenswrapper[4652]: I0216 17:24:35.690552 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:35.690640 master-0 kubenswrapper[4652]: I0216 17:24:35.690571 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.690640 master-0 kubenswrapper[4652]: I0216 17:24:35.690592 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: 
\"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:35.690640 master-0 kubenswrapper[4652]: I0216 17:24:35.690611 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.690640 master-0 kubenswrapper[4652]: I0216 17:24:35.690631 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690652 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690673 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690698 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690715 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690735 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690756 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" 
(UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690776 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690796 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690833 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690852 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690872 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:35.690902 master-0 kubenswrapper[4652]: I0216 17:24:35.690890 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:35.691381 master-0 kubenswrapper[4652]: I0216 17:24:35.690912 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:35.691875 master-0 kubenswrapper[4652]: I0216 17:24:35.691844 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: 
\"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.691925 master-0 kubenswrapper[4652]: I0216 17:24:35.691875 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:35.691925 master-0 kubenswrapper[4652]: I0216 17:24:35.691908 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:35.692007 master-0 kubenswrapper[4652]: I0216 17:24:35.691944 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.692007 master-0 kubenswrapper[4652]: I0216 17:24:35.691971 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.692007 master-0 kubenswrapper[4652]: I0216 17:24:35.691995 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692019 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692044 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692068 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.692979 
master-0 kubenswrapper[4652]: I0216 17:24:35.692094 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692118 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692141 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692158 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692174 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692197 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692221 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692257 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692308 4652 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692328 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692344 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692360 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.692979 master-0 kubenswrapper[4652]: I0216 17:24:35.692446 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.695509 master-0 kubenswrapper[4652]: I0216 17:24:35.694823 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/18e9a9d3-9b18-4c19-9558-f33c68101922-package-server-manager-serving-cert\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:35.695509 master-0 kubenswrapper[4652]: I0216 17:24:35.695207 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:35.695509 master-0 kubenswrapper[4652]: I0216 17:24:35.695461 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:35.696669 master-0 kubenswrapper[4652]: I0216 17:24:35.696630 4652 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-profile-collector-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:35.696772 master-0 kubenswrapper[4652]: I0216 17:24:35.696733 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e69d8c51-e2a6-4f61-9c26-072784f6cf40-serving-cert\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:35.698391 master-0 kubenswrapper[4652]: I0216 17:24:35.698262 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.698453 master-0 kubenswrapper[4652]: I0216 17:24:35.698408 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-client\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.698668 master-0 kubenswrapper[4652]: I0216 17:24:35.698639 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1363cb7b-62cc-497b-af6f-4d5e0eb7f174-cert\") pod \"ingress-canary-qqvg4\" (UID: \"1363cb7b-62cc-497b-af6f-4d5e0eb7f174\") " pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:35.700205 master-0 kubenswrapper[4652]: I0216 17:24:35.700151 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:35.712244 master-0 kubenswrapper[4652]: I0216 17:24:35.711931 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-tls-assets\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.713070 master-0 kubenswrapper[4652]: I0216 17:24:35.712998 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/d9859457-f0d1-4754-a6c5-cf05d5abf447-kube-api-access-t4gl5\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:35.713362 master-0 kubenswrapper[4652]: I0216 17:24:35.713300 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod 
\"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.714323 master-0 kubenswrapper[4652]: I0216 17:24:35.713557 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-tls\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:35.714323 master-0 kubenswrapper[4652]: I0216 17:24:35.713893 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29402454-a920-471e-895e-764235d16eb4-serving-cert\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:35.714323 master-0 kubenswrapper[4652]: I0216 17:24:35.713897 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dptnc\" (UniqueName: \"kubernetes.io/projected/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-kube-api-access-dptnc\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:35.714323 master-0 kubenswrapper[4652]: I0216 17:24:35.713937 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.714323 master-0 kubenswrapper[4652]: I0216 17:24:35.713903 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-tls\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:35.714323 master-0 kubenswrapper[4652]: I0216 17:24:35.714087 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-server-tls\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.714654 master-0 kubenswrapper[4652]: I0216 17:24:35.714603 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-encryption-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.714839 master-0 kubenswrapper[4652]: I0216 17:24:35.714800 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: 
\"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.715119 master-0 kubenswrapper[4652]: I0216 17:24:35.715065 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/55d635cd-1f0d-4086-96f2-9f3524f3f18c-kube-state-metrics-tls\") pod \"kube-state-metrics-7cc9598d54-8j5rk\" (UID: \"55d635cd-1f0d-4086-96f2-9f3524f3f18c\") " pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:35.715360 master-0 kubenswrapper[4652]: I0216 17:24:35.715330 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54fba066-0e9e-49f6-8a86-34d5b4b660df-monitoring-plugin-cert\") pod \"monitoring-plugin-555857f695-nlrnr\" (UID: \"54fba066-0e9e-49f6-8a86-34d5b4b660df\") " pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:35.715607 master-0 kubenswrapper[4652]: I0216 17:24:35.715576 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-proxy-tls\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:35.715725 master-0 kubenswrapper[4652]: I0216 17:24:35.715700 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/48801344-a48a-493e-aea4-19d998d0b708-signing-key\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:35.729827 master-0 kubenswrapper[4652]: I0216 17:24:35.716683 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-etcd-client\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.729827 master-0 kubenswrapper[4652]: I0216 17:24:35.717033 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-apiservice-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:35.729827 master-0 kubenswrapper[4652]: I0216 17:24:35.717067 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/62220aa5-4065-472c-8a17-c0a58942ab8a-srv-cert\") pod \"olm-operator-6b56bd877c-p7k2k\" (UID: \"62220aa5-4065-472c-8a17-c0a58942ab8a\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:35.729827 master-0 kubenswrapper[4652]: I0216 17:24:35.717667 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaf7edff-0a89-4ac0-b9dd-511e098b5434-serving-cert\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:35.729827 master-0 kubenswrapper[4652]: I0216 17:24:35.717787 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e73ee493-de15-44c2-bd51-e12fcbb27a15-webhook-cert\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:35.729827 master-0 kubenswrapper[4652]: I0216 17:24:35.717788 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.729827 master-0 kubenswrapper[4652]: I0216 17:24:35.722551 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d9859457-f0d1-4754-a6c5-cf05d5abf447-metrics-tls\") pod \"dns-operator-86b8869b79-nhxlp\" (UID: \"d9859457-f0d1-4754-a6c5-cf05d5abf447\") " pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:35.743441 master-0 kubenswrapper[4652]: I0216 17:24:35.743367 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e623376-9e14-4341-9dcf-7a7c218b6f9f-config\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:35.743441 master-0 kubenswrapper[4652]: I0216 17:24:35.743403 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06067627-6ccf-4cc8-bd20-dabdd776bb46-serving-certs-ca-bundle\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:35.743441 master-0 kubenswrapper[4652]: I0216 17:24:35.743403 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/442600dc-09b2-4fee-9f89-777296b2ee40-config\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:35.743669 master-0 kubenswrapper[4652]: I0216 17:24:35.743403 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f44170a-3c1c-4944-b971-251f75a51fc3-nginx-conf\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:35.743669 master-0 kubenswrapper[4652]: I0216 17:24:35.743529 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.744354 master-0 kubenswrapper[4652]: I0216 17:24:35.744314 4652 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-config\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:35.744722 master-0 kubenswrapper[4652]: I0216 17:24:35.744656 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.744722 master-0 kubenswrapper[4652]: I0216 17:24:35.744669 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-config\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:35.744998 master-0 kubenswrapper[4652]: I0216 17:24:35.744960 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/404c402a-705f-4352-b9df-b89562070d9c-images\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:35.745184 master-0 kubenswrapper[4652]: I0216 17:24:35.745110 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.745508 master-0 kubenswrapper[4652]: I0216 17:24:35.745474 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.745671 master-0 kubenswrapper[4652]: I0216 17:24:35.745646 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.746969 master-0 kubenswrapper[4652]: I0216 17:24:35.746939 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-config\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.747033 master-0 kubenswrapper[4652]: I0216 17:24:35.747006 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-grpc-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" 
(UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.747147 master-0 kubenswrapper[4652]: I0216 17:24:35.747098 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:35.749214 master-0 kubenswrapper[4652]: I0216 17:24:35.749173 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:35.749355 master-0 kubenswrapper[4652]: I0216 17:24:35.749325 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:35.749677 master-0 kubenswrapper[4652]: I0216 17:24:35.749606 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-trusted-ca\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:35.749948 master-0 kubenswrapper[4652]: I0216 17:24:35.749892 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.752432 master-0 kubenswrapper[4652]: I0216 17:24:35.752402 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ee84198d-6357-4429-a90c-455c3850a788-auth-proxy-config\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:35.752518 master-0 kubenswrapper[4652]: I0216 17:24:35.752494 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-etcd-serving-ca\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.763755 master-0 kubenswrapper[4652]: I0216 17:24:35.763698 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:35.767011 master-0 kubenswrapper[4652]: I0216 17:24:35.766950 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qqvg4" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.793894 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.793953 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.793980 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794010 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794037 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794063 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794108 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794149 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794183 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794258 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794338 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794364 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794388 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794411 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794440 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794575 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " 
pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794620 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794658 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794750 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794775 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794798 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794832 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794857 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794882 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 
17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.794907 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795045 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795070 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795092 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795128 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795167 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795202 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795225 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795256 4652 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795302 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795326 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795351 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795377 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795401 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795428 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795451 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" 
Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795477 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795503 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795528 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795551 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.795525 master-0 kubenswrapper[4652]: I0216 17:24:35.795575 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.795947 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.798856 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.798935 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.805879 master-0 
kubenswrapper[4652]: I0216 17:24:35.798976 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.799011 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.798976 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f42cr\" (UniqueName: \"kubernetes.io/projected/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-kube-api-access-f42cr\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.799191 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-image-import-ca\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.799224 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edbaac23-11f0-4bc7-a7ce-b593c774c0fa-serving-cert\") pod \"openshift-controller-manager-operator-5f5f84757d-ktmm9\" (UID: \"edbaac23-11f0-4bc7-a7ce-b593c774c0fa\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.799258 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.799820 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0517b180-00ee-47fe-a8e7-36a3931b7e72-serving-cert\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.800249 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d020c902-2adb-4919-8dd9-0c2109830580-serving-cert\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: 
I0216 17:24:35.803610 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-web-config\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.803692 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.803781 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/188e42e5-9f9c-42af-ba15-5548c4fa4b52-srv-cert\") pod \"catalog-operator-588944557d-5drhs\" (UID: \"188e42e5-9f9c-42af-ba15-5548c4fa4b52\") " pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.803857 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-client-ca-bundle\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.804042 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.804083 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.804138 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.804601 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/737fcc7d-d850-4352-9f17-383c85d5bc28-serving-cert\") pod \"openshift-apiserver-operator-6d4655d9cf-qhn9v\" (UID: \"737fcc7d-d850-4352-9f17-383c85d5bc28\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.804667 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: 
\"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-federate-client-tls\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:35.805879 master-0 kubenswrapper[4652]: I0216 17:24:35.805140 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-encryption-config\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.806719 master-0 kubenswrapper[4652]: I0216 17:24:35.805939 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d1524fc1-d157-435a-8bf8-7e877c45909d-samples-operator-tls\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:35.806719 master-0 kubenswrapper[4652]: I0216 17:24:35.806068 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.806719 master-0 kubenswrapper[4652]: I0216 17:24:35.806149 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:35.806719 master-0 kubenswrapper[4652]: I0216 17:24:35.806178 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:35.806719 master-0 kubenswrapper[4652]: I0216 17:24:35.806207 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:35.806719 master-0 kubenswrapper[4652]: I0216 17:24:35.806232 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.806719 master-0 kubenswrapper[4652]: I0216 17:24:35.806281 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod 
\"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:35.806719 master-0 kubenswrapper[4652]: I0216 17:24:35.806307 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.807058 master-0 kubenswrapper[4652]: I0216 17:24:35.806842 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqstc\" (UniqueName: \"kubernetes.io/projected/970d4376-f299-412c-a8ee-90aa980c689e-kube-api-access-hqstc\") pod \"csi-snapshot-controller-operator-7b87b97578-q55rf\" (UID: \"970d4376-f299-412c-a8ee-90aa980c689e\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:35.808466 master-0 kubenswrapper[4652]: I0216 17:24:35.808437 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c8729b1a-e365-4cf7-8a05-91a9987dabe9-proxy-tls\") pod \"machine-config-controller-686c884b4d-ksx48\" (UID: \"c8729b1a-e365-4cf7-8a05-91a9987dabe9\") " pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:35.808466 master-0 kubenswrapper[4652]: I0216 17:24:35.808460 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-serving-cert\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:35.808763 master-0 kubenswrapper[4652]: I0216 17:24:35.808688 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:35.808763 master-0 kubenswrapper[4652]: I0216 17:24:35.808742 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:35.808870 master-0 kubenswrapper[4652]: I0216 17:24:35.808777 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:35.808870 master-0 kubenswrapper[4652]: I0216 17:24:35.808812 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") 
pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:35.808870 master-0 kubenswrapper[4652]: I0216 17:24:35.808842 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/404c402a-705f-4352-b9df-b89562070d9c-machine-api-operator-tls\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:35.809032 master-0 kubenswrapper[4652]: I0216 17:24:35.808908 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7390ccc6-dfbe-4f51-960c-7628f49bffb7-serving-cert\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.809032 master-0 kubenswrapper[4652]: I0216 17:24:35.808916 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.809032 master-0 kubenswrapper[4652]: I0216 17:24:35.808962 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:35.809032 master-0 kubenswrapper[4652]: I0216 17:24:35.808996 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:35.809032 master-0 kubenswrapper[4652]: I0216 17:24:35.809030 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:35.809316 master-0 kubenswrapper[4652]: I0216 17:24:35.809058 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:35.809316 master-0 kubenswrapper[4652]: I0216 17:24:35.809087 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " 
pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:35.809316 master-0 kubenswrapper[4652]: I0216 17:24:35.809113 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:35.809316 master-0 kubenswrapper[4652]: I0216 17:24:35.809138 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.809316 master-0 kubenswrapper[4652]: I0216 17:24:35.809165 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.809316 master-0 kubenswrapper[4652]: I0216 17:24:35.809190 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:35.809316 master-0 kubenswrapper[4652]: I0216 17:24:35.809209 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-config-volume\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.809316 master-0 kubenswrapper[4652]: I0216 17:24:35.809215 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:35.810879 master-0 kubenswrapper[4652]: I0216 17:24:35.809992 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.810879 master-0 kubenswrapper[4652]: I0216 17:24:35.810203 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 
17:24:35.810879 master-0 kubenswrapper[4652]: I0216 17:24:35.810446 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:35.810879 master-0 kubenswrapper[4652]: I0216 17:24:35.810487 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:35.810879 master-0 kubenswrapper[4652]: I0216 17:24:35.810521 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:35.810879 master-0 kubenswrapper[4652]: I0216 17:24:35.810547 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.810879 master-0 kubenswrapper[4652]: I0216 17:24:35.810573 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:35.810879 master-0 kubenswrapper[4652]: I0216 17:24:35.810587 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bbcf\" (UniqueName: \"kubernetes.io/projected/18e9a9d3-9b18-4c19-9558-f33c68101922-kube-api-access-6bbcf\") pod \"package-server-manager-5c696dbdcd-qrrc6\" (UID: \"18e9a9d3-9b18-4c19-9558-f33c68101922\") " pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:35.810879 master-0 kubenswrapper[4652]: I0216 17:24:35.810607 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812113 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8t6\" (UniqueName: \"kubernetes.io/projected/e69d8c51-e2a6-4f61-9c26-072784f6cf40-kube-api-access-xr8t6\") pod \"openshift-config-operator-7c6bdb986f-v8dr8\" (UID: \"e69d8c51-e2a6-4f61-9c26-072784f6cf40\") " 
pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812233 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812270 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812313 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812334 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812351 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812371 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812392 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812435 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") 
" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812457 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812476 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812476 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-trusted-ca-bundle\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812497 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.812907 master-0 kubenswrapper[4652]: I0216 17:24:35.812609 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6f44170a-3c1c-4944-b971-251f75a51fc3-networking-console-plugin-cert\") pod \"networking-console-plugin-bd6d6f87f-jhjct\" (UID: \"6f44170a-3c1c-4944-b971-251f75a51fc3\") " pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:35.813621 master-0 kubenswrapper[4652]: I0216 17:24:35.813591 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2511146-1d04-4ecd-a28e-79662ef7b9d3-serving-cert\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:35.814117 master-0 kubenswrapper[4652]: I0216 17:24:35.814085 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/442600dc-09b2-4fee-9f89-777296b2ee40-kube-api-access\") pod \"kube-controller-manager-operator-78ff47c7c5-txr5k\" (UID: \"442600dc-09b2-4fee-9f89-777296b2ee40\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:35.814353 master-0 kubenswrapper[4652]: I0216 17:24:35.814329 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:35.814418 master-0 
kubenswrapper[4652]: I0216 17:24:35.814366 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.814418 master-0 kubenswrapper[4652]: I0216 17:24:35.814398 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.814418 master-0 kubenswrapper[4652]: I0216 17:24:35.814416 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.814543 master-0 kubenswrapper[4652]: I0216 17:24:35.814436 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:35.814543 master-0 kubenswrapper[4652]: I0216 17:24:35.814455 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:35.814543 master-0 kubenswrapper[4652]: I0216 17:24:35.814485 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/ae20b683-dac8-419e-808a-ddcdb3c564e1-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-546cc7d765-94nfl\" (UID: \"ae20b683-dac8-419e-808a-ddcdb3c564e1\") " pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:35.815003 master-0 kubenswrapper[4652]: I0216 17:24:35.814901 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6rwz\" (UniqueName: \"kubernetes.io/projected/0ff68421-1741-41c1-93d5-5c722dfd295e-kube-api-access-n6rwz\") pod \"network-check-source-7d8f4c8c66-qjq9w\" (UID: \"0ff68421-1741-41c1-93d5-5c722dfd295e\") " pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:35.815067 master-0 kubenswrapper[4652]: I0216 17:24:35.815017 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:35.815100 master-0 
kubenswrapper[4652]: I0216 17:24:35.815088 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.815393 master-0 kubenswrapper[4652]: I0216 17:24:35.815358 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:35.815444 master-0 kubenswrapper[4652]: I0216 17:24:35.815404 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-tls\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.815444 master-0 kubenswrapper[4652]: I0216 17:24:35.815379 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.815522 master-0 kubenswrapper[4652]: I0216 17:24:35.815480 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d020c902-2adb-4919-8dd9-0c2109830580-kube-api-access\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:35.815522 master-0 kubenswrapper[4652]: I0216 17:24:35.815485 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/ba37ef0e-373c-4ccc-b082-668630399765-secret-metrics-client-certs\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.815667 master-0 kubenswrapper[4652]: I0216 17:24:35.815643 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e623376-9e14-4341-9dcf-7a7c218b6f9f-serving-cert\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:35.815775 master-0 kubenswrapper[4652]: I0216 17:24:35.815754 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.816986 master-0 kubenswrapper[4652]: I0216 17:24:35.816949 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:35.817751 master-0 kubenswrapper[4652]: I0216 17:24:35.817607 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.818622 master-0 kubenswrapper[4652]: I0216 17:24:35.818593 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-cert\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:35.818700 master-0 kubenswrapper[4652]: I0216 17:24:35.818634 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-client\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:35.818737 master-0 kubenswrapper[4652]: I0216 17:24:35.818727 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee84198d-6357-4429-a90c-455c3850a788-cert\") pod \"cluster-autoscaler-operator-67fd9768b5-zcwwd\" (UID: \"ee84198d-6357-4429-a90c-455c3850a788\") " pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:35.818893 master-0 kubenswrapper[4652]: I0216 17:24:35.818861 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a275679-b7b6-4c28-b389-94cd2b014d6c-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:35.818949 master-0 kubenswrapper[4652]: I0216 17:24:35.818916 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdxgd\" (UniqueName: \"kubernetes.io/projected/7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4-kube-api-access-zdxgd\") pod \"cloud-credential-operator-595c8f9ff-b9nvq\" (UID: \"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:35.819183 master-0 kubenswrapper[4652]: I0216 17:24:35.819139 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/4e51bba5-0ebe-4e55-a588-38b71548c605-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-55b69c6c48-7chjv\" (UID: \"4e51bba5-0ebe-4e55-a588-38b71548c605\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:35.819377 master-0 
kubenswrapper[4652]: I0216 17:24:35.819351 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.819440 master-0 kubenswrapper[4652]: I0216 17:24:35.819413 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eaf7edff-0a89-4ac0-b9dd-511e098b5434-kube-api-access\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:35.820998 master-0 kubenswrapper[4652]: I0216 17:24:35.820964 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:35.821125 master-0 kubenswrapper[4652]: I0216 17:24:35.821097 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t24jh\" (UniqueName: \"kubernetes.io/projected/9609a4f3-b947-47af-a685-baae26c50fa3-kube-api-access-t24jh\") pod \"ingress-operator-c588d8cb4-wjr7d\" (UID: \"9609a4f3-b947-47af-a685-baae26c50fa3\") " pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:35.821716 master-0 kubenswrapper[4652]: I0216 17:24:35.821511 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"oauth-openshift-64f85b8fc9-n9msn\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:35.821716 master-0 kubenswrapper[4652]: I0216 17:24:35.821622 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"multus-admission-controller-6d678b8d67-5n9cl\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:35.821716 master-0 kubenswrapper[4652]: I0216 17:24:35.821672 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dce85b5e-6e92-4e0e-bee7-07b1a3634302-serving-cert\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.821716 master-0 kubenswrapper[4652]: I0216 17:24:35.821685 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-image-registry-operator-tls\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:35.821716 master-0 kubenswrapper[4652]: I0216 17:24:35.821702 
4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-operator-metrics\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:35.821966 master-0 kubenswrapper[4652]: I0216 17:24:35.821897 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.822018 master-0 kubenswrapper[4652]: I0216 17:24:35.821975 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-web-config\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.822079 master-0 kubenswrapper[4652]: I0216 17:24:35.822048 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/642e5115-b7f2-4561-bc6b-1a74b6d891c4-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-d8bf84b88-m66tx\" (UID: \"642e5115-b7f2-4561-bc6b-1a74b6d891c4\") " pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:35.822203 master-0 kubenswrapper[4652]: I0216 17:24:35.822167 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/2d1636c0-f34d-444c-822d-77f1d203ddc4-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-7485d645b8-zxxwd\" (UID: \"2d1636c0-f34d-444c-822d-77f1d203ddc4\") " pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:35.822673 master-0 kubenswrapper[4652]: I0216 17:24:35.822629 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57xvt\" (UniqueName: \"kubernetes.io/projected/e73ee493-de15-44c2-bd51-e12fcbb27a15-kube-api-access-57xvt\") pod \"packageserver-6d5d8c8c95-kzfjw\" (UID: \"e73ee493-de15-44c2-bd51-e12fcbb27a15\") " pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:35.822757 master-0 kubenswrapper[4652]: I0216 17:24:35.822689 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/544c6815-81d7-422a-9e4a-5fcbfabe8da8-tls-certificates\") pod \"prometheus-operator-admission-webhook-695b766898-h94zg\" (UID: \"544c6815-81d7-422a-9e4a-5fcbfabe8da8\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:35.823018 master-0 kubenswrapper[4652]: I0216 17:24:35.822998 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/b04ee64e-5e83-499c-812d-749b2b6824c6-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.823695 master-0 kubenswrapper[4652]: I0216 17:24:35.823658 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/06067627-6ccf-4cc8-bd20-dabdd776bb46-secret-telemeter-client\") pod \"telemeter-client-6bbd87b65b-mt2mz\" (UID: \"06067627-6ccf-4cc8-bd20-dabdd776bb46\") " pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:35.824220 master-0 kubenswrapper[4652]: I0216 17:24:35.824180 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e\") " pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.824849 master-0 kubenswrapper[4652]: I0216 17:24:35.824789 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fe8e8e5d-cebb-4361-b765-5ff737f5e838-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-64bf6cdbbc-tpd6h\" (UID: \"fe8e8e5d-cebb-4361-b765-5ff737f5e838\") " pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:35.826635 master-0 kubenswrapper[4652]: I0216 17:24:35.826612 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" Feb 16 17:24:35.827272 master-0 kubenswrapper[4652]: I0216 17:24:35.827146 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:35.827272 master-0 kubenswrapper[4652]: I0216 17:24:35.827148 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d020c902-2adb-4919-8dd9-0c2109830580-config\") pod \"kube-apiserver-operator-54984b6678-gp8gv\" (UID: \"d020c902-2adb-4919-8dd9-0c2109830580\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:35.827272 master-0 kubenswrapper[4652]: I0216 17:24:35.827163 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmbll\" (UniqueName: \"kubernetes.io/projected/5a275679-b7b6-4c28-b389-94cd2b014d6c-kube-api-access-pmbll\") pod \"cluster-storage-operator-75b869db96-twmsp\" (UID: \"5a275679-b7b6-4c28-b389-94cd2b014d6c\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:35.827272 master-0 kubenswrapper[4652]: I0216 17:24:35.827184 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f3c7d762-e2fe-49ca-ade5-3982d91ec2a2-images\") pod \"machine-config-operator-84976bb859-rsnqc\" (UID: \"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2\") " pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:35.827272 master-0 kubenswrapper[4652]: I0216 17:24:35.827232 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:35.827678 master-0 kubenswrapper[4652]: I0216 
17:24:35.827345 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6b3e071c-1c62-489b-91c1-aef0d197f40b-etcd-service-ca\") pod \"etcd-operator-67bf55ccdd-cppj8\" (UID: \"6b3e071c-1c62-489b-91c1-aef0d197f40b\") " pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:35.827678 master-0 kubenswrapper[4652]: I0216 17:24:35.827446 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-images\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:35.827678 master-0 kubenswrapper[4652]: I0216 17:24:35.827460 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"console-599b567ff7-nrcpr\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:35.827678 master-0 kubenswrapper[4652]: I0216 17:24:35.827495 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-service-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:35.827813 master-0 kubenswrapper[4652]: I0216 17:24:35.827790 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2s8l\" (UniqueName: \"kubernetes.io/projected/c303189e-adae-4fe2-8dd7-cc9b80f73e66-kube-api-access-v2s8l\") pod \"network-check-target-vwvwx\" (UID: \"c303189e-adae-4fe2-8dd7-cc9b80f73e66\") " pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:35.827813 master-0 kubenswrapper[4652]: I0216 17:24:35.827799 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5192fa49-d81c-47ce-b2ab-f90996cc0bd5-apiservice-cert\") pod \"cluster-node-tuning-operator-ff6c9b66-6j4ts\" (UID: \"5192fa49-d81c-47ce-b2ab-f90996cc0bd5\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:35.827876 master-0 kubenswrapper[4652]: I0216 17:24:35.827808 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaf7edff-0a89-4ac0-b9dd-511e098b5434-config\") pod \"openshift-kube-scheduler-operator-7485d55966-sgmpf\" (UID: \"eaf7edff-0a89-4ac0-b9dd-511e098b5434\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:35.827908 master-0 kubenswrapper[4652]: I0216 17:24:35.827869 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-config\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:35.827944 master-0 kubenswrapper[4652]: I0216 17:24:35.827933 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.828718 master-0 kubenswrapper[4652]: I0216 17:24:35.827984 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:35.828718 master-0 kubenswrapper[4652]: I0216 17:24:35.828094 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/e10d0b0c-4c2a-45b3-8d69-3070d566b97d-telemetry-config\") pod \"cluster-monitoring-operator-756d64c8c4-ln4wm\" (UID: \"e10d0b0c-4c2a-45b3-8d69-3070d566b97d\") " pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:35.828718 master-0 kubenswrapper[4652]: I0216 17:24:35.828122 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dce85b5e-6e92-4e0e-bee7-07b1a3634302-trusted-ca-bundle\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:35.828718 master-0 kubenswrapper[4652]: I0216 17:24:35.828352 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/8e90be63-ff6c-4e9e-8b9e-1ad9cf941845-catalogserver-certs\") pod \"catalogd-controller-manager-67bc7c997f-mn6cr\" (UID: \"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845\") " pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:35.828718 master-0 kubenswrapper[4652]: I0216 17:24:35.828517 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd-trusted-ca\") pod \"cluster-image-registry-operator-96c8c64b8-zwwnk\" (UID: \"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd\") " pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:35.828718 master-0 kubenswrapper[4652]: I0216 17:24:35.828625 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29402454-a920-471e-895e-764235d16eb4-config\") pod \"service-ca-operator-5dc4688546-pl7r5\" (UID: \"29402454-a920-471e-895e-764235d16eb4\") " pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:35.829033 master-0 kubenswrapper[4652]: I0216 17:24:35.828835 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7390ccc6-dfbe-4f51-960c-7628f49bffb7-audit-policies\") pod \"apiserver-66788cb45c-dp9bc\" (UID: \"7390ccc6-dfbe-4f51-960c-7628f49bffb7\") " pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:35.829033 master-0 kubenswrapper[4652]: I0216 17:24:35.828897 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6xfw\" (UniqueName: \"kubernetes.io/projected/08a90dc5-b0d8-4aad-a002-736492b6c1a9-kube-api-access-p6xfw\") pod \"downloads-dcd7b7d95-dhhfh\" (UID: \"08a90dc5-b0d8-4aad-a002-736492b6c1a9\") " 
pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:35.829127 master-0 kubenswrapper[4652]: I0216 17:24:35.829051 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/74b2561b-933b-4c58-a63a-7a8c671d0ae9-marketplace-trusted-ca\") pod \"marketplace-operator-6cc5b65c6b-s4gp2\" (UID: \"74b2561b-933b-4c58-a63a-7a8c671d0ae9\") " pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:35.829180 master-0 kubenswrapper[4652]: I0216 17:24:35.829151 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:35.829382 master-0 kubenswrapper[4652]: I0216 17:24:35.829341 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4488757c-f0fd-48fa-a3f9-6373b0bcafe4-config\") pod \"cluster-baremetal-operator-7bc947fc7d-4j7pn\" (UID: \"4488757c-f0fd-48fa-a3f9-6373b0bcafe4\") " pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:35.832988 master-0 kubenswrapper[4652]: I0216 17:24:35.832945 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqml\" (UniqueName: \"kubernetes.io/projected/404c402a-705f-4352-b9df-b89562070d9c-kube-api-access-vkqml\") pod \"machine-api-operator-bd7dd5c46-92rqx\" (UID: \"404c402a-705f-4352-b9df-b89562070d9c\") " pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:35.843349 master-0 kubenswrapper[4652]: I0216 17:24:35.843316 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" Feb 16 17:24:35.868585 master-0 kubenswrapper[4652]: I0216 17:24:35.856017 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41-service-ca-bundle\") pod \"authentication-operator-755d954778-lf4cb\" (UID: \"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41\") " pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:35.868585 master-0 kubenswrapper[4652]: I0216 17:24:35.856767 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2511146-1d04-4ecd-a28e-79662ef7b9d3-trusted-ca-bundle\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:35.868585 master-0 kubenswrapper[4652]: I0216 17:24:35.857271 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0517b180-00ee-47fe-a8e7-36a3931b7e72-trusted-ca\") pod \"console-operator-7777d5cc66-64vhv\" (UID: \"0517b180-00ee-47fe-a8e7-36a3931b7e72\") " pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:35.868585 master-0 kubenswrapper[4652]: I0216 17:24:35.859923 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:35.868585 master-0 kubenswrapper[4652]: I0216 17:24:35.859931 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:35.868585 master-0 kubenswrapper[4652]: I0216 17:24:35.861327 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"controller-manager-7fc9897cf8-9rjwd\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:35.868585 master-0 kubenswrapper[4652]: I0216 17:24:35.861438 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/ba37ef0e-373c-4ccc-b082-668630399765-metrics-server-audit-profiles\") pod \"metrics-server-745bd8d89b-qr4zh\" (UID: \"ba37ef0e-373c-4ccc-b082-668630399765\") " pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.868585 master-0 kubenswrapper[4652]: I0216 17:24:35.864360 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"route-controller-manager-dcdb76cc6-5rcvl\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:35.872339 master-0 kubenswrapper[4652]: I0216 17:24:35.871490 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" Feb 16 17:24:35.876882 master-0 kubenswrapper[4652]: I0216 17:24:35.876845 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Feb 16 17:24:35.878823 master-0 kubenswrapper[4652]: I0216 17:24:35.878795 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:35.881520 master-0 kubenswrapper[4652]: I0216 17:24:35.881480 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" Feb 16 17:24:35.893536 master-0 kubenswrapper[4652]: I0216 17:24:35.893505 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:35.901029 master-0 kubenswrapper[4652]: I0216 17:24:35.900987 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:24:35.912899 master-0 kubenswrapper[4652]: I0216 17:24:35.912866 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:35.941182 master-0 kubenswrapper[4652]: I0216 17:24:35.937803 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" Feb 16 17:24:35.941182 master-0 kubenswrapper[4652]: I0216 17:24:35.938203 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" Feb 16 17:24:35.941182 master-0 kubenswrapper[4652]: I0216 17:24:35.938229 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:35.941182 master-0 kubenswrapper[4652]: I0216 17:24:35.938701 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:35.941182 master-0 kubenswrapper[4652]: I0216 17:24:35.941029 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" Feb 16 17:24:35.946443 master-0 kubenswrapper[4652]: I0216 17:24:35.946384 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:35.948612 master-0 kubenswrapper[4652]: I0216 17:24:35.948567 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" Feb 16 17:24:35.952883 master-0 kubenswrapper[4652]: I0216 17:24:35.952038 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" Feb 16 17:24:35.965843 master-0 kubenswrapper[4652]: I0216 17:24:35.965797 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" Feb 16 17:24:35.974391 master-0 kubenswrapper[4652]: I0216 17:24:35.970761 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" Feb 16 17:24:35.974391 master-0 kubenswrapper[4652]: I0216 17:24:35.971245 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" Feb 16 17:24:35.975879 master-0 kubenswrapper[4652]: I0216 17:24:35.975849 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" Feb 16 17:24:35.980133 master-0 kubenswrapper[4652]: I0216 17:24:35.980061 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" Feb 16 17:24:35.981818 master-0 kubenswrapper[4652]: I0216 17:24:35.981780 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:35.993187 master-0 kubenswrapper[4652]: I0216 17:24:35.992947 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" Feb 16 17:24:35.994870 master-0 kubenswrapper[4652]: I0216 17:24:35.994846 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" Feb 16 17:24:35.996477 master-0 kubenswrapper[4652]: I0216 17:24:35.996432 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" Feb 16 17:24:36.001962 master-0 kubenswrapper[4652]: I0216 17:24:36.001919 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" Feb 16 17:24:36.006439 master-0 kubenswrapper[4652]: I0216 17:24:36.006391 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:36.006523 master-0 kubenswrapper[4652]: I0216 17:24:36.006483 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" Feb 16 17:24:36.021451 master-0 kubenswrapper[4652]: I0216 17:24:36.021405 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" Feb 16 17:24:36.023698 master-0 kubenswrapper[4652]: I0216 17:24:36.023669 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:36.034880 master-0 kubenswrapper[4652]: I0216 17:24:36.034593 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:36.037188 master-0 kubenswrapper[4652]: I0216 17:24:36.037148 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" Feb 16 17:24:36.037839 master-0 kubenswrapper[4652]: I0216 17:24:36.037761 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" Feb 16 17:24:36.038047 master-0 kubenswrapper[4652]: I0216 17:24:36.038026 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" Feb 16 17:24:36.066262 master-0 kubenswrapper[4652]: I0216 17:24:36.066177 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" Feb 16 17:24:36.088892 master-0 kubenswrapper[4652]: I0216 17:24:36.084636 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:36.088892 master-0 kubenswrapper[4652]: I0216 17:24:36.084911 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:36.088892 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:36.088892 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:36.088892 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:36.088892 master-0 kubenswrapper[4652]: I0216 17:24:36.084958 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" Feb 16 17:24:36.088892 master-0 kubenswrapper[4652]: I0216 17:24:36.084956 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:36.090009 master-0 kubenswrapper[4652]: I0216 17:24:36.089657 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:36.093601 master-0 kubenswrapper[4652]: I0216 17:24:36.093525 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" Feb 16 17:24:36.098324 master-0 kubenswrapper[4652]: I0216 17:24:36.097274 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" Feb 16 17:24:36.102953 master-0 kubenswrapper[4652]: I0216 17:24:36.102205 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:36.104774 master-0 kubenswrapper[4652]: I0216 17:24:36.104442 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:36.115153 master-0 kubenswrapper[4652]: I0216 17:24:36.115114 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:36.120708 master-0 kubenswrapper[4652]: I0216 17:24:36.120676 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" Feb 16 17:24:36.126059 master-0 kubenswrapper[4652]: I0216 17:24:36.126030 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" Feb 16 17:24:36.138583 master-0 kubenswrapper[4652]: I0216 17:24:36.137285 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:36.511809 master-0 kubenswrapper[4652]: I0216 17:24:36.511725 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/b04ee64e-5e83-499c-812d-749b2b6824c6-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"b04ee64e-5e83-499c-812d-749b2b6824c6\") " pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:36.750723 master-0 kubenswrapper[4652]: I0216 17:24:36.750103 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:36.857985 master-0 kubenswrapper[4652]: I0216 17:24:36.857831 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:36.861987 master-0 kubenswrapper[4652]: I0216 17:24:36.861954 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad805251-19d0-4d2f-b741-7d11158f1f03-metrics-certs\") pod \"network-metrics-daemon-279g6\" (UID: \"ad805251-19d0-4d2f-b741-7d11158f1f03\") " pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:37.009718 master-0 kubenswrapper[4652]: W0216 17:24:37.004855 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78be97a3_18d1_4962_804f_372974dc8ccc.slice/crio-58bcccf7860e493f67606a412e7a43eefc11f0d58cc1efa1e5e39da4b3054f04 WatchSource:0}: Error finding container 58bcccf7860e493f67606a412e7a43eefc11f0d58cc1efa1e5e39da4b3054f04: Status 404 returned error can't find the container with id 58bcccf7860e493f67606a412e7a43eefc11f0d58cc1efa1e5e39da4b3054f04 Feb 16 17:24:37.112711 master-0 kubenswrapper[4652]: I0216 17:24:37.112675 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-279g6" Feb 16 17:24:37.118320 master-0 kubenswrapper[4652]: I0216 17:24:37.118264 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"bc6d4009bf44d1e4a16456116b3127c69d28d745306949f312e0aa958e44798e"} Feb 16 17:24:37.127279 master-0 kubenswrapper[4652]: I0216 17:24:37.127193 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerStarted","Data":"aaae62edd0b45ca78aec0f25710e4cca6bf6c13b3cfb49b947d92a0564e5148a"} Feb 16 17:24:37.131473 master-0 kubenswrapper[4652]: I0216 17:24:37.131407 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"e772eb2c8c638ead781f96868238e47e97f1bdb7225b1cfe9b67bca29cf6ffd2"} Feb 16 17:24:37.132447 master-0 kubenswrapper[4652]: I0216 17:24:37.132053 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:37.132447 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:37.132447 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:37.132447 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:37.132447 master-0 kubenswrapper[4652]: I0216 17:24:37.132099 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Feb 16 17:24:37.141808 master-0 kubenswrapper[4652]: I0216 17:24:37.141761 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" event={"ID":"544c6815-81d7-422a-9e4a-5fcbfabe8da8","Type":"ContainerStarted","Data":"f0ff41adf450bb6b62127991243b6ca72d051aa9d0150fa47fbdaef01bcc7d88"} Feb 16 17:24:37.159360 master-0 kubenswrapper[4652]: I0216 17:24:37.158351 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerStarted","Data":"30363575aabdf97aee7f8b4828a3a57151bcab6ce7af524f5891e558b75a6abf"} Feb 16 17:24:37.162333 master-0 kubenswrapper[4652]: I0216 17:24:37.162295 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerStarted","Data":"a9e87f89e8a4ab0287bbb6b05d3af9f5e8ae5af80b184eca606f456af7e6bd75"} Feb 16 17:24:37.165410 master-0 kubenswrapper[4652]: I0216 17:24:37.164815 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerStarted","Data":"58bcccf7860e493f67606a412e7a43eefc11f0d58cc1efa1e5e39da4b3054f04"} Feb 16 17:24:37.171392 master-0 kubenswrapper[4652]: I0216 17:24:37.171353 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerStarted","Data":"d3432cba1bcf6429d7596b326699fcbe14c35a81a69c30cfb3bfe0ba968d945a"} Feb 16 17:24:37.172525 master-0 kubenswrapper[4652]: I0216 17:24:37.172483 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" event={"ID":"54fba066-0e9e-49f6-8a86-34d5b4b660df","Type":"ContainerStarted","Data":"20119c718aa276ea7fec358839b3e770a8bb78be9454b686564362d3836a2fbd"} Feb 16 17:24:37.175344 master-0 kubenswrapper[4652]: I0216 17:24:37.175308 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"d3e7455918edb20c68625b5a2e3adc84d0404b754d1eb951ecb1d7e4cc7bcbee"} Feb 16 17:24:37.176171 master-0 kubenswrapper[4652]: I0216 17:24:37.176147 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" event={"ID":"0ff68421-1741-41c1-93d5-5c722dfd295e","Type":"ContainerStarted","Data":"7f88d5894244e4ddb72236d214022123994a728e1c258fdd91829f0e4c9a11fc"} Feb 16 17:24:37.180137 master-0 kubenswrapper[4652]: I0216 17:24:37.180104 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"87f713a085314b7412a439cc22ad4d340dd5d703258fdc351628001722bccec6"} Feb 16 17:24:37.186090 master-0 kubenswrapper[4652]: I0216 17:24:37.185761 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" 
event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerStarted","Data":"3eb9bfb9671660fb9f3ffc8fabd5c198f918fd3b72132db0f8331fd62cf012a6"} Feb 16 17:24:37.188465 master-0 kubenswrapper[4652]: I0216 17:24:37.188196 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-dhhfh" event={"ID":"08a90dc5-b0d8-4aad-a002-736492b6c1a9","Type":"ContainerStarted","Data":"1cb053b5f9238c865c6f9706c4d56712cf155a0ff61ee416ccf51d59667b6b04"} Feb 16 17:24:37.190658 master-0 kubenswrapper[4652]: I0216 17:24:37.190610 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerStarted","Data":"5bcdb4a4da23a77b6ec44653d6844bca70f1bf2c4d2302a5c6b717ee9b4a0958"} Feb 16 17:24:37.192194 master-0 kubenswrapper[4652]: I0216 17:24:37.192161 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" event={"ID":"5a275679-b7b6-4c28-b389-94cd2b014d6c","Type":"ContainerStarted","Data":"144dc70873df58a650a2365cda0afda7630f4e17b570f429354ed93a9650cd83"} Feb 16 17:24:37.194323 master-0 kubenswrapper[4652]: I0216 17:24:37.193257 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qqvg4" event={"ID":"1363cb7b-62cc-497b-af6f-4d5e0eb7f174","Type":"ContainerStarted","Data":"cf48fdc172632887864f34b5bc9842628972afde1141159336c1f3eb78fb1cb0"} Feb 16 17:24:37.670019 master-0 kubenswrapper[4652]: W0216 17:24:37.669973 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9aa57eb4_c511_4ab8_a5d7_385e1ed9ee41.slice/crio-2571ae766810fb54f11d5aeb551caa1f520021ef82fd098a48949c5d5cdabf3f WatchSource:0}: Error finding container 2571ae766810fb54f11d5aeb551caa1f520021ef82fd098a48949c5d5cdabf3f: Status 404 returned error can't find the container with id 2571ae766810fb54f11d5aeb551caa1f520021ef82fd098a48949c5d5cdabf3f Feb 16 17:24:37.673278 master-0 kubenswrapper[4652]: I0216 17:24:37.673177 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:37.678041 master-0 kubenswrapper[4652]: I0216 17:24:37.677891 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrzjr\" (UniqueName: \"kubernetes.io/projected/d1524fc1-d157-435a-8bf8-7e877c45909d-kube-api-access-nrzjr\") pod \"cluster-samples-operator-f8cbff74c-spxm9\" (UID: \"d1524fc1-d157-435a-8bf8-7e877c45909d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:37.696421 master-0 kubenswrapper[4652]: W0216 17:24:37.696371 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18e9a9d3_9b18_4c19_9558_f33c68101922.slice/crio-b2dcb312200f3e856be703402a955223f5f42099ba4a1c486bf8b2bb376f0a23 WatchSource:0}: Error finding container b2dcb312200f3e856be703402a955223f5f42099ba4a1c486bf8b2bb376f0a23: Status 404 returned error can't find the container with id b2dcb312200f3e856be703402a955223f5f42099ba4a1c486bf8b2bb376f0a23 
Feb 16 17:24:37.745801 master-0 kubenswrapper[4652]: W0216 17:24:37.745665 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9859457_f0d1_4754_a6c5_cf05d5abf447.slice/crio-4c843137d76568820badc308d7201554b26ce8497411ed7476460738b62bbd45 WatchSource:0}: Error finding container 4c843137d76568820badc308d7201554b26ce8497411ed7476460738b62bbd45: Status 404 returned error can't find the container with id 4c843137d76568820badc308d7201554b26ce8497411ed7476460738b62bbd45 Feb 16 17:24:37.855521 master-0 kubenswrapper[4652]: I0216 17:24:37.855486 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" Feb 16 17:24:37.976991 master-0 kubenswrapper[4652]: I0216 17:24:37.976947 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:37.977074 master-0 kubenswrapper[4652]: I0216 17:24:37.976999 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:37.977452 master-0 kubenswrapper[4652]: I0216 17:24:37.977423 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:37.985631 master-0 kubenswrapper[4652]: I0216 17:24:37.985586 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqfds\" (UniqueName: \"kubernetes.io/projected/48801344-a48a-493e-aea4-19d998d0b708-kube-api-access-nqfds\") pod \"service-ca-676cd8b9b5-cp9rb\" (UID: \"48801344-a48a-493e-aea4-19d998d0b708\") " pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:37.986023 master-0 kubenswrapper[4652]: I0216 17:24:37.985987 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhcw6\" (UniqueName: \"kubernetes.io/projected/dce85b5e-6e92-4e0e-bee7-07b1a3634302-kube-api-access-fhcw6\") pod \"apiserver-fc4bf7f79-tqnlw\" (UID: \"dce85b5e-6e92-4e0e-bee7-07b1a3634302\") " pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:37.992894 master-0 kubenswrapper[4652]: I0216 17:24:37.992525 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnshv\" (UniqueName: \"kubernetes.io/projected/c2511146-1d04-4ecd-a28e-79662ef7b9d3-kube-api-access-hnshv\") pod \"insights-operator-cb4f7b4cf-6qrw5\" (UID: \"c2511146-1d04-4ecd-a28e-79662ef7b9d3\") " pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:38.012745 master-0 kubenswrapper[4652]: I0216 17:24:38.011164 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" Feb 16 17:24:38.086427 master-0 kubenswrapper[4652]: I0216 17:24:38.086363 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:38.086427 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:38.086427 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:38.086427 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:38.086643 master-0 kubenswrapper[4652]: I0216 17:24:38.086470 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:38.187992 master-0 kubenswrapper[4652]: I0216 17:24:38.187948 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:38.205566 master-0 kubenswrapper[4652]: I0216 17:24:38.205517 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" event={"ID":"188e42e5-9f9c-42af-ba15-5548c4fa4b52","Type":"ContainerStarted","Data":"42c8295af408a3cd4b24cd29a1606e35eb53d95eb5896e08e5be436569090461"} Feb 16 17:24:38.206150 master-0 kubenswrapper[4652]: I0216 17:24:38.205795 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:38.208291 master-0 kubenswrapper[4652]: I0216 17:24:38.208230 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerStarted","Data":"969bf1bfcb7e82ee348d9c113bec457ecb7d1c9e7ba994f51c3de7be217d97ba"} Feb 16 17:24:38.209798 master-0 kubenswrapper[4652]: I0216 17:24:38.209761 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" event={"ID":"6f44170a-3c1c-4944-b971-251f75a51fc3","Type":"ContainerStarted","Data":"b1fc9a33dd972be1c026cd63cde581abfccc25672ea36cf1d1ffb52d7f17e391"} Feb 16 17:24:38.209893 master-0 kubenswrapper[4652]: I0216 17:24:38.209864 4652 patch_prober.go:28] interesting pod/catalog-operator-588944557d-5drhs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.56:8443/healthz\": dial tcp 10.128.0.56:8443: connect: connection refused" start-of-body= Feb 16 17:24:38.209942 master-0 kubenswrapper[4652]: I0216 17:24:38.209899 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" podUID="188e42e5-9f9c-42af-ba15-5548c4fa4b52" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.128.0.56:8443/healthz\": dial tcp 10.128.0.56:8443: connect: connection refused" Feb 16 17:24:38.211667 master-0 kubenswrapper[4652]: I0216 17:24:38.211637 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" 
event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"7248c9f4d08dc31836dce13d31c0e1d7c36884756b520e61d4fdd99036f1c83c"} Feb 16 17:24:38.214486 master-0 kubenswrapper[4652]: I0216 17:24:38.214459 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"89ee8c65632a0f21a64256e28aad6902d833d61e5814c2420bc9a3b8bc8705a1"} Feb 16 17:24:38.216685 master-0 kubenswrapper[4652]: I0216 17:24:38.216658 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-dcd7b7d95-dhhfh" event={"ID":"08a90dc5-b0d8-4aad-a002-736492b6c1a9","Type":"ContainerStarted","Data":"9b5a3d8bb8914d0a1d77974dc5c1fdfaec5ff3feff38532fa756e475a530a18f"} Feb 16 17:24:38.217562 master-0 kubenswrapper[4652]: I0216 17:24:38.217535 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerStarted","Data":"c583661e5e33797ca1d13255c948bcaf6544bd8dfe5baf1799d8f2972453dd92"} Feb 16 17:24:38.219098 master-0 kubenswrapper[4652]: I0216 17:24:38.219063 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" event={"ID":"544c6815-81d7-422a-9e4a-5fcbfabe8da8","Type":"ContainerStarted","Data":"1c82a506c0f483b739ddb955615a7ee44979c49a93b3ce1124229979693a0aaf"} Feb 16 17:24:38.219806 master-0 kubenswrapper[4652]: I0216 17:24:38.219776 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:38.221901 master-0 kubenswrapper[4652]: I0216 17:24:38.221862 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerStarted","Data":"b66f7c60a8af0715ebcf0e7da88ca1320faaf284e4ead935c9d5c9c6cfaefb71"} Feb 16 17:24:38.224883 master-0 kubenswrapper[4652]: I0216 17:24:38.224794 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-695b766898-h94zg" Feb 16 17:24:38.225019 master-0 kubenswrapper[4652]: I0216 17:24:38.224986 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerStarted","Data":"c0f77c89ba6b346989e98c8dab245c40c3126976c4aefaae9ace6ae13dba99c7"} Feb 16 17:24:38.226926 master-0 kubenswrapper[4652]: I0216 17:24:38.226869 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" event={"ID":"e73ee493-de15-44c2-bd51-e12fcbb27a15","Type":"ContainerStarted","Data":"7cd603bd2eeab026183021ca4e49fd841c21cbf684cf25196c892e756d8bf3f2"} Feb 16 17:24:38.227306 master-0 kubenswrapper[4652]: I0216 17:24:38.227266 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:38.229217 master-0 kubenswrapper[4652]: I0216 17:24:38.229183 4652 patch_prober.go:28] interesting pod/packageserver-6d5d8c8c95-kzfjw container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.128.0.59:5443/healthz\": dial tcp 10.128.0.59:5443: connect: connection refused" start-of-body= Feb 16 17:24:38.229316 master-0 kubenswrapper[4652]: I0216 17:24:38.229216 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" podUID="e73ee493-de15-44c2-bd51-e12fcbb27a15" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.59:5443/healthz\": dial tcp 10.128.0.59:5443: connect: connection refused" Feb 16 17:24:38.231818 master-0 kubenswrapper[4652]: I0216 17:24:38.231785 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"b2dcb312200f3e856be703402a955223f5f42099ba4a1c486bf8b2bb376f0a23"} Feb 16 17:24:38.232794 master-0 kubenswrapper[4652]: I0216 17:24:38.232750 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerStarted","Data":"322bc1db1c4ff94ccb9b31af04187e2c15ca9bfb7d5b49f71db1a74c2f49d2d6"} Feb 16 17:24:38.238287 master-0 kubenswrapper[4652]: I0216 17:24:38.235097 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-5f5f84757d-ktmm9" event={"ID":"edbaac23-11f0-4bc7-a7ce-b593c774c0fa","Type":"ContainerStarted","Data":"e1549348f7682d3c2ac5e81a61ca6c3a17e91911d76f18f3649e2eabd3cbc4e6"} Feb 16 17:24:38.241204 master-0 kubenswrapper[4652]: I0216 17:24:38.241167 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" event={"ID":"ba37ef0e-373c-4ccc-b082-668630399765","Type":"ContainerStarted","Data":"da2a3b7d648268cb1f239b672444e8d3d961d938521e3d24f34a136dcca8f5cb"} Feb 16 17:24:38.246368 master-0 kubenswrapper[4652]: I0216 17:24:38.246334 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qqvg4" event={"ID":"1363cb7b-62cc-497b-af6f-4d5e0eb7f174","Type":"ContainerStarted","Data":"be8422637dfc447b4ca3d8e016ca79602fdbe31124ef8a1958bf35834d4a2abc"} Feb 16 17:24:38.249456 master-0 kubenswrapper[4652]: I0216 17:24:38.249422 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7d8f4c8c66-qjq9w" event={"ID":"0ff68421-1741-41c1-93d5-5c722dfd295e","Type":"ContainerStarted","Data":"96fbfaadcc017ee90c6a02913a7e6e81c12957480be8c5bacd41ee3b181135cd"} Feb 16 17:24:38.250734 master-0 kubenswrapper[4652]: I0216 17:24:38.250706 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" event={"ID":"54fba066-0e9e-49f6-8a86-34d5b4b660df","Type":"ContainerStarted","Data":"0edb450c086a0c0ce648b07554644ba9558fc825aac2dfbcb8ccf67688315fb6"} Feb 16 17:24:38.251493 master-0 kubenswrapper[4652]: I0216 17:24:38.251381 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:38.254556 master-0 kubenswrapper[4652]: I0216 17:24:38.254491 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" 
event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerStarted","Data":"2571ae766810fb54f11d5aeb551caa1f520021ef82fd098a48949c5d5cdabf3f"} Feb 16 17:24:38.276834 master-0 kubenswrapper[4652]: I0216 17:24:38.264618 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-75b869db96-twmsp" event={"ID":"5a275679-b7b6-4c28-b389-94cd2b014d6c","Type":"ContainerStarted","Data":"9d0756fb7b17ce9a49ef66efdefa7314d98a9db50ec8d75d5ab7243af1810ee9"} Feb 16 17:24:38.276834 master-0 kubenswrapper[4652]: I0216 17:24:38.268542 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-555857f695-nlrnr" Feb 16 17:24:38.276834 master-0 kubenswrapper[4652]: I0216 17:24:38.275080 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"4c843137d76568820badc308d7201554b26ce8497411ed7476460738b62bbd45"} Feb 16 17:24:38.277538 master-0 kubenswrapper[4652]: I0216 17:24:38.277438 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerStarted","Data":"c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2"} Feb 16 17:24:38.278494 master-0 kubenswrapper[4652]: I0216 17:24:38.277815 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:38.278494 master-0 kubenswrapper[4652]: I0216 17:24:38.278045 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" Feb 16 17:24:38.281507 master-0 kubenswrapper[4652]: I0216 17:24:38.281478 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"ff64f5042172e9dffa5614904d748d2db158b783ee3c06a687a94581dc6e2736"} Feb 16 17:24:38.289979 master-0 kubenswrapper[4652]: I0216 17:24:38.289940 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"e64875b023fdf91a2c63340ce16e1ff1797cb51ed3769039bc839f8412826222"} Feb 16 17:24:38.591210 master-0 kubenswrapper[4652]: I0216 17:24:38.591170 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:38.597811 master-0 kubenswrapper[4652]: I0216 17:24:38.597760 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvwzr\" (UniqueName: \"kubernetes.io/projected/8e623376-9e14-4341-9dcf-7a7c218b6f9f-kube-api-access-xvwzr\") pod \"kube-storage-version-migrator-operator-cd5474998-829l6\" (UID: \"8e623376-9e14-4341-9dcf-7a7c218b6f9f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:38.666506 master-0 kubenswrapper[4652]: I0216 17:24:38.665660 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" Feb 16 17:24:39.004934 master-0 kubenswrapper[4652]: W0216 17:24:39.002505 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc303189e_adae_4fe2_8dd7_cc9b80f73e66.slice/crio-e6086a74b60969ba3ead93f7864985c1ac99ba5d877f017d54dcfd3347d78bb1 WatchSource:0}: Error finding container e6086a74b60969ba3ead93f7864985c1ac99ba5d877f017d54dcfd3347d78bb1: Status 404 returned error can't find the container with id e6086a74b60969ba3ead93f7864985c1ac99ba5d877f017d54dcfd3347d78bb1 Feb 16 17:24:39.019361 master-0 kubenswrapper[4652]: W0216 17:24:39.015862 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod970d4376_f299_412c_a8ee_90aa980c689e.slice/crio-1f25eed57907f13fea3e74782e3b869eea62ff3ea6f7aa18a7057a2cb2cbe437 WatchSource:0}: Error finding container 1f25eed57907f13fea3e74782e3b869eea62ff3ea6f7aa18a7057a2cb2cbe437: Status 404 returned error can't find the container with id 1f25eed57907f13fea3e74782e3b869eea62ff3ea6f7aa18a7057a2cb2cbe437 Feb 16 17:24:39.109226 master-0 kubenswrapper[4652]: I0216 17:24:39.101468 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:39.109226 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:39.109226 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:39.109226 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:39.109226 master-0 kubenswrapper[4652]: I0216 17:24:39.101513 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:39.182308 master-0 kubenswrapper[4652]: I0216 17:24:39.182238 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:24:39.238443 master-0 kubenswrapper[4652]: W0216 17:24:39.237632 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2511146_1d04_4ecd_a28e_79662ef7b9d3.slice/crio-2389c9dc47867beea1c7155e888ea93f716bf39e37d310dbe3973e1098a2a6c0 WatchSource:0}: Error finding container 2389c9dc47867beea1c7155e888ea93f716bf39e37d310dbe3973e1098a2a6c0: Status 404 returned error can't find the container with id 2389c9dc47867beea1c7155e888ea93f716bf39e37d310dbe3973e1098a2a6c0 Feb 16 17:24:39.326897 master-0 kubenswrapper[4652]: W0216 17:24:39.317772 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode69d8c51_e2a6_4f61_9c26_072784f6cf40.slice/crio-6eb9343c11dcb06c6fcadee15dcd182b096094c22ce105212e3d6a608c008d4b WatchSource:0}: Error finding container 6eb9343c11dcb06c6fcadee15dcd182b096094c22ce105212e3d6a608c008d4b: Status 404 returned error can't find the container with id 6eb9343c11dcb06c6fcadee15dcd182b096094c22ce105212e3d6a608c008d4b Feb 16 17:24:39.327897 master-0 kubenswrapper[4652]: I0216 17:24:39.327841 4652 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-bd6d6f87f-jhjct" event={"ID":"6f44170a-3c1c-4944-b971-251f75a51fc3","Type":"ContainerStarted","Data":"60a43530f7db1d1d4602f7b8ae3dcb88a5b4135cbb72cb93dba1e53bf786a970"} Feb 16 17:24:39.332286 master-0 kubenswrapper[4652]: I0216 17:24:39.331423 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5dc4688546-pl7r5" event={"ID":"29402454-a920-471e-895e-764235d16eb4","Type":"ContainerStarted","Data":"5667d911e50b2162813d068ff0377b5e7aa8c2969dbfce82ea87ed00783a349c"} Feb 16 17:24:39.374280 master-0 kubenswrapper[4652]: I0216 17:24:39.361169 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-756d64c8c4-ln4wm" event={"ID":"e10d0b0c-4c2a-45b3-8d69-3070d566b97d","Type":"ContainerStarted","Data":"ccb32a26cf22f81c2a38a6804ff85cca47e88900248f901eb0a5199dda484a31"} Feb 16 17:24:39.374280 master-0 kubenswrapper[4652]: I0216 17:24:39.372175 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" event={"ID":"62220aa5-4065-472c-8a17-c0a58942ab8a","Type":"ContainerStarted","Data":"dfb2ca1f3f3a2163fc3d7f1f6f3d75467311d08d24884c5cdfab43a4229ccc10"} Feb 16 17:24:39.374280 master-0 kubenswrapper[4652]: I0216 17:24:39.372755 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:39.399502 master-0 kubenswrapper[4652]: I0216 17:24:39.399134 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerStarted","Data":"50c66ed61b907edd1ceec45a87aa52fa3fd05087ad18e9d732bbeaa727b7aef1"} Feb 16 17:24:39.435238 master-0 kubenswrapper[4652]: I0216 17:24:39.429952 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-755d954778-lf4cb" event={"ID":"9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41","Type":"ContainerStarted","Data":"52e211f6fe9eecc39bef491d00f5d519d919eca17814e7bbb20d74a61eb055b7"} Feb 16 17:24:39.481145 master-0 kubenswrapper[4652]: I0216 17:24:39.480368 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b56bd877c-p7k2k" Feb 16 17:24:39.497208 master-0 kubenswrapper[4652]: I0216 17:24:39.494627 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-7485d645b8-zxxwd" event={"ID":"2d1636c0-f34d-444c-822d-77f1d203ddc4","Type":"ContainerStarted","Data":"0a98e812fb887e711236754746c9b9577c52db772f46cfb9204fd87260fcab49"} Feb 16 17:24:39.498550 master-0 kubenswrapper[4652]: I0216 17:24:39.498475 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"1900fb6f76d7a89a2dea70c35aaf22f3fcd566aa532154767c4e6d580d9ad370"} Feb 16 17:24:39.520330 master-0 kubenswrapper[4652]: I0216 17:24:39.500775 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"5a92896832528f748187b505ef676962e2b66bed3c2cc4ab20c86dcc96a8df15"} Feb 
16 17:24:39.520330 master-0 kubenswrapper[4652]: I0216 17:24:39.504253 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"b16d4ec347a09a6bf7a8549677d1c5a7503ae585fececceeba5af4779559d18c"} Feb 16 17:24:39.520330 master-0 kubenswrapper[4652]: I0216 17:24:39.507644 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"f267cf1f9a648760c918e093d2670359d8a10e3e67054e8d0e88d347d8ee1fed"} Feb 16 17:24:39.526003 master-0 kubenswrapper[4652]: I0216 17:24:39.525307 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerStarted","Data":"4e0a1ec70c6f0d2cfa61f95b5f9c2b72fd18f8710eaec35186a67d0d2b5b4099"} Feb 16 17:24:39.532283 master-0 kubenswrapper[4652]: W0216 17:24:39.532197 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e623376_9e14_4341_9dcf_7a7c218b6f9f.slice/crio-7b106c9fdb556be3e77262ba9b9d8caa59c76a301c426132cbfd8a84cb3d5de6 WatchSource:0}: Error finding container 7b106c9fdb556be3e77262ba9b9d8caa59c76a301c426132cbfd8a84cb3d5de6: Status 404 returned error can't find the container with id 7b106c9fdb556be3e77262ba9b9d8caa59c76a301c426132cbfd8a84cb3d5de6 Feb 16 17:24:39.545859 master-0 kubenswrapper[4652]: I0216 17:24:39.545810 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-d8bf84b88-m66tx" event={"ID":"642e5115-b7f2-4561-bc6b-1a74b6d891c4","Type":"ContainerStarted","Data":"718a10c5f689df63847606af1987bee21d9fc56becf2a373fa8082935b46da92"} Feb 16 17:24:39.547400 master-0 kubenswrapper[4652]: I0216 17:24:39.547352 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerStarted","Data":"2389c9dc47867beea1c7155e888ea93f716bf39e37d310dbe3973e1098a2a6c0"} Feb 16 17:24:39.556312 master-0 kubenswrapper[4652]: I0216 17:24:39.556247 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qcgxx" event={"ID":"2d96ccdc-0b09-437d-bfca-1958af5d9953","Type":"ContainerStarted","Data":"54bc3bbe17322a03654052d676ab4eacfcc2c1081769055c06085fd6ba2d4f36"} Feb 16 17:24:39.556410 master-0 kubenswrapper[4652]: I0216 17:24:39.556347 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:39.576900 master-0 kubenswrapper[4652]: I0216 17:24:39.576815 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerStarted","Data":"27cb4777f5c265d72bbd9938e1d918c98bc521258dbe8e255dbed610a4bf1411"} Feb 16 17:24:39.616699 master-0 kubenswrapper[4652]: I0216 17:24:39.615971 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-595c8f9ff-b9nvq" 
event={"ID":"7b3b6a55-b1e4-4498-ace1-7d2ed1a886e4","Type":"ContainerStarted","Data":"ba6bb00bf5402d0f0a2766f3aa159e40f32f4518eac2b6549990b79cae19628b"} Feb 16 17:24:39.630370 master-0 kubenswrapper[4652]: I0216 17:24:39.630322 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"bb28d3f9460fce321480a4b089f52a8dccad7cb368f24b004332172da96fa3d2"} Feb 16 17:24:39.634996 master-0 kubenswrapper[4652]: I0216 17:24:39.634952 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"b9f7cb07b544e23438e74874373d9c5c287572c4828f938862c51be249b44c86"} Feb 16 17:24:39.664439 master-0 kubenswrapper[4652]: I0216 17:24:39.660305 4652 generic.go:334] "Generic (PLEG): container finished" podID="2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e" containerID="753eafc4992af63c530b55a1ebcd0984e9aa5c8afefc27764d292ea1314f63c6" exitCode=0 Feb 16 17:24:39.664439 master-0 kubenswrapper[4652]: I0216 17:24:39.660377 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerDied","Data":"753eafc4992af63c530b55a1ebcd0984e9aa5c8afefc27764d292ea1314f63c6"} Feb 16 17:24:39.692717 master-0 kubenswrapper[4652]: I0216 17:24:39.692672 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerStarted","Data":"b7d96a080d0bb0d15c8208669e4d58bfabb25955758ae959c0b08a31cb343913"} Feb 16 17:24:39.693829 master-0 kubenswrapper[4652]: I0216 17:24:39.693796 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" event={"ID":"970d4376-f299-412c-a8ee-90aa980c689e","Type":"ContainerStarted","Data":"1f25eed57907f13fea3e74782e3b869eea62ff3ea6f7aa18a7057a2cb2cbe437"} Feb 16 17:24:39.696831 master-0 kubenswrapper[4652]: I0216 17:24:39.696801 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" event={"ID":"5192fa49-d81c-47ce-b2ab-f90996cc0bd5","Type":"ContainerStarted","Data":"beed317499b339bcf93ac3fcc97be4b23cbffa936a9373e31c8261c66dd441df"} Feb 16 17:24:39.700621 master-0 kubenswrapper[4652]: I0216 17:24:39.700577 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"c43816a1cf5e2232a06603588291a4343e9d9f6ba042f142bf1e33840f38f94d"} Feb 16 17:24:39.702874 master-0 kubenswrapper[4652]: I0216 17:24:39.702808 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerStarted","Data":"244e14fa4c61b72fc4d116b6f8d42a768e20e6a2c8b206677cbe3b56b8587189"} Feb 16 17:24:39.724272 master-0 kubenswrapper[4652]: I0216 17:24:39.724213 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" 
event={"ID":"54f29618-42c2-4270-9af7-7d82852d7cec","Type":"ContainerStarted","Data":"cb066da32e50210f2220e18c8ec73a9d1b15972092aecd3b9ffac4b9cb1a8da9"} Feb 16 17:24:39.725346 master-0 kubenswrapper[4652]: I0216 17:24:39.725088 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:39.756786 master-0 kubenswrapper[4652]: I0216 17:24:39.756733 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerStarted","Data":"e6086a74b60969ba3ead93f7864985c1ac99ba5d877f017d54dcfd3347d78bb1"} Feb 16 17:24:39.759772 master-0 kubenswrapper[4652]: I0216 17:24:39.757448 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:24:39.818895 master-0 kubenswrapper[4652]: I0216 17:24:39.818104 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"458700cbc4f46addff9608df449399e80c82cef7be58f20baf77773b411274d7"} Feb 16 17:24:39.818895 master-0 kubenswrapper[4652]: I0216 17:24:39.818153 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-86b8869b79-nhxlp" event={"ID":"d9859457-f0d1-4754-a6c5-cf05d5abf447","Type":"ContainerStarted","Data":"87b95d548165ea23406cff36c9d4a65cbb95ae3177cef7099811d4183f8377d3"} Feb 16 17:24:39.822064 master-0 kubenswrapper[4652]: I0216 17:24:39.822029 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"c0385fd1d3dff660f58da22c149ec96126227ccd659d633d80f2c761aca5c960"} Feb 16 17:24:39.822896 master-0 kubenswrapper[4652]: I0216 17:24:39.822869 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"9555c8f512d630439883aa79bdc0b4b29957ded3f3f91cd979ec75979060608a"} Feb 16 17:24:39.824891 master-0 kubenswrapper[4652]: I0216 17:24:39.824862 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"782317f4d63ebe91e553bdc066c64803c2164232eea3519a98f0be242bb59fe8"} Feb 16 17:24:39.828632 master-0 kubenswrapper[4652]: I0216 17:24:39.827902 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"75caf8b5dd0073680173e2effabd912ddeecbe98842d37e2afdb36f5e53af54a"} Feb 16 17:24:39.846256 master-0 kubenswrapper[4652]: I0216 17:24:39.845796 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:24:39.865021 master-0 kubenswrapper[4652]: I0216 17:24:39.857222 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" 
event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"42ffec30e1f00e34bb1133e4f555bef62ffc32b806211b9666667bca0ed30120"} Feb 16 17:24:39.865021 master-0 kubenswrapper[4652]: I0216 17:24:39.861730 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" event={"ID":"ba37ef0e-373c-4ccc-b082-668630399765","Type":"ContainerStarted","Data":"f4e4edca0be0d9a3d5fc9814d30aa7f604b9e4f1ad4d972029b66731d8556b31"} Feb 16 17:24:39.871565 master-0 kubenswrapper[4652]: I0216 17:24:39.869577 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-6d5d8c8c95-kzfjw" Feb 16 17:24:39.875460 master-0 kubenswrapper[4652]: I0216 17:24:39.875005 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-588944557d-5drhs" Feb 16 17:24:40.083981 master-0 kubenswrapper[4652]: I0216 17:24:40.083079 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:40.083981 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:40.083981 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:40.083981 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:40.083981 master-0 kubenswrapper[4652]: I0216 17:24:40.083121 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:40.877346 master-0 kubenswrapper[4652]: I0216 17:24:40.877291 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerStarted","Data":"5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd"} Feb 16 17:24:40.881365 master-0 kubenswrapper[4652]: I0216 17:24:40.879802 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:40.881365 master-0 kubenswrapper[4652]: I0216 17:24:40.880141 4652 patch_prober.go:28] interesting pod/controller-manager-7fc9897cf8-9rjwd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" start-of-body= Feb 16 17:24:40.881365 master-0 kubenswrapper[4652]: I0216 17:24:40.880171 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.61:8443/healthz\": dial tcp 10.128.0.61:8443: connect: connection refused" Feb 16 17:24:40.884753 master-0 kubenswrapper[4652]: I0216 17:24:40.884712 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"ec9ce8bbc62ee65cbf3a82f589f546d36366f2ed8426939ce7c2a447d6b602cf"} Feb 16 17:24:40.891515 
master-0 kubenswrapper[4652]: I0216 17:24:40.891417 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"b04eee004c9e073682226b131bed1ac63f426e7741c9e27aede8a9511b09599a"} Feb 16 17:24:40.896588 master-0 kubenswrapper[4652]: I0216 17:24:40.896555 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"7d1ae47b301a0b9bdfb0fd7ccbeefdab54aec9585ebd18637737c9c8fb841ce8"} Feb 16 17:24:40.898724 master-0 kubenswrapper[4652]: I0216 17:24:40.898450 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-vwvwx" event={"ID":"c303189e-adae-4fe2-8dd7-cc9b80f73e66","Type":"ContainerStarted","Data":"bd4fbba7ea4e62ef53639921f2669256eac0068d9459da6639e69079f8683be0"} Feb 16 17:24:40.903493 master-0 kubenswrapper[4652]: I0216 17:24:40.903461 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" event={"ID":"2be9d55c-a4ec-48cd-93d2-0a1dced745a8","Type":"ContainerStarted","Data":"24810449690d5775751417ebd8694b4596d66b3a929f6ba3ed10b3bf89940dc8"} Feb 16 17:24:40.903493 master-0 kubenswrapper[4652]: I0216 17:24:40.903489 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" event={"ID":"2be9d55c-a4ec-48cd-93d2-0a1dced745a8","Type":"ContainerStarted","Data":"233776721f1387ea746aa3a9cde87b3c2a4f461764784c200d330f2a97c3e715"} Feb 16 17:24:40.904738 master-0 kubenswrapper[4652]: I0216 17:24:40.904711 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"b2294bd714142697773fd8579e93cfe49f050eab552f4a4c7142d98a224a8099"} Feb 16 17:24:40.907039 master-0 kubenswrapper[4652]: I0216 17:24:40.907010 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78ff47c7c5-txr5k" event={"ID":"442600dc-09b2-4fee-9f89-777296b2ee40","Type":"ContainerStarted","Data":"1e1c74ee4022c9c9d130c2efb7fc45ddb103ba4b3e1c45124e46e2fbd13251d2"} Feb 16 17:24:40.908271 master-0 kubenswrapper[4652]: I0216 17:24:40.908213 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"b6d1882f068b81106a14808d3c7e725a7f4c9083c95c3376f468c3cbd393afc7"} Feb 16 17:24:40.909415 master-0 kubenswrapper[4652]: I0216 17:24:40.909384 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-6d4655d9cf-qhn9v" event={"ID":"737fcc7d-d850-4352-9f17-383c85d5bc28","Type":"ContainerStarted","Data":"6aa0e82fb41ec3052a4033baa1ae64a69bfa303a89218937c1830421acdb612a"} Feb 16 17:24:40.910602 master-0 kubenswrapper[4652]: I0216 17:24:40.910575 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"6eb9343c11dcb06c6fcadee15dcd182b096094c22ce105212e3d6a608c008d4b"} Feb 16 17:24:40.912084 master-0 
kubenswrapper[4652]: I0216 17:24:40.912059 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"1039c718420f72d0ceb7837e24dc41fccfc749111294bde4ee33cf89282d3dde"} Feb 16 17:24:40.912084 master-0 kubenswrapper[4652]: I0216 17:24:40.912083 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-546cc7d765-94nfl" event={"ID":"ae20b683-dac8-419e-808a-ddcdb3c564e1","Type":"ContainerStarted","Data":"db0f8518b267ecd116e66236863d924aaaf65c2e98e710cc30a701bca5c67725"} Feb 16 17:24:40.915156 master-0 kubenswrapper[4652]: I0216 17:24:40.915130 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"c28c121ed28e2598a88750cd1d6b60a6a881fcbd8dd1a747605290f14e4dfc47"} Feb 16 17:24:40.916833 master-0 kubenswrapper[4652]: I0216 17:24:40.916780 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerStarted","Data":"69f1ff018067c4d51f39599b4c1aaf69000794f5504f48aeac73c6f9911ed9a9"} Feb 16 17:24:40.917898 master-0 kubenswrapper[4652]: I0216 17:24:40.917878 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"456d1bd6a9d214105b7ee58ddede43780bc09096ed783933ced7652347e27b79"} Feb 16 17:24:40.921392 master-0 kubenswrapper[4652]: I0216 17:24:40.921362 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerStarted","Data":"7b106c9fdb556be3e77262ba9b9d8caa59c76a301c426132cbfd8a84cb3d5de6"} Feb 16 17:24:40.923526 master-0 kubenswrapper[4652]: I0216 17:24:40.923496 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"672947724e2d2191bd12971ed813f8dd4639a7595a792b95a5da6566fb07182b"} Feb 16 17:24:40.924756 master-0 kubenswrapper[4652]: I0216 17:24:40.924723 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerStarted","Data":"9009b0ac7bb219051c60923e00e4412dadaa53f20c5f4d796e51e9daf84a94fe"} Feb 16 17:24:40.926059 master-0 kubenswrapper[4652]: I0216 17:24:40.926033 4652 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" containerID="2340f14bc797c99a3d8a8eddc945c792e4284bf88368622afb467beed08582f1" exitCode=0 Feb 16 17:24:40.926320 master-0 kubenswrapper[4652]: I0216 17:24:40.926272 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"2340f14bc797c99a3d8a8eddc945c792e4284bf88368622afb467beed08582f1"} Feb 16 17:24:40.928794 master-0 kubenswrapper[4652]: I0216 17:24:40.928733 4652 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-ff6c9b66-6j4ts" event={"ID":"5192fa49-d81c-47ce-b2ab-f90996cc0bd5","Type":"ContainerStarted","Data":"9803e52262520bb9a45665eb930b69ec53bb9b94fbc7aa73fad43f310e658a8a"} Feb 16 17:24:40.931590 master-0 kubenswrapper[4652]: I0216 17:24:40.931485 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"bb884767378bcbf4b6207cb18c35deb595e182b877cf8cc2f99737095b4c4fdc"} Feb 16 17:24:40.933128 master-0 kubenswrapper[4652]: I0216 17:24:40.933080 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"2b4b0ac4882fdf9369ca2a731bae47a08335ff02d2c599cd90d41d3bd58128bc"} Feb 16 17:24:40.934960 master-0 kubenswrapper[4652]: I0216 17:24:40.934912 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" event={"ID":"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd","Type":"ContainerStarted","Data":"b545178b36872f59570de843a40d359bc948749e8b4236c8c145edf22bae689b"} Feb 16 17:24:40.936026 master-0 kubenswrapper[4652]: I0216 17:24:40.935999 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"17b8ad1094c5225679c8e89ed487359b66cfdb212218f6abb11bfbb51a6d41d9"} Feb 16 17:24:40.937825 master-0 kubenswrapper[4652]: I0216 17:24:40.937779 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-599b567ff7-nrcpr" event={"ID":"ed3d89d0-bc00-482e-a656-7fdf4646ab0a","Type":"ContainerStarted","Data":"2aae3f7e5e631f5ae4cd5fcc26116e549b52713c7c67b52fc8e5f6cceabd028a"} Feb 16 17:24:40.939525 master-0 kubenswrapper[4652]: I0216 17:24:40.939484 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" event={"ID":"48801344-a48a-493e-aea4-19d998d0b708","Type":"ContainerStarted","Data":"e0e9479c4a5142801c799ea60c74c4afa0f201945c39f8e0bb2c0d9dbe56dc4a"} Feb 16 17:24:40.941566 master-0 kubenswrapper[4652]: I0216 17:24:40.941532 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-7b87b97578-q55rf" event={"ID":"970d4376-f299-412c-a8ee-90aa980c689e","Type":"ContainerStarted","Data":"a4e0a9ba26d52d4e220bb686dca1da771b7757ab1fe597966e7742c08dcfe19e"} Feb 16 17:24:40.943015 master-0 kubenswrapper[4652]: I0216 17:24:40.942986 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"bdeb010759623665295857b8621d04369929ecc4f2a3369c09ef5f6e40ac1478"} Feb 16 17:24:40.944109 master-0 kubenswrapper[4652]: I0216 17:24:40.944067 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"1c612b682bd78c9609b897ca161119d88f4f3250bd738f795f7ce9f6f3f25ecd"} Feb 16 17:24:40.946421 master-0 kubenswrapper[4652]: I0216 17:24:40.946386 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" event={"ID":"18e9a9d3-9b18-4c19-9558-f33c68101922","Type":"ContainerStarted","Data":"5978e7392bd091ba76646cf6ac45dc4ea82e09e49227ade2c3d81e736815737a"} Feb 16 17:24:40.947536 master-0 kubenswrapper[4652]: I0216 17:24:40.947499 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"63a51cac840db81fc05fd273ca1b5fe51ad6f46ec6e747c538b64acba0c50607"} Feb 16 17:24:40.949067 master-0 kubenswrapper[4652]: I0216 17:24:40.949042 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"aabc2283fc2f4af415388f0cb48337e2e925961a2fb4c405beb854d15edd99a9"} Feb 16 17:24:40.951047 master-0 kubenswrapper[4652]: I0216 17:24:40.951009 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerStarted","Data":"61e65136a27c6937678b8e2e384b0fc72dcb7462c295a7a9917cecd78feaa38c"} Feb 16 17:24:40.952600 master-0 kubenswrapper[4652]: I0216 17:24:40.952547 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"70e1ea277c7d2fa9306bab0ad244991902dccdc3128b4f4d1657a2e94f21ab5c"} Feb 16 17:24:41.088670 master-0 kubenswrapper[4652]: I0216 17:24:41.088204 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:41.088670 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:41.088670 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:41.088670 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:41.088670 master-0 kubenswrapper[4652]: I0216 17:24:41.088244 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:41.974056 master-0 kubenswrapper[4652]: I0216 17:24:41.969539 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"e61d25d8c5643c0677cc37f1e09301730ebc6dee19ac41926d08e1447653354f"} Feb 16 17:24:41.975784 master-0 kubenswrapper[4652]: I0216 17:24:41.974986 4652 generic.go:334] "Generic (PLEG): container finished" podID="7390ccc6-dfbe-4f51-960c-7628f49bffb7" containerID="61e65136a27c6937678b8e2e384b0fc72dcb7462c295a7a9917cecd78feaa38c" exitCode=0 Feb 16 17:24:41.975784 master-0 kubenswrapper[4652]: I0216 17:24:41.975058 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerDied","Data":"61e65136a27c6937678b8e2e384b0fc72dcb7462c295a7a9917cecd78feaa38c"} Feb 16 17:24:41.984665 master-0 kubenswrapper[4652]: I0216 17:24:41.984611 4652 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"d7f6df1d6a1e2e813dff6b5c54f6d864008f7233fb0c2d885ed67c294d84d58d"} Feb 16 17:24:41.995120 master-0 kubenswrapper[4652]: I0216 17:24:41.995016 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"2d219d8eb5187763ae6b323f8bbe61b1588bd1e36e3c22809601595a3e6d111e"} Feb 16 17:24:41.996583 master-0 kubenswrapper[4652]: I0216 17:24:41.996542 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"94d6ecf77e85b5288acaf9faa8e99b342975d46de77aef4276e1e3a83570c3a7"} Feb 16 17:24:41.998263 master-0 kubenswrapper[4652]: I0216 17:24:41.998212 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-67bf55ccdd-cppj8" event={"ID":"6b3e071c-1c62-489b-91c1-aef0d197f40b","Type":"ContainerStarted","Data":"38d5b61ad85a123304b45014a43449f49a1781929ba64dc244125c47872f1cfc"} Feb 16 17:24:41.999897 master-0 kubenswrapper[4652]: I0216 17:24:41.999564 4652 generic.go:334] "Generic (PLEG): container finished" podID="b04ee64e-5e83-499c-812d-749b2b6824c6" containerID="cc0372a52d74f80e6b19b06b7f21b70bdfcca2bf214bbfc44283f3529e91ed96" exitCode=0 Feb 16 17:24:41.999897 master-0 kubenswrapper[4652]: I0216 17:24:41.999614 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerDied","Data":"cc0372a52d74f80e6b19b06b7f21b70bdfcca2bf214bbfc44283f3529e91ed96"} Feb 16 17:24:42.001693 master-0 kubenswrapper[4652]: I0216 17:24:42.001646 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"af7ac34bbda6b1d4c9a2c5058409cd5cb3f9b9053872ad6be2caedcaa6a1087e"} Feb 16 17:24:42.003326 master-0 kubenswrapper[4652]: I0216 17:24:42.003294 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"f92bf595ef8055c8af1a9b8e0b741bcd6ba7fe3eac39aeb6a0df7c831ee9040d"} Feb 16 17:24:42.003326 master-0 kubenswrapper[4652]: I0216 17:24:42.003318 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" event={"ID":"8e90be63-ff6c-4e9e-8b9e-1ad9cf941845","Type":"ContainerStarted","Data":"802a4d64ddcd554fe734b79fb438b9955bf2fdab75387e4f09981d42fa7f5cac"} Feb 16 17:24:42.004890 master-0 kubenswrapper[4652]: I0216 17:24:42.004860 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-67fd9768b5-zcwwd" event={"ID":"ee84198d-6357-4429-a90c-455c3850a788","Type":"ContainerStarted","Data":"80b8f3ee0a12c95afb80c2646b6cfbd3bfccb9f3d15627e2730fc2cf2831c4da"} Feb 16 17:24:42.006135 master-0 kubenswrapper[4652]: I0216 17:24:42.006093 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-cd5474998-829l6" 
event={"ID":"8e623376-9e14-4341-9dcf-7a7c218b6f9f","Type":"ContainerStarted","Data":"f72118ae6902a2fb225c365103225f7bc9b96231a2a1cbc857debe477529726a"} Feb 16 17:24:42.008108 master-0 kubenswrapper[4652]: I0216 17:24:42.008088 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"0d3af59142d299995593909c601e1c56e7c988c5df886627d8be602aa92ccca3"} Feb 16 17:24:42.008192 master-0 kubenswrapper[4652]: I0216 17:24:42.008178 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-f8cbff74c-spxm9" event={"ID":"d1524fc1-d157-435a-8bf8-7e877c45909d","Type":"ContainerStarted","Data":"14be30d60f92ce261ac96b6e7181c1dc032a5f295f33f98c80494f09fc2f44d7"} Feb 16 17:24:42.009532 master-0 kubenswrapper[4652]: I0216 17:24:42.009509 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-599b567ff7-nrcpr" event={"ID":"ed3d89d0-bc00-482e-a656-7fdf4646ab0a","Type":"ContainerStarted","Data":"a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9"} Feb 16 17:24:42.011317 master-0 kubenswrapper[4652]: I0216 17:24:42.011172 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-96c8c64b8-zwwnk" event={"ID":"5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd","Type":"ContainerStarted","Data":"b7252a3b5a750d239c071ff4651336c60de818afdfbefc5c473f12a275c4460c"} Feb 16 17:24:42.013211 master-0 kubenswrapper[4652]: I0216 17:24:42.013173 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-c588d8cb4-wjr7d" event={"ID":"9609a4f3-b947-47af-a685-baae26c50fa3","Type":"ContainerStarted","Data":"c25d3e4782beb7e418acc998556a5e9b882ec82770d83362c43235f864ea4c56"} Feb 16 17:24:42.015019 master-0 kubenswrapper[4652]: I0216 17:24:42.014998 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-cb4f7b4cf-6qrw5" event={"ID":"c2511146-1d04-4ecd-a28e-79662ef7b9d3","Type":"ContainerStarted","Data":"2844c8190aa4f581fed805458845647407a07b0006622b9411be86b5005fe13c"} Feb 16 17:24:42.019152 master-0 kubenswrapper[4652]: I0216 17:24:42.019120 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-686c884b4d-ksx48" event={"ID":"c8729b1a-e365-4cf7-8a05-91a9987dabe9","Type":"ContainerStarted","Data":"750c6c47c2a0f7d18fed5aa1739a44fcf378989e9e431e43e7a1d0c010d85004"} Feb 16 17:24:42.027478 master-0 kubenswrapper[4652]: I0216 17:24:42.027436 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" event={"ID":"0517b180-00ee-47fe-a8e7-36a3931b7e72","Type":"ContainerStarted","Data":"69f967fe7238c4553b8c0afacb15108edf2da475f9d3aceaf275de25407a33b4"} Feb 16 17:24:42.027681 master-0 kubenswrapper[4652]: I0216 17:24:42.027641 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:42.029494 master-0 kubenswrapper[4652]: I0216 17:24:42.029383 4652 patch_prober.go:28] interesting pod/console-operator-7777d5cc66-64vhv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.128.0.78:8443/readyz\": dial tcp 10.128.0.78:8443: connect: connection refused" start-of-body= 
Feb 16 17:24:42.029689 master-0 kubenswrapper[4652]: I0216 17:24:42.029589 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"4d326cb3bf9fbb366851310e18303289a6df5132567e52e1c47cc0426487d58e"} Feb 16 17:24:42.029689 master-0 kubenswrapper[4652]: I0216 17:24:42.029567 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" podUID="0517b180-00ee-47fe-a8e7-36a3931b7e72" containerName="console-operator" probeResult="failure" output="Get \"https://10.128.0.78:8443/readyz\": dial tcp 10.128.0.78:8443: connect: connection refused" Feb 16 17:24:42.031402 master-0 kubenswrapper[4652]: I0216 17:24:42.031316 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7485d55966-sgmpf" event={"ID":"eaf7edff-0a89-4ac0-b9dd-511e098b5434","Type":"ContainerStarted","Data":"a60f0b707113dc9b73f360fd65a5b0f511d7cf556e89698d5f0c6a8e9eae489a"} Feb 16 17:24:42.032437 master-0 kubenswrapper[4652]: I0216 17:24:42.032360 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"eb18d57ac7ec6256c2d5b56b25905b6b02053e12fd37385cecd26912be5bc63e"} Feb 16 17:24:42.033653 master-0 kubenswrapper[4652]: I0216 17:24:42.033628 4652 generic.go:334] "Generic (PLEG): container finished" podID="dce85b5e-6e92-4e0e-bee7-07b1a3634302" containerID="a861402b217b09fc45d1bdbea50efb6bf073598d2bbe537f16860369da4c5c26" exitCode=0 Feb 16 17:24:42.033769 master-0 kubenswrapper[4652]: I0216 17:24:42.033692 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerDied","Data":"a861402b217b09fc45d1bdbea50efb6bf073598d2bbe537f16860369da4c5c26"} Feb 16 17:24:42.035453 master-0 kubenswrapper[4652]: I0216 17:24:42.035421 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-54984b6678-gp8gv" event={"ID":"d020c902-2adb-4919-8dd9-0c2109830580","Type":"ContainerStarted","Data":"af18e410991d902f7fdb8825e6a0f78ca0e241ea024a0333504a7b987609fd7d"} Feb 16 17:24:42.037734 master-0 kubenswrapper[4652]: I0216 17:24:42.037303 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-7bc947fc7d-4j7pn" event={"ID":"4488757c-f0fd-48fa-a3f9-6373b0bcafe4","Type":"ContainerStarted","Data":"b5d4ec80d23b72f58642a2f397e522d9e150c496834471e60842158c65221b78"} Feb 16 17:24:42.038225 master-0 kubenswrapper[4652]: I0216 17:24:42.038190 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"8250721acdab80690c27e0f316df032c6d396bd7ed522000379a21d273622b27"} Feb 16 17:24:42.039198 master-0 kubenswrapper[4652]: I0216 17:24:42.039164 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-676cd8b9b5-cp9rb" event={"ID":"48801344-a48a-493e-aea4-19d998d0b708","Type":"ContainerStarted","Data":"0a74290ba56846db9ff4feed3437c15c1b481ef22aac8558b20bcf909de002ac"} Feb 16 17:24:42.050576 master-0 kubenswrapper[4652]: I0216 17:24:42.050527 4652 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84976bb859-rsnqc" event={"ID":"f3c7d762-e2fe-49ca-ade5-3982d91ec2a2","Type":"ContainerStarted","Data":"717645bb987494832c9768cf9fa5cbad41ba1347bf648f7f0d38903e01d526e9"} Feb 16 17:24:42.053542 master-0 kubenswrapper[4652]: I0216 17:24:42.053515 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-bd7dd5c46-92rqx" event={"ID":"404c402a-705f-4352-b9df-b89562070d9c","Type":"ContainerStarted","Data":"1b388c4e67c8054086c4af6d2cf28cf3fb7e868d2d2cd8b02c179d48464a93d2"} Feb 16 17:24:42.057463 master-0 kubenswrapper[4652]: I0216 17:24:42.057416 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:24:42.082081 master-0 kubenswrapper[4652]: I0216 17:24:42.082039 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:42.082081 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:42.082081 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:42.082081 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:42.082363 master-0 kubenswrapper[4652]: I0216 17:24:42.082087 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:42.349737 master-0 kubenswrapper[4652]: I0216 17:24:42.349690 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:42.418297 master-0 kubenswrapper[4652]: I0216 17:24:42.418241 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lnzfx" Feb 16 17:24:43.071884 master-0 kubenswrapper[4652]: I0216 17:24:43.071793 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" event={"ID":"7390ccc6-dfbe-4f51-960c-7628f49bffb7","Type":"ContainerStarted","Data":"5938229d66e6b3b90cd4c997365a6c7f42e4c6c6811c8535131fc5297d3eebba"} Feb 16 17:24:43.082140 master-0 kubenswrapper[4652]: I0216 17:24:43.082085 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:43.082140 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:43.082140 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:43.082140 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:43.082493 master-0 kubenswrapper[4652]: I0216 17:24:43.082160 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:43.107793 master-0 kubenswrapper[4652]: I0216 17:24:43.107761 4652 generic.go:334] "Generic (PLEG): container finished" podID="4e51bba5-0ebe-4e55-a588-38b71548c605" 
containerID="d7f6df1d6a1e2e813dff6b5c54f6d864008f7233fb0c2d885ed67c294d84d58d" exitCode=0 Feb 16 17:24:43.109640 master-0 kubenswrapper[4652]: I0216 17:24:43.108682 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerDied","Data":"d7f6df1d6a1e2e813dff6b5c54f6d864008f7233fb0c2d885ed67c294d84d58d"} Feb 16 17:24:43.127718 master-0 kubenswrapper[4652]: I0216 17:24:43.127684 4652 generic.go:334] "Generic (PLEG): container finished" podID="e69d8c51-e2a6-4f61-9c26-072784f6cf40" containerID="8250721acdab80690c27e0f316df032c6d396bd7ed522000379a21d273622b27" exitCode=0 Feb 16 17:24:43.127948 master-0 kubenswrapper[4652]: I0216 17:24:43.127925 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerDied","Data":"8250721acdab80690c27e0f316df032c6d396bd7ed522000379a21d273622b27"} Feb 16 17:24:43.136678 master-0 kubenswrapper[4652]: I0216 17:24:43.136636 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"0449f2664a4f17e7e4af981587b5583d04150d55e22c06b9c032854ee1170aed"} Feb 16 17:24:43.136678 master-0 kubenswrapper[4652]: I0216 17:24:43.136680 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"787e1a2edb8c8e9bf417db6f346e911877f9d5b418563110f9a30f080f014990"} Feb 16 17:24:43.136887 master-0 kubenswrapper[4652]: I0216 17:24:43.136694 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"f1af783a4b24b89c8f4c430ea669ab1473c5bc0b323962f49855d50a3585efc9"} Feb 16 17:24:43.158769 master-0 kubenswrapper[4652]: I0216 17:24:43.158324 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"daa062ef0c57a8c134684f9d2ce505cc50f127b7a90438ba024205a321a61fb8"} Feb 16 17:24:43.158769 master-0 kubenswrapper[4652]: I0216 17:24:43.158370 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"e5e414ecc81332efeef92002ae4549f2b5ee62a34949a58dcf61be3554b33d28"} Feb 16 17:24:43.221167 master-0 kubenswrapper[4652]: I0216 17:24:43.220183 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"c6c1a6f85b548c837ff89dd2a57bfad3d9e98702764305689a3c288dc33f1ffa"} Feb 16 17:24:43.221167 master-0 kubenswrapper[4652]: I0216 17:24:43.220232 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" event={"ID":"dce85b5e-6e92-4e0e-bee7-07b1a3634302","Type":"ContainerStarted","Data":"93d5bd89f7fb0d0c5b4e787e84e3359fb1e5d9187aee8b5852991214c81cfde9"} Feb 16 17:24:43.237830 master-0 kubenswrapper[4652]: I0216 17:24:43.237745 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-279g6" event={"ID":"ad805251-19d0-4d2f-b741-7d11158f1f03","Type":"ContainerStarted","Data":"84802fa6c0380d16ce939a60852dbf4338d8814ee9138bbbc3fca88b02e69501"} Feb 16 17:24:43.285220 master-0 kubenswrapper[4652]: I0216 17:24:43.285134 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"484191bbfa1910c8a5904ab9ed175e1ac974648e9338e65db295208dea7db32f"} Feb 16 17:24:43.285220 master-0 kubenswrapper[4652]: I0216 17:24:43.285190 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"5caebbc1a819cfc9ee9233b11355b2c6eb8bf9d0331dccf71c7e6da6084b4986"} Feb 16 17:24:43.313653 master-0 kubenswrapper[4652]: I0216 17:24:43.308435 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7cc9598d54-8j5rk" event={"ID":"55d635cd-1f0d-4086-96f2-9f3524f3f18c","Type":"ContainerStarted","Data":"d50dc82796cc503db2f2aa24a1dbf210752dfdc38575b244d8a6002dfd17254c"} Feb 16 17:24:43.331055 master-0 kubenswrapper[4652]: I0216 17:24:43.328091 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"3d25b7d8ccdf97ac75d67a451435411044194993adcd7df7502e428981c0fdef"} Feb 16 17:24:43.331055 master-0 kubenswrapper[4652]: I0216 17:24:43.328139 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-6bbd87b65b-mt2mz" event={"ID":"06067627-6ccf-4cc8-bd20-dabdd776bb46","Type":"ContainerStarted","Data":"f26e8209c683d3600518372ada394c9432f5f4c1c2181e22c957e94018f377d2"} Feb 16 17:24:43.331055 master-0 kubenswrapper[4652]: I0216 17:24:43.328155 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:43.331055 master-0 kubenswrapper[4652]: I0216 17:24:43.328785 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:43.346319 master-0 kubenswrapper[4652]: I0216 17:24:43.340887 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-7777d5cc66-64vhv" Feb 16 17:24:43.346319 master-0 kubenswrapper[4652]: I0216 17:24:43.343990 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:24:44.084766 master-0 kubenswrapper[4652]: I0216 17:24:44.084710 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:44.084766 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:44.084766 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:44.084766 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:44.085199 master-0 kubenswrapper[4652]: I0216 17:24:44.084775 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:44.336186 master-0 kubenswrapper[4652]: I0216 17:24:44.336052 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-55b69c6c48-7chjv" event={"ID":"4e51bba5-0ebe-4e55-a588-38b71548c605","Type":"ContainerStarted","Data":"9774c9d2b2ef9b7ecd8c10832d9a8cb71f30b64c1410dfed8e3472c8ad3df788"} Feb 16 17:24:44.338447 master-0 kubenswrapper[4652]: I0216 17:24:44.338402 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" event={"ID":"e69d8c51-e2a6-4f61-9c26-072784f6cf40","Type":"ContainerStarted","Data":"240f417b7a6eb1f1899a55430215354604e620cc4d137ac0d9aba95a0216bcf9"} Feb 16 17:24:44.338671 master-0 kubenswrapper[4652]: I0216 17:24:44.338461 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:44.352564 master-0 kubenswrapper[4652]: I0216 17:24:44.352508 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"14ef4576a95989249820e61fc8f6a65d51c59f30d96f6c1cb95628156e3077dc"} Feb 16 17:24:44.353076 master-0 kubenswrapper[4652]: I0216 17:24:44.352560 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" event={"ID":"fe8e8e5d-cebb-4361-b765-5ff737f5e838","Type":"ContainerStarted","Data":"9cf2f423a4ee9be1cd94a07b0f0e5d5420ece3502337e1250fb9a620613a9f78"} Feb 16 17:24:44.355014 master-0 kubenswrapper[4652]: I0216 17:24:44.354963 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:44.388304 master-0 kubenswrapper[4652]: I0216 17:24:44.375343 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"4be458dd1c9bcc4d00913fa06225435e61648e2c8d2bb69d523837d53a40e7e7"} Feb 16 17:24:44.388304 master-0 kubenswrapper[4652]: I0216 17:24:44.375404 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"2e8e6dab-fb6e-4cf0-a9d1-56f9955d106e","Type":"ContainerStarted","Data":"dcf7dec44a5d488020c771562f27113a894f6f79924743e5791fdecd720d6d66"} Feb 16 17:24:44.408466 master-0 kubenswrapper[4652]: I0216 17:24:44.404381 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"6255528f654901c6e058da400889729801515695350b66551903648e17940817"} Feb 16 17:24:44.408466 master-0 kubenswrapper[4652]: I0216 17:24:44.404447 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"cea038d349a78143bf0d146cedf3fb3468dd170fe325bec5bf0d263a74803d73"} Feb 16 17:24:44.408466 master-0 kubenswrapper[4652]: I0216 17:24:44.404458 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"da4eeb23e7b616caeacb15f24a91c582c0a513ab5c80931a1fd4cc9cda893410"} Feb 16 
Feb 16 17:24:45.082426 master-0 kubenswrapper[4652]: I0216 17:24:45.082387 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:45.082426 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:45.082426 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:45.082426 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:45.082732 master-0 kubenswrapper[4652]: I0216 17:24:45.082712 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:45.414030 master-0 kubenswrapper[4652]: I0216 17:24:45.413878 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"b04ee64e-5e83-499c-812d-749b2b6824c6","Type":"ContainerStarted","Data":"555d0ea3a95601e313c20dfacd82fca47ae5672bb9e5896d5124706c8252731c"} Feb 16 17:24:45.613421 master-0 kubenswrapper[4652]: I0216 17:24:45.613350 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-85c9b89969-lj58b" Feb 16 17:24:45.861033 master-0 kubenswrapper[4652]: I0216 17:24:45.860962 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:45.868873 master-0 kubenswrapper[4652]: I0216 17:24:45.868828 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-dcd7b7d95-dhhfh" Feb 16 17:24:46.018190 master-0 kubenswrapper[4652]: I0216 17:24:46.018141 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-64bf6cdbbc-tpd6h" Feb 16 17:24:46.084374 master-0 kubenswrapper[4652]: I0216 17:24:46.083432 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:46.084374 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:46.084374 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:46.084374 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:46.084374 master-0 kubenswrapper[4652]: I0216 17:24:46.083509 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:46.085795 master-0 kubenswrapper[4652]: I0216 17:24:46.085747 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:46.085860 master-0 kubenswrapper[4652]: I0216 17:24:46.085795 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:24:46.086875 master-0 kubenswrapper[4652]: I0216 17:24:46.086830 4652 patch_prober.go:28] interesting pod/console-599b567ff7-nrcpr container/console namespace/openshift-console: Startup probe status=failure
output="Get \"https://10.128.0.13:8443/health\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 17:24:46.086934 master-0 kubenswrapper[4652]: I0216 17:24:46.086885 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.13:8443/health\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 17:24:46.103395 master-0 kubenswrapper[4652]: I0216 17:24:46.103289 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:46.103473 master-0 kubenswrapper[4652]: I0216 17:24:46.103410 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:46.106648 master-0 kubenswrapper[4652]: I0216 17:24:46.106605 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-67bc7c997f-mn6cr" Feb 16 17:24:46.112166 master-0 kubenswrapper[4652]: I0216 17:24:46.112088 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:46.116004 master-0 kubenswrapper[4652]: I0216 17:24:46.115976 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:46.123303 master-0 kubenswrapper[4652]: I0216 17:24:46.123244 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:24:46.434444 master-0 kubenswrapper[4652]: I0216 17:24:46.434371 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-66788cb45c-dp9bc" Feb 16 17:24:46.760326 master-0 kubenswrapper[4652]: I0216 17:24:46.760049 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:24:47.083425 master-0 kubenswrapper[4652]: I0216 17:24:47.083289 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:47.083425 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:47.083425 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:47.083425 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:47.083425 master-0 kubenswrapper[4652]: I0216 17:24:47.083343 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:48.042356 master-0 kubenswrapper[4652]: I0216 17:24:48.042309 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7c6bdb986f-v8dr8" Feb 16 17:24:48.083537 master-0 kubenswrapper[4652]: I0216 17:24:48.083434 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:48.083537 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:48.083537 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:48.083537 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:48.083915 master-0 kubenswrapper[4652]: I0216 17:24:48.083547 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:48.188678 master-0 kubenswrapper[4652]: I0216 17:24:48.188582 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:48.188678 master-0 kubenswrapper[4652]: I0216 17:24:48.188659 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:48.199102 master-0 kubenswrapper[4652]: I0216 17:24:48.199051 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:48.448012 master-0 kubenswrapper[4652]: I0216 17:24:48.447902 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-fc4bf7f79-tqnlw" Feb 16 17:24:49.083297 master-0 kubenswrapper[4652]: I0216 17:24:49.083227 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:49.083297 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:49.083297 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:49.083297 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:49.083850 master-0 kubenswrapper[4652]: I0216 17:24:49.083308 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:50.083349 master-0 kubenswrapper[4652]: I0216 17:24:50.083276 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:50.083349 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:50.083349 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:50.083349 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:50.083953 master-0 kubenswrapper[4652]: I0216 17:24:50.083364 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:50.753168 master-0 kubenswrapper[4652]: I0216 17:24:50.753119 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qcgxx" Feb 16 17:24:51.083993 master-0 kubenswrapper[4652]: I0216 17:24:51.083783 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:51.083993 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:51.083993 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:51.083993 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:51.083993 master-0 kubenswrapper[4652]: I0216 17:24:51.083884 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:52.083100 master-0 kubenswrapper[4652]: I0216 17:24:52.083028 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:52.083100 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:52.083100 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:52.083100 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:52.083497 master-0 kubenswrapper[4652]: I0216 17:24:52.083111 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:53.083410 master-0 kubenswrapper[4652]: I0216 17:24:53.083332 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:53.083410 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:53.083410 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:53.083410 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:53.084155 master-0 kubenswrapper[4652]: I0216 17:24:53.083430 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:54.086786 master-0 kubenswrapper[4652]: I0216 17:24:54.086432 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:54.086786 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:54.086786 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:54.086786 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:54.086786 master-0 kubenswrapper[4652]: I0216 17:24:54.086532 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:55.082987 master-0 kubenswrapper[4652]: I0216 17:24:55.082915 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:55.082987 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:55.082987 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:55.082987 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:55.083345 master-0 kubenswrapper[4652]: I0216 17:24:55.083006 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:55.939466 master-0 kubenswrapper[4652]: I0216 17:24:55.939396 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:55.939466 master-0 kubenswrapper[4652]: I0216 17:24:55.939461 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:24:56.082028 master-0 kubenswrapper[4652]: I0216 17:24:56.081941 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:56.082028 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:56.082028 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:56.082028 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:56.082028 master-0 kubenswrapper[4652]: I0216 17:24:56.082009 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:56.085943 master-0 kubenswrapper[4652]: I0216 17:24:56.085904 4652 patch_prober.go:28] interesting pod/console-599b567ff7-nrcpr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.13:8443/health\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 17:24:56.086138 master-0 kubenswrapper[4652]: I0216 17:24:56.085945 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.13:8443/health\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 17:24:57.082064 master-0 kubenswrapper[4652]: I0216 17:24:57.082007 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:57.082064 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:57.082064 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:57.082064 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:57.082064 master-0 kubenswrapper[4652]: I0216 17:24:57.082059 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:58.083012 master-0 kubenswrapper[4652]: I0216 17:24:58.082958 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:58.083012 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:58.083012 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:58.083012 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:58.083628 master-0 kubenswrapper[4652]: I0216 17:24:58.083034 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:24:59.083404 master-0 kubenswrapper[4652]: I0216 17:24:59.083302 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:24:59.083404 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:24:59.083404 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:24:59.083404 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:24:59.083404 master-0 kubenswrapper[4652]: I0216 17:24:59.083393 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:00.083462 master-0 kubenswrapper[4652]: I0216 17:25:00.083394 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:00.083462 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:00.083462 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:00.083462 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:00.084021 master-0 kubenswrapper[4652]: I0216 17:25:00.083467 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:01.083685 master-0 kubenswrapper[4652]: I0216 17:25:01.083545 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:01.083685 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:01.083685 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:01.083685 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:01.084720 master-0 kubenswrapper[4652]: I0216 17:25:01.083688 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" 
podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:02.123602 master-0 kubenswrapper[4652]: I0216 17:25:02.123539 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:02.123602 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:02.123602 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:02.123602 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:02.124785 master-0 kubenswrapper[4652]: I0216 17:25:02.123616 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:03.083153 master-0 kubenswrapper[4652]: I0216 17:25:03.083067 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:03.083153 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:03.083153 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:03.083153 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:03.083153 master-0 kubenswrapper[4652]: I0216 17:25:03.083133 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:04.087222 master-0 kubenswrapper[4652]: I0216 17:25:04.084375 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:04.087222 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:04.087222 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:04.087222 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:04.087222 master-0 kubenswrapper[4652]: I0216 17:25:04.084452 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:05.083161 master-0 kubenswrapper[4652]: I0216 17:25:05.083063 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:05.083161 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:05.083161 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:05.083161 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:05.083752 master-0 kubenswrapper[4652]: I0216 17:25:05.083168 4652 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:06.083173 master-0 kubenswrapper[4652]: I0216 17:25:06.083106 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:06.083173 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:06.083173 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:06.083173 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:06.084500 master-0 kubenswrapper[4652]: I0216 17:25:06.084415 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:06.085855 master-0 kubenswrapper[4652]: I0216 17:25:06.085753 4652 patch_prober.go:28] interesting pod/console-599b567ff7-nrcpr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.13:8443/health\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 17:25:06.085855 master-0 kubenswrapper[4652]: I0216 17:25:06.085819 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.13:8443/health\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 17:25:07.082746 master-0 kubenswrapper[4652]: I0216 17:25:07.082670 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:07.082746 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:07.082746 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:07.082746 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:07.083178 master-0 kubenswrapper[4652]: I0216 17:25:07.082784 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:08.082085 master-0 kubenswrapper[4652]: I0216 17:25:08.082014 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:08.082085 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:08.082085 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:08.082085 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:08.082761 master-0 kubenswrapper[4652]: I0216 17:25:08.082732 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Feb 16 17:25:09.083023 master-0 kubenswrapper[4652]: I0216 17:25:09.082968 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:09.083023 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:09.083023 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:09.083023 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:09.083806 master-0 kubenswrapper[4652]: I0216 17:25:09.083044 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:10.085440 master-0 kubenswrapper[4652]: I0216 17:25:10.082999 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:10.085440 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:10.085440 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:10.085440 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:10.085440 master-0 kubenswrapper[4652]: I0216 17:25:10.083045 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:10.332051 master-0 kubenswrapper[4652]: I0216 17:25:10.331930 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-64f85b8fc9-n9msn_2be9d55c-a4ec-48cd-93d2-0a1dced745a8/oauth-openshift/2.log" Feb 16 17:25:10.731214 master-0 kubenswrapper[4652]: I0216 17:25:10.731098 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-755d954778-lf4cb_9aa57eb4-c511-4ab8-a5d7-385e1ed9ee41/authentication-operator/5.log" Feb 16 17:25:11.083046 master-0 kubenswrapper[4652]: I0216 17:25:11.082856 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:11.083046 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:11.083046 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:11.083046 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:11.083046 master-0 kubenswrapper[4652]: I0216 17:25:11.082965 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:11.135327 master-0 kubenswrapper[4652]: I0216 17:25:11.135220 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-864ddd5f56-pm4rt_f0b1ebd3-1068-4624-9b6d-3e9f45ded76a/router/4.log" Feb 16 17:25:11.724850 master-0 kubenswrapper[4652]: I0216 17:25:11.724788 
4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-66788cb45c-dp9bc_7390ccc6-dfbe-4f51-960c-7628f49bffb7/fix-audit-permissions/2.log" Feb 16 17:25:11.933873 master-0 kubenswrapper[4652]: I0216 17:25:11.933753 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-66788cb45c-dp9bc_7390ccc6-dfbe-4f51-960c-7628f49bffb7/oauth-apiserver/2.log" Feb 16 17:25:12.086198 master-0 kubenswrapper[4652]: I0216 17:25:12.086004 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:12.086198 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:12.086198 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:12.086198 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:12.086198 master-0 kubenswrapper[4652]: I0216 17:25:12.086076 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:12.329058 master-0 kubenswrapper[4652]: I0216 17:25:12.328985 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-7777d5cc66-64vhv_0517b180-00ee-47fe-a8e7-36a3931b7e72/console-operator/5.log" Feb 16 17:25:12.725699 master-0 kubenswrapper[4652]: I0216 17:25:12.725662 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-599b567ff7-nrcpr_ed3d89d0-bc00-482e-a656-7fdf4646ab0a/console/2.log" Feb 16 17:25:13.082977 master-0 kubenswrapper[4652]: I0216 17:25:13.082822 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:13.082977 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:13.082977 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:13.082977 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:13.082977 master-0 kubenswrapper[4652]: I0216 17:25:13.082891 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:13.126470 master-0 kubenswrapper[4652]: I0216 17:25:13.125698 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-dcd7b7d95-dhhfh_08a90dc5-b0d8-4aad-a002-736492b6c1a9/download-server/4.log" Feb 16 17:25:13.527962 master-0 kubenswrapper[4652]: I0216 17:25:13.527874 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-96c8c64b8-zwwnk_5dfc09be-2f60-4420-8d3a-6b00b1d3e6fd/cluster-image-registry-operator/3.log" Feb 16 17:25:13.926355 master-0 kubenswrapper[4652]: I0216 17:25:13.926292 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-xv2wv_810a2275-fae5-45df-a3b8-92860451d33b/node-ca/3.log" Feb 16 17:25:14.084183 master-0 kubenswrapper[4652]: I0216 17:25:14.084101 4652 
patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:14.084183 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:14.084183 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:14.084183 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:14.084814 master-0 kubenswrapper[4652]: I0216 17:25:14.084188 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:14.126188 master-0 kubenswrapper[4652]: E0216 17:25:14.126085 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf\": container with ID starting with 2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf not found: ID does not exist" containerID="2c53a58c131794a80fa1c0999460553c2cc95a04f4d47697c0e7fb42de126acf" Feb 16 17:25:14.530616 master-0 kubenswrapper[4652]: I0216 17:25:14.530054 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-78ff47c7c5-txr5k_442600dc-09b2-4fee-9f89-777296b2ee40/kube-controller-manager-operator/5.log" Feb 16 17:25:14.733351 master-0 kubenswrapper[4652]: I0216 17:25:14.733232 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/kube-controller-manager/10.log" Feb 16 17:25:14.926188 master-0 kubenswrapper[4652]: I0216 17:25:14.926099 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/cluster-policy-controller/3.log" Feb 16 17:25:15.082921 master-0 kubenswrapper[4652]: I0216 17:25:15.082808 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:15.082921 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:15.082921 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:15.082921 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:15.082921 master-0 kubenswrapper[4652]: I0216 17:25:15.082887 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:15.133399 master-0 kubenswrapper[4652]: I0216 17:25:15.133329 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/kube-controller-manager/11.log" Feb 16 17:25:15.330458 master-0 kubenswrapper[4652]: I0216 17:25:15.330296 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_80420f2e7c3cdda71f7d0d6ccbe6f9f3/cluster-policy-controller/4.log" Feb 16 17:25:15.925340 
master-0 kubenswrapper[4652]: I0216 17:25:15.925281 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/setup/2.log" Feb 16 17:25:15.942268 master-0 kubenswrapper[4652]: I0216 17:25:15.942172 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-5c696dbdcd-qrrc6" Feb 16 17:25:15.947554 master-0 kubenswrapper[4652]: I0216 17:25:15.947414 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:25:15.956332 master-0 kubenswrapper[4652]: I0216 17:25:15.956221 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-745bd8d89b-qr4zh" Feb 16 17:25:15.985134 master-0 kubenswrapper[4652]: I0216 17:25:15.985059 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-vwvwx" Feb 16 17:25:16.084232 master-0 kubenswrapper[4652]: I0216 17:25:16.084158 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:16.084232 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:16.084232 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:16.084232 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:16.084557 master-0 kubenswrapper[4652]: I0216 17:25:16.084267 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:16.086258 master-0 kubenswrapper[4652]: I0216 17:25:16.086209 4652 patch_prober.go:28] interesting pod/console-599b567ff7-nrcpr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.13:8443/health\": dial tcp 10.128.0.13:8443: connect: connection refused" start-of-body= Feb 16 17:25:16.086331 master-0 kubenswrapper[4652]: I0216 17:25:16.086304 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerName="console" probeResult="failure" output="Get \"https://10.128.0.13:8443/health\": dial tcp 10.128.0.13:8443: connect: connection refused" Feb 16 17:25:16.125206 master-0 kubenswrapper[4652]: I0216 17:25:16.125146 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_b3322fd3717f4aec0d8f54ec7862c07e/kube-rbac-proxy-crio/7.log" Feb 16 17:25:16.726471 master-0 kubenswrapper[4652]: I0216 17:25:16.726375 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-686c884b4d-ksx48_c8729b1a-e365-4cf7-8a05-91a9987dabe9/machine-config-controller/3.log" Feb 16 17:25:16.926744 master-0 kubenswrapper[4652]: I0216 17:25:16.926660 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-controller-686c884b4d-ksx48_c8729b1a-e365-4cf7-8a05-91a9987dabe9/kube-rbac-proxy/4.log" Feb 16 17:25:17.083411 master-0 
kubenswrapper[4652]: I0216 17:25:17.083282 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:17.083411 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:17.083411 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:17.083411 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:17.083411 master-0 kubenswrapper[4652]: I0216 17:25:17.083340 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:17.537699 master-0 kubenswrapper[4652]: I0216 17:25:17.537558 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-98q6v_648abb6c-9c81-4e5c-b5f1-3b7eb254f743/machine-config-daemon/7.log" Feb 16 17:25:17.665444 master-0 kubenswrapper[4652]: I0216 17:25:17.665369 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:25:17.705539 master-0 kubenswrapper[4652]: I0216 17:25:17.705467 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:25:17.728097 master-0 kubenswrapper[4652]: I0216 17:25:17.727989 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-98q6v_648abb6c-9c81-4e5c-b5f1-3b7eb254f743/kube-rbac-proxy/4.log" Feb 16 17:25:18.084234 master-0 kubenswrapper[4652]: I0216 17:25:18.084081 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:18.084234 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:18.084234 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:18.084234 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:18.084234 master-0 kubenswrapper[4652]: I0216 17:25:18.084218 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:18.328075 master-0 kubenswrapper[4652]: I0216 17:25:18.327996 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-84976bb859-rsnqc_f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/machine-config-operator/4.log" Feb 16 17:25:18.528443 master-0 kubenswrapper[4652]: I0216 17:25:18.528368 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-operator-84976bb859-rsnqc_f3c7d762-e2fe-49ca-ade5-3982d91ec2a2/kube-rbac-proxy/2.log" Feb 16 17:25:18.701458 master-0 kubenswrapper[4652]: I0216 17:25:18.701351 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Feb 16 17:25:18.928942 master-0 kubenswrapper[4652]: I0216 17:25:18.928869 4652 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_machine-config-server-2ws9r_9c48005e-c4df-4332-87fc-ec028f2c6921/machine-config-server/6.log" Feb 16 17:25:19.083555 master-0 kubenswrapper[4652]: I0216 17:25:19.083437 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:19.083555 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:19.083555 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:19.083555 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:19.083555 master-0 kubenswrapper[4652]: I0216 17:25:19.083529 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:20.082875 master-0 kubenswrapper[4652]: I0216 17:25:20.082831 4652 patch_prober.go:28] interesting pod/router-default-864ddd5f56-pm4rt container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 17:25:20.082875 master-0 kubenswrapper[4652]: [-]has-synced failed: reason withheld Feb 16 17:25:20.082875 master-0 kubenswrapper[4652]: [+]process-running ok Feb 16 17:25:20.082875 master-0 kubenswrapper[4652]: healthz check failed Feb 16 17:25:20.083674 master-0 kubenswrapper[4652]: I0216 17:25:20.083633 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" podUID="f0b1ebd3-1068-4624-9b6d-3e9f45ded76a" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 17:25:21.084937 master-0 kubenswrapper[4652]: I0216 17:25:21.084855 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:25:21.088610 master-0 kubenswrapper[4652]: I0216 17:25:21.088561 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-864ddd5f56-pm4rt" Feb 16 17:25:26.094909 master-0 kubenswrapper[4652]: I0216 17:25:26.094742 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:25:26.102008 master-0 kubenswrapper[4652]: I0216 17:25:26.101950 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:26:13.590016 master-0 kubenswrapper[4652]: I0216 17:26:13.589881 4652 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:26:13.591068 master-0 kubenswrapper[4652]: I0216 17:26:13.590535 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:26:43.590031 master-0 
kubenswrapper[4652]: I0216 17:26:43.589879 4652 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:26:43.590031 master-0 kubenswrapper[4652]: I0216 17:26:43.590010 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:27:13.590416 master-0 kubenswrapper[4652]: I0216 17:27:13.590304 4652 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:27:13.592039 master-0 kubenswrapper[4652]: I0216 17:27:13.590434 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:27:13.592039 master-0 kubenswrapper[4652]: I0216 17:27:13.590536 4652 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:27:13.592039 master-0 kubenswrapper[4652]: I0216 17:27:13.591852 4652 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36ecb052054e20edf5b4f7071d4d0da2f770afa6a294fc15e380d4c171f3c6ba"} pod="openshift-machine-config-operator/machine-config-daemon-98q6v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:27:13.592416 master-0 kubenswrapper[4652]: I0216 17:27:13.592104 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" containerID="cri-o://36ecb052054e20edf5b4f7071d4d0da2f770afa6a294fc15e380d4c171f3c6ba" gracePeriod=600 Feb 16 17:27:14.495178 master-0 kubenswrapper[4652]: I0216 17:27:14.495082 4652 generic.go:334] "Generic (PLEG): container finished" podID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerID="36ecb052054e20edf5b4f7071d4d0da2f770afa6a294fc15e380d4c171f3c6ba" exitCode=0 Feb 16 17:27:14.495178 master-0 kubenswrapper[4652]: I0216 17:27:14.495146 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerDied","Data":"36ecb052054e20edf5b4f7071d4d0da2f770afa6a294fc15e380d4c171f3c6ba"} Feb 16 17:27:14.495178 master-0 kubenswrapper[4652]: I0216 17:27:14.495184 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"8e8ee7d5680ba534cacc2ad533bdaeb382e6dc0b07563496fc2774a01de14fd2"}
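
The machine-config-daemon sequence above shows the complete liveness cycle: "connection refused" failures logged 30 seconds apart (17:26:13, 17:26:43, 17:27:13), then on the third consecutive failure the kubelet marks the probe unhealthy, records that the container "failed liveness probe, will be restarted", and kills it with the configured grace period so the runtime starts a replacement. A sketch of that consecutive-failure bookkeeping; the 30-second spacing and restart on the third failure are consistent with periodSeconds=30 and the default failureThreshold of 3, while the types and kill hook are illustrative rather than the kubelet's real prober:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeSpec carries the fields of the liveness cycle that matter here; the
// real kubelet reads these from the container's probe definition.
type probeSpec struct {
	url              string
	period           time.Duration
	failureThreshold int
}

// runLiveness probes every period and only acts after failureThreshold
// consecutive failures, which is why a single blip (like the lone failure
// at 17:29:13 below, after the restart) does not restart the container.
func runLiveness(spec probeSpec, kill func(reason string)) {
	failures := 0
	for range time.Tick(spec.period) {
		resp, err := http.Get(spec.url)
		ok := err == nil && resp.StatusCode >= 200 && resp.StatusCode < 400
		if err == nil {
			resp.Body.Close()
		}
		if ok {
			failures = 0
			continue
		}
		failures++
		fmt.Printf("Probe failed probeType=Liveness url=%s consecutive=%d\n", spec.url, failures)
		if failures >= spec.failureThreshold {
			kill("Container failed liveness probe, will be restarted")
			failures = 0
		}
	}
}

func main() {
	// Endpoint and cadence taken from the entries above; threshold assumed
	// to be the default of 3.
	spec := probeSpec{url: "http://127.0.0.1:8798/health", period: 30 * time.Second, failureThreshold: 3}
	runLiveness(spec, func(reason string) { fmt.Println("killing container:", reason) })
}
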
Feb 16 17:29:02.737645 master-0 kubenswrapper[4652]: I0216 17:29:02.737561 4652 kubelet.go:1505] "Image garbage collection succeeded" Feb 16 17:29:13.590655 master-0 kubenswrapper[4652]: I0216 17:29:13.590440 4652 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:29:13.590655 master-0 kubenswrapper[4652]: I0216 17:29:13.590557 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:29:34.631539 master-0 kubenswrapper[4652]: I0216 17:29:34.631466 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"] Feb 16 17:29:34.632481 master-0 kubenswrapper[4652]: I0216 17:29:34.631718 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" containerID="cri-o://5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd" gracePeriod=30 Feb 16 17:29:34.704595 master-0 kubenswrapper[4652]: I0216 17:29:34.704512 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"] Feb 16 17:29:34.705006 master-0 kubenswrapper[4652]: I0216 17:29:34.704890 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" containerName="route-controller-manager" containerID="cri-o://c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2" gracePeriod=30 Feb 16 17:29:35.064802 master-0 kubenswrapper[4652]: I0216 17:29:35.064190 4652 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:29:35.161331 master-0 kubenswrapper[4652]: I0216 17:29:35.161218 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") pod \"e1a7c783-2e23-4284-b648-147984cf1022\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " Feb 16 17:29:35.161331 master-0 kubenswrapper[4652]: I0216 17:29:35.161296 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") pod \"e1a7c783-2e23-4284-b648-147984cf1022\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " Feb 16 17:29:35.161331 master-0 kubenswrapper[4652]: I0216 17:29:35.161339 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") pod \"e1a7c783-2e23-4284-b648-147984cf1022\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " Feb 16 17:29:35.161772 master-0 kubenswrapper[4652]: I0216 17:29:35.161409 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") pod \"e1a7c783-2e23-4284-b648-147984cf1022\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " Feb 16 17:29:35.161772 master-0 kubenswrapper[4652]: I0216 17:29:35.161528 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") pod \"e1a7c783-2e23-4284-b648-147984cf1022\" (UID: \"e1a7c783-2e23-4284-b648-147984cf1022\") " Feb 16 17:29:35.161908 master-0 kubenswrapper[4652]: I0216 17:29:35.161856 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca" (OuterVolumeSpecName: "client-ca") pod "e1a7c783-2e23-4284-b648-147984cf1022" (UID: "e1a7c783-2e23-4284-b648-147984cf1022"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:29:35.162202 master-0 kubenswrapper[4652]: I0216 17:29:35.162156 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "e1a7c783-2e23-4284-b648-147984cf1022" (UID: "e1a7c783-2e23-4284-b648-147984cf1022"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:29:35.162202 master-0 kubenswrapper[4652]: I0216 17:29:35.162176 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config" (OuterVolumeSpecName: "config") pod "e1a7c783-2e23-4284-b648-147984cf1022" (UID: "e1a7c783-2e23-4284-b648-147984cf1022"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:29:35.164405 master-0 kubenswrapper[4652]: I0216 17:29:35.164353 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj" (OuterVolumeSpecName: "kube-api-access-2cjmj") pod "e1a7c783-2e23-4284-b648-147984cf1022" (UID: "e1a7c783-2e23-4284-b648-147984cf1022"). InnerVolumeSpecName "kube-api-access-2cjmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:29:35.165533 master-0 kubenswrapper[4652]: I0216 17:29:35.165480 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e1a7c783-2e23-4284-b648-147984cf1022" (UID: "e1a7c783-2e23-4284-b648-147984cf1022"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:29:35.198795 master-0 kubenswrapper[4652]: I0216 17:29:35.198740 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" Feb 16 17:29:35.264140 master-0 kubenswrapper[4652]: I0216 17:29:35.264062 4652 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1a7c783-2e23-4284-b648-147984cf1022-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:29:35.264140 master-0 kubenswrapper[4652]: I0216 17:29:35.264120 4652 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Feb 16 17:29:35.264140 master-0 kubenswrapper[4652]: I0216 17:29:35.264136 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cjmj\" (UniqueName: \"kubernetes.io/projected/e1a7c783-2e23-4284-b648-147984cf1022-kube-api-access-2cjmj\") on node \"master-0\" DevicePath \"\"" Feb 16 17:29:35.264140 master-0 kubenswrapper[4652]: I0216 17:29:35.264149 4652 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:29:35.264140 master-0 kubenswrapper[4652]: I0216 17:29:35.264161 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1a7c783-2e23-4284-b648-147984cf1022-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:29:35.364864 master-0 kubenswrapper[4652]: I0216 17:29:35.364727 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") pod \"78be97a3-18d1-4962-804f-372974dc8ccc\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " Feb 16 17:29:35.365107 master-0 kubenswrapper[4652]: I0216 17:29:35.364898 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") pod \"78be97a3-18d1-4962-804f-372974dc8ccc\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " Feb 16 17:29:35.365107 master-0 kubenswrapper[4652]: I0216 17:29:35.365012 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") pod \"78be97a3-18d1-4962-804f-372974dc8ccc\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " Feb 16 17:29:35.365107 master-0 kubenswrapper[4652]: I0216 17:29:35.365050 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") pod \"78be97a3-18d1-4962-804f-372974dc8ccc\" (UID: \"78be97a3-18d1-4962-804f-372974dc8ccc\") " Feb 16 17:29:35.365651 master-0 kubenswrapper[4652]: I0216 17:29:35.365518 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca" (OuterVolumeSpecName: "client-ca") pod "78be97a3-18d1-4962-804f-372974dc8ccc" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:29:35.366069 master-0 kubenswrapper[4652]: I0216 17:29:35.366023 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config" (OuterVolumeSpecName: "config") pod "78be97a3-18d1-4962-804f-372974dc8ccc" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:29:35.370304 master-0 kubenswrapper[4652]: I0216 17:29:35.369686 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "78be97a3-18d1-4962-804f-372974dc8ccc" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:29:35.370304 master-0 kubenswrapper[4652]: I0216 17:29:35.369787 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz" (OuterVolumeSpecName: "kube-api-access-wzlnz") pod "78be97a3-18d1-4962-804f-372974dc8ccc" (UID: "78be97a3-18d1-4962-804f-372974dc8ccc"). InnerVolumeSpecName "kube-api-access-wzlnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:29:35.467056 master-0 kubenswrapper[4652]: I0216 17:29:35.466981 4652 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78be97a3-18d1-4962-804f-372974dc8ccc-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:29:35.467056 master-0 kubenswrapper[4652]: I0216 17:29:35.467034 4652 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-client-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:29:35.467056 master-0 kubenswrapper[4652]: I0216 17:29:35.467052 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78be97a3-18d1-4962-804f-372974dc8ccc-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:29:35.467056 master-0 kubenswrapper[4652]: I0216 17:29:35.467064 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzlnz\" (UniqueName: \"kubernetes.io/projected/78be97a3-18d1-4962-804f-372974dc8ccc-kube-api-access-wzlnz\") on node \"master-0\" DevicePath \"\"" Feb 16 17:29:35.572370 master-0 kubenswrapper[4652]: I0216 17:29:35.572281 4652 generic.go:334] "Generic (PLEG): container finished" podID="e1a7c783-2e23-4284-b648-147984cf1022" containerID="5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd" exitCode=0 Feb 16 17:29:35.572370 master-0 kubenswrapper[4652]: I0216 17:29:35.572353 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" Feb 16 17:29:35.572707 master-0 kubenswrapper[4652]: I0216 17:29:35.572411 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerDied","Data":"5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd"} Feb 16 17:29:35.572707 master-0 kubenswrapper[4652]: I0216 17:29:35.572490 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd" event={"ID":"e1a7c783-2e23-4284-b648-147984cf1022","Type":"ContainerDied","Data":"b7d96a080d0bb0d15c8208669e4d58bfabb25955758ae959c0b08a31cb343913"} Feb 16 17:29:35.572707 master-0 kubenswrapper[4652]: I0216 17:29:35.572528 4652 scope.go:117] "RemoveContainer" containerID="5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd" Feb 16 17:29:35.575570 master-0 kubenswrapper[4652]: I0216 17:29:35.574652 4652 generic.go:334] "Generic (PLEG): container finished" podID="78be97a3-18d1-4962-804f-372974dc8ccc" containerID="c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2" exitCode=0 Feb 16 17:29:35.575570 master-0 kubenswrapper[4652]: I0216 17:29:35.574690 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerDied","Data":"c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2"} Feb 16 17:29:35.575570 master-0 kubenswrapper[4652]: I0216 17:29:35.574714 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl" event={"ID":"78be97a3-18d1-4962-804f-372974dc8ccc","Type":"ContainerDied","Data":"58bcccf7860e493f67606a412e7a43eefc11f0d58cc1efa1e5e39da4b3054f04"} 
Feb 16 17:29:35.575570 master-0 kubenswrapper[4652]: I0216 17:29:35.574720 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"
Feb 16 17:29:35.599610 master-0 kubenswrapper[4652]: I0216 17:29:35.599541 4652 scope.go:117] "RemoveContainer" containerID="5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd"
Feb 16 17:29:35.600767 master-0 kubenswrapper[4652]: E0216 17:29:35.600679 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd\": container with ID starting with 5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd not found: ID does not exist" containerID="5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd"
Feb 16 17:29:35.601058 master-0 kubenswrapper[4652]: I0216 17:29:35.600777 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd"} err="failed to get container status \"5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd\": rpc error: code = NotFound desc = could not find container \"5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd\": container with ID starting with 5db2accda1febe7a3781694c54c7a20860a29834a8a8682f8a19a5eaabf37ccd not found: ID does not exist"
Feb 16 17:29:35.601145 master-0 kubenswrapper[4652]: I0216 17:29:35.601096 4652 scope.go:117] "RemoveContainer" containerID="c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2"
Feb 16 17:29:35.636627 master-0 kubenswrapper[4652]: I0216 17:29:35.636531 4652 scope.go:117] "RemoveContainer" containerID="c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2"
Feb 16 17:29:35.637518 master-0 kubenswrapper[4652]: E0216 17:29:35.637461 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2\": container with ID starting with c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2 not found: ID does not exist" containerID="c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2"
Feb 16 17:29:35.639140 master-0 kubenswrapper[4652]: I0216 17:29:35.637769 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2"} err="failed to get container status \"c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2\": rpc error: code = NotFound desc = could not find container \"c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2\": container with ID starting with c35244f57d2aa8ccc41277b99392184f82a8403d9e906c7dd7bda19a4088a4a2 not found: ID does not exist"
Feb 16 17:29:35.642245 master-0 kubenswrapper[4652]: I0216 17:29:35.640404 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"]
Feb 16 17:29:35.647943 master-0 kubenswrapper[4652]: I0216 17:29:35.647881 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fc9897cf8-9rjwd"]
pods=["openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"] Feb 16 17:29:35.663823 master-0 kubenswrapper[4652]: I0216 17:29:35.663762 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dcdb76cc6-5rcvl"] Feb 16 17:29:36.483196 master-0 kubenswrapper[4652]: I0216 17:29:36.483112 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42"] Feb 16 17:29:36.483495 master-0 kubenswrapper[4652]: E0216 17:29:36.483436 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" containerName="route-controller-manager" Feb 16 17:29:36.483495 master-0 kubenswrapper[4652]: I0216 17:29:36.483450 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" containerName="route-controller-manager" Feb 16 17:29:36.483495 master-0 kubenswrapper[4652]: E0216 17:29:36.483461 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" Feb 16 17:29:36.483495 master-0 kubenswrapper[4652]: I0216 17:29:36.483468 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" Feb 16 17:29:36.483718 master-0 kubenswrapper[4652]: I0216 17:29:36.483620 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="78be97a3-18d1-4962-804f-372974dc8ccc" containerName="route-controller-manager" Feb 16 17:29:36.483718 master-0 kubenswrapper[4652]: I0216 17:29:36.483635 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a7c783-2e23-4284-b648-147984cf1022" containerName="controller-manager" Feb 16 17:29:36.484099 master-0 kubenswrapper[4652]: I0216 17:29:36.484061 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.487465 master-0 kubenswrapper[4652]: I0216 17:29:36.487397 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:29:36.487465 master-0 kubenswrapper[4652]: I0216 17:29:36.487411 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:29:36.489686 master-0 kubenswrapper[4652]: I0216 17:29:36.489634 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:29:36.489809 master-0 kubenswrapper[4652]: I0216 17:29:36.489753 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bbbf4969b-n5f5w"] Feb 16 17:29:36.489912 master-0 kubenswrapper[4652]: I0216 17:29:36.489822 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8" Feb 16 17:29:36.490686 master-0 kubenswrapper[4652]: I0216 17:29:36.490620 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:29:36.490797 master-0 kubenswrapper[4652]: I0216 17:29:36.490715 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.491003 master-0 kubenswrapper[4652]: I0216 17:29:36.490842 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:29:36.494687 master-0 kubenswrapper[4652]: I0216 17:29:36.494647 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn" Feb 16 17:29:36.494895 master-0 kubenswrapper[4652]: I0216 17:29:36.494870 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:29:36.494949 master-0 kubenswrapper[4652]: I0216 17:29:36.494892 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:29:36.494949 master-0 kubenswrapper[4652]: I0216 17:29:36.494903 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:29:36.495558 master-0 kubenswrapper[4652]: I0216 17:29:36.495455 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:29:36.495558 master-0 kubenswrapper[4652]: I0216 17:29:36.495524 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:29:36.503477 master-0 kubenswrapper[4652]: I0216 17:29:36.503406 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:29:36.507211 master-0 kubenswrapper[4652]: I0216 17:29:36.507077 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42"] Feb 16 17:29:36.512019 master-0 kubenswrapper[4652]: I0216 17:29:36.511938 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bbbf4969b-n5f5w"] Feb 16 17:29:36.584889 master-0 kubenswrapper[4652]: I0216 17:29:36.584797 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88776564-dfac-4306-9371-3c3df012cd33-config\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.585132 master-0 kubenswrapper[4652]: I0216 17:29:36.584920 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gwqp\" (UniqueName: \"kubernetes.io/projected/2f7008ff-aac9-498e-977e-38a8c413173f-kube-api-access-8gwqp\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.585132 master-0 kubenswrapper[4652]: I0216 17:29:36.584958 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f7008ff-aac9-498e-977e-38a8c413173f-proxy-ca-bundles\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.585132 master-0 kubenswrapper[4652]: I0216 17:29:36.585000 4652 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f7008ff-aac9-498e-977e-38a8c413173f-client-ca\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.585132 master-0 kubenswrapper[4652]: I0216 17:29:36.585068 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88776564-dfac-4306-9371-3c3df012cd33-client-ca\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.585309 master-0 kubenswrapper[4652]: I0216 17:29:36.585204 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88776564-dfac-4306-9371-3c3df012cd33-serving-cert\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.585309 master-0 kubenswrapper[4652]: I0216 17:29:36.585239 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f7008ff-aac9-498e-977e-38a8c413173f-config\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.585402 master-0 kubenswrapper[4652]: I0216 17:29:36.585378 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24lkv\" (UniqueName: \"kubernetes.io/projected/88776564-dfac-4306-9371-3c3df012cd33-kube-api-access-24lkv\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.585468 master-0 kubenswrapper[4652]: I0216 17:29:36.585439 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7008ff-aac9-498e-977e-38a8c413173f-serving-cert\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.686722 master-0 kubenswrapper[4652]: I0216 17:29:36.686657 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88776564-dfac-4306-9371-3c3df012cd33-serving-cert\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.687403 master-0 kubenswrapper[4652]: I0216 17:29:36.686932 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f7008ff-aac9-498e-977e-38a8c413173f-config\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.687403 master-0 
Feb 16 17:29:36.687403 master-0 kubenswrapper[4652]: I0216 17:29:36.687047 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24lkv\" (UniqueName: \"kubernetes.io/projected/88776564-dfac-4306-9371-3c3df012cd33-kube-api-access-24lkv\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42"
Feb 16 17:29:36.687403 master-0 kubenswrapper[4652]: I0216 17:29:36.687088 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7008ff-aac9-498e-977e-38a8c413173f-serving-cert\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w"
Feb 16 17:29:36.689000 master-0 kubenswrapper[4652]: I0216 17:29:36.687485 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88776564-dfac-4306-9371-3c3df012cd33-config\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42"
Feb 16 17:29:36.689000 master-0 kubenswrapper[4652]: I0216 17:29:36.687597 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gwqp\" (UniqueName: \"kubernetes.io/projected/2f7008ff-aac9-498e-977e-38a8c413173f-kube-api-access-8gwqp\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w"
Feb 16 17:29:36.689000 master-0 kubenswrapper[4652]: I0216 17:29:36.687635 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f7008ff-aac9-498e-977e-38a8c413173f-proxy-ca-bundles\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w"
Feb 16 17:29:36.689000 master-0 kubenswrapper[4652]: I0216 17:29:36.687695 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f7008ff-aac9-498e-977e-38a8c413173f-client-ca\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w"
Feb 16 17:29:36.689000 master-0 kubenswrapper[4652]: I0216 17:29:36.687716 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88776564-dfac-4306-9371-3c3df012cd33-client-ca\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42"
Feb 16 17:29:36.689000 master-0 kubenswrapper[4652]: I0216 17:29:36.688753 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f7008ff-aac9-498e-977e-38a8c413173f-client-ca\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/88776564-dfac-4306-9371-3c3df012cd33-client-ca\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.689506 master-0 kubenswrapper[4652]: I0216 17:29:36.689006 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f7008ff-aac9-498e-977e-38a8c413173f-config\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.689506 master-0 kubenswrapper[4652]: I0216 17:29:36.689045 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88776564-dfac-4306-9371-3c3df012cd33-config\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.689667 master-0 kubenswrapper[4652]: I0216 17:29:36.689643 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f7008ff-aac9-498e-977e-38a8c413173f-proxy-ca-bundles\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.699297 master-0 kubenswrapper[4652]: I0216 17:29:36.691212 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88776564-dfac-4306-9371-3c3df012cd33-serving-cert\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.699297 master-0 kubenswrapper[4652]: I0216 17:29:36.691432 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f7008ff-aac9-498e-977e-38a8c413173f-serving-cert\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.711338 master-0 kubenswrapper[4652]: I0216 17:29:36.706181 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gwqp\" (UniqueName: \"kubernetes.io/projected/2f7008ff-aac9-498e-977e-38a8c413173f-kube-api-access-8gwqp\") pod \"controller-manager-bbbf4969b-n5f5w\" (UID: \"2f7008ff-aac9-498e-977e-38a8c413173f\") " pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:36.711338 master-0 kubenswrapper[4652]: I0216 17:29:36.710442 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24lkv\" (UniqueName: \"kubernetes.io/projected/88776564-dfac-4306-9371-3c3df012cd33-kube-api-access-24lkv\") pod \"route-controller-manager-84cb5bdf57-zzv42\" (UID: \"88776564-dfac-4306-9371-3c3df012cd33\") " pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.767144 master-0 kubenswrapper[4652]: I0216 17:29:36.767008 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="78be97a3-18d1-4962-804f-372974dc8ccc" path="/var/lib/kubelet/pods/78be97a3-18d1-4962-804f-372974dc8ccc/volumes" Feb 16 17:29:36.768007 master-0 kubenswrapper[4652]: I0216 17:29:36.767943 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a7c783-2e23-4284-b648-147984cf1022" path="/var/lib/kubelet/pods/e1a7c783-2e23-4284-b648-147984cf1022/volumes" Feb 16 17:29:36.805661 master-0 kubenswrapper[4652]: I0216 17:29:36.805607 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:36.826206 master-0 kubenswrapper[4652]: I0216 17:29:36.826146 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:37.224638 master-0 kubenswrapper[4652]: I0216 17:29:37.224539 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42"] Feb 16 17:29:37.225555 master-0 kubenswrapper[4652]: W0216 17:29:37.225500 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88776564_dfac_4306_9371_3c3df012cd33.slice/crio-567c240f97e3211bd9bd3e098f43fc991691cf1ce79a704ee2983c541c162b89 WatchSource:0}: Error finding container 567c240f97e3211bd9bd3e098f43fc991691cf1ce79a704ee2983c541c162b89: Status 404 returned error can't find the container with id 567c240f97e3211bd9bd3e098f43fc991691cf1ce79a704ee2983c541c162b89 Feb 16 17:29:37.305471 master-0 kubenswrapper[4652]: I0216 17:29:37.305421 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bbbf4969b-n5f5w"] Feb 16 17:29:37.311163 master-0 kubenswrapper[4652]: W0216 17:29:37.311116 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f7008ff_aac9_498e_977e_38a8c413173f.slice/crio-70a4bb35d04e96eb8d52f514d36987c677cf8721c7264649662bbe26507a8fb6 WatchSource:0}: Error finding container 70a4bb35d04e96eb8d52f514d36987c677cf8721c7264649662bbe26507a8fb6: Status 404 returned error can't find the container with id 70a4bb35d04e96eb8d52f514d36987c677cf8721c7264649662bbe26507a8fb6 Feb 16 17:29:37.595319 master-0 kubenswrapper[4652]: I0216 17:29:37.595038 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" event={"ID":"2f7008ff-aac9-498e-977e-38a8c413173f","Type":"ContainerStarted","Data":"750399dbf4dcafa8079fb108ad0b21de9cf90a9f143a50972e270285cbc95b2a"} Feb 16 17:29:37.595319 master-0 kubenswrapper[4652]: I0216 17:29:37.595085 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" event={"ID":"2f7008ff-aac9-498e-977e-38a8c413173f","Type":"ContainerStarted","Data":"70a4bb35d04e96eb8d52f514d36987c677cf8721c7264649662bbe26507a8fb6"} Feb 16 17:29:37.597926 master-0 kubenswrapper[4652]: I0216 17:29:37.596117 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:37.597926 master-0 kubenswrapper[4652]: I0216 17:29:37.597066 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" 
event={"ID":"88776564-dfac-4306-9371-3c3df012cd33","Type":"ContainerStarted","Data":"65d1cfc97f577197da4f7a04f1bc6b3022b1b31135a92d1f8a746d18ef884aee"} Feb 16 17:29:37.597926 master-0 kubenswrapper[4652]: I0216 17:29:37.597239 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" event={"ID":"88776564-dfac-4306-9371-3c3df012cd33","Type":"ContainerStarted","Data":"567c240f97e3211bd9bd3e098f43fc991691cf1ce79a704ee2983c541c162b89"} Feb 16 17:29:37.598642 master-0 kubenswrapper[4652]: I0216 17:29:37.598610 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:37.601835 master-0 kubenswrapper[4652]: I0216 17:29:37.601797 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" Feb 16 17:29:37.623206 master-0 kubenswrapper[4652]: I0216 17:29:37.623118 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bbbf4969b-n5f5w" podStartSLOduration=3.623100452 podStartE2EDuration="3.623100452s" podCreationTimestamp="2026-02-16 17:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:29:37.615182638 +0000 UTC m=+335.003351154" watchObservedRunningTime="2026-02-16 17:29:37.623100452 +0000 UTC m=+335.011268968" Feb 16 17:29:37.642943 master-0 kubenswrapper[4652]: I0216 17:29:37.642569 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" podStartSLOduration=3.642551891 podStartE2EDuration="3.642551891s" podCreationTimestamp="2026-02-16 17:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:29:37.639235736 +0000 UTC m=+335.027404252" watchObservedRunningTime="2026-02-16 17:29:37.642551891 +0000 UTC m=+335.030720407" Feb 16 17:29:37.750373 master-0 kubenswrapper[4652]: I0216 17:29:37.749654 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-84cb5bdf57-zzv42" Feb 16 17:29:43.590512 master-0 kubenswrapper[4652]: I0216 17:29:43.590383 4652 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:29:43.590512 master-0 kubenswrapper[4652]: I0216 17:29:43.590483 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:29:46.551667 master-0 kubenswrapper[4652]: I0216 17:29:46.551586 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-82zhm"] Feb 16 17:29:46.552509 master-0 kubenswrapper[4652]: I0216 17:29:46.552475 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.554698 master-0 kubenswrapper[4652]: I0216 17:29:46.554643 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-whlrc" Feb 16 17:29:46.554698 master-0 kubenswrapper[4652]: I0216 17:29:46.554665 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Feb 16 17:29:46.652134 master-0 kubenswrapper[4652]: I0216 17:29:46.652080 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9xq\" (UniqueName: \"kubernetes.io/projected/148c658b-3c73-4848-8fc7-b4853dc67a6a-kube-api-access-4n9xq\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.652134 master-0 kubenswrapper[4652]: I0216 17:29:46.652129 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/148c658b-3c73-4848-8fc7-b4853dc67a6a-ready\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.652457 master-0 kubenswrapper[4652]: I0216 17:29:46.652173 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/148c658b-3c73-4848-8fc7-b4853dc67a6a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.652457 master-0 kubenswrapper[4652]: I0216 17:29:46.652267 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/148c658b-3c73-4848-8fc7-b4853dc67a6a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.753730 master-0 kubenswrapper[4652]: I0216 17:29:46.753668 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/148c658b-3c73-4848-8fc7-b4853dc67a6a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.753982 master-0 kubenswrapper[4652]: I0216 17:29:46.753752 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9xq\" (UniqueName: \"kubernetes.io/projected/148c658b-3c73-4848-8fc7-b4853dc67a6a-kube-api-access-4n9xq\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.753982 master-0 kubenswrapper[4652]: I0216 17:29:46.753788 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/148c658b-3c73-4848-8fc7-b4853dc67a6a-ready\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.753982 master-0 kubenswrapper[4652]: I0216 17:29:46.753796 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/148c658b-3c73-4848-8fc7-b4853dc67a6a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.753982 master-0 kubenswrapper[4652]: I0216 17:29:46.753845 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/148c658b-3c73-4848-8fc7-b4853dc67a6a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.755573 master-0 kubenswrapper[4652]: I0216 17:29:46.754186 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/148c658b-3c73-4848-8fc7-b4853dc67a6a-ready\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.755573 master-0 kubenswrapper[4652]: I0216 17:29:46.754747 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/148c658b-3c73-4848-8fc7-b4853dc67a6a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.781699 master-0 kubenswrapper[4652]: I0216 17:29:46.781611 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9xq\" (UniqueName: \"kubernetes.io/projected/148c658b-3c73-4848-8fc7-b4853dc67a6a-kube-api-access-4n9xq\") pod \"cni-sysctl-allowlist-ds-82zhm\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.878891 master-0 kubenswrapper[4652]: I0216 17:29:46.878767 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:46.912289 master-0 kubenswrapper[4652]: W0216 17:29:46.912226 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod148c658b_3c73_4848_8fc7_b4853dc67a6a.slice/crio-3ad3d5c3e674c2f93c8a895c83b621b16e1977e550e861f65236de4315c86d8d WatchSource:0}: Error finding container 3ad3d5c3e674c2f93c8a895c83b621b16e1977e550e861f65236de4315c86d8d: Status 404 returned error can't find the container with id 3ad3d5c3e674c2f93c8a895c83b621b16e1977e550e861f65236de4315c86d8d Feb 16 17:29:47.663962 master-0 kubenswrapper[4652]: I0216 17:29:47.663883 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" event={"ID":"148c658b-3c73-4848-8fc7-b4853dc67a6a","Type":"ContainerStarted","Data":"1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb"} Feb 16 17:29:47.663962 master-0 kubenswrapper[4652]: I0216 17:29:47.663959 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" event={"ID":"148c658b-3c73-4848-8fc7-b4853dc67a6a","Type":"ContainerStarted","Data":"3ad3d5c3e674c2f93c8a895c83b621b16e1977e550e861f65236de4315c86d8d"} Feb 16 17:29:47.664683 master-0 kubenswrapper[4652]: I0216 17:29:47.664465 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:48.704208 master-0 kubenswrapper[4652]: I0216 17:29:48.704130 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:29:48.733462 master-0 kubenswrapper[4652]: I0216 17:29:48.733363 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" podStartSLOduration=2.733337012 podStartE2EDuration="2.733337012s" podCreationTimestamp="2026-02-16 17:29:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:29:47.702143741 +0000 UTC m=+345.090312297" watchObservedRunningTime="2026-02-16 17:29:48.733337012 +0000 UTC m=+346.121505568" Feb 16 17:29:49.554716 master-0 kubenswrapper[4652]: I0216 17:29:49.554622 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-82zhm"] Feb 16 17:29:50.586353 master-0 kubenswrapper[4652]: I0216 17:29:50.586222 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 16 17:29:50.587283 master-0 kubenswrapper[4652]: I0216 17:29:50.587173 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.590577 master-0 kubenswrapper[4652]: I0216 17:29:50.590518 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-qlqr4" Feb 16 17:29:50.590980 master-0 kubenswrapper[4652]: I0216 17:29:50.590920 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 17:29:50.602638 master-0 kubenswrapper[4652]: I0216 17:29:50.602538 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 16 17:29:50.691574 master-0 kubenswrapper[4652]: I0216 17:29:50.691478 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" podUID="148c658b-3c73-4848-8fc7-b4853dc67a6a" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" gracePeriod=30 Feb 16 17:29:50.718276 master-0 kubenswrapper[4652]: I0216 17:29:50.718166 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.718554 master-0 kubenswrapper[4652]: I0216 17:29:50.718318 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.718554 master-0 kubenswrapper[4652]: I0216 17:29:50.718362 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea1b723d-20b9-45ed-be52-704206ed2afb-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.820050 master-0 kubenswrapper[4652]: I0216 17:29:50.819993 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.820395 master-0 kubenswrapper[4652]: I0216 17:29:50.820067 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.820395 master-0 kubenswrapper[4652]: I0216 17:29:50.820145 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-kubelet-dir\") pod \"installer-2-retry-1-master-0\" (UID: 
\"ea1b723d-20b9-45ed-be52-704206ed2afb\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.820395 master-0 kubenswrapper[4652]: I0216 17:29:50.820238 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea1b723d-20b9-45ed-be52-704206ed2afb-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.820612 master-0 kubenswrapper[4652]: I0216 17:29:50.820297 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-var-lock\") pod \"installer-2-retry-1-master-0\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.836900 master-0 kubenswrapper[4652]: I0216 17:29:50.836767 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea1b723d-20b9-45ed-be52-704206ed2afb-kube-api-access\") pod \"installer-2-retry-1-master-0\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:50.948583 master-0 kubenswrapper[4652]: I0216 17:29:50.948514 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:29:51.356720 master-0 kubenswrapper[4652]: I0216 17:29:51.356658 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 16 17:29:51.359392 master-0 kubenswrapper[4652]: W0216 17:29:51.359296 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podea1b723d_20b9_45ed_be52_704206ed2afb.slice/crio-6076014e57dad34283e2f7535c889f7d07d4e762b772ba43ab9de8213f858880 WatchSource:0}: Error finding container 6076014e57dad34283e2f7535c889f7d07d4e762b772ba43ab9de8213f858880: Status 404 returned error can't find the container with id 6076014e57dad34283e2f7535c889f7d07d4e762b772ba43ab9de8213f858880 Feb 16 17:29:51.702335 master-0 kubenswrapper[4652]: I0216 17:29:51.702273 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"ea1b723d-20b9-45ed-be52-704206ed2afb","Type":"ContainerStarted","Data":"6076014e57dad34283e2f7535c889f7d07d4e762b772ba43ab9de8213f858880"} Feb 16 17:29:52.711207 master-0 kubenswrapper[4652]: I0216 17:29:52.711165 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"ea1b723d-20b9-45ed-be52-704206ed2afb","Type":"ContainerStarted","Data":"89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb"} Feb 16 17:29:52.730483 master-0 kubenswrapper[4652]: I0216 17:29:52.730391 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" podStartSLOduration=2.730339851 podStartE2EDuration="2.730339851s" podCreationTimestamp="2026-02-16 17:29:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:29:52.728097472 +0000 UTC m=+350.116265998" 
watchObservedRunningTime="2026-02-16 17:29:52.730339851 +0000 UTC m=+350.118508377" Feb 16 17:29:55.747859 master-0 kubenswrapper[4652]: I0216 17:29:55.747783 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz"] Feb 16 17:29:55.749451 master-0 kubenswrapper[4652]: I0216 17:29:55.749411 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" Feb 16 17:29:55.756992 master-0 kubenswrapper[4652]: I0216 17:29:55.756929 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz"] Feb 16 17:29:55.903671 master-0 kubenswrapper[4652]: I0216 17:29:55.903610 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v7wn\" (UniqueName: \"kubernetes.io/projected/31d31616-f3cb-4494-b81e-53a474da96b9-kube-api-access-4v7wn\") pod \"multus-admission-controller-74fcb67dd7-hv9kz\" (UID: \"31d31616-f3cb-4494-b81e-53a474da96b9\") " pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" Feb 16 17:29:55.903671 master-0 kubenswrapper[4652]: I0216 17:29:55.903657 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31d31616-f3cb-4494-b81e-53a474da96b9-webhook-certs\") pod \"multus-admission-controller-74fcb67dd7-hv9kz\" (UID: \"31d31616-f3cb-4494-b81e-53a474da96b9\") " pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" Feb 16 17:29:56.004572 master-0 kubenswrapper[4652]: I0216 17:29:56.004461 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v7wn\" (UniqueName: \"kubernetes.io/projected/31d31616-f3cb-4494-b81e-53a474da96b9-kube-api-access-4v7wn\") pod \"multus-admission-controller-74fcb67dd7-hv9kz\" (UID: \"31d31616-f3cb-4494-b81e-53a474da96b9\") " pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" Feb 16 17:29:56.004572 master-0 kubenswrapper[4652]: I0216 17:29:56.004508 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31d31616-f3cb-4494-b81e-53a474da96b9-webhook-certs\") pod \"multus-admission-controller-74fcb67dd7-hv9kz\" (UID: \"31d31616-f3cb-4494-b81e-53a474da96b9\") " pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" Feb 16 17:29:56.007323 master-0 kubenswrapper[4652]: I0216 17:29:56.007289 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/31d31616-f3cb-4494-b81e-53a474da96b9-webhook-certs\") pod \"multus-admission-controller-74fcb67dd7-hv9kz\" (UID: \"31d31616-f3cb-4494-b81e-53a474da96b9\") " pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" Feb 16 17:29:56.020144 master-0 kubenswrapper[4652]: I0216 17:29:56.020086 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v7wn\" (UniqueName: \"kubernetes.io/projected/31d31616-f3cb-4494-b81e-53a474da96b9-kube-api-access-4v7wn\") pod \"multus-admission-controller-74fcb67dd7-hv9kz\" (UID: \"31d31616-f3cb-4494-b81e-53a474da96b9\") " pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" Feb 16 17:29:56.070636 master-0 kubenswrapper[4652]: I0216 17:29:56.070550 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" Feb 16 17:29:56.456846 master-0 kubenswrapper[4652]: I0216 17:29:56.456768 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz"] Feb 16 17:29:56.463572 master-0 kubenswrapper[4652]: W0216 17:29:56.463483 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31d31616_f3cb_4494_b81e_53a474da96b9.slice/crio-901af9f8a7f7576ab61bf353cfdfb930edc95cb39e0ace453990030f539ae18d WatchSource:0}: Error finding container 901af9f8a7f7576ab61bf353cfdfb930edc95cb39e0ace453990030f539ae18d: Status 404 returned error can't find the container with id 901af9f8a7f7576ab61bf353cfdfb930edc95cb39e0ace453990030f539ae18d Feb 16 17:29:56.754693 master-0 kubenswrapper[4652]: I0216 17:29:56.754407 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" event={"ID":"31d31616-f3cb-4494-b81e-53a474da96b9","Type":"ContainerStarted","Data":"06ea4cc8666367ddb285735034c2699d9d7eb350ea6b85f674f3e60d4c16f52b"} Feb 16 17:29:56.754693 master-0 kubenswrapper[4652]: I0216 17:29:56.754459 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" event={"ID":"31d31616-f3cb-4494-b81e-53a474da96b9","Type":"ContainerStarted","Data":"901af9f8a7f7576ab61bf353cfdfb930edc95cb39e0ace453990030f539ae18d"} Feb 16 17:29:56.881023 master-0 kubenswrapper[4652]: E0216 17:29:56.880850 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:29:56.882153 master-0 kubenswrapper[4652]: E0216 17:29:56.882092 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:29:56.883468 master-0 kubenswrapper[4652]: E0216 17:29:56.883425 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:29:56.883563 master-0 kubenswrapper[4652]: E0216 17:29:56.883471 4652 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" podUID="148c658b-3c73-4848-8fc7-b4853dc67a6a" containerName="kube-multus-additional-cni-plugins" Feb 16 17:29:57.762726 master-0 kubenswrapper[4652]: I0216 17:29:57.762653 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" event={"ID":"31d31616-f3cb-4494-b81e-53a474da96b9","Type":"ContainerStarted","Data":"43057a8d387b2cb29ef4e905552036982db2fc2c9b95e3f0cce35ce8c7b902bc"} Feb 16 17:29:57.800720 master-0 
Feb 16 17:29:57.762726 master-0 kubenswrapper[4652]: I0216 17:29:57.762653 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" event={"ID":"31d31616-f3cb-4494-b81e-53a474da96b9","Type":"ContainerStarted","Data":"43057a8d387b2cb29ef4e905552036982db2fc2c9b95e3f0cce35ce8c7b902bc"}
Feb 16 17:29:57.800720 master-0 kubenswrapper[4652]: I0216 17:29:57.800621 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-74fcb67dd7-hv9kz" podStartSLOduration=2.800590224 podStartE2EDuration="2.800590224s" podCreationTimestamp="2026-02-16 17:29:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:29:57.789814419 +0000 UTC m=+355.177982935" watchObservedRunningTime="2026-02-16 17:29:57.800590224 +0000 UTC m=+355.188758780"
Feb 16 17:29:57.840678 master-0 kubenswrapper[4652]: I0216 17:29:57.839753 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"]
Feb 16 17:29:57.840678 master-0 kubenswrapper[4652]: I0216 17:29:57.840062 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerName="multus-admission-controller" containerID="cri-o://c583661e5e33797ca1d13255c948bcaf6544bd8dfe5baf1799d8f2972453dd92" gracePeriod=30
Feb 16 17:29:57.840966 master-0 kubenswrapper[4652]: I0216 17:29:57.840700 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerName="kube-rbac-proxy" containerID="cri-o://50c66ed61b907edd1ceec45a87aa52fa3fd05087ad18e9d732bbeaa727b7aef1" gracePeriod=30
Feb 16 17:29:59.777183 master-0 kubenswrapper[4652]: I0216 17:29:59.777136 4652 generic.go:334] "Generic (PLEG): container finished" podID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerID="50c66ed61b907edd1ceec45a87aa52fa3fd05087ad18e9d732bbeaa727b7aef1" exitCode=0
Feb 16 17:29:59.777183 master-0 kubenswrapper[4652]: I0216 17:29:59.777178 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerDied","Data":"50c66ed61b907edd1ceec45a87aa52fa3fd05087ad18e9d732bbeaa727b7aef1"}
Feb 16 17:30:00.186202 master-0 kubenswrapper[4652]: I0216 17:30:00.186131 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk"]
Feb 16 17:30:00.187207 master-0 kubenswrapper[4652]: I0216 17:30:00.187181 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk"
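Note: "Killing container with a grace period ... gracePeriod=30" above records the standard two-phase stop: the runtime delivers SIGTERM (via the CRI StopContainer call with a 30 s timeout) and escalates to SIGKILL only if the process outlives the grace period. kube-rbac-proxy exits promptly, hence exitCode=0. A runnable sketch of the same contract applied to a local process — illustrative only, not kubelet code:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the StopContainer contract: SIGTERM first,
// SIGKILL once the grace period expires.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	cmd.Process.Signal(syscall.SIGTERM)
	select {
	case <-done:
		fmt.Println("exited within grace period (the exitCode=0 path above)")
	case <-time.After(grace):
		cmd.Process.Kill() // SIGKILL; a container ending this way reports exitCode=137
		<-done
		fmt.Println("killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithGrace(cmd, 30*time.Second) // gracePeriod=30, as in the log
}
```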
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.189768 master-0 kubenswrapper[4652]: I0216 17:30:00.189735 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 17:30:00.192188 master-0 kubenswrapper[4652]: I0216 17:30:00.192132 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-4vsn8" Feb 16 17:30:00.197282 master-0 kubenswrapper[4652]: I0216 17:30:00.197206 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk"] Feb 16 17:30:00.273332 master-0 kubenswrapper[4652]: I0216 17:30:00.273142 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c6e30f-bdad-470f-b310-f1c4ad117dc9-config-volume\") pod \"collect-profiles-29521050-dpzjk\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.273332 master-0 kubenswrapper[4652]: I0216 17:30:00.273277 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzv54\" (UniqueName: \"kubernetes.io/projected/42c6e30f-bdad-470f-b310-f1c4ad117dc9-kube-api-access-dzv54\") pod \"collect-profiles-29521050-dpzjk\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.273624 master-0 kubenswrapper[4652]: I0216 17:30:00.273386 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c6e30f-bdad-470f-b310-f1c4ad117dc9-secret-volume\") pod \"collect-profiles-29521050-dpzjk\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.374892 master-0 kubenswrapper[4652]: I0216 17:30:00.374810 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzv54\" (UniqueName: \"kubernetes.io/projected/42c6e30f-bdad-470f-b310-f1c4ad117dc9-kube-api-access-dzv54\") pod \"collect-profiles-29521050-dpzjk\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.375204 master-0 kubenswrapper[4652]: I0216 17:30:00.374974 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c6e30f-bdad-470f-b310-f1c4ad117dc9-secret-volume\") pod \"collect-profiles-29521050-dpzjk\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.375204 master-0 kubenswrapper[4652]: I0216 17:30:00.375104 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c6e30f-bdad-470f-b310-f1c4ad117dc9-config-volume\") pod \"collect-profiles-29521050-dpzjk\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.377024 master-0 kubenswrapper[4652]: I0216 17:30:00.376965 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c6e30f-bdad-470f-b310-f1c4ad117dc9-config-volume\") pod \"collect-profiles-29521050-dpzjk\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.379291 master-0 kubenswrapper[4652]: I0216 17:30:00.379214 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c6e30f-bdad-470f-b310-f1c4ad117dc9-secret-volume\") pod \"collect-profiles-29521050-dpzjk\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.395880 master-0 kubenswrapper[4652]: I0216 17:30:00.395805 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzv54\" (UniqueName: \"kubernetes.io/projected/42c6e30f-bdad-470f-b310-f1c4ad117dc9-kube-api-access-dzv54\") pod \"collect-profiles-29521050-dpzjk\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.510813 master-0 kubenswrapper[4652]: I0216 17:30:00.510663 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:00.927955 master-0 kubenswrapper[4652]: I0216 17:30:00.927865 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk"] Feb 16 17:30:00.936553 master-0 kubenswrapper[4652]: W0216 17:30:00.936486 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129 WatchSource:0}: Error finding container 478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129: Status 404 returned error can't find the container with id 478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129 Feb 16 17:30:01.791984 master-0 kubenswrapper[4652]: I0216 17:30:01.791878 4652 generic.go:334] "Generic (PLEG): container finished" podID="42c6e30f-bdad-470f-b310-f1c4ad117dc9" containerID="00e536e5ac15c166f17381f2d12f1b92c8875c94578bdd07182180b4c3006573" exitCode=0 Feb 16 17:30:01.791984 master-0 kubenswrapper[4652]: I0216 17:30:01.791963 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" event={"ID":"42c6e30f-bdad-470f-b310-f1c4ad117dc9","Type":"ContainerDied","Data":"00e536e5ac15c166f17381f2d12f1b92c8875c94578bdd07182180b4c3006573"} Feb 16 17:30:01.792950 master-0 kubenswrapper[4652]: I0216 17:30:01.792017 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" event={"ID":"42c6e30f-bdad-470f-b310-f1c4ad117dc9","Type":"ContainerStarted","Data":"478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129"} Feb 16 17:30:02.860737 master-0 kubenswrapper[4652]: E0216 17:30:02.860666 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f624baceef6090453354470adbca23ad324260cb5a3146d25137c56884ff67e\": container with ID starting with 1f624baceef6090453354470adbca23ad324260cb5a3146d25137c56884ff67e not found: ID does not exist" 
containerID="1f624baceef6090453354470adbca23ad324260cb5a3146d25137c56884ff67e" Feb 16 17:30:02.861470 master-0 kubenswrapper[4652]: I0216 17:30:02.860746 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="1f624baceef6090453354470adbca23ad324260cb5a3146d25137c56884ff67e" err="rpc error: code = NotFound desc = could not find container \"1f624baceef6090453354470adbca23ad324260cb5a3146d25137c56884ff67e\": container with ID starting with 1f624baceef6090453354470adbca23ad324260cb5a3146d25137c56884ff67e not found: ID does not exist" Feb 16 17:30:02.861470 master-0 kubenswrapper[4652]: E0216 17:30:02.861275 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3\": container with ID starting with 2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3 not found: ID does not exist" containerID="2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3" Feb 16 17:30:02.861470 master-0 kubenswrapper[4652]: I0216 17:30:02.861324 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3" err="rpc error: code = NotFound desc = could not find container \"2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3\": container with ID starting with 2a582040a6fe07000f964991d6b5ca6719ac040d9faa11252a7ce6bf5da016e3 not found: ID does not exist" Feb 16 17:30:02.861839 master-0 kubenswrapper[4652]: E0216 17:30:02.861796 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad\": container with ID starting with abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad not found: ID does not exist" containerID="abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad" Feb 16 17:30:02.861839 master-0 kubenswrapper[4652]: I0216 17:30:02.861830 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad" err="rpc error: code = NotFound desc = could not find container \"abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad\": container with ID starting with abc0a1f84bde8763c28cad4b7f880d6652bce9442417fc89848d5368bf9822ad not found: ID does not exist" Feb 16 17:30:02.865599 master-0 kubenswrapper[4652]: E0216 17:30:02.865548 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75\": container with ID starting with 37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75 not found: ID does not exist" containerID="37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75" Feb 16 17:30:02.865673 master-0 kubenswrapper[4652]: I0216 17:30:02.865605 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75" err="rpc error: code = NotFound desc = could not find container \"37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75\": container with ID starting with 37060e3d6082551da36ecc80a6060aed182d6f433dd96f7ed17dfdfc6699bb75 not found: ID does not exist" Feb 16 17:30:02.866473 master-0 kubenswrapper[4652]: E0216 17:30:02.866405 4652 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9\": container with ID starting with 7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9 not found: ID does not exist" containerID="7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9" Feb 16 17:30:02.866532 master-0 kubenswrapper[4652]: I0216 17:30:02.866491 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9" err="rpc error: code = NotFound desc = could not find container \"7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9\": container with ID starting with 7d077fd0a75015a74c26fba1db2c5751ce399c05a04ad0dce4ab7670133702c9 not found: ID does not exist" Feb 16 17:30:03.226016 master-0 kubenswrapper[4652]: I0216 17:30:03.225965 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:03.325195 master-0 kubenswrapper[4652]: I0216 17:30:03.325137 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzv54\" (UniqueName: \"kubernetes.io/projected/42c6e30f-bdad-470f-b310-f1c4ad117dc9-kube-api-access-dzv54\") pod \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " Feb 16 17:30:03.325195 master-0 kubenswrapper[4652]: I0216 17:30:03.325202 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c6e30f-bdad-470f-b310-f1c4ad117dc9-config-volume\") pod \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " Feb 16 17:30:03.325482 master-0 kubenswrapper[4652]: I0216 17:30:03.325315 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c6e30f-bdad-470f-b310-f1c4ad117dc9-secret-volume\") pod \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\" (UID: \"42c6e30f-bdad-470f-b310-f1c4ad117dc9\") " Feb 16 17:30:03.325984 master-0 kubenswrapper[4652]: I0216 17:30:03.325927 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42c6e30f-bdad-470f-b310-f1c4ad117dc9-config-volume" (OuterVolumeSpecName: "config-volume") pod "42c6e30f-bdad-470f-b310-f1c4ad117dc9" (UID: "42c6e30f-bdad-470f-b310-f1c4ad117dc9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:03.328201 master-0 kubenswrapper[4652]: I0216 17:30:03.328134 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42c6e30f-bdad-470f-b310-f1c4ad117dc9-kube-api-access-dzv54" (OuterVolumeSpecName: "kube-api-access-dzv54") pod "42c6e30f-bdad-470f-b310-f1c4ad117dc9" (UID: "42c6e30f-bdad-470f-b310-f1c4ad117dc9"). InnerVolumeSpecName "kube-api-access-dzv54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:30:03.328341 master-0 kubenswrapper[4652]: I0216 17:30:03.328313 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42c6e30f-bdad-470f-b310-f1c4ad117dc9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "42c6e30f-bdad-470f-b310-f1c4ad117dc9" (UID: "42c6e30f-bdad-470f-b310-f1c4ad117dc9"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:03.427561 master-0 kubenswrapper[4652]: I0216 17:30:03.427418 4652 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c6e30f-bdad-470f-b310-f1c4ad117dc9-config-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:03.427561 master-0 kubenswrapper[4652]: I0216 17:30:03.427475 4652 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/42c6e30f-bdad-470f-b310-f1c4ad117dc9-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:03.427561 master-0 kubenswrapper[4652]: I0216 17:30:03.427496 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzv54\" (UniqueName: \"kubernetes.io/projected/42c6e30f-bdad-470f-b310-f1c4ad117dc9-kube-api-access-dzv54\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:03.808706 master-0 kubenswrapper[4652]: I0216 17:30:03.808551 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" event={"ID":"42c6e30f-bdad-470f-b310-f1c4ad117dc9","Type":"ContainerDied","Data":"478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129"} Feb 16 17:30:03.808706 master-0 kubenswrapper[4652]: I0216 17:30:03.808600 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129" Feb 16 17:30:03.808706 master-0 kubenswrapper[4652]: I0216 17:30:03.808602 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk" Feb 16 17:30:05.575234 master-0 kubenswrapper[4652]: I0216 17:30:05.575162 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 16 17:30:05.575942 master-0 kubenswrapper[4652]: I0216 17:30:05.575376 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" podUID="ea1b723d-20b9-45ed-be52-704206ed2afb" containerName="installer" containerID="cri-o://89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb" gracePeriod=30 Feb 16 17:30:06.882024 master-0 kubenswrapper[4652]: E0216 17:30:06.881917 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:30:06.883483 master-0 kubenswrapper[4652]: E0216 17:30:06.883408 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:30:06.885198 master-0 kubenswrapper[4652]: E0216 17:30:06.885141 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:30:06.885372 master-0 
Feb 16 17:30:06.885372 master-0 kubenswrapper[4652]: E0216 17:30:06.885198 4652 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" podUID="148c658b-3c73-4848-8fc7-b4853dc67a6a" containerName="kube-multus-additional-cni-plugins"
Feb 16 17:30:07.575598 master-0 kubenswrapper[4652]: I0216 17:30:07.575543 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Feb 16 17:30:07.575877 master-0 kubenswrapper[4652]: E0216 17:30:07.575844 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42c6e30f-bdad-470f-b310-f1c4ad117dc9" containerName="collect-profiles"
Feb 16 17:30:07.575877 master-0 kubenswrapper[4652]: I0216 17:30:07.575866 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="42c6e30f-bdad-470f-b310-f1c4ad117dc9" containerName="collect-profiles"
Feb 16 17:30:07.576065 master-0 kubenswrapper[4652]: I0216 17:30:07.576041 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="42c6e30f-bdad-470f-b310-f1c4ad117dc9" containerName="collect-profiles"
Feb 16 17:30:07.576567 master-0 kubenswrapper[4652]: I0216 17:30:07.576541 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 17:30:07.589969 master-0 kubenswrapper[4652]: I0216 17:30:07.589904 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Feb 16 17:30:07.688926 master-0 kubenswrapper[4652]: I0216 17:30:07.688851 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 17:30:07.689154 master-0 kubenswrapper[4652]: I0216 17:30:07.688958 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963634f3-94ac-4b84-a92e-6224fb4692e0-kube-api-access\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 17:30:07.689154 master-0 kubenswrapper[4652]: I0216 17:30:07.689018 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-var-lock\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 17:30:07.791432 master-0 kubenswrapper[4652]: I0216 17:30:07.791301 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Feb 16 17:30:07.791679 master-0 kubenswrapper[4652]: I0216 17:30:07.791458 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963634f3-94ac-4b84-a92e-6224fb4692e0-kube-api-access\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0"
\"kubernetes.io/projected/963634f3-94ac-4b84-a92e-6224fb4692e0-kube-api-access\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 17:30:07.791679 master-0 kubenswrapper[4652]: I0216 17:30:07.791505 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-var-lock\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 17:30:07.791679 master-0 kubenswrapper[4652]: I0216 17:30:07.791652 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-var-lock\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 17:30:07.791815 master-0 kubenswrapper[4652]: I0216 17:30:07.791713 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 17:30:07.808915 master-0 kubenswrapper[4652]: I0216 17:30:07.808873 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963634f3-94ac-4b84-a92e-6224fb4692e0-kube-api-access\") pod \"installer-3-master-0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 17:30:07.901262 master-0 kubenswrapper[4652]: I0216 17:30:07.901102 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 17:30:08.327327 master-0 kubenswrapper[4652]: I0216 17:30:08.326304 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Feb 16 17:30:08.842170 master-0 kubenswrapper[4652]: I0216 17:30:08.842108 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"963634f3-94ac-4b84-a92e-6224fb4692e0","Type":"ContainerStarted","Data":"516af62d0b3712a05a55e5ee2969a0158700e9b7353f175be8fef6deb2c5f81c"} Feb 16 17:30:08.842432 master-0 kubenswrapper[4652]: I0216 17:30:08.842170 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"963634f3-94ac-4b84-a92e-6224fb4692e0","Type":"ContainerStarted","Data":"0de8838e295c365cae707645cf2aee2e79680709f82ff273615ae7ae657d534c"} Feb 16 17:30:08.864184 master-0 kubenswrapper[4652]: I0216 17:30:08.864097 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=1.864078944 podStartE2EDuration="1.864078944s" podCreationTimestamp="2026-02-16 17:30:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:30:08.860689584 +0000 UTC m=+366.248858110" watchObservedRunningTime="2026-02-16 17:30:08.864078944 +0000 UTC m=+366.252247460" Feb 16 17:30:13.236151 master-0 kubenswrapper[4652]: I0216 17:30:13.236023 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-846d98f6c-cnjjz"] Feb 16 17:30:13.238475 master-0 kubenswrapper[4652]: I0216 17:30:13.238450 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.254636 master-0 kubenswrapper[4652]: I0216 17:30:13.254583 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-846d98f6c-cnjjz"] Feb 16 17:30:13.379397 master-0 kubenswrapper[4652]: I0216 17:30:13.379345 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-trusted-ca-bundle\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.379397 master-0 kubenswrapper[4652]: I0216 17:30:13.379406 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-service-ca\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.379654 master-0 kubenswrapper[4652]: I0216 17:30:13.379431 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-oauth-config\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.379654 master-0 kubenswrapper[4652]: I0216 17:30:13.379453 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-console-config\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.379654 master-0 kubenswrapper[4652]: I0216 17:30:13.379485 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4gvm\" (UniqueName: \"kubernetes.io/projected/d959a347-b11d-4a51-9729-26b1b7842cc9-kube-api-access-p4gvm\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.379654 master-0 kubenswrapper[4652]: I0216 17:30:13.379509 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-serving-cert\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.379654 master-0 kubenswrapper[4652]: I0216 17:30:13.379540 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-oauth-serving-cert\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.480659 master-0 kubenswrapper[4652]: I0216 17:30:13.480603 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4gvm\" (UniqueName: \"kubernetes.io/projected/d959a347-b11d-4a51-9729-26b1b7842cc9-kube-api-access-p4gvm\") pod \"console-846d98f6c-cnjjz\" (UID: 
\"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.480659 master-0 kubenswrapper[4652]: I0216 17:30:13.480663 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-serving-cert\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.481082 master-0 kubenswrapper[4652]: I0216 17:30:13.481046 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-oauth-serving-cert\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.481353 master-0 kubenswrapper[4652]: I0216 17:30:13.481335 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-trusted-ca-bundle\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.481532 master-0 kubenswrapper[4652]: I0216 17:30:13.481512 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-service-ca\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.481661 master-0 kubenswrapper[4652]: I0216 17:30:13.481644 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-oauth-config\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.481780 master-0 kubenswrapper[4652]: I0216 17:30:13.481763 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-console-config\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.482131 master-0 kubenswrapper[4652]: I0216 17:30:13.482091 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-oauth-serving-cert\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.482463 master-0 kubenswrapper[4652]: I0216 17:30:13.482423 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-trusted-ca-bundle\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.483201 master-0 kubenswrapper[4652]: I0216 17:30:13.482649 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-console-config\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.483201 master-0 kubenswrapper[4652]: I0216 17:30:13.482755 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-service-ca\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.484107 master-0 kubenswrapper[4652]: I0216 17:30:13.484076 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-serving-cert\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.484612 master-0 kubenswrapper[4652]: I0216 17:30:13.484562 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-oauth-config\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.497329 master-0 kubenswrapper[4652]: I0216 17:30:13.497199 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4gvm\" (UniqueName: \"kubernetes.io/projected/d959a347-b11d-4a51-9729-26b1b7842cc9-kube-api-access-p4gvm\") pod \"console-846d98f6c-cnjjz\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.557288 master-0 kubenswrapper[4652]: I0216 17:30:13.557209 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:13.590590 master-0 kubenswrapper[4652]: I0216 17:30:13.590523 4652 patch_prober.go:28] interesting pod/machine-config-daemon-98q6v container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 17:30:13.590590 master-0 kubenswrapper[4652]: I0216 17:30:13.590591 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 17:30:13.590855 master-0 kubenswrapper[4652]: I0216 17:30:13.590634 4652 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" Feb 16 17:30:13.591192 master-0 kubenswrapper[4652]: I0216 17:30:13.591156 4652 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e8ee7d5680ba534cacc2ad533bdaeb382e6dc0b07563496fc2774a01de14fd2"} pod="openshift-machine-config-operator/machine-config-daemon-98q6v" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 17:30:13.591233 master-0 kubenswrapper[4652]: I0216 17:30:13.591217 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" podUID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerName="machine-config-daemon" containerID="cri-o://8e8ee7d5680ba534cacc2ad533bdaeb382e6dc0b07563496fc2774a01de14fd2" gracePeriod=600 Feb 16 17:30:13.890617 master-0 kubenswrapper[4652]: I0216 17:30:13.890541 4652 generic.go:334] "Generic (PLEG): container finished" podID="648abb6c-9c81-4e5c-b5f1-3b7eb254f743" containerID="8e8ee7d5680ba534cacc2ad533bdaeb382e6dc0b07563496fc2774a01de14fd2" exitCode=0 Feb 16 17:30:13.890870 master-0 kubenswrapper[4652]: I0216 17:30:13.890603 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerDied","Data":"8e8ee7d5680ba534cacc2ad533bdaeb382e6dc0b07563496fc2774a01de14fd2"} Feb 16 17:30:13.890870 master-0 kubenswrapper[4652]: I0216 17:30:13.890707 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-98q6v" event={"ID":"648abb6c-9c81-4e5c-b5f1-3b7eb254f743","Type":"ContainerStarted","Data":"776226b4351922cfe7c49f76e87dbc13de17e84656035775684b668abffa7089"} Feb 16 17:30:13.890870 master-0 kubenswrapper[4652]: I0216 17:30:13.890742 4652 scope.go:117] "RemoveContainer" containerID="36ecb052054e20edf5b4f7071d4d0da2f770afa6a294fc15e380d4c171f3c6ba" Feb 16 17:30:14.005037 master-0 kubenswrapper[4652]: I0216 17:30:14.004982 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-846d98f6c-cnjjz"] Feb 16 17:30:14.900431 master-0 kubenswrapper[4652]: I0216 17:30:14.900371 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846d98f6c-cnjjz" 
event={"ID":"d959a347-b11d-4a51-9729-26b1b7842cc9","Type":"ContainerStarted","Data":"b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd"} Feb 16 17:30:14.900431 master-0 kubenswrapper[4652]: I0216 17:30:14.900416 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846d98f6c-cnjjz" event={"ID":"d959a347-b11d-4a51-9729-26b1b7842cc9","Type":"ContainerStarted","Data":"7762b4372e40eae8aadbad57c09e3fe8177bcb14ae84ec8f433a960b14f37a7c"} Feb 16 17:30:14.920314 master-0 kubenswrapper[4652]: I0216 17:30:14.920196 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-846d98f6c-cnjjz" podStartSLOduration=1.920171373 podStartE2EDuration="1.920171373s" podCreationTimestamp="2026-02-16 17:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:30:14.915606913 +0000 UTC m=+372.303775459" watchObservedRunningTime="2026-02-16 17:30:14.920171373 +0000 UTC m=+372.308339909" Feb 16 17:30:16.882495 master-0 kubenswrapper[4652]: E0216 17:30:16.882380 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:30:16.884293 master-0 kubenswrapper[4652]: E0216 17:30:16.884079 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:30:16.886093 master-0 kubenswrapper[4652]: E0216 17:30:16.885633 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 17:30:16.886093 master-0 kubenswrapper[4652]: E0216 17:30:16.885669 4652 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" podUID="148c658b-3c73-4848-8fc7-b4853dc67a6a" containerName="kube-multus-additional-cni-plugins" Feb 16 17:30:19.722852 master-0 kubenswrapper[4652]: I0216 17:30:19.722795 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"] Feb 16 17:30:20.400460 master-0 kubenswrapper[4652]: I0216 17:30:20.400366 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 16 17:30:20.402241 master-0 kubenswrapper[4652]: I0216 17:30:20.402181 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.406341 master-0 kubenswrapper[4652]: I0216 17:30:20.406290 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 17:30:20.406341 master-0 kubenswrapper[4652]: I0216 17:30:20.406308 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-z9qtm" Feb 16 17:30:20.413814 master-0 kubenswrapper[4652]: I0216 17:30:20.413761 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 16 17:30:20.497547 master-0 kubenswrapper[4652]: I0216 17:30:20.497493 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d172f32-1e6a-4002-b856-a10a5449a643-kube-api-access\") pod \"installer-5-master-0\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.497847 master-0 kubenswrapper[4652]: I0216 17:30:20.497781 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.498004 master-0 kubenswrapper[4652]: I0216 17:30:20.497971 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-var-lock\") pod \"installer-5-master-0\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.599557 master-0 kubenswrapper[4652]: I0216 17:30:20.599489 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.599923 master-0 kubenswrapper[4652]: I0216 17:30:20.599583 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-var-lock\") pod \"installer-5-master-0\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.599923 master-0 kubenswrapper[4652]: I0216 17:30:20.599592 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.599923 master-0 kubenswrapper[4652]: I0216 17:30:20.599628 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d172f32-1e6a-4002-b856-a10a5449a643-kube-api-access\") pod \"installer-5-master-0\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.599923 master-0 kubenswrapper[4652]: I0216 17:30:20.599708 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-var-lock\") pod \"installer-5-master-0\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.622489 master-0 kubenswrapper[4652]: I0216 17:30:20.622424 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d172f32-1e6a-4002-b856-a10a5449a643-kube-api-access\") pod \"installer-5-master-0\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.729597 master-0 kubenswrapper[4652]: I0216 17:30:20.729464 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:30:20.808336 master-0 kubenswrapper[4652]: I0216 17:30:20.808286 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-82zhm_148c658b-3c73-4848-8fc7-b4853dc67a6a/kube-multus-additional-cni-plugins/0.log" Feb 16 17:30:20.808520 master-0 kubenswrapper[4652]: I0216 17:30:20.808401 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:30:20.892650 master-0 kubenswrapper[4652]: E0216 17:30:20.892591 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:20.904171 master-0 kubenswrapper[4652]: I0216 17:30:20.904117 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/148c658b-3c73-4848-8fc7-b4853dc67a6a-cni-sysctl-allowlist\") pod \"148c658b-3c73-4848-8fc7-b4853dc67a6a\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " Feb 16 17:30:20.904379 master-0 kubenswrapper[4652]: I0216 17:30:20.904210 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/148c658b-3c73-4848-8fc7-b4853dc67a6a-tuning-conf-dir\") pod \"148c658b-3c73-4848-8fc7-b4853dc67a6a\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " Feb 16 17:30:20.904379 master-0 kubenswrapper[4652]: I0216 17:30:20.904299 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9xq\" (UniqueName: \"kubernetes.io/projected/148c658b-3c73-4848-8fc7-b4853dc67a6a-kube-api-access-4n9xq\") pod \"148c658b-3c73-4848-8fc7-b4853dc67a6a\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " Feb 16 17:30:20.904379 master-0 kubenswrapper[4652]: I0216 17:30:20.904309 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/148c658b-3c73-4848-8fc7-b4853dc67a6a-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "148c658b-3c73-4848-8fc7-b4853dc67a6a" (UID: "148c658b-3c73-4848-8fc7-b4853dc67a6a"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:30:20.904379 master-0 kubenswrapper[4652]: I0216 17:30:20.904346 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/148c658b-3c73-4848-8fc7-b4853dc67a6a-ready\") pod \"148c658b-3c73-4848-8fc7-b4853dc67a6a\" (UID: \"148c658b-3c73-4848-8fc7-b4853dc67a6a\") " Feb 16 17:30:20.904778 master-0 kubenswrapper[4652]: I0216 17:30:20.904744 4652 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/148c658b-3c73-4848-8fc7-b4853dc67a6a-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:20.904923 master-0 kubenswrapper[4652]: I0216 17:30:20.904832 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/148c658b-3c73-4848-8fc7-b4853dc67a6a-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "148c658b-3c73-4848-8fc7-b4853dc67a6a" (UID: "148c658b-3c73-4848-8fc7-b4853dc67a6a"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:20.905634 master-0 kubenswrapper[4652]: I0216 17:30:20.905047 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/148c658b-3c73-4848-8fc7-b4853dc67a6a-ready" (OuterVolumeSpecName: "ready") pod "148c658b-3c73-4848-8fc7-b4853dc67a6a" (UID: "148c658b-3c73-4848-8fc7-b4853dc67a6a"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:30:20.907223 master-0 kubenswrapper[4652]: I0216 17:30:20.907187 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/148c658b-3c73-4848-8fc7-b4853dc67a6a-kube-api-access-4n9xq" (OuterVolumeSpecName: "kube-api-access-4n9xq") pod "148c658b-3c73-4848-8fc7-b4853dc67a6a" (UID: "148c658b-3c73-4848-8fc7-b4853dc67a6a"). InnerVolumeSpecName "kube-api-access-4n9xq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:30:20.941791 master-0 kubenswrapper[4652]: I0216 17:30:20.941746 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-82zhm_148c658b-3c73-4848-8fc7-b4853dc67a6a/kube-multus-additional-cni-plugins/0.log" Feb 16 17:30:20.941990 master-0 kubenswrapper[4652]: I0216 17:30:20.941796 4652 generic.go:334] "Generic (PLEG): container finished" podID="148c658b-3c73-4848-8fc7-b4853dc67a6a" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" exitCode=137 Feb 16 17:30:20.941990 master-0 kubenswrapper[4652]: I0216 17:30:20.941820 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" event={"ID":"148c658b-3c73-4848-8fc7-b4853dc67a6a","Type":"ContainerDied","Data":"1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb"} Feb 16 17:30:20.941990 master-0 kubenswrapper[4652]: I0216 17:30:20.941842 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" event={"ID":"148c658b-3c73-4848-8fc7-b4853dc67a6a","Type":"ContainerDied","Data":"3ad3d5c3e674c2f93c8a895c83b621b16e1977e550e861f65236de4315c86d8d"} Feb 16 17:30:20.941990 master-0 kubenswrapper[4652]: I0216 17:30:20.941860 4652 scope.go:117] "RemoveContainer" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" Feb 16 17:30:20.941990 master-0 kubenswrapper[4652]: I0216 17:30:20.941900 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-82zhm" Feb 16 17:30:20.958286 master-0 kubenswrapper[4652]: I0216 17:30:20.958222 4652 scope.go:117] "RemoveContainer" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" Feb 16 17:30:20.958648 master-0 kubenswrapper[4652]: E0216 17:30:20.958619 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb\": container with ID starting with 1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb not found: ID does not exist" containerID="1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb" Feb 16 17:30:20.958712 master-0 kubenswrapper[4652]: I0216 17:30:20.958651 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb"} err="failed to get container status \"1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb\": rpc error: code = NotFound desc = could not find container \"1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb\": container with ID starting with 1105cd21e860441b130f8b3e33943545c88a39b43513241493df80710044aacb not found: ID does not exist" Feb 16 17:30:20.988467 master-0 kubenswrapper[4652]: I0216 17:30:20.988396 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-82zhm"] Feb 16 17:30:20.996414 master-0 kubenswrapper[4652]: I0216 17:30:20.996370 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-82zhm"] Feb 16 17:30:21.006240 master-0 kubenswrapper[4652]: I0216 17:30:21.006196 4652 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/148c658b-3c73-4848-8fc7-b4853dc67a6a-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:21.006240 master-0 kubenswrapper[4652]: I0216 17:30:21.006226 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9xq\" (UniqueName: \"kubernetes.io/projected/148c658b-3c73-4848-8fc7-b4853dc67a6a-kube-api-access-4n9xq\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:21.006240 master-0 kubenswrapper[4652]: I0216 17:30:21.006236 4652 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/148c658b-3c73-4848-8fc7-b4853dc67a6a-ready\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:21.124169 master-0 kubenswrapper[4652]: I0216 17:30:21.122481 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Feb 16 17:30:21.948616 master-0 kubenswrapper[4652]: I0216 17:30:21.948558 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"8d172f32-1e6a-4002-b856-a10a5449a643","Type":"ContainerStarted","Data":"4e0e8938969c332e02a547abb5dc6be217a78e6c962de4db57a5a1d03c8cb4b6"} Feb 16 17:30:21.948616 master-0 kubenswrapper[4652]: I0216 17:30:21.948629 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"8d172f32-1e6a-4002-b856-a10a5449a643","Type":"ContainerStarted","Data":"f777180be809bef07385fe91421e7b3ad17717f54df53463edba1d867be6cec9"} Feb 16 17:30:21.969612 master-0 kubenswrapper[4652]: I0216 17:30:21.969515 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=1.969498265 podStartE2EDuration="1.969498265s" podCreationTimestamp="2026-02-16 17:30:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:30:21.96664699 +0000 UTC m=+379.354815506" watchObservedRunningTime="2026-02-16 17:30:21.969498265 +0000 UTC m=+379.357666781" Feb 16 17:30:22.494257 master-0 kubenswrapper[4652]: E0216 17:30:22.494177 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod963634f3_94ac_4b84_a92e_6224fb4692e0.slice/crio-516af62d0b3712a05a55e5ee2969a0158700e9b7353f175be8fef6deb2c5f81c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod648abb6c_9c81_4e5c_b5f1_3b7eb254f743.slice/crio-8e8ee7d5680ba534cacc2ad533bdaeb382e6dc0b07563496fc2774a01de14fd2.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:22.757816 master-0 kubenswrapper[4652]: I0216 17:30:22.757697 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="148c658b-3c73-4848-8fc7-b4853dc67a6a" path="/var/lib/kubelet/pods/148c658b-3c73-4848-8fc7-b4853dc67a6a/volumes" Feb 16 17:30:22.826479 master-0 kubenswrapper[4652]: I0216 17:30:22.826444 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-retry-1-master-0_ea1b723d-20b9-45ed-be52-704206ed2afb/installer/0.log" Feb 16 17:30:22.826686 master-0 
kubenswrapper[4652]: I0216 17:30:22.826514 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:30:22.937700 master-0 kubenswrapper[4652]: I0216 17:30:22.937645 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-var-lock\") pod \"ea1b723d-20b9-45ed-be52-704206ed2afb\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " Feb 16 17:30:22.937700 master-0 kubenswrapper[4652]: I0216 17:30:22.937687 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-kubelet-dir\") pod \"ea1b723d-20b9-45ed-be52-704206ed2afb\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " Feb 16 17:30:22.938048 master-0 kubenswrapper[4652]: I0216 17:30:22.937756 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea1b723d-20b9-45ed-be52-704206ed2afb-kube-api-access\") pod \"ea1b723d-20b9-45ed-be52-704206ed2afb\" (UID: \"ea1b723d-20b9-45ed-be52-704206ed2afb\") " Feb 16 17:30:22.938048 master-0 kubenswrapper[4652]: I0216 17:30:22.937747 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-var-lock" (OuterVolumeSpecName: "var-lock") pod "ea1b723d-20b9-45ed-be52-704206ed2afb" (UID: "ea1b723d-20b9-45ed-be52-704206ed2afb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:30:22.938048 master-0 kubenswrapper[4652]: I0216 17:30:22.937824 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ea1b723d-20b9-45ed-be52-704206ed2afb" (UID: "ea1b723d-20b9-45ed-be52-704206ed2afb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:30:22.938302 master-0 kubenswrapper[4652]: I0216 17:30:22.938278 4652 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:22.938380 master-0 kubenswrapper[4652]: I0216 17:30:22.938303 4652 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ea1b723d-20b9-45ed-be52-704206ed2afb-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:22.947023 master-0 kubenswrapper[4652]: I0216 17:30:22.946974 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1b723d-20b9-45ed-be52-704206ed2afb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ea1b723d-20b9-45ed-be52-704206ed2afb" (UID: "ea1b723d-20b9-45ed-be52-704206ed2afb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:30:22.970567 master-0 kubenswrapper[4652]: I0216 17:30:22.967852 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-retry-1-master-0_ea1b723d-20b9-45ed-be52-704206ed2afb/installer/0.log" Feb 16 17:30:22.970567 master-0 kubenswrapper[4652]: I0216 17:30:22.967910 4652 generic.go:334] "Generic (PLEG): container finished" podID="ea1b723d-20b9-45ed-be52-704206ed2afb" containerID="89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb" exitCode=1 Feb 16 17:30:22.970567 master-0 kubenswrapper[4652]: I0216 17:30:22.967999 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"ea1b723d-20b9-45ed-be52-704206ed2afb","Type":"ContainerDied","Data":"89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb"} Feb 16 17:30:22.970567 master-0 kubenswrapper[4652]: I0216 17:30:22.968070 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" event={"ID":"ea1b723d-20b9-45ed-be52-704206ed2afb","Type":"ContainerDied","Data":"6076014e57dad34283e2f7535c889f7d07d4e762b772ba43ab9de8213f858880"} Feb 16 17:30:22.970567 master-0 kubenswrapper[4652]: I0216 17:30:22.968097 4652 scope.go:117] "RemoveContainer" containerID="89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb" Feb 16 17:30:22.970567 master-0 kubenswrapper[4652]: I0216 17:30:22.968024 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-retry-1-master-0" Feb 16 17:30:23.007531 master-0 kubenswrapper[4652]: I0216 17:30:23.007479 4652 scope.go:117] "RemoveContainer" containerID="89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb" Feb 16 17:30:23.009479 master-0 kubenswrapper[4652]: E0216 17:30:23.009402 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb\": container with ID starting with 89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb not found: ID does not exist" containerID="89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb" Feb 16 17:30:23.009479 master-0 kubenswrapper[4652]: I0216 17:30:23.009440 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb"} err="failed to get container status \"89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb\": rpc error: code = NotFound desc = could not find container \"89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb\": container with ID starting with 89704dfdcf950e4a8181992f9c9e118dc008ba4d29ae1a5ad40f0902457ec3eb not found: ID does not exist" Feb 16 17:30:23.032157 master-0 kubenswrapper[4652]: I0216 17:30:23.032107 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 16 17:30:23.039899 master-0 kubenswrapper[4652]: I0216 17:30:23.039857 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ea1b723d-20b9-45ed-be52-704206ed2afb-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:23.044349 master-0 kubenswrapper[4652]: I0216 17:30:23.044283 4652 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-2-retry-1-master-0"] Feb 16 17:30:23.558290 master-0 kubenswrapper[4652]: I0216 17:30:23.558229 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:23.558290 master-0 kubenswrapper[4652]: I0216 17:30:23.558297 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:23.563568 master-0 kubenswrapper[4652]: I0216 17:30:23.563534 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:23.988017 master-0 kubenswrapper[4652]: I0216 17:30:23.987972 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:30:24.052705 master-0 kubenswrapper[4652]: I0216 17:30:24.052640 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-599b567ff7-nrcpr"] Feb 16 17:30:24.727642 master-0 kubenswrapper[4652]: E0216 17:30:24.727581 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:24.753296 master-0 kubenswrapper[4652]: I0216 17:30:24.753234 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea1b723d-20b9-45ed-be52-704206ed2afb" path="/var/lib/kubelet/pods/ea1b723d-20b9-45ed-be52-704206ed2afb/volumes" Feb 16 17:30:28.009389 master-0 kubenswrapper[4652]: I0216 17:30:28.009297 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-6d678b8d67-5n9cl_0d980a9a-2574-41b9-b970-0718cd97c8cd/multus-admission-controller/2.log" Feb 16 17:30:28.009389 master-0 kubenswrapper[4652]: I0216 17:30:28.009367 4652 generic.go:334] "Generic (PLEG): container finished" podID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerID="c583661e5e33797ca1d13255c948bcaf6544bd8dfe5baf1799d8f2972453dd92" exitCode=137 Feb 16 17:30:28.009389 master-0 kubenswrapper[4652]: I0216 17:30:28.009405 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerDied","Data":"c583661e5e33797ca1d13255c948bcaf6544bd8dfe5baf1799d8f2972453dd92"} Feb 16 17:30:28.653583 master-0 kubenswrapper[4652]: I0216 17:30:28.653555 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-6d678b8d67-5n9cl_0d980a9a-2574-41b9-b970-0718cd97c8cd/multus-admission-controller/2.log" Feb 16 17:30:28.653865 master-0 kubenswrapper[4652]: I0216 17:30:28.653850 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:30:28.732697 master-0 kubenswrapper[4652]: I0216 17:30:28.732614 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") pod \"0d980a9a-2574-41b9-b970-0718cd97c8cd\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " Feb 16 17:30:28.732928 master-0 kubenswrapper[4652]: I0216 17:30:28.732849 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") pod \"0d980a9a-2574-41b9-b970-0718cd97c8cd\" (UID: \"0d980a9a-2574-41b9-b970-0718cd97c8cd\") " Feb 16 17:30:28.736487 master-0 kubenswrapper[4652]: I0216 17:30:28.736437 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q" (OuterVolumeSpecName: "kube-api-access-t7l6q") pod "0d980a9a-2574-41b9-b970-0718cd97c8cd" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd"). InnerVolumeSpecName "kube-api-access-t7l6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:30:28.736623 master-0 kubenswrapper[4652]: I0216 17:30:28.736587 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0d980a9a-2574-41b9-b970-0718cd97c8cd" (UID: "0d980a9a-2574-41b9-b970-0718cd97c8cd"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:28.836216 master-0 kubenswrapper[4652]: I0216 17:30:28.836144 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7l6q\" (UniqueName: \"kubernetes.io/projected/0d980a9a-2574-41b9-b970-0718cd97c8cd-kube-api-access-t7l6q\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:28.836216 master-0 kubenswrapper[4652]: I0216 17:30:28.836212 4652 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0d980a9a-2574-41b9-b970-0718cd97c8cd-webhook-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:29.019047 master-0 kubenswrapper[4652]: I0216 17:30:29.018985 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-6d678b8d67-5n9cl_0d980a9a-2574-41b9-b970-0718cd97c8cd/multus-admission-controller/2.log" Feb 16 17:30:29.019047 master-0 kubenswrapper[4652]: I0216 17:30:29.019049 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" event={"ID":"0d980a9a-2574-41b9-b970-0718cd97c8cd","Type":"ContainerDied","Data":"5bcdb4a4da23a77b6ec44653d6844bca70f1bf2c4d2302a5c6b717ee9b4a0958"} Feb 16 17:30:29.020077 master-0 kubenswrapper[4652]: I0216 17:30:29.019093 4652 scope.go:117] "RemoveContainer" containerID="50c66ed61b907edd1ceec45a87aa52fa3fd05087ad18e9d732bbeaa727b7aef1" Feb 16 17:30:29.020077 master-0 kubenswrapper[4652]: I0216 17:30:29.019222 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-6d678b8d67-5n9cl" Feb 16 17:30:29.038349 master-0 kubenswrapper[4652]: I0216 17:30:29.038289 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"] Feb 16 17:30:29.042034 master-0 kubenswrapper[4652]: I0216 17:30:29.041987 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-6d678b8d67-5n9cl"] Feb 16 17:30:29.044952 master-0 kubenswrapper[4652]: I0216 17:30:29.044906 4652 scope.go:117] "RemoveContainer" containerID="c583661e5e33797ca1d13255c948bcaf6544bd8dfe5baf1799d8f2972453dd92" Feb 16 17:30:30.753357 master-0 kubenswrapper[4652]: I0216 17:30:30.753293 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" path="/var/lib/kubelet/pods/0d980a9a-2574-41b9-b970-0718cd97c8cd/volumes" Feb 16 17:30:32.700329 master-0 kubenswrapper[4652]: E0216 17:30:32.700277 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:37.646536 master-0 kubenswrapper[4652]: E0216 17:30:37.646446 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:37.653915 master-0 kubenswrapper[4652]: E0216 17:30:37.653823 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:39.783017 master-0 kubenswrapper[4652]: E0216 17:30:39.782973 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:42.731900 master-0 kubenswrapper[4652]: E0216 17:30:42.731797 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:44.758238 master-0 kubenswrapper[4652]: I0216 17:30:44.758135 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" containerName="oauth-openshift" containerID="cri-o://24810449690d5775751417ebd8694b4596d66b3a929f6ba3ed10b3bf89940dc8" gracePeriod=15 Feb 16 17:30:45.129666 master-0 kubenswrapper[4652]: I0216 17:30:45.129608 4652 generic.go:334] "Generic (PLEG): 
container finished" podID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" containerID="24810449690d5775751417ebd8694b4596d66b3a929f6ba3ed10b3bf89940dc8" exitCode=0 Feb 16 17:30:45.129666 master-0 kubenswrapper[4652]: I0216 17:30:45.129650 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" event={"ID":"2be9d55c-a4ec-48cd-93d2-0a1dced745a8","Type":"ContainerDied","Data":"24810449690d5775751417ebd8694b4596d66b3a929f6ba3ed10b3bf89940dc8"} Feb 16 17:30:45.218202 master-0 kubenswrapper[4652]: I0216 17:30:45.218157 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:30:45.257926 master-0 kubenswrapper[4652]: I0216 17:30:45.257871 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7789f6f4b-bbcrd"] Feb 16 17:30:45.258190 master-0 kubenswrapper[4652]: E0216 17:30:45.258163 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" containerName="oauth-openshift" Feb 16 17:30:45.258190 master-0 kubenswrapper[4652]: I0216 17:30:45.258180 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" containerName="oauth-openshift" Feb 16 17:30:45.258313 master-0 kubenswrapper[4652]: E0216 17:30:45.258198 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerName="multus-admission-controller" Feb 16 17:30:45.258313 master-0 kubenswrapper[4652]: I0216 17:30:45.258206 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerName="multus-admission-controller" Feb 16 17:30:45.258313 master-0 kubenswrapper[4652]: E0216 17:30:45.258216 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1b723d-20b9-45ed-be52-704206ed2afb" containerName="installer" Feb 16 17:30:45.258313 master-0 kubenswrapper[4652]: I0216 17:30:45.258223 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1b723d-20b9-45ed-be52-704206ed2afb" containerName="installer" Feb 16 17:30:45.258313 master-0 kubenswrapper[4652]: E0216 17:30:45.258230 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerName="kube-rbac-proxy" Feb 16 17:30:45.258313 master-0 kubenswrapper[4652]: I0216 17:30:45.258236 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerName="kube-rbac-proxy" Feb 16 17:30:45.258313 master-0 kubenswrapper[4652]: E0216 17:30:45.258267 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="148c658b-3c73-4848-8fc7-b4853dc67a6a" containerName="kube-multus-additional-cni-plugins" Feb 16 17:30:45.258313 master-0 kubenswrapper[4652]: I0216 17:30:45.258280 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="148c658b-3c73-4848-8fc7-b4853dc67a6a" containerName="kube-multus-additional-cni-plugins" Feb 16 17:30:45.258994 master-0 kubenswrapper[4652]: I0216 17:30:45.258444 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="148c658b-3c73-4848-8fc7-b4853dc67a6a" containerName="kube-multus-additional-cni-plugins" Feb 16 17:30:45.258994 master-0 kubenswrapper[4652]: I0216 17:30:45.258454 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1b723d-20b9-45ed-be52-704206ed2afb" containerName="installer" Feb 16 17:30:45.258994 master-0 
kubenswrapper[4652]: I0216 17:30:45.258471 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" containerName="oauth-openshift" Feb 16 17:30:45.258994 master-0 kubenswrapper[4652]: I0216 17:30:45.258484 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerName="multus-admission-controller" Feb 16 17:30:45.258994 master-0 kubenswrapper[4652]: I0216 17:30:45.258493 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d980a9a-2574-41b9-b970-0718cd97c8cd" containerName="kube-rbac-proxy" Feb 16 17:30:45.259382 master-0 kubenswrapper[4652]: I0216 17:30:45.259062 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.282985 master-0 kubenswrapper[4652]: I0216 17:30:45.280291 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7789f6f4b-bbcrd"] Feb 16 17:30:45.306668 master-0 kubenswrapper[4652]: I0216 17:30:45.306621 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.306668 master-0 kubenswrapper[4652]: I0216 17:30:45.306672 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.306959 master-0 kubenswrapper[4652]: I0216 17:30:45.306692 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.306959 master-0 kubenswrapper[4652]: I0216 17:30:45.306722 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.306959 master-0 kubenswrapper[4652]: I0216 17:30:45.306772 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.306959 master-0 kubenswrapper[4652]: I0216 17:30:45.306798 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.306959 master-0 kubenswrapper[4652]: I0216 17:30:45.306830 4652 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.306959 master-0 kubenswrapper[4652]: I0216 17:30:45.306855 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.306959 master-0 kubenswrapper[4652]: I0216 17:30:45.306878 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.307182 master-0 kubenswrapper[4652]: I0216 17:30:45.306962 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.307182 master-0 kubenswrapper[4652]: I0216 17:30:45.306987 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.307722 master-0 kubenswrapper[4652]: I0216 17:30:45.307375 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:45.307722 master-0 kubenswrapper[4652]: I0216 17:30:45.307459 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:45.307722 master-0 kubenswrapper[4652]: I0216 17:30:45.307468 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:30:45.307909 master-0 kubenswrapper[4652]: I0216 17:30:45.307875 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.307960 master-0 kubenswrapper[4652]: I0216 17:30:45.307950 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") pod \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\" (UID: \"2be9d55c-a4ec-48cd-93d2-0a1dced745a8\") " Feb 16 17:30:45.308393 master-0 kubenswrapper[4652]: I0216 17:30:45.308373 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.308482 master-0 kubenswrapper[4652]: I0216 17:30:45.308393 4652 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-policies\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.308482 master-0 kubenswrapper[4652]: I0216 17:30:45.308406 4652 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.308568 master-0 kubenswrapper[4652]: I0216 17:30:45.308539 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:45.308879 master-0 kubenswrapper[4652]: I0216 17:30:45.308641 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:45.310213 master-0 kubenswrapper[4652]: I0216 17:30:45.310171 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:45.310362 master-0 kubenswrapper[4652]: I0216 17:30:45.310319 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc" (OuterVolumeSpecName: "kube-api-access-7mrkc") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "kube-api-access-7mrkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:30:45.310642 master-0 kubenswrapper[4652]: I0216 17:30:45.310612 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:45.311582 master-0 kubenswrapper[4652]: I0216 17:30:45.311510 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:45.311709 master-0 kubenswrapper[4652]: I0216 17:30:45.311680 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:45.313859 master-0 kubenswrapper[4652]: I0216 17:30:45.313690 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:45.314212 master-0 kubenswrapper[4652]: I0216 17:30:45.314139 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:45.321652 master-0 kubenswrapper[4652]: I0216 17:30:45.321619 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2be9d55c-a4ec-48cd-93d2-0a1dced745a8" (UID: "2be9d55c-a4ec-48cd-93d2-0a1dced745a8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409504 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-service-ca\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409577 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d801bdbd-209c-4c9e-8593-8b322a31efec-audit-dir\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409655 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-user-template-login\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409681 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4trf\" (UniqueName: \"kubernetes.io/projected/d801bdbd-209c-4c9e-8593-8b322a31efec-kube-api-access-n4trf\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409748 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-audit-policies\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409772 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409790 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-user-template-error\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409807 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-router-certs\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409854 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-session\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409876 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409911 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409939 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.409956 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410038 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410050 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410060 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410069 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410079 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410088 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mrkc\" (UniqueName: \"kubernetes.io/projected/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-kube-api-access-7mrkc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410099 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410108 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410117 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.410268 master-0 kubenswrapper[4652]: I0216 17:30:45.410128 4652 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2be9d55c-a4ec-48cd-93d2-0a1dced745a8-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:45.511288 master-0 kubenswrapper[4652]: I0216 17:30:45.511161 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-session\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.511726 master-0 kubenswrapper[4652]: I0216 17:30:45.511437 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.511726 master-0 kubenswrapper[4652]: I0216 17:30:45.511576 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.511740 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.511780 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.511815 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-service-ca\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.511850 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d801bdbd-209c-4c9e-8593-8b322a31efec-audit-dir\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.512102 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d801bdbd-209c-4c9e-8593-8b322a31efec-audit-dir\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.512101 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-user-template-login\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.512211 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4trf\" (UniqueName: \"kubernetes.io/projected/d801bdbd-209c-4c9e-8593-8b322a31efec-kube-api-access-n4trf\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.512322 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-audit-policies\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.512395 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.512430 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-user-template-error\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.512478 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-router-certs\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.513321 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-service-ca\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.513896 master-0 kubenswrapper[4652]: I0216 17:30:45.513637 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.515495 master-0 kubenswrapper[4652]: I0216 17:30:45.514838 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-audit-policies\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.515495 master-0 kubenswrapper[4652]: I0216 17:30:45.515165 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.515495 master-0 kubenswrapper[4652]: I0216 17:30:45.515452 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.515643 master-0 kubenswrapper[4652]: I0216 17:30:45.515570 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-user-template-login\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.516765 master-0 kubenswrapper[4652]: I0216 17:30:45.516712 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.516837 master-0 kubenswrapper[4652]: I0216 17:30:45.516780 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-session\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.517731 master-0 kubenswrapper[4652]: I0216 17:30:45.517621 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-system-router-certs\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.517731 master-0 kubenswrapper[4652]: I0216 17:30:45.517695 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-user-template-error\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.517977 master-0 kubenswrapper[4652]: I0216 17:30:45.517936 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d801bdbd-209c-4c9e-8593-8b322a31efec-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.544082 master-0 kubenswrapper[4652]: I0216 17:30:45.543993 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4trf\" (UniqueName: \"kubernetes.io/projected/d801bdbd-209c-4c9e-8593-8b322a31efec-kube-api-access-n4trf\") pod \"oauth-openshift-7789f6f4b-bbcrd\" (UID: \"d801bdbd-209c-4c9e-8593-8b322a31efec\") " pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:45.606967 master-0 kubenswrapper[4652]: I0216 17:30:45.606890 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:46.015712 master-0 kubenswrapper[4652]: I0216 17:30:46.015668 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7789f6f4b-bbcrd"] Feb 16 17:30:46.136604 master-0 kubenswrapper[4652]: I0216 17:30:46.136546 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" event={"ID":"d801bdbd-209c-4c9e-8593-8b322a31efec","Type":"ContainerStarted","Data":"7b0d2d874ba2adbc3ed777b89a2be40ded1502401cfae0f8ca224a6276d24104"} Feb 16 17:30:46.138626 master-0 kubenswrapper[4652]: I0216 17:30:46.138576 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" event={"ID":"2be9d55c-a4ec-48cd-93d2-0a1dced745a8","Type":"ContainerDied","Data":"233776721f1387ea746aa3a9cde87b3c2a4f461764784c200d330f2a97c3e715"} Feb 16 17:30:46.138626 master-0 kubenswrapper[4652]: I0216 17:30:46.138623 4652 scope.go:117] "RemoveContainer" containerID="24810449690d5775751417ebd8694b4596d66b3a929f6ba3ed10b3bf89940dc8" Feb 16 17:30:46.138812 master-0 kubenswrapper[4652]: I0216 17:30:46.138676 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-64f85b8fc9-n9msn" Feb 16 17:30:46.180633 master-0 kubenswrapper[4652]: I0216 17:30:46.180590 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"] Feb 16 17:30:46.184169 master-0 kubenswrapper[4652]: I0216 17:30:46.184117 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-64f85b8fc9-n9msn"] Feb 16 17:30:46.754735 master-0 kubenswrapper[4652]: I0216 17:30:46.754693 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2be9d55c-a4ec-48cd-93d2-0a1dced745a8" path="/var/lib/kubelet/pods/2be9d55c-a4ec-48cd-93d2-0a1dced745a8/volumes" Feb 16 17:30:47.152527 master-0 kubenswrapper[4652]: I0216 17:30:47.152379 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" event={"ID":"d801bdbd-209c-4c9e-8593-8b322a31efec","Type":"ContainerStarted","Data":"97bd0d20a32fc741c4561628b88dd3e25ebb66d70a042f5a779faa1d138609ad"} Feb 16 17:30:47.152527 master-0 kubenswrapper[4652]: I0216 17:30:47.152474 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:47.158276 master-0 kubenswrapper[4652]: I0216 17:30:47.158225 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" Feb 16 17:30:47.177793 master-0 kubenswrapper[4652]: I0216 17:30:47.177727 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7789f6f4b-bbcrd" podStartSLOduration=28.177688675 podStartE2EDuration="28.177688675s" podCreationTimestamp="2026-02-16 17:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:30:47.175101377 +0000 UTC m=+404.563269913" watchObservedRunningTime="2026-02-16 17:30:47.177688675 +0000 UTC m=+404.565857191" Feb 16 17:30:49.101972 master-0 kubenswrapper[4652]: I0216 17:30:49.101890 4652 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-599b567ff7-nrcpr" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerName="console" containerID="cri-o://a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9" gracePeriod=15 Feb 16 17:30:49.505195 master-0 kubenswrapper[4652]: I0216 17:30:49.505151 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-599b567ff7-nrcpr_ed3d89d0-bc00-482e-a656-7fdf4646ab0a/console/2.log" Feb 16 17:30:49.505460 master-0 kubenswrapper[4652]: I0216 17:30:49.505216 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:30:49.576309 master-0 kubenswrapper[4652]: I0216 17:30:49.576221 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") pod \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " Feb 16 17:30:49.576504 master-0 kubenswrapper[4652]: I0216 17:30:49.576340 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") pod \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " Feb 16 17:30:49.576949 master-0 kubenswrapper[4652]: I0216 17:30:49.576921 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") pod \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " Feb 16 17:30:49.576991 master-0 kubenswrapper[4652]: I0216 17:30:49.576954 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") pod \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " Feb 16 17:30:49.577030 master-0 kubenswrapper[4652]: I0216 17:30:49.576988 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") pod \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " Feb 16 17:30:49.577109 master-0 kubenswrapper[4652]: I0216 17:30:49.577057 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") pod \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " Feb 16 17:30:49.577198 master-0 kubenswrapper[4652]: I0216 17:30:49.577175 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") pod \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\" (UID: \"ed3d89d0-bc00-482e-a656-7fdf4646ab0a\") " Feb 16 17:30:49.577787 master-0 kubenswrapper[4652]: I0216 17:30:49.577748 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod 
"ed3d89d0-bc00-482e-a656-7fdf4646ab0a" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:49.577881 master-0 kubenswrapper[4652]: I0216 17:30:49.577778 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config" (OuterVolumeSpecName: "console-config") pod "ed3d89d0-bc00-482e-a656-7fdf4646ab0a" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:49.577881 master-0 kubenswrapper[4652]: I0216 17:30:49.577832 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ed3d89d0-bc00-482e-a656-7fdf4646ab0a" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:49.578017 master-0 kubenswrapper[4652]: I0216 17:30:49.577958 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca" (OuterVolumeSpecName: "service-ca") pod "ed3d89d0-bc00-482e-a656-7fdf4646ab0a" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:30:49.580491 master-0 kubenswrapper[4652]: I0216 17:30:49.580441 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ed3d89d0-bc00-482e-a656-7fdf4646ab0a" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:49.580709 master-0 kubenswrapper[4652]: I0216 17:30:49.580685 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv" (OuterVolumeSpecName: "kube-api-access-st6bv") pod "ed3d89d0-bc00-482e-a656-7fdf4646ab0a" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a"). InnerVolumeSpecName "kube-api-access-st6bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:30:49.581325 master-0 kubenswrapper[4652]: I0216 17:30:49.581234 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ed3d89d0-bc00-482e-a656-7fdf4646ab0a" (UID: "ed3d89d0-bc00-482e-a656-7fdf4646ab0a"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:30:49.679726 master-0 kubenswrapper[4652]: I0216 17:30:49.679652 4652 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:49.679726 master-0 kubenswrapper[4652]: I0216 17:30:49.679700 4652 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:49.679726 master-0 kubenswrapper[4652]: I0216 17:30:49.679712 4652 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:49.679726 master-0 kubenswrapper[4652]: I0216 17:30:49.679721 4652 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:49.679726 master-0 kubenswrapper[4652]: I0216 17:30:49.679730 4652 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:49.679726 master-0 kubenswrapper[4652]: I0216 17:30:49.679738 4652 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:49.679726 master-0 kubenswrapper[4652]: I0216 17:30:49.679746 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st6bv\" (UniqueName: \"kubernetes.io/projected/ed3d89d0-bc00-482e-a656-7fdf4646ab0a-kube-api-access-st6bv\") on node \"master-0\" DevicePath \"\"" Feb 16 17:30:50.176605 master-0 kubenswrapper[4652]: I0216 17:30:50.176557 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-599b567ff7-nrcpr_ed3d89d0-bc00-482e-a656-7fdf4646ab0a/console/2.log" Feb 16 17:30:50.177186 master-0 kubenswrapper[4652]: I0216 17:30:50.176662 4652 generic.go:334] "Generic (PLEG): container finished" podID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerID="a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9" exitCode=2 Feb 16 17:30:50.177186 master-0 kubenswrapper[4652]: I0216 17:30:50.176732 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-599b567ff7-nrcpr" event={"ID":"ed3d89d0-bc00-482e-a656-7fdf4646ab0a","Type":"ContainerDied","Data":"a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9"} Feb 16 17:30:50.177186 master-0 kubenswrapper[4652]: I0216 17:30:50.176770 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-599b567ff7-nrcpr" event={"ID":"ed3d89d0-bc00-482e-a656-7fdf4646ab0a","Type":"ContainerDied","Data":"2aae3f7e5e631f5ae4cd5fcc26116e549b52713c7c67b52fc8e5f6cceabd028a"} Feb 16 17:30:50.177186 master-0 kubenswrapper[4652]: I0216 17:30:50.176820 4652 scope.go:117] "RemoveContainer" containerID="a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9" Feb 16 17:30:50.177186 master-0 kubenswrapper[4652]: I0216 17:30:50.176915 4652 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-console/console-599b567ff7-nrcpr" Feb 16 17:30:50.199101 master-0 kubenswrapper[4652]: I0216 17:30:50.198984 4652 scope.go:117] "RemoveContainer" containerID="a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9" Feb 16 17:30:50.199913 master-0 kubenswrapper[4652]: E0216 17:30:50.199818 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9\": container with ID starting with a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9 not found: ID does not exist" containerID="a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9" Feb 16 17:30:50.200164 master-0 kubenswrapper[4652]: I0216 17:30:50.199942 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9"} err="failed to get container status \"a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9\": rpc error: code = NotFound desc = could not find container \"a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9\": container with ID starting with a6d8d66468455600a88bcb9dd3e219d233baab7041ba8c82c91c8eda93212ca9 not found: ID does not exist" Feb 16 17:30:50.225784 master-0 kubenswrapper[4652]: I0216 17:30:50.225715 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-599b567ff7-nrcpr"] Feb 16 17:30:50.237338 master-0 kubenswrapper[4652]: I0216 17:30:50.237243 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-599b567ff7-nrcpr"] Feb 16 17:30:50.754795 master-0 kubenswrapper[4652]: I0216 17:30:50.754722 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" path="/var/lib/kubelet/pods/ed3d89d0-bc00-482e-a656-7fdf4646ab0a/volumes" Feb 16 17:30:52.910603 master-0 kubenswrapper[4652]: E0216 17:30:52.910524 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:54.724826 master-0 kubenswrapper[4652]: E0216 17:30:54.724740 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c6e30f_bdad_470f_b310_f1c4ad117dc9.slice/crio-478494b974173d881c9dc9fdbd6581f8213904abd682993e7abac49cf7315129\": RecentStats: unable to find data in memory cache]" Feb 16 17:30:59.130311 master-0 kubenswrapper[4652]: I0216 17:30:59.130204 4652 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 17:30:59.130881 master-0 kubenswrapper[4652]: E0216 17:30:59.130624 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerName="console" Feb 16 17:30:59.130881 master-0 kubenswrapper[4652]: I0216 17:30:59.130646 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerName="console" Feb 16 17:30:59.130881 master-0 kubenswrapper[4652]: I0216 17:30:59.130822 4652 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ed3d89d0-bc00-482e-a656-7fdf4646ab0a" containerName="console" Feb 16 17:30:59.131358 master-0 kubenswrapper[4652]: I0216 17:30:59.131336 4652 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:30:59.131641 master-0 kubenswrapper[4652]: I0216 17:30:59.131609 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" containerID="cri-o://6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134" gracePeriod=15 Feb 16 17:30:59.132592 master-0 kubenswrapper[4652]: I0216 17:30:59.132484 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d" gracePeriod=15 Feb 16 17:30:59.132836 master-0 kubenswrapper[4652]: I0216 17:30:59.132807 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-syncer" containerID="cri-o://27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988" gracePeriod=15 Feb 16 17:30:59.132897 master-0 kubenswrapper[4652]: I0216 17:30:59.132823 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints" containerID="cri-o://27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5" gracePeriod=15 Feb 16 17:30:59.133961 master-0 kubenswrapper[4652]: I0216 17:30:59.131703 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7" gracePeriod=15 Feb 16 17:30:59.135623 master-0 kubenswrapper[4652]: I0216 17:30:59.135589 4652 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: E0216 17:30:59.147785 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.147832 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: E0216 17:30:59.147868 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-syncer" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.147874 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-syncer" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: E0216 17:30:59.147884 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="setup" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.147890 
4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="setup" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: E0216 17:30:59.147906 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.147913 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: E0216 17:30:59.147923 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.147935 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: E0216 17:30:59.147949 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-insecure-readyz" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.147955 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-insecure-readyz" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.148332 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-syncer" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.148352 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="setup" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.148361 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-insecure-readyz" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.148381 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.148394 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 17:30:59.148876 master-0 kubenswrapper[4652]: I0216 17:30:59.148406 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e298020284b0e8ffa6a0bc184059d9" containerName="kube-apiserver-check-endpoints" Feb 16 17:30:59.150904 master-0 kubenswrapper[4652]: I0216 17:30:59.150859 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.165418 master-0 kubenswrapper[4652]: I0216 17:30:59.165348 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="10e298020284b0e8ffa6a0bc184059d9" podUID="faa15f80078a2bfbe2234a74ab4da87c" Feb 16 17:30:59.229061 master-0 kubenswrapper[4652]: I0216 17:30:59.228995 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:30:59.229294 master-0 kubenswrapper[4652]: I0216 17:30:59.229075 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.229294 master-0 kubenswrapper[4652]: I0216 17:30:59.229106 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.229294 master-0 kubenswrapper[4652]: I0216 17:30:59.229127 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.229294 master-0 kubenswrapper[4652]: I0216 17:30:59.229160 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:30:59.229294 master-0 kubenswrapper[4652]: I0216 17:30:59.229182 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.229294 master-0 kubenswrapper[4652]: I0216 17:30:59.229217 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:30:59.229294 master-0 kubenswrapper[4652]: I0216 17:30:59.229266 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331035 master-0 kubenswrapper[4652]: I0216 17:30:59.330966 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:30:59.331112 master-0 kubenswrapper[4652]: I0216 17:30:59.331044 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331150 master-0 kubenswrapper[4652]: I0216 17:30:59.331125 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:30:59.331182 master-0 kubenswrapper[4652]: I0216 17:30:59.331152 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331182 master-0 kubenswrapper[4652]: I0216 17:30:59.331156 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331182 master-0 kubenswrapper[4652]: I0216 17:30:59.331176 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331295 master-0 kubenswrapper[4652]: I0216 17:30:59.331196 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331295 master-0 kubenswrapper[4652]: I0216 17:30:59.331115 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 
17:30:59.331295 master-0 kubenswrapper[4652]: I0216 17:30:59.331231 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:30:59.331295 master-0 kubenswrapper[4652]: I0216 17:30:59.331262 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:30:59.331295 master-0 kubenswrapper[4652]: I0216 17:30:59.331289 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331492 master-0 kubenswrapper[4652]: I0216 17:30:59.331295 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331492 master-0 kubenswrapper[4652]: I0216 17:30:59.331370 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331492 master-0 kubenswrapper[4652]: I0216 17:30:59.331384 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:30:59.331492 master-0 kubenswrapper[4652]: I0216 17:30:59.331401 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:30:59.331492 master-0 kubenswrapper[4652]: I0216 17:30:59.331413 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:31:00.256526 master-0 kubenswrapper[4652]: I0216 17:31:00.256467 4652 generic.go:334] "Generic (PLEG): container finished" podID="8d172f32-1e6a-4002-b856-a10a5449a643" containerID="4e0e8938969c332e02a547abb5dc6be217a78e6c962de4db57a5a1d03c8cb4b6" exitCode=0 Feb 16 17:31:00.257212 master-0 kubenswrapper[4652]: 
I0216 17:31:00.256556 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"8d172f32-1e6a-4002-b856-a10a5449a643","Type":"ContainerDied","Data":"4e0e8938969c332e02a547abb5dc6be217a78e6c962de4db57a5a1d03c8cb4b6"} Feb 16 17:31:00.258177 master-0 kubenswrapper[4652]: I0216 17:31:00.258108 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:00.260333 master-0 kubenswrapper[4652]: I0216 17:31:00.260291 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-cert-syncer/2.log" Feb 16 17:31:00.260981 master-0 kubenswrapper[4652]: I0216 17:31:00.260937 4652 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5" exitCode=0 Feb 16 17:31:00.260981 master-0 kubenswrapper[4652]: I0216 17:31:00.260969 4652 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d" exitCode=0 Feb 16 17:31:00.260981 master-0 kubenswrapper[4652]: I0216 17:31:00.260980 4652 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7" exitCode=0 Feb 16 17:31:00.261133 master-0 kubenswrapper[4652]: I0216 17:31:00.260992 4652 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988" exitCode=2 Feb 16 17:31:00.263441 master-0 kubenswrapper[4652]: I0216 17:31:00.263407 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_963634f3-94ac-4b84-a92e-6224fb4692e0/installer/0.log" Feb 16 17:31:00.263513 master-0 kubenswrapper[4652]: I0216 17:31:00.263454 4652 generic.go:334] "Generic (PLEG): container finished" podID="963634f3-94ac-4b84-a92e-6224fb4692e0" containerID="516af62d0b3712a05a55e5ee2969a0158700e9b7353f175be8fef6deb2c5f81c" exitCode=1 Feb 16 17:31:00.263513 master-0 kubenswrapper[4652]: I0216 17:31:00.263480 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"963634f3-94ac-4b84-a92e-6224fb4692e0","Type":"ContainerDied","Data":"516af62d0b3712a05a55e5ee2969a0158700e9b7353f175be8fef6deb2c5f81c"} Feb 16 17:31:00.264456 master-0 kubenswrapper[4652]: I0216 17:31:00.264398 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:00.265048 master-0 kubenswrapper[4652]: I0216 17:31:00.264996 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.529554 master-0 kubenswrapper[4652]: I0216 17:31:01.529503 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-cert-syncer/2.log" Feb 16 17:31:01.531174 master-0 kubenswrapper[4652]: I0216 17:31:01.531128 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:01.532259 master-0 kubenswrapper[4652]: I0216 17:31:01.532202 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.532814 master-0 kubenswrapper[4652]: I0216 17:31:01.532770 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.533276 master-0 kubenswrapper[4652]: I0216 17:31:01.533209 4652 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.664081 master-0 kubenswrapper[4652]: I0216 17:31:01.664015 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") pod \"10e298020284b0e8ffa6a0bc184059d9\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " Feb 16 17:31:01.664081 master-0 kubenswrapper[4652]: I0216 17:31:01.664082 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") pod \"10e298020284b0e8ffa6a0bc184059d9\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " Feb 16 17:31:01.664349 master-0 kubenswrapper[4652]: I0216 17:31:01.664118 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") pod \"10e298020284b0e8ffa6a0bc184059d9\" (UID: \"10e298020284b0e8ffa6a0bc184059d9\") " Feb 16 17:31:01.664349 master-0 kubenswrapper[4652]: I0216 17:31:01.664175 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "10e298020284b0e8ffa6a0bc184059d9" (UID: "10e298020284b0e8ffa6a0bc184059d9"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:31:01.664349 master-0 kubenswrapper[4652]: I0216 17:31:01.664236 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "10e298020284b0e8ffa6a0bc184059d9" (UID: "10e298020284b0e8ffa6a0bc184059d9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:31:01.664349 master-0 kubenswrapper[4652]: I0216 17:31:01.664274 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "10e298020284b0e8ffa6a0bc184059d9" (UID: "10e298020284b0e8ffa6a0bc184059d9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:31:01.664587 master-0 kubenswrapper[4652]: I0216 17:31:01.664557 4652 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:31:01.664587 master-0 kubenswrapper[4652]: I0216 17:31:01.664578 4652 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:31:01.664698 master-0 kubenswrapper[4652]: I0216 17:31:01.664590 4652 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/10e298020284b0e8ffa6a0bc184059d9-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:31:01.717969 master-0 kubenswrapper[4652]: E0216 17:31:01.717888 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.718528 master-0 kubenswrapper[4652]: E0216 17:31:01.718486 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.719042 master-0 kubenswrapper[4652]: E0216 17:31:01.719007 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.719815 master-0 kubenswrapper[4652]: E0216 17:31:01.719767 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.720583 master-0 kubenswrapper[4652]: E0216 17:31:01.720356 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.720583 master-0 kubenswrapper[4652]: I0216 17:31:01.720580 4652 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 
17:31:01.721684 master-0 kubenswrapper[4652]: E0216 17:31:01.721543 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 16 17:31:01.753265 master-0 kubenswrapper[4652]: I0216 17:31:01.752833 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_963634f3-94ac-4b84-a92e-6224fb4692e0/installer/0.log" Feb 16 17:31:01.753265 master-0 kubenswrapper[4652]: I0216 17:31:01.752891 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 17:31:01.753891 master-0 kubenswrapper[4652]: I0216 17:31:01.753841 4652 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.754566 master-0 kubenswrapper[4652]: I0216 17:31:01.754528 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.755060 master-0 kubenswrapper[4652]: I0216 17:31:01.755027 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.781044 master-0 kubenswrapper[4652]: I0216 17:31:01.781002 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:31:01.781813 master-0 kubenswrapper[4652]: I0216 17:31:01.781778 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.782527 master-0 kubenswrapper[4652]: I0216 17:31:01.782142 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.782527 master-0 kubenswrapper[4652]: I0216 17:31:01.782494 4652 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:01.867961 master-0 kubenswrapper[4652]: I0216 17:31:01.867259 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-var-lock\") pod \"963634f3-94ac-4b84-a92e-6224fb4692e0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " Feb 16 17:31:01.867961 master-0 kubenswrapper[4652]: I0216 17:31:01.867366 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-var-lock\") pod \"8d172f32-1e6a-4002-b856-a10a5449a643\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " Feb 16 17:31:01.867961 master-0 kubenswrapper[4652]: I0216 17:31:01.867388 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-kubelet-dir\") pod \"963634f3-94ac-4b84-a92e-6224fb4692e0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " Feb 16 17:31:01.867961 master-0 kubenswrapper[4652]: I0216 17:31:01.867427 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963634f3-94ac-4b84-a92e-6224fb4692e0-kube-api-access\") pod \"963634f3-94ac-4b84-a92e-6224fb4692e0\" (UID: \"963634f3-94ac-4b84-a92e-6224fb4692e0\") " Feb 16 17:31:01.867961 master-0 kubenswrapper[4652]: I0216 17:31:01.867419 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-var-lock" (OuterVolumeSpecName: "var-lock") pod "963634f3-94ac-4b84-a92e-6224fb4692e0" (UID: "963634f3-94ac-4b84-a92e-6224fb4692e0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:31:01.867961 master-0 kubenswrapper[4652]: I0216 17:31:01.867467 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "963634f3-94ac-4b84-a92e-6224fb4692e0" (UID: "963634f3-94ac-4b84-a92e-6224fb4692e0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:31:01.867961 master-0 kubenswrapper[4652]: I0216 17:31:01.867492 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-var-lock" (OuterVolumeSpecName: "var-lock") pod "8d172f32-1e6a-4002-b856-a10a5449a643" (UID: "8d172f32-1e6a-4002-b856-a10a5449a643"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:31:01.868549 master-0 kubenswrapper[4652]: I0216 17:31:01.868505 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d172f32-1e6a-4002-b856-a10a5449a643-kube-api-access\") pod \"8d172f32-1e6a-4002-b856-a10a5449a643\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " Feb 16 17:31:01.868600 master-0 kubenswrapper[4652]: I0216 17:31:01.868575 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-kubelet-dir\") pod \"8d172f32-1e6a-4002-b856-a10a5449a643\" (UID: \"8d172f32-1e6a-4002-b856-a10a5449a643\") " Feb 16 17:31:01.868742 master-0 kubenswrapper[4652]: I0216 17:31:01.868714 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8d172f32-1e6a-4002-b856-a10a5449a643" (UID: "8d172f32-1e6a-4002-b856-a10a5449a643"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:31:01.869366 master-0 kubenswrapper[4652]: I0216 17:31:01.869243 4652 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:31:01.869366 master-0 kubenswrapper[4652]: I0216 17:31:01.869343 4652 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:31:01.869475 master-0 kubenswrapper[4652]: I0216 17:31:01.869373 4652 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/963634f3-94ac-4b84-a92e-6224fb4692e0-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:31:01.869475 master-0 kubenswrapper[4652]: I0216 17:31:01.869399 4652 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8d172f32-1e6a-4002-b856-a10a5449a643-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:31:01.870124 master-0 kubenswrapper[4652]: I0216 17:31:01.870079 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963634f3-94ac-4b84-a92e-6224fb4692e0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "963634f3-94ac-4b84-a92e-6224fb4692e0" (UID: "963634f3-94ac-4b84-a92e-6224fb4692e0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:31:01.870929 master-0 kubenswrapper[4652]: I0216 17:31:01.870878 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d172f32-1e6a-4002-b856-a10a5449a643-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8d172f32-1e6a-4002-b856-a10a5449a643" (UID: "8d172f32-1e6a-4002-b856-a10a5449a643"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:31:01.923735 master-0 kubenswrapper[4652]: E0216 17:31:01.923658 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 16 17:31:01.970540 master-0 kubenswrapper[4652]: I0216 17:31:01.970475 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/963634f3-94ac-4b84-a92e-6224fb4692e0-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:31:01.970540 master-0 kubenswrapper[4652]: I0216 17:31:01.970523 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8d172f32-1e6a-4002-b856-a10a5449a643-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:31:02.299170 master-0 kubenswrapper[4652]: I0216 17:31:02.299125 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_10e298020284b0e8ffa6a0bc184059d9/kube-apiserver-cert-syncer/2.log" Feb 16 17:31:02.300018 master-0 kubenswrapper[4652]: I0216 17:31:02.299993 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:02.300076 master-0 kubenswrapper[4652]: I0216 17:31:02.300021 4652 scope.go:117] "RemoveContainer" containerID="27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5" Feb 16 17:31:02.300209 master-0 kubenswrapper[4652]: I0216 17:31:02.300172 4652 generic.go:334] "Generic (PLEG): container finished" podID="10e298020284b0e8ffa6a0bc184059d9" containerID="6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134" exitCode=0 Feb 16 17:31:02.303689 master-0 kubenswrapper[4652]: I0216 17:31:02.303209 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_963634f3-94ac-4b84-a92e-6224fb4692e0/installer/0.log" Feb 16 17:31:02.303689 master-0 kubenswrapper[4652]: I0216 17:31:02.303260 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"963634f3-94ac-4b84-a92e-6224fb4692e0","Type":"ContainerDied","Data":"0de8838e295c365cae707645cf2aee2e79680709f82ff273615ae7ae657d534c"} Feb 16 17:31:02.303689 master-0 kubenswrapper[4652]: I0216 17:31:02.303298 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0de8838e295c365cae707645cf2aee2e79680709f82ff273615ae7ae657d534c" Feb 16 17:31:02.303689 master-0 kubenswrapper[4652]: I0216 17:31:02.303333 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Feb 16 17:31:02.307763 master-0 kubenswrapper[4652]: I0216 17:31:02.307705 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"8d172f32-1e6a-4002-b856-a10a5449a643","Type":"ContainerDied","Data":"f777180be809bef07385fe91421e7b3ad17717f54df53463edba1d867be6cec9"} Feb 16 17:31:02.307763 master-0 kubenswrapper[4652]: I0216 17:31:02.307756 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f777180be809bef07385fe91421e7b3ad17717f54df53463edba1d867be6cec9" Feb 16 17:31:02.308083 master-0 kubenswrapper[4652]: I0216 17:31:02.307815 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Feb 16 17:31:02.316572 master-0 kubenswrapper[4652]: I0216 17:31:02.316515 4652 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.317089 master-0 kubenswrapper[4652]: I0216 17:31:02.317036 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.317654 master-0 kubenswrapper[4652]: I0216 17:31:02.317620 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.326676 master-0 kubenswrapper[4652]: E0216 17:31:02.326458 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Feb 16 17:31:02.327131 master-0 kubenswrapper[4652]: I0216 17:31:02.327092 4652 scope.go:117] "RemoveContainer" containerID="793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d" Feb 16 17:31:02.327725 master-0 kubenswrapper[4652]: I0216 17:31:02.327670 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.329499 master-0 kubenswrapper[4652]: I0216 17:31:02.329452 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.330197 master-0 kubenswrapper[4652]: I0216 17:31:02.330156 4652 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.330960 master-0 kubenswrapper[4652]: I0216 17:31:02.330871 4652 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 
17:31:02.331696 master-0 kubenswrapper[4652]: I0216 17:31:02.331640 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.332322 master-0 kubenswrapper[4652]: I0216 17:31:02.332242 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.346444 master-0 kubenswrapper[4652]: I0216 17:31:02.346261 4652 scope.go:117] "RemoveContainer" containerID="5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7" Feb 16 17:31:02.364544 master-0 kubenswrapper[4652]: I0216 17:31:02.364362 4652 scope.go:117] "RemoveContainer" containerID="27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988" Feb 16 17:31:02.380603 master-0 kubenswrapper[4652]: I0216 17:31:02.380534 4652 scope.go:117] "RemoveContainer" containerID="6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134" Feb 16 17:31:02.398272 master-0 kubenswrapper[4652]: I0216 17:31:02.398229 4652 scope.go:117] "RemoveContainer" containerID="bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576" Feb 16 17:31:02.422072 master-0 kubenswrapper[4652]: I0216 17:31:02.421961 4652 scope.go:117] "RemoveContainer" containerID="27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5" Feb 16 17:31:02.422755 master-0 kubenswrapper[4652]: E0216 17:31:02.422713 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5\": container with ID starting with 27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5 not found: ID does not exist" containerID="27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5" Feb 16 17:31:02.422897 master-0 kubenswrapper[4652]: I0216 17:31:02.422775 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5"} err="failed to get container status \"27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5\": rpc error: code = NotFound desc = could not find container \"27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5\": container with ID starting with 27af2f71bcbfdb5d359f0ba5c6ba64859efe9604b503850a48bf34bcc9062ed5 not found: ID does not exist" Feb 16 17:31:02.422897 master-0 kubenswrapper[4652]: I0216 17:31:02.422808 4652 scope.go:117] "RemoveContainer" containerID="793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d" Feb 16 17:31:02.423175 master-0 kubenswrapper[4652]: E0216 17:31:02.423130 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d\": container with ID starting with 793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d not found: ID does not exist" containerID="793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d" Feb 16 
17:31:02.423175 master-0 kubenswrapper[4652]: I0216 17:31:02.423163 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d"} err="failed to get container status \"793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d\": rpc error: code = NotFound desc = could not find container \"793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d\": container with ID starting with 793a134a3128814126f79be866b4f3e87e426bb71fea3344c86391898f32e83d not found: ID does not exist" Feb 16 17:31:02.423374 master-0 kubenswrapper[4652]: I0216 17:31:02.423184 4652 scope.go:117] "RemoveContainer" containerID="5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7" Feb 16 17:31:02.424302 master-0 kubenswrapper[4652]: E0216 17:31:02.424265 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7\": container with ID starting with 5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7 not found: ID does not exist" containerID="5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7" Feb 16 17:31:02.424398 master-0 kubenswrapper[4652]: I0216 17:31:02.424316 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7"} err="failed to get container status \"5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7\": rpc error: code = NotFound desc = could not find container \"5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7\": container with ID starting with 5b99b32a955a10082d2d789b0bdf73b56299c988c32baeeddd30246aee7b9cb7 not found: ID does not exist" Feb 16 17:31:02.424398 master-0 kubenswrapper[4652]: I0216 17:31:02.424368 4652 scope.go:117] "RemoveContainer" containerID="27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988" Feb 16 17:31:02.424703 master-0 kubenswrapper[4652]: E0216 17:31:02.424676 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988\": container with ID starting with 27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988 not found: ID does not exist" containerID="27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988" Feb 16 17:31:02.424827 master-0 kubenswrapper[4652]: I0216 17:31:02.424704 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988"} err="failed to get container status \"27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988\": rpc error: code = NotFound desc = could not find container \"27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988\": container with ID starting with 27b94e57025fb2ddd71242a44b9c2224058665412d85a53f9223cf7fc5f93988 not found: ID does not exist" Feb 16 17:31:02.424827 master-0 kubenswrapper[4652]: I0216 17:31:02.424720 4652 scope.go:117] "RemoveContainer" containerID="6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134" Feb 16 17:31:02.424997 master-0 kubenswrapper[4652]: E0216 17:31:02.424956 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134\": container with ID starting with 6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134 not found: ID does not exist" containerID="6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134" Feb 16 17:31:02.424997 master-0 kubenswrapper[4652]: I0216 17:31:02.424975 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134"} err="failed to get container status \"6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134\": rpc error: code = NotFound desc = could not find container \"6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134\": container with ID starting with 6e668f6f4053a11ee43e05a02ac268068842801535fc0473881e90d213299134 not found: ID does not exist" Feb 16 17:31:02.424997 master-0 kubenswrapper[4652]: I0216 17:31:02.424990 4652 scope.go:117] "RemoveContainer" containerID="bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576" Feb 16 17:31:02.425278 master-0 kubenswrapper[4652]: E0216 17:31:02.425236 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576\": container with ID starting with bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576 not found: ID does not exist" containerID="bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576" Feb 16 17:31:02.425278 master-0 kubenswrapper[4652]: I0216 17:31:02.425272 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576"} err="failed to get container status \"bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576\": rpc error: code = NotFound desc = could not find container \"bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576\": container with ID starting with bed21cff20aa5ec9ad75e72fe640efb26297f72d37a7ad76fe30130503504576 not found: ID does not exist" Feb 16 17:31:02.750436 master-0 kubenswrapper[4652]: I0216 17:31:02.750377 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.751719 master-0 kubenswrapper[4652]: I0216 17:31:02.751637 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.753176 master-0 kubenswrapper[4652]: I0216 17:31:02.752766 4652 status_manager.go:851] "Failed to get status for pod" podUID="10e298020284b0e8ffa6a0bc184059d9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:02.753928 master-0 kubenswrapper[4652]: I0216 17:31:02.753890 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="10e298020284b0e8ffa6a0bc184059d9" path="/var/lib/kubelet/pods/10e298020284b0e8ffa6a0bc184059d9/volumes" Feb 16 17:31:02.930482 master-0 kubenswrapper[4652]: E0216 17:31:02.930430 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9\": container with ID starting with 297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9 not found: ID does not exist" containerID="297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9" Feb 16 17:31:02.930660 master-0 kubenswrapper[4652]: I0216 17:31:02.930487 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9" err="rpc error: code = NotFound desc = could not find container \"297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9\": container with ID starting with 297af5174948e7a5e193045218b9b8c209b0751b5e42d942a380c1c4105d45a9 not found: ID does not exist" Feb 16 17:31:02.932161 master-0 kubenswrapper[4652]: E0216 17:31:02.932114 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4992c22e831121a95bc649c439bd4c2706d23e304f9780a4ff56be58ca6cb875\": container with ID starting with 4992c22e831121a95bc649c439bd4c2706d23e304f9780a4ff56be58ca6cb875 not found: ID does not exist" containerID="4992c22e831121a95bc649c439bd4c2706d23e304f9780a4ff56be58ca6cb875" Feb 16 17:31:02.932161 master-0 kubenswrapper[4652]: I0216 17:31:02.932157 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="4992c22e831121a95bc649c439bd4c2706d23e304f9780a4ff56be58ca6cb875" err="rpc error: code = NotFound desc = could not find container \"4992c22e831121a95bc649c439bd4c2706d23e304f9780a4ff56be58ca6cb875\": container with ID starting with 4992c22e831121a95bc649c439bd4c2706d23e304f9780a4ff56be58ca6cb875 not found: ID does not exist" Feb 16 17:31:02.932586 master-0 kubenswrapper[4652]: E0216 17:31:02.932540 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f9ccdae401f5d6081eacec1d36e73c0010b8afd14d410faf565b70a3ef4d59d\": container with ID starting with 1f9ccdae401f5d6081eacec1d36e73c0010b8afd14d410faf565b70a3ef4d59d not found: ID does not exist" containerID="1f9ccdae401f5d6081eacec1d36e73c0010b8afd14d410faf565b70a3ef4d59d" Feb 16 17:31:02.932658 master-0 kubenswrapper[4652]: I0216 17:31:02.932610 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="1f9ccdae401f5d6081eacec1d36e73c0010b8afd14d410faf565b70a3ef4d59d" err="rpc error: code = NotFound desc = could not find container \"1f9ccdae401f5d6081eacec1d36e73c0010b8afd14d410faf565b70a3ef4d59d\": container with ID starting with 1f9ccdae401f5d6081eacec1d36e73c0010b8afd14d410faf565b70a3ef4d59d not found: ID does not exist" Feb 16 17:31:02.933112 master-0 kubenswrapper[4652]: E0216 17:31:02.933069 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1\": container with ID starting with 1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1 not found: ID does not exist" containerID="1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1" Feb 16 17:31:02.933112 master-0 kubenswrapper[4652]: I0216 
17:31:02.933104 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1" err="rpc error: code = NotFound desc = could not find container \"1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1\": container with ID starting with 1eea8daa47d9e87380f3482df90e85193de1a1240a1d240c6fe7a9ef3f0312e1 not found: ID does not exist" Feb 16 17:31:02.933653 master-0 kubenswrapper[4652]: E0216 17:31:02.933597 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a\": container with ID starting with 15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a not found: ID does not exist" containerID="15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a" Feb 16 17:31:02.933733 master-0 kubenswrapper[4652]: I0216 17:31:02.933657 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a" err="rpc error: code = NotFound desc = could not find container \"15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a\": container with ID starting with 15bd44df4287ebd73cf51ab580577f1b0e984e37a690d3b82be46044edeeb30a not found: ID does not exist" Feb 16 17:31:02.934151 master-0 kubenswrapper[4652]: E0216 17:31:02.934081 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793\": container with ID starting with 861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793 not found: ID does not exist" containerID="861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793" Feb 16 17:31:02.934213 master-0 kubenswrapper[4652]: I0216 17:31:02.934150 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793" err="rpc error: code = NotFound desc = could not find container \"861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793\": container with ID starting with 861e88422bffb8b290aabc9e4e2f5c409ac08d163d44f75a8570182c8883c793 not found: ID does not exist" Feb 16 17:31:02.936039 master-0 kubenswrapper[4652]: E0216 17:31:02.935968 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48\": container with ID starting with 25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48 not found: ID does not exist" containerID="25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48" Feb 16 17:31:02.936039 master-0 kubenswrapper[4652]: I0216 17:31:02.936030 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48" err="rpc error: code = NotFound desc = could not find container \"25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48\": container with ID starting with 25465b580bb48f3a6fb46d8c2d044d08f4836a58f5f20bd9908d153f0df9ce48 not found: ID does not exist" Feb 16 17:31:02.936449 master-0 kubenswrapper[4652]: E0216 17:31:02.936409 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08\": container with ID starting with 031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08 not found: ID does not exist" containerID="031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08" Feb 16 17:31:02.936517 master-0 kubenswrapper[4652]: I0216 17:31:02.936475 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08" err="rpc error: code = NotFound desc = could not find container \"031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08\": container with ID starting with 031351d655133b7eb2314bbf088d80efa45189d1f9252d31e6d06128b3e90f08 not found: ID does not exist" Feb 16 17:31:02.938996 master-0 kubenswrapper[4652]: E0216 17:31:02.938951 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484\": container with ID starting with 6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484 not found: ID does not exist" containerID="6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484" Feb 16 17:31:02.938996 master-0 kubenswrapper[4652]: I0216 17:31:02.938988 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484" err="rpc error: code = NotFound desc = could not find container \"6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484\": container with ID starting with 6085bc9ebf42425482da1178217497bf1485803c2eb95c3c1e42d6ee2c909484 not found: ID does not exist" Feb 16 17:31:02.939428 master-0 kubenswrapper[4652]: E0216 17:31:02.939387 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256\": container with ID starting with 6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256 not found: ID does not exist" containerID="6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256" Feb 16 17:31:02.939428 master-0 kubenswrapper[4652]: I0216 17:31:02.939421 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256" err="rpc error: code = NotFound desc = could not find container \"6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256\": container with ID starting with 6ba471877e1d22d2e62b0c2c1cb7eebd4f675dec80131e40647259b0b619f256 not found: ID does not exist" Feb 16 17:31:02.940025 master-0 kubenswrapper[4652]: E0216 17:31:02.939970 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2a67f11077d9bc44da6ffe3c1160975d06906e26e1b32fd900e010533a01a0\": container with ID starting with 6f2a67f11077d9bc44da6ffe3c1160975d06906e26e1b32fd900e010533a01a0 not found: ID does not exist" containerID="6f2a67f11077d9bc44da6ffe3c1160975d06906e26e1b32fd900e010533a01a0" Feb 16 17:31:02.940095 master-0 kubenswrapper[4652]: I0216 17:31:02.940027 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="6f2a67f11077d9bc44da6ffe3c1160975d06906e26e1b32fd900e010533a01a0" err="rpc error: code = NotFound desc = could not find container \"6f2a67f11077d9bc44da6ffe3c1160975d06906e26e1b32fd900e010533a01a0\": container 
with ID starting with 6f2a67f11077d9bc44da6ffe3c1160975d06906e26e1b32fd900e010533a01a0 not found: ID does not exist" Feb 16 17:31:02.940590 master-0 kubenswrapper[4652]: E0216 17:31:02.940547 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a34d2995b35b8dc31426e61695ea1611b88d90123e5a6ad5b62a2e08e2e26a32\": container with ID starting with a34d2995b35b8dc31426e61695ea1611b88d90123e5a6ad5b62a2e08e2e26a32 not found: ID does not exist" containerID="a34d2995b35b8dc31426e61695ea1611b88d90123e5a6ad5b62a2e08e2e26a32" Feb 16 17:31:02.940663 master-0 kubenswrapper[4652]: I0216 17:31:02.940584 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="a34d2995b35b8dc31426e61695ea1611b88d90123e5a6ad5b62a2e08e2e26a32" err="rpc error: code = NotFound desc = could not find container \"a34d2995b35b8dc31426e61695ea1611b88d90123e5a6ad5b62a2e08e2e26a32\": container with ID starting with a34d2995b35b8dc31426e61695ea1611b88d90123e5a6ad5b62a2e08e2e26a32 not found: ID does not exist" Feb 16 17:31:02.941686 master-0 kubenswrapper[4652]: E0216 17:31:02.941631 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555\": container with ID starting with 6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555 not found: ID does not exist" containerID="6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555" Feb 16 17:31:02.941686 master-0 kubenswrapper[4652]: I0216 17:31:02.941678 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555" err="rpc error: code = NotFound desc = could not find container \"6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555\": container with ID starting with 6e7a54a797193d9e6354dfc9c20644fa6cc6d33f0d9ebf10fdfdf046234d2555 not found: ID does not exist" Feb 16 17:31:03.128214 master-0 kubenswrapper[4652]: E0216 17:31:03.128068 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 16 17:31:04.192518 master-0 kubenswrapper[4652]: E0216 17:31:04.192437 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:31:04.193525 master-0 kubenswrapper[4652]: I0216 17:31:04.192948 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:31:04.216853 master-0 kubenswrapper[4652]: W0216 17:31:04.216783 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ef22dacc42282620a76fbbcd3b157ad.slice/crio-d91933703cc16cb135649c91efc615538e86ce3ead3dc279c5dbe4c1a4c4f4b6 WatchSource:0}: Error finding container d91933703cc16cb135649c91efc615538e86ce3ead3dc279c5dbe4c1a4c4f4b6: Status 404 returned error can't find the container with id d91933703cc16cb135649c91efc615538e86ce3ead3dc279c5dbe4c1a4c4f4b6 Feb 16 17:31:04.220244 master-0 kubenswrapper[4652]: E0216 17:31:04.220053 4652 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1894ca5f528aef39 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:0ef22dacc42282620a76fbbcd3b157ad,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:31:04.218988345 +0000 UTC m=+421.607156861,LastTimestamp:2026-02-16 17:31:04.218988345 +0000 UTC m=+421.607156861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 17:31:04.327535 master-0 kubenswrapper[4652]: I0216 17:31:04.327484 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"0ef22dacc42282620a76fbbcd3b157ad","Type":"ContainerStarted","Data":"d91933703cc16cb135649c91efc615538e86ce3ead3dc279c5dbe4c1a4c4f4b6"} Feb 16 17:31:04.729424 master-0 kubenswrapper[4652]: E0216 17:31:04.729267 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 17:31:05.345981 master-0 kubenswrapper[4652]: I0216 17:31:05.345564 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"0ef22dacc42282620a76fbbcd3b157ad","Type":"ContainerStarted","Data":"572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0"} Feb 16 17:31:05.347689 master-0 kubenswrapper[4652]: E0216 17:31:05.346510 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:31:05.347689 master-0 kubenswrapper[4652]: I0216 17:31:05.346966 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:05.347689 master-0 kubenswrapper[4652]: I0216 17:31:05.347621 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:06.352737 master-0 kubenswrapper[4652]: E0216 17:31:06.352656 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:31:07.931098 master-0 kubenswrapper[4652]: E0216 17:31:07.930925 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Feb 16 17:31:08.573311 master-0 kubenswrapper[4652]: E0216 17:31:08.573054 4652 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.1894ca5f528aef39 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:0ef22dacc42282620a76fbbcd3b157ad,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cd019260c7aa2018ad976745ab7ff71deb43fe556a8972e5d6553facd5a65a49\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:31:04.218988345 +0000 UTC m=+421.607156861,LastTimestamp:2026-02-16 17:31:04.218988345 +0000 UTC m=+421.607156861,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 17:31:10.745696 master-0 kubenswrapper[4652]: I0216 17:31:10.745618 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:10.747032 master-0 kubenswrapper[4652]: I0216 17:31:10.746961 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:10.748613 master-0 kubenswrapper[4652]: I0216 17:31:10.748549 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:10.767022 master-0 kubenswrapper[4652]: I0216 17:31:10.766987 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:10.767327 master-0 kubenswrapper[4652]: I0216 17:31:10.767300 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:10.768325 master-0 kubenswrapper[4652]: E0216 17:31:10.768240 4652 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:10.768931 master-0 kubenswrapper[4652]: I0216 17:31:10.768902 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:10.789677 master-0 kubenswrapper[4652]: W0216 17:31:10.789628 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfaa15f80078a2bfbe2234a74ab4da87c.slice/crio-aec44bacf89b4bee2337b4f4cfffb367d8bf92228567040aaa3df28e822e3e25 WatchSource:0}: Error finding container aec44bacf89b4bee2337b4f4cfffb367d8bf92228567040aaa3df28e822e3e25: Status 404 returned error can't find the container with id aec44bacf89b4bee2337b4f4cfffb367d8bf92228567040aaa3df28e822e3e25 Feb 16 17:31:11.383741 master-0 kubenswrapper[4652]: I0216 17:31:11.383608 4652 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce" exitCode=0 Feb 16 17:31:11.383741 master-0 kubenswrapper[4652]: I0216 17:31:11.383660 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerDied","Data":"bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce"} Feb 16 17:31:11.383741 master-0 kubenswrapper[4652]: I0216 17:31:11.383694 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"aec44bacf89b4bee2337b4f4cfffb367d8bf92228567040aaa3df28e822e3e25"} Feb 16 17:31:11.384075 master-0 kubenswrapper[4652]: I0216 17:31:11.384009 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:11.384075 master-0 kubenswrapper[4652]: I0216 17:31:11.384026 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:11.384829 master-0 kubenswrapper[4652]: E0216 17:31:11.384790 4652 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:11.384829 master-0 kubenswrapper[4652]: I0216 17:31:11.384789 4652 status_manager.go:851] "Failed to get status for pod" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" pod="openshift-kube-controller-manager/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:11.385339 master-0 kubenswrapper[4652]: I0216 17:31:11.385297 4652 status_manager.go:851] "Failed to get status for pod" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:31:12.395048 master-0 kubenswrapper[4652]: I0216 17:31:12.394992 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878"} Feb 
16 17:31:12.395048 master-0 kubenswrapper[4652]: I0216 17:31:12.395037 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903"} Feb 16 17:31:12.395048 master-0 kubenswrapper[4652]: I0216 17:31:12.395048 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7"} Feb 16 17:31:12.401254 master-0 kubenswrapper[4652]: I0216 17:31:12.399802 4652 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc" exitCode=1 Feb 16 17:31:12.401254 master-0 kubenswrapper[4652]: I0216 17:31:12.399830 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerDied","Data":"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc"} Feb 16 17:31:12.401254 master-0 kubenswrapper[4652]: I0216 17:31:12.399853 4652 scope.go:117] "RemoveContainer" containerID="ac596da9ba2aef67b1e91be250c648d0d94ad9e7ab12065e2f256126b855cff0" Feb 16 17:31:12.401254 master-0 kubenswrapper[4652]: I0216 17:31:12.400703 4652 scope.go:117] "RemoveContainer" containerID="c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc" Feb 16 17:31:13.410546 master-0 kubenswrapper[4652]: I0216 17:31:13.410453 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325"} Feb 16 17:31:13.410546 master-0 kubenswrapper[4652]: I0216 17:31:13.410528 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"faa15f80078a2bfbe2234a74ab4da87c","Type":"ContainerStarted","Data":"6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8"} Feb 16 17:31:13.411386 master-0 kubenswrapper[4652]: I0216 17:31:13.410683 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:13.411386 master-0 kubenswrapper[4652]: I0216 17:31:13.410784 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:13.411386 master-0 kubenswrapper[4652]: I0216 17:31:13.410810 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:13.413452 master-0 kubenswrapper[4652]: I0216 17:31:13.413404 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"80420f2e7c3cdda71f7d0d6ccbe6f9f3","Type":"ContainerStarted","Data":"b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676"} Feb 16 17:31:13.634029 master-0 kubenswrapper[4652]: I0216 17:31:13.633950 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:31:15.770199 master-0 kubenswrapper[4652]: 
I0216 17:31:15.770128 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:15.771172 master-0 kubenswrapper[4652]: I0216 17:31:15.771156 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:15.778651 master-0 kubenswrapper[4652]: I0216 17:31:15.778635 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:18.423973 master-0 kubenswrapper[4652]: I0216 17:31:18.423909 4652 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:18.455163 master-0 kubenswrapper[4652]: I0216 17:31:18.455121 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:18.455163 master-0 kubenswrapper[4652]: I0216 17:31:18.455149 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:18.482332 master-0 kubenswrapper[4652]: I0216 17:31:18.480535 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:31:18.483230 master-0 kubenswrapper[4652]: I0216 17:31:18.483124 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="faa15f80078a2bfbe2234a74ab4da87c" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:31:19.198425 master-0 kubenswrapper[4652]: I0216 17:31:19.198348 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:31:19.206704 master-0 kubenswrapper[4652]: I0216 17:31:19.206649 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:31:19.466355 master-0 kubenswrapper[4652]: I0216 17:31:19.466192 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:19.466355 master-0 kubenswrapper[4652]: I0216 17:31:19.466235 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="ef9cb618-13ca-4088-a3d4-fb78be3f4bff" Feb 16 17:31:22.770470 master-0 kubenswrapper[4652]: I0216 17:31:22.770406 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="faa15f80078a2bfbe2234a74ab4da87c" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:31:23.639431 master-0 kubenswrapper[4652]: I0216 17:31:23.639388 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:31:26.641098 master-0 kubenswrapper[4652]: I0216 17:31:26.641025 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:31:26.648714 master-0 kubenswrapper[4652]: I0216 17:31:26.648616 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 17:31:27.459403 master-0 kubenswrapper[4652]: I0216 
17:31:27.459322 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 17:31:27.745330 master-0 kubenswrapper[4652]: I0216 17:31:27.745116 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 17:31:27.951123 master-0 kubenswrapper[4652]: I0216 17:31:27.951068 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 17:31:28.845604 master-0 kubenswrapper[4652]: I0216 17:31:28.845529 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 17:31:29.320006 master-0 kubenswrapper[4652]: I0216 17:31:29.319966 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 17:31:29.605090 master-0 kubenswrapper[4652]: I0216 17:31:29.604953 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:31:29.652046 master-0 kubenswrapper[4652]: I0216 17:31:29.651949 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 17:31:29.829938 master-0 kubenswrapper[4652]: I0216 17:31:29.829885 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 17:31:29.928024 master-0 kubenswrapper[4652]: I0216 17:31:29.927950 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 17:31:29.965015 master-0 kubenswrapper[4652]: I0216 17:31:29.964977 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw" Feb 16 17:31:30.087116 master-0 kubenswrapper[4652]: I0216 17:31:30.087047 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 17:31:30.178656 master-0 kubenswrapper[4652]: I0216 17:31:30.178492 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 17:31:30.629334 master-0 kubenswrapper[4652]: I0216 17:31:30.629138 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 16 17:31:31.104679 master-0 kubenswrapper[4652]: I0216 17:31:31.104607 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 17:31:31.214013 master-0 kubenswrapper[4652]: I0216 17:31:31.213922 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:31:31.215277 master-0 kubenswrapper[4652]: I0216 17:31:31.215191 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 17:31:31.305416 master-0 kubenswrapper[4652]: I0216 17:31:31.305353 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 17:31:31.509324 master-0 kubenswrapper[4652]: I0216 17:31:31.509227 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 16 17:31:31.511174 master-0 kubenswrapper[4652]: I0216 17:31:31.511147 4652 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 17:31:31.544211 master-0 kubenswrapper[4652]: I0216 17:31:31.544149 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:31:31.560552 master-0 kubenswrapper[4652]: I0216 17:31:31.560505 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 17:31:31.574854 master-0 kubenswrapper[4652]: I0216 17:31:31.574788 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:31:31.665930 master-0 kubenswrapper[4652]: I0216 17:31:31.665839 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-3enh2b6fkpcog" Feb 16 17:31:31.867704 master-0 kubenswrapper[4652]: I0216 17:31:31.867498 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 16 17:31:31.984402 master-0 kubenswrapper[4652]: I0216 17:31:31.984321 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 17:31:32.062551 master-0 kubenswrapper[4652]: I0216 17:31:32.062489 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m" Feb 16 17:31:32.063699 master-0 kubenswrapper[4652]: I0216 17:31:32.063667 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 17:31:32.103400 master-0 kubenswrapper[4652]: I0216 17:31:32.103352 4652 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:31:32.223707 master-0 kubenswrapper[4652]: I0216 17:31:32.223669 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 17:31:32.432501 master-0 kubenswrapper[4652]: I0216 17:31:32.432463 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 17:31:32.442934 master-0 kubenswrapper[4652]: I0216 17:31:32.442870 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 17:31:32.538077 master-0 kubenswrapper[4652]: I0216 17:31:32.537976 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 17:31:32.656318 master-0 kubenswrapper[4652]: I0216 17:31:32.656208 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 17:31:32.822803 master-0 kubenswrapper[4652]: I0216 17:31:32.822626 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 17:31:32.830447 master-0 kubenswrapper[4652]: I0216 17:31:32.830399 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 17:31:32.899649 master-0 kubenswrapper[4652]: I0216 17:31:32.899576 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 17:31:32.906622 master-0 kubenswrapper[4652]: I0216 17:31:32.906589 4652 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 16 17:31:33.104154 master-0 kubenswrapper[4652]: I0216 17:31:33.104010 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 17:31:33.182575 master-0 kubenswrapper[4652]: I0216 17:31:33.182487 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 17:31:33.211116 master-0 kubenswrapper[4652]: I0216 17:31:33.211051 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Feb 16 17:31:33.216954 master-0 kubenswrapper[4652]: I0216 17:31:33.216819 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Feb 16 17:31:33.390304 master-0 kubenswrapper[4652]: I0216 17:31:33.390085 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 16 17:31:33.442460 master-0 kubenswrapper[4652]: I0216 17:31:33.442393 4652 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 16 17:31:33.456522 master-0 kubenswrapper[4652]: I0216 17:31:33.456419 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Feb 16 17:31:33.463864 master-0 kubenswrapper[4652]: I0216 17:31:33.463810 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Feb 16 17:31:33.498320 master-0 kubenswrapper[4652]: I0216 17:31:33.498242 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Feb 16 17:31:33.502057 master-0 kubenswrapper[4652]: I0216 17:31:33.502014 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 16 17:31:33.563577 master-0 kubenswrapper[4652]: I0216 17:31:33.563514 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 16 17:31:33.697085 master-0 kubenswrapper[4652]: I0216 17:31:33.697026 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-lcpkn"
Feb 16 17:31:33.816532 master-0 kubenswrapper[4652]: I0216 17:31:33.816489 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Feb 16 17:31:33.825489 master-0 kubenswrapper[4652]: I0216 17:31:33.825442 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 16 17:31:34.065942 master-0 kubenswrapper[4652]: I0216 17:31:34.065836 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 16 17:31:34.067733 master-0 kubenswrapper[4652]: I0216 17:31:34.067710 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Feb 16 17:31:34.080906 master-0 kubenswrapper[4652]: I0216 17:31:34.080874 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 16 17:31:34.165323 master-0 kubenswrapper[4652]: I0216 17:31:34.165267 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Feb 16 17:31:34.202708 master-0 kubenswrapper[4652]: I0216 17:31:34.202642 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Feb 16 17:31:34.258948 master-0 kubenswrapper[4652]: I0216 17:31:34.258891 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Feb 16 17:31:34.297058 master-0 kubenswrapper[4652]: I0216 17:31:34.296987 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Feb 16 17:31:34.323533 master-0 kubenswrapper[4652]: I0216 17:31:34.323396 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Feb 16 17:31:34.393286 master-0 kubenswrapper[4652]: I0216 17:31:34.392242 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 16 17:31:34.393887 master-0 kubenswrapper[4652]: I0216 17:31:34.393680 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Feb 16 17:31:34.490757 master-0 kubenswrapper[4652]: I0216 17:31:34.490660 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Feb 16 17:31:34.502676 master-0 kubenswrapper[4652]: I0216 17:31:34.502609 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Feb 16 17:31:34.667473 master-0 kubenswrapper[4652]: I0216 17:31:34.667345 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Feb 16 17:31:34.683099 master-0 kubenswrapper[4652]: I0216 17:31:34.683037 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 16 17:31:34.728052 master-0 kubenswrapper[4652]: I0216 17:31:34.727988 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Feb 16 17:31:34.908038 master-0 kubenswrapper[4652]: I0216 17:31:34.907991 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Feb 16 17:31:34.967268 master-0 kubenswrapper[4652]: I0216 17:31:34.967165 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Feb 16 17:31:35.039006 master-0 kubenswrapper[4652]: I0216 17:31:35.038926 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Feb 16 17:31:35.047338 master-0 kubenswrapper[4652]: I0216 17:31:35.047300 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 16 17:31:35.105547 master-0 kubenswrapper[4652]: I0216 17:31:35.105477 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Feb 16 17:31:35.169301 master-0 kubenswrapper[4652]: I0216 17:31:35.169231 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 16 17:31:35.208590 master-0 kubenswrapper[4652]: I0216 17:31:35.208537 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 16 17:31:35.217638 master-0 kubenswrapper[4652]: I0216 17:31:35.217562 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Feb 16 17:31:35.423692 master-0 kubenswrapper[4652]: I0216 17:31:35.423632 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Feb 16 17:31:35.436225 master-0 kubenswrapper[4652]: I0216 17:31:35.436157 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 16 17:31:35.537304 master-0 kubenswrapper[4652]: I0216 17:31:35.537168 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Feb 16 17:31:35.581995 master-0 kubenswrapper[4652]: I0216 17:31:35.581942 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Feb 16 17:31:35.637643 master-0 kubenswrapper[4652]: I0216 17:31:35.637600 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 16 17:31:35.681664 master-0 kubenswrapper[4652]: I0216 17:31:35.681613 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Feb 16 17:31:35.778802 master-0 kubenswrapper[4652]: I0216 17:31:35.778722 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 17:31:35.846023 master-0 kubenswrapper[4652]: I0216 17:31:35.845875 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 16 17:31:35.866071 master-0 kubenswrapper[4652]: I0216 17:31:35.866006 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 16 17:31:35.927618 master-0 kubenswrapper[4652]: I0216 17:31:35.927572 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 16 17:31:35.944641 master-0 kubenswrapper[4652]: I0216 17:31:35.944604 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Feb 16 17:31:36.041961 master-0 kubenswrapper[4652]: I0216 17:31:36.041904 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 16 17:31:36.087053 master-0 kubenswrapper[4652]: I0216 17:31:36.086997 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 16 17:31:36.195242 master-0 kubenswrapper[4652]: I0216 17:31:36.195192 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 16 17:31:36.293755 master-0 kubenswrapper[4652]: I0216 17:31:36.293713 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 16 17:31:36.318670 master-0 kubenswrapper[4652]: I0216 17:31:36.318616 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 16 17:31:36.329791 master-0 kubenswrapper[4652]: I0216 17:31:36.329732 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Feb 16 17:31:36.358701 master-0 kubenswrapper[4652]: I0216 17:31:36.358652 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 16 17:31:36.374603 master-0 kubenswrapper[4652]: I0216 17:31:36.374539 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Feb 16 17:31:36.379349 master-0 kubenswrapper[4652]: I0216 17:31:36.379316 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Feb 16 17:31:36.390492 master-0 kubenswrapper[4652]: I0216 17:31:36.390453 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Feb 16 17:31:36.416399 master-0 kubenswrapper[4652]: I0216 17:31:36.416343 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Feb 16 17:31:36.425521 master-0 kubenswrapper[4652]: I0216 17:31:36.425486 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s"
Feb 16 17:31:36.433308 master-0 kubenswrapper[4652]: I0216 17:31:36.433258 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 16 17:31:36.493430 master-0 kubenswrapper[4652]: I0216 17:31:36.493232 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 16 17:31:36.494748 master-0 kubenswrapper[4652]: I0216 17:31:36.494690 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmzhq"
Feb 16 17:31:36.528438 master-0 kubenswrapper[4652]: I0216 17:31:36.528380 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Feb 16 17:31:36.580463 master-0 kubenswrapper[4652]: I0216 17:31:36.580410 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 17:31:36.606751 master-0 kubenswrapper[4652]: I0216 17:31:36.606682 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 16 17:31:36.613931 master-0 kubenswrapper[4652]: I0216 17:31:36.613872 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 17:31:36.732885 master-0 kubenswrapper[4652]: I0216 17:31:36.732827 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Feb 16 17:31:36.759484 master-0 kubenswrapper[4652]: I0216 17:31:36.759232 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb"
Feb 16 17:31:36.792478 master-0 kubenswrapper[4652]: I0216 17:31:36.792420 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 16 17:31:36.806573 master-0 kubenswrapper[4652]: I0216 17:31:36.806519 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 17:31:36.904974 master-0 kubenswrapper[4652]: I0216 17:31:36.904882 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 17:31:36.947360 master-0 kubenswrapper[4652]: I0216 17:31:36.947307 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw"
Feb 16 17:31:36.983430 master-0 kubenswrapper[4652]: I0216 17:31:36.983347 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 16 17:31:37.022797 master-0 kubenswrapper[4652]: I0216 17:31:37.022601 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Feb 16 17:31:37.042852 master-0 kubenswrapper[4652]: I0216 17:31:37.042772 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Feb 16 17:31:37.045296 master-0 kubenswrapper[4652]: I0216 17:31:37.045231 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Feb 16 17:31:37.048706 master-0 kubenswrapper[4652]: I0216 17:31:37.048663 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 16 17:31:37.108347 master-0 kubenswrapper[4652]: I0216 17:31:37.108277 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 16 17:31:37.182521 master-0 kubenswrapper[4652]: I0216 17:31:37.182461 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 16 17:31:37.215530 master-0 kubenswrapper[4652]: I0216 17:31:37.215496 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Feb 16 17:31:37.245361 master-0 kubenswrapper[4652]: I0216 17:31:37.245297 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Feb 16 17:31:37.293965 master-0 kubenswrapper[4652]: I0216 17:31:37.293839 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 17:31:37.298836 master-0 kubenswrapper[4652]: I0216 17:31:37.298799 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 16 17:31:37.320235 master-0 kubenswrapper[4652]: I0216 17:31:37.320169 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Feb 16 17:31:37.370375 master-0 kubenswrapper[4652]: I0216 17:31:37.370333 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Feb 16 17:31:37.411015 master-0 kubenswrapper[4652]: I0216 17:31:37.410970 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Feb 16 17:31:37.428582 master-0 kubenswrapper[4652]: I0216 17:31:37.428524 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 16 17:31:37.471466 master-0 kubenswrapper[4652]: I0216 17:31:37.471417 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 17:31:37.494100 master-0 kubenswrapper[4652]: I0216 17:31:37.494044 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Feb 16 17:31:37.513894 master-0 kubenswrapper[4652]: I0216 17:31:37.513824 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 16 17:31:37.548770 master-0 kubenswrapper[4652]: I0216 17:31:37.548647 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 16 17:31:37.562079 master-0 kubenswrapper[4652]: I0216 17:31:37.562045 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Feb 16 17:31:37.581242 master-0 kubenswrapper[4652]: I0216 17:31:37.581170 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8"
Feb 16 17:31:37.582375 master-0 kubenswrapper[4652]: I0216 17:31:37.582336 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Feb 16 17:31:37.623868 master-0 kubenswrapper[4652]: I0216 17:31:37.623783 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Feb 16 17:31:37.685896 master-0 kubenswrapper[4652]: I0216 17:31:37.685844 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 16 17:31:37.710703 master-0 kubenswrapper[4652]: I0216 17:31:37.710657 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 16 17:31:37.758412 master-0 kubenswrapper[4652]: I0216 17:31:37.758336 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 16 17:31:37.804376 master-0 kubenswrapper[4652]: I0216 17:31:37.804271 4652 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Feb 16 17:31:37.827812 master-0 kubenswrapper[4652]: I0216 17:31:37.827740 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Feb 16 17:31:37.852769 master-0 kubenswrapper[4652]: I0216 17:31:37.852701 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 16 17:31:37.894370 master-0 kubenswrapper[4652]: I0216 17:31:37.894312 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 16 17:31:37.935324 master-0 kubenswrapper[4652]: I0216 17:31:37.935179 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Feb 16 17:31:37.973181 master-0 kubenswrapper[4652]: I0216 17:31:37.973102 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 16 17:31:38.013712 master-0 kubenswrapper[4652]: I0216 17:31:38.013642 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 16 17:31:38.016725 master-0 kubenswrapper[4652]: I0216 17:31:38.016666 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Feb 16 17:31:38.126491 master-0 kubenswrapper[4652]: I0216 17:31:38.126319 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 16 17:31:38.145156 master-0 kubenswrapper[4652]: I0216 17:31:38.145094 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 17:31:38.234025 master-0 kubenswrapper[4652]: I0216 17:31:38.233932 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Feb 16 17:31:38.250620 master-0 kubenswrapper[4652]: I0216 17:31:38.250534 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Feb 16 17:31:38.388697 master-0 kubenswrapper[4652]: I0216 17:31:38.388536 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Feb 16 17:31:38.428134 master-0 kubenswrapper[4652]: I0216 17:31:38.428073 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Feb 16 17:31:38.430415 master-0 kubenswrapper[4652]: I0216 17:31:38.430360 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-b4rnj"
Feb 16 17:31:38.432469 master-0 kubenswrapper[4652]: I0216 17:31:38.432437 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Feb 16 17:31:38.464444 master-0 kubenswrapper[4652]: I0216 17:31:38.464376 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 16 17:31:38.475820 master-0 kubenswrapper[4652]: I0216 17:31:38.475755 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Feb 16 17:31:38.496329 master-0 kubenswrapper[4652]: I0216 17:31:38.496237 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 16 17:31:38.551795 master-0 kubenswrapper[4652]: I0216 17:31:38.551718 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 16 17:31:38.678849 master-0 kubenswrapper[4652]: I0216 17:31:38.678766 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 16 17:31:38.686464 master-0 kubenswrapper[4652]: I0216 17:31:38.686400 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Feb 16 17:31:38.730544 master-0 kubenswrapper[4652]: I0216 17:31:38.730462 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t"
Feb 16 17:31:38.777150 master-0 kubenswrapper[4652]: I0216 17:31:38.777061 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-wxz7g"
Feb 16 17:31:38.781701 master-0 kubenswrapper[4652]: I0216 17:31:38.781653 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 16 17:31:38.801692 master-0 kubenswrapper[4652]: I0216 17:31:38.801592 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Feb 16 17:31:38.863487 master-0 kubenswrapper[4652]: I0216 17:31:38.863419 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 16 17:31:38.869055 master-0 kubenswrapper[4652]: I0216 17:31:38.869002 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 16 17:31:38.873581 master-0 kubenswrapper[4652]: I0216 17:31:38.873515 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 17:31:38.892478 master-0 kubenswrapper[4652]: I0216 17:31:38.892399 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 16 17:31:38.913115 master-0 kubenswrapper[4652]: I0216 17:31:38.913002 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn"
Feb 16 17:31:38.940551 master-0 kubenswrapper[4652]: I0216 17:31:38.940352 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6858s"
Feb 16 17:31:38.981484 master-0 kubenswrapper[4652]: I0216 17:31:38.981424 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 16 17:31:39.032578 master-0 kubenswrapper[4652]: I0216 17:31:39.032525 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Feb 16 17:31:39.058053 master-0 kubenswrapper[4652]: I0216 17:31:39.057987 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Feb 16 17:31:39.131768 master-0 kubenswrapper[4652]: I0216 17:31:39.131709 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 16 17:31:39.207467 master-0 kubenswrapper[4652]: I0216 17:31:39.207348 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 16 17:31:39.219606 master-0 kubenswrapper[4652]: I0216 17:31:39.219542 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 16 17:31:39.219606 master-0 kubenswrapper[4652]: I0216 17:31:39.219588 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Feb 16 17:31:39.223331 master-0 kubenswrapper[4652]: I0216 17:31:39.223287 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Feb 16 17:31:39.275928 master-0 kubenswrapper[4652]: I0216 17:31:39.275884 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 16 17:31:39.314813 master-0 kubenswrapper[4652]: I0216 17:31:39.314775 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 17:31:39.326495 master-0 kubenswrapper[4652]: I0216 17:31:39.326453 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 16 17:31:39.327373 master-0 kubenswrapper[4652]: I0216 17:31:39.327344 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Feb 16 17:31:39.353768 master-0 kubenswrapper[4652]: I0216 17:31:39.353724 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 16 17:31:39.364689 master-0 kubenswrapper[4652]: I0216 17:31:39.364643 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Feb 16 17:31:39.428493 master-0 kubenswrapper[4652]: I0216 17:31:39.428420 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 16 17:31:39.446111 master-0 kubenswrapper[4652]: I0216 17:31:39.446043 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 16 17:31:39.483156 master-0 kubenswrapper[4652]: I0216 17:31:39.483030 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Feb 16 17:31:39.494092 master-0 kubenswrapper[4652]: I0216 17:31:39.494032 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 16 17:31:39.511767 master-0 kubenswrapper[4652]: I0216 17:31:39.511699 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Feb 16 17:31:39.534614 master-0 kubenswrapper[4652]: I0216 17:31:39.534561 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 16 17:31:39.543231 master-0 kubenswrapper[4652]: I0216 17:31:39.543192 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 16 17:31:39.591749 master-0 kubenswrapper[4652]: I0216 17:31:39.591693 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 16 17:31:39.631311 master-0 kubenswrapper[4652]: I0216 17:31:39.631238 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Feb 16 17:31:39.661365 master-0 kubenswrapper[4652]: I0216 17:31:39.661273 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Feb 16 17:31:39.685493 master-0 kubenswrapper[4652]: I0216 17:31:39.684639 4652 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 16 17:31:39.692830 master-0 kubenswrapper[4652]: I0216 17:31:39.692789 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 16 17:31:39.700763 master-0 kubenswrapper[4652]: I0216 17:31:39.700710 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Feb 16 17:31:39.838480 master-0 kubenswrapper[4652]: I0216 17:31:39.838348 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 17:31:39.894678 master-0 kubenswrapper[4652]: I0216 17:31:39.893778 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Feb 16 17:31:39.898676 master-0 kubenswrapper[4652]: I0216 17:31:39.898641 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 17:31:39.904767 master-0 kubenswrapper[4652]: I0216 17:31:39.904731 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84"
Feb 16 17:31:39.950332 master-0 kubenswrapper[4652]: I0216 17:31:39.950075 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Feb 16 17:31:40.066398 master-0 kubenswrapper[4652]: I0216 17:31:40.066332 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 16 17:31:40.094074 master-0 kubenswrapper[4652]: I0216 17:31:40.093897 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 16 17:31:40.214646 master-0 kubenswrapper[4652]: I0216 17:31:40.214611 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 16 17:31:40.294110 master-0 kubenswrapper[4652]: I0216 17:31:40.294048 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 16 17:31:40.349200 master-0 kubenswrapper[4652]: I0216 17:31:40.349099 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj"
Feb 16 17:31:40.396320 master-0 kubenswrapper[4652]: I0216 17:31:40.396229 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Feb 16 17:31:40.454894 master-0 kubenswrapper[4652]: I0216 17:31:40.454825 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 16 17:31:40.471995 master-0 kubenswrapper[4652]: I0216 17:31:40.471958 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Feb 16 17:31:40.503443 master-0 kubenswrapper[4652]: I0216 17:31:40.503369 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 16 17:31:40.505189 master-0 kubenswrapper[4652]: I0216 17:31:40.505146 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 16 17:31:40.511566 master-0 kubenswrapper[4652]: I0216 17:31:40.511526 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 16 17:31:40.515061 master-0 kubenswrapper[4652]: I0216 17:31:40.515023 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb"
Feb 16 17:31:40.538495 master-0 kubenswrapper[4652]: I0216 17:31:40.538437 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-gvwqd"
Feb 16 17:31:40.593335 master-0 kubenswrapper[4652]: I0216 17:31:40.593277 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 16 17:31:40.595603 master-0 kubenswrapper[4652]: I0216 17:31:40.595548 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Feb 16 17:31:40.615659 master-0 kubenswrapper[4652]: I0216 17:31:40.615543 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 16 17:31:40.633222 master-0 kubenswrapper[4652]: I0216 17:31:40.633176 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 16 17:31:40.675689 master-0 kubenswrapper[4652]: I0216 17:31:40.675635 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Feb 16 17:31:40.684629 master-0 kubenswrapper[4652]: I0216 17:31:40.684589 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4"
Feb 16 17:31:40.722689 master-0 kubenswrapper[4652]: I0216 17:31:40.722626 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l"
Feb 16 17:31:40.792876 master-0 kubenswrapper[4652]: I0216 17:31:40.792773 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 16 17:31:40.810609 master-0 kubenswrapper[4652]: I0216 17:31:40.810545 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 16 17:31:40.840489 master-0 kubenswrapper[4652]: I0216 17:31:40.840416 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 16 17:31:40.886137 master-0 kubenswrapper[4652]: I0216 17:31:40.885996 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 17:31:40.912633 master-0 kubenswrapper[4652]: I0216 17:31:40.912573 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 16 17:31:40.982978 master-0 kubenswrapper[4652]: I0216 17:31:40.982890 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 16 17:31:41.084712 master-0 kubenswrapper[4652]: I0216 17:31:41.084652 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Feb 16 17:31:41.283083 master-0 kubenswrapper[4652]: I0216 17:31:41.282965 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Feb 16 17:31:41.395208 master-0 kubenswrapper[4652]: I0216 17:31:41.395118 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Feb 16 17:31:41.419224 master-0 kubenswrapper[4652]: I0216 17:31:41.419143 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 17:31:41.464390 master-0 kubenswrapper[4652]: I0216 17:31:41.464343 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Feb 16 17:31:41.470357 master-0 kubenswrapper[4652]: I0216 17:31:41.470317 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 16 17:31:41.514699 master-0 kubenswrapper[4652]: I0216 17:31:41.514655 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Feb 16 17:31:41.522454 master-0 kubenswrapper[4652]: I0216 17:31:41.522407 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 16 17:31:41.569065 master-0 kubenswrapper[4652]: I0216 17:31:41.568946 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 16 17:31:41.679775 master-0 kubenswrapper[4652]: I0216 17:31:41.679715 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 16 17:31:41.722630 master-0 kubenswrapper[4652]: I0216 17:31:41.722583 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Feb 16 17:31:41.749710 master-0 kubenswrapper[4652]: I0216 17:31:41.749651 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk"
Feb 16 17:31:41.753670 master-0 kubenswrapper[4652]: I0216 17:31:41.753625 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 16 17:31:41.770482 master-0 kubenswrapper[4652]: I0216 17:31:41.770131 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Feb 16 17:31:41.782626 master-0 kubenswrapper[4652]: I0216 17:31:41.782586 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Feb 16 17:31:41.798086 master-0 kubenswrapper[4652]: I0216 17:31:41.798033 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 16 17:31:41.833122 master-0 kubenswrapper[4652]: I0216 17:31:41.832990 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Feb 16 17:31:41.863684 master-0 kubenswrapper[4652]: I0216 17:31:41.863625 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin"
Feb 16 17:31:41.940468 master-0 kubenswrapper[4652]: I0216 17:31:41.940392 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Feb 16 17:31:41.944927 master-0 kubenswrapper[4652]: I0216 17:31:41.944876 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 16 17:31:41.986655 master-0 kubenswrapper[4652]: I0216 17:31:41.986585 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16 17:31:41.998960 master-0 kubenswrapper[4652]: I0216 17:31:41.998915 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 16 17:31:42.133183 master-0 kubenswrapper[4652]: I0216 17:31:42.133053 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Feb 16 17:31:42.168910 master-0 kubenswrapper[4652]: I0216 17:31:42.168859 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Feb 16 17:31:42.226550 master-0 kubenswrapper[4652]: I0216 17:31:42.226355 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-sk6rc"
Feb 16 17:31:42.287144 master-0 kubenswrapper[4652]: I0216 17:31:42.287067 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 16 17:31:42.338170 master-0 kubenswrapper[4652]: I0216 17:31:42.338092 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Feb 16 17:31:42.368964 master-0 kubenswrapper[4652]: I0216 17:31:42.368894 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Feb 16 17:31:42.374792 master-0 kubenswrapper[4652]: I0216 17:31:42.374751 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Feb 16 17:31:42.479563 master-0 kubenswrapper[4652]: I0216 17:31:42.479497 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 16 17:31:42.492093 master-0 kubenswrapper[4652]: I0216 17:31:42.492051 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Feb 16 17:31:42.573440 master-0 kubenswrapper[4652]: I0216 17:31:42.573378 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7"
Feb 16 17:31:42.632759 master-0 kubenswrapper[4652]: I0216 17:31:42.632688 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Feb 16 17:31:42.648679 master-0 kubenswrapper[4652]: I0216 17:31:42.648613 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 16 17:31:42.703429 master-0 kubenswrapper[4652]: I0216 17:31:42.703375 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Feb 16 17:31:42.743320 master-0 kubenswrapper[4652]: I0216 17:31:42.743148 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Feb 16 17:31:42.759042 master-0 kubenswrapper[4652]: I0216 17:31:42.759001 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 17:31:42.777846 master-0 kubenswrapper[4652]: I0216 17:31:42.777790 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982"
Feb 16 17:31:42.830727 master-0 kubenswrapper[4652]: I0216 17:31:42.830671 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Feb 16 17:31:42.831716 master-0 kubenswrapper[4652]: I0216 17:31:42.831418 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Feb 16 17:31:42.927009 master-0 kubenswrapper[4652]: I0216 17:31:42.926959 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 16 17:31:42.949350 master-0 kubenswrapper[4652]: I0216 17:31:42.949300 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 17:31:43.006163 master-0 kubenswrapper[4652]: I0216 17:31:43.006043 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 16 17:31:43.035385 master-0 kubenswrapper[4652]: I0216 17:31:43.035317 4652 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 16 17:31:43.040560 master-0 kubenswrapper[4652]: I0216 17:31:43.040517 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 17:31:43.040667 master-0 kubenswrapper[4652]: I0216 17:31:43.040567 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 17:31:43.044357 master-0 kubenswrapper[4652]: I0216 17:31:43.044324 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:31:43.060084 master-0 kubenswrapper[4652]: I0216 17:31:43.060024 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=25.060010459 podStartE2EDuration="25.060010459s" podCreationTimestamp="2026-02-16 17:31:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:31:43.058337864 +0000 UTC m=+460.446506420" watchObservedRunningTime="2026-02-16 17:31:43.060010459 +0000 UTC m=+460.448178975"
Feb 16 17:31:43.069991 master-0 kubenswrapper[4652]: I0216 17:31:43.069955 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 16 17:31:43.106773 master-0 kubenswrapper[4652]: I0216 17:31:43.106721 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Feb 16 17:31:43.133522 master-0 kubenswrapper[4652]: I0216 17:31:43.133467 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 16 17:31:43.164694 master-0 kubenswrapper[4652]: I0216 17:31:43.164636 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Feb 16 17:31:43.209354 master-0 kubenswrapper[4652]: I0216 17:31:43.209304 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 16 17:31:43.252136 master-0 kubenswrapper[4652]: I0216 17:31:43.252091 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Feb 16 17:31:43.263479 master-0 kubenswrapper[4652]: I0216 17:31:43.263400 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rzjlw"
Feb 16 17:31:43.301676 master-0 kubenswrapper[4652]: I0216 17:31:43.301634 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 16 17:31:43.334033 master-0 kubenswrapper[4652]: I0216 17:31:43.333972 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 16 17:31:43.375167 master-0 kubenswrapper[4652]: I0216 17:31:43.375109 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nkhdh"
Feb 16 17:31:43.496141 master-0 kubenswrapper[4652]: I0216 17:31:43.496075 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl"
Feb 16 17:31:43.582705 master-0 kubenswrapper[4652]: I0216 17:31:43.582557 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Feb 16 17:31:43.633337 master-0 kubenswrapper[4652]: I0216 17:31:43.633233 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Feb 16 17:31:43.647412 master-0 kubenswrapper[4652]: I0216 17:31:43.647346 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 17:31:43.739299 master-0 kubenswrapper[4652]: I0216 17:31:43.739201 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 16 17:31:43.876666 master-0 kubenswrapper[4652]: I0216 17:31:43.876432 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2"
Feb 16 17:31:43.910370 master-0 kubenswrapper[4652]: I0216 17:31:43.910243 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-bstss"
Feb 16 17:31:43.947444 master-0 kubenswrapper[4652]: I0216 17:31:43.947298 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 16 17:31:44.032280 master-0 kubenswrapper[4652]: I0216 17:31:44.032203 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 16 17:31:44.140045 master-0 kubenswrapper[4652]: I0216 17:31:44.139870 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Feb 16 17:31:44.197748 master-0 kubenswrapper[4652]: I0216 17:31:44.197672 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Feb 16 17:31:44.221686 master-0 kubenswrapper[4652]: I0216 17:31:44.221621 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 16 17:31:44.224346 master-0 kubenswrapper[4652]: I0216 17:31:44.224312 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 16 17:31:44.234725 master-0 kubenswrapper[4652]: I0216 17:31:44.234686 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 17:31:44.387490 master-0 kubenswrapper[4652]: I0216 17:31:44.387382 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 17:31:44.408807 master-0 kubenswrapper[4652]: I0216 17:31:44.408690 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 16 17:31:44.688978 master-0 kubenswrapper[4652]: I0216 17:31:44.688923 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Feb 16 17:31:44.758971 master-0 kubenswrapper[4652]: I0216 17:31:44.758902 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Feb 16 17:31:45.274470 master-0 kubenswrapper[4652]: I0216 17:31:45.274414 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 16 17:31:45.396386 master-0 kubenswrapper[4652]: I0216 17:31:45.396321 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 16 17:31:45.512909 master-0 kubenswrapper[4652]: I0216 17:31:45.512871 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Feb 16 17:31:45.528848 master-0 kubenswrapper[4652]: I0216 17:31:45.528688 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Feb 16 17:31:45.625741 master-0 kubenswrapper[4652]: I0216 17:31:45.625692 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Feb 16 17:31:45.652416 master-0 kubenswrapper[4652]: I0216 17:31:45.652361 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 17:31:45.822197 master-0 kubenswrapper[4652]: I0216 17:31:45.822039 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Feb 16 17:31:46.129584 master-0 kubenswrapper[4652]: I0216 17:31:46.129387 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-94r9k"
Feb 16 17:31:46.150404 master-0 kubenswrapper[4652]: I0216 17:31:46.150342 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 16 17:31:46.155336 master-0 kubenswrapper[4652]: I0216 17:31:46.155284 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Feb 16 17:31:46.845869 master-0 kubenswrapper[4652]: I0216 17:31:46.845788 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Feb 16 17:31:46.958425 master-0 kubenswrapper[4652]: I0216 17:31:46.958340 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 16 17:31:47.240923 master-0 kubenswrapper[4652]: I0216 17:31:47.240877 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Feb 16 17:31:47.598474 master-0 kubenswrapper[4652]: I0216 17:31:47.598338 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Feb 16 17:31:47.756271 master-0 kubenswrapper[4652]: I0216 17:31:47.756141 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 16 17:31:47.951596 master-0 kubenswrapper[4652]: I0216 17:31:47.951496 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 16 17:31:51.869911 master-0 kubenswrapper[4652]: I0216 17:31:51.869844 4652 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 16 17:31:51.871156 master-0 kubenswrapper[4652]: I0216 17:31:51.870300 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="0ef22dacc42282620a76fbbcd3b157ad" containerName="startup-monitor" containerID="cri-o://572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0" gracePeriod=5
Feb 16 17:31:56.988270 master-0 kubenswrapper[4652]: I0216 17:31:56.988206 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_0ef22dacc42282620a76fbbcd3b157ad/startup-monitor/0.log"
Feb 16 17:31:56.988929 master-0 kubenswrapper[4652]: I0216 17:31:56.988292 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:31:57.172015 master-0 kubenswrapper[4652]: I0216 17:31:57.167877 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") "
Feb 16 17:31:57.172015 master-0 kubenswrapper[4652]: I0216 17:31:57.167986 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") "
Feb 16 17:31:57.172015 master-0 kubenswrapper[4652]: I0216 17:31:57.168043 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") "
Feb 16 17:31:57.172015 master-0 kubenswrapper[4652]: I0216 17:31:57.168087 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") "
Feb 16 17:31:57.172015 master-0 kubenswrapper[4652]: I0216 17:31:57.168119 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") pod \"0ef22dacc42282620a76fbbcd3b157ad\" (UID: \"0ef22dacc42282620a76fbbcd3b157ad\") "
Feb 16 17:31:57.172015 master-0 kubenswrapper[4652]: I0216 17:31:57.168510 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log" (OuterVolumeSpecName: "var-log") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:31:57.172015 master-0 kubenswrapper[4652]: I0216 17:31:57.168607 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock" (OuterVolumeSpecName: "var-lock") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:31:57.172015 master-0 kubenswrapper[4652]: I0216 17:31:57.168646 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests" (OuterVolumeSpecName: "manifests") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:31:57.172015 master-0 kubenswrapper[4652]: I0216 17:31:57.168689 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:31:57.175025 master-0 kubenswrapper[4652]: I0216 17:31:57.174941 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "0ef22dacc42282620a76fbbcd3b157ad" (UID: "0ef22dacc42282620a76fbbcd3b157ad"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:31:57.269903 master-0 kubenswrapper[4652]: I0216 17:31:57.269806 4652 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-lock\") on node \"master-0\" DevicePath \"\""
Feb 16 17:31:57.269903 master-0 kubenswrapper[4652]: I0216 17:31:57.269869 4652 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-var-log\") on node \"master-0\" DevicePath \"\""
Feb 16 17:31:57.269903 master-0 kubenswrapper[4652]: I0216 17:31:57.269881 4652 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-manifests\") on node \"master-0\" DevicePath \"\""
Feb 16 17:31:57.269903 master-0 kubenswrapper[4652]: I0216 17:31:57.269892 4652 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:31:57.269903 master-0 kubenswrapper[4652]: I0216 17:31:57.269906 4652 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/0ef22dacc42282620a76fbbcd3b157ad-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Feb 16 17:31:57.732678 master-0 kubenswrapper[4652]: I0216 17:31:57.732618 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_0ef22dacc42282620a76fbbcd3b157ad/startup-monitor/0.log"
Feb 16 17:31:57.732678 master-0 kubenswrapper[4652]: I0216 17:31:57.732676 4652 generic.go:334] "Generic (PLEG): container finished" podID="0ef22dacc42282620a76fbbcd3b157ad" containerID="572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0" exitCode=137
Feb 16 17:31:57.732960 master-0 kubenswrapper[4652]: I0216 17:31:57.732721 4652 scope.go:117] "RemoveContainer" containerID="572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0"
Feb 16 17:31:57.732960 master-0 kubenswrapper[4652]: I0216 17:31:57.732790 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:31:57.751822 master-0 kubenswrapper[4652]: I0216 17:31:57.751789 4652 scope.go:117] "RemoveContainer" containerID="572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0"
Feb 16 17:31:57.752179 master-0 kubenswrapper[4652]: E0216 17:31:57.752144 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0\": container with ID starting with 572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0 not found: ID does not exist" containerID="572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0"
Feb 16 17:31:57.752304 master-0 kubenswrapper[4652]: I0216 17:31:57.752175 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0"} err="failed to get container status \"572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0\": rpc error: code = NotFound desc = could not find container \"572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0\": container with ID starting with 572c8ca93b1c4bbde361decb2adf3bbf7a12381dd3ea534766cfa124369da8b0 not found: ID does not exist"
Feb 16 17:31:58.758824 master-0 kubenswrapper[4652]: I0216 17:31:58.758790 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef22dacc42282620a76fbbcd3b157ad" path="/var/lib/kubelet/pods/0ef22dacc42282620a76fbbcd3b157ad/volumes"
Feb 16 17:32:18.754312 master-0 kubenswrapper[4652]: I0216 17:32:18.754231 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"]
Feb 16 17:32:18.754855 master-0 kubenswrapper[4652]: E0216 17:32:18.754590 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef22dacc42282620a76fbbcd3b157ad" containerName="startup-monitor"
Feb 16 17:32:18.754855 master-0 kubenswrapper[4652]: I0216 17:32:18.754611 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef22dacc42282620a76fbbcd3b157ad" containerName="startup-monitor"
Feb 16 17:32:18.754855 master-0 kubenswrapper[4652]: E0216 17:32:18.754633 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" containerName="installer"
Feb 16 17:32:18.754855 master-0 kubenswrapper[4652]: I0216 17:32:18.754645 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" containerName="installer"
Feb 16 17:32:18.754855 master-0 kubenswrapper[4652]: E0216 17:32:18.754673 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" containerName="installer"
Feb 16 17:32:18.754855 master-0 kubenswrapper[4652]: I0216 17:32:18.754685 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" containerName="installer"
Feb 16 17:32:18.755038 master-0 kubenswrapper[4652]: I0216 17:32:18.754881 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef22dacc42282620a76fbbcd3b157ad" containerName="startup-monitor"
Feb 16 17:32:18.755038 master-0 kubenswrapper[4652]: I0216 17:32:18.754922 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d172f32-1e6a-4002-b856-a10a5449a643" containerName="installer"
Feb 16 17:32:18.755038 master-0 kubenswrapper[4652]: I0216 17:32:18.754944 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="963634f3-94ac-4b84-a92e-6224fb4692e0" containerName="installer"
Feb 16 17:32:18.755819 master-0 kubenswrapper[4652]: I0216 17:32:18.755790 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:18.758481 master-0 kubenswrapper[4652]: I0216 17:32:18.758424 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 17:32:18.758801 master-0 kubenswrapper[4652]: I0216 17:32:18.758769 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-qlqr4"
Feb 16 17:32:18.760973 master-0 kubenswrapper[4652]: I0216 17:32:18.760930 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"]
Feb 16 17:32:18.841372 master-0 kubenswrapper[4652]: I0216 17:32:18.840906 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:18.841372 master-0 kubenswrapper[4652]: I0216 17:32:18.841120 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5a56f02-a904-4a72-9195-50d31662d559-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:18.841722 master-0 kubenswrapper[4652]: I0216 17:32:18.841441 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:18.942132 master-0 kubenswrapper[4652]: I0216 17:32:18.942075 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5a56f02-a904-4a72-9195-50d31662d559-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:18.942386 master-0 kubenswrapper[4652]: I0216 17:32:18.942150 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:18.942386 master-0 kubenswrapper[4652]: I0216 17:32:18.942196 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:18.942386 master-0 kubenswrapper[4652]: I0216 17:32:18.942210 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:18.942386 master-0 kubenswrapper[4652]: I0216 17:32:18.942244 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:18.959848 master-0 kubenswrapper[4652]: I0216 17:32:18.959801 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5a56f02-a904-4a72-9195-50d31662d559-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:19.085560 master-0 kubenswrapper[4652]: I0216 17:32:19.085451 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Feb 16 17:32:19.506803 master-0 kubenswrapper[4652]: I0216 17:32:19.506745 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"]
Feb 16 17:32:19.961686 master-0 kubenswrapper[4652]: I0216 17:32:19.961613 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"c5a56f02-a904-4a72-9195-50d31662d559","Type":"ContainerStarted","Data":"4409057387defa6a80a7630189849388b9807430b49f3fd5babdb009652049a2"}
Feb 16 17:32:19.961686 master-0 kubenswrapper[4652]: I0216 17:32:19.961661 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"c5a56f02-a904-4a72-9195-50d31662d559","Type":"ContainerStarted","Data":"05c679210958fbf6666ec49f256a62d3c42825e80c988feb41b444c4f6574e21"}
Feb 16 17:32:19.980176 master-0 kubenswrapper[4652]: I0216 17:32:19.980067 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" podStartSLOduration=1.9800502 podStartE2EDuration="1.9800502s" podCreationTimestamp="2026-02-16 17:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:32:19.978610802 +0000 UTC m=+497.366779328" watchObservedRunningTime="2026-02-16 17:32:19.9800502 +0000 UTC m=+497.368218716"
Feb 16 17:32:52.426648 master-0 kubenswrapper[4652]: I0216 17:32:52.426528 4652 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Feb 16 17:32:52.427741 master-0 kubenswrapper[4652]: I0216 17:32:52.426786 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" containerID="cri-o://0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b" gracePeriod=30
Feb 16 17:32:52.427741 master-0 kubenswrapper[4652]: I0216 17:32:52.426860 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" containerID="cri-o://b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676" gracePeriod=30 Feb 16 17:32:52.428574 master-0 kubenswrapper[4652]: I0216 17:32:52.428442 4652 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 17:32:52.428862 master-0 kubenswrapper[4652]: E0216 17:32:52.428808 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 17:32:52.428862 master-0 kubenswrapper[4652]: I0216 17:32:52.428835 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 17:32:52.428862 master-0 kubenswrapper[4652]: E0216 17:32:52.428849 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 17:32:52.428862 master-0 kubenswrapper[4652]: I0216 17:32:52.428857 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 17:32:52.428862 master-0 kubenswrapper[4652]: E0216 17:32:52.428869 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: I0216 17:32:52.428877 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: E0216 17:32:52.428887 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: I0216 17:32:52.428895 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: E0216 17:32:52.428911 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: I0216 17:32:52.428919 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: I0216 17:32:52.429060 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: I0216 17:32:52.429077 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: I0216 17:32:52.429085 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: I0216 17:32:52.429093 4652 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="cluster-policy-controller" Feb 16 17:32:52.429402 master-0 kubenswrapper[4652]: I0216 17:32:52.429343 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerName="kube-controller-manager" Feb 16 17:32:52.430300 master-0 kubenswrapper[4652]: I0216 17:32:52.430212 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:32:52.500630 master-0 kubenswrapper[4652]: I0216 17:32:52.500561 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 17:32:52.537803 master-0 kubenswrapper[4652]: I0216 17:32:52.537725 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5f305fd0e0544b7ae949be70a648f4f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:32:52.538238 master-0 kubenswrapper[4652]: I0216 17:32:52.538201 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5f305fd0e0544b7ae949be70a648f4f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:32:52.615712 master-0 kubenswrapper[4652]: I0216 17:32:52.615659 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:32:52.640224 master-0 kubenswrapper[4652]: I0216 17:32:52.639982 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5f305fd0e0544b7ae949be70a648f4f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:32:52.640224 master-0 kubenswrapper[4652]: I0216 17:32:52.640111 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5f305fd0e0544b7ae949be70a648f4f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:32:52.640224 master-0 kubenswrapper[4652]: I0216 17:32:52.640112 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5f305fd0e0544b7ae949be70a648f4f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:32:52.640224 master-0 kubenswrapper[4652]: I0216 17:32:52.640139 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"5f305fd0e0544b7ae949be70a648f4f7\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:32:52.661334 master-0 kubenswrapper[4652]: I0216 17:32:52.661271 4652 
kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="67aa7027-5cfd-41e1-9f0a-cb3a00bd09ba" Feb 16 17:32:52.741224 master-0 kubenswrapper[4652]: I0216 17:32:52.741162 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 17:32:52.741224 master-0 kubenswrapper[4652]: I0216 17:32:52.741229 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 17:32:52.741610 master-0 kubenswrapper[4652]: I0216 17:32:52.741275 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config" (OuterVolumeSpecName: "config") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:32:52.741610 master-0 kubenswrapper[4652]: I0216 17:32:52.741293 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 17:32:52.741610 master-0 kubenswrapper[4652]: I0216 17:32:52.741339 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets" (OuterVolumeSpecName: "secrets") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:32:52.741610 master-0 kubenswrapper[4652]: I0216 17:32:52.741359 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:32:52.741610 master-0 kubenswrapper[4652]: I0216 17:32:52.741378 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 17:32:52.741610 master-0 kubenswrapper[4652]: I0216 17:32:52.741455 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs" (OuterVolumeSpecName: "logs") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:32:52.741610 master-0 kubenswrapper[4652]: I0216 17:32:52.741498 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") pod \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\" (UID: \"80420f2e7c3cdda71f7d0d6ccbe6f9f3\") " Feb 16 17:32:52.742105 master-0 kubenswrapper[4652]: I0216 17:32:52.741894 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:32:52.742105 master-0 kubenswrapper[4652]: I0216 17:32:52.741906 4652 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Feb 16 17:32:52.742105 master-0 kubenswrapper[4652]: I0216 17:32:52.741915 4652 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-secrets\") on node \"master-0\" DevicePath \"\"" Feb 16 17:32:52.742105 master-0 kubenswrapper[4652]: I0216 17:32:52.741925 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:32:52.742105 master-0 kubenswrapper[4652]: I0216 17:32:52.741943 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "80420f2e7c3cdda71f7d0d6ccbe6f9f3" (UID: "80420f2e7c3cdda71f7d0d6ccbe6f9f3"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:32:52.755698 master-0 kubenswrapper[4652]: I0216 17:32:52.755648 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" path="/var/lib/kubelet/pods/80420f2e7c3cdda71f7d0d6ccbe6f9f3/volumes" Feb 16 17:32:52.756605 master-0 kubenswrapper[4652]: I0216 17:32:52.756583 4652 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Feb 16 17:32:52.773972 master-0 kubenswrapper[4652]: I0216 17:32:52.773908 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 17:32:52.773972 master-0 kubenswrapper[4652]: I0216 17:32:52.773946 4652 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="67aa7027-5cfd-41e1-9f0a-cb3a00bd09ba" Feb 16 17:32:52.777328 master-0 kubenswrapper[4652]: I0216 17:32:52.777291 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:32:52.779210 master-0 kubenswrapper[4652]: I0216 17:32:52.779173 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Feb 16 17:32:52.779289 master-0 kubenswrapper[4652]: I0216 17:32:52.779199 4652 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="67aa7027-5cfd-41e1-9f0a-cb3a00bd09ba" Feb 16 17:32:52.795356 master-0 kubenswrapper[4652]: W0216 17:32:52.795236 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f305fd0e0544b7ae949be70a648f4f7.slice/crio-4e179b8e1acf5b0e85eed15165278090e8d9a8bef53967462213684d876e96ca WatchSource:0}: Error finding container 4e179b8e1acf5b0e85eed15165278090e8d9a8bef53967462213684d876e96ca: Status 404 returned error can't find the container with id 4e179b8e1acf5b0e85eed15165278090e8d9a8bef53967462213684d876e96ca Feb 16 17:32:52.843339 master-0 kubenswrapper[4652]: I0216 17:32:52.843208 4652 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/80420f2e7c3cdda71f7d0d6ccbe6f9f3-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Feb 16 17:32:53.180051 master-0 kubenswrapper[4652]: I0216 17:32:53.180004 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5f305fd0e0544b7ae949be70a648f4f7","Type":"ContainerStarted","Data":"a48e44f48198e4db49af49e3cef0fd62f710895db4941d8940f6bbcfc9115369"} Feb 16 17:32:53.180051 master-0 kubenswrapper[4652]: I0216 17:32:53.180051 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5f305fd0e0544b7ae949be70a648f4f7","Type":"ContainerStarted","Data":"4e179b8e1acf5b0e85eed15165278090e8d9a8bef53967462213684d876e96ca"} Feb 16 17:32:53.182797 master-0 kubenswrapper[4652]: I0216 17:32:53.182763 4652 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676" exitCode=0 Feb 16 17:32:53.182797 master-0 kubenswrapper[4652]: I0216 17:32:53.182788 4652 generic.go:334] "Generic (PLEG): container finished" podID="80420f2e7c3cdda71f7d0d6ccbe6f9f3" containerID="0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b" exitCode=0 Feb 16 17:32:53.182914 master-0 kubenswrapper[4652]: I0216 17:32:53.182830 4652 scope.go:117] "RemoveContainer" containerID="b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676" Feb 16 17:32:53.182959 master-0 kubenswrapper[4652]: I0216 17:32:53.182931 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Feb 16 17:32:53.186131 master-0 kubenswrapper[4652]: I0216 17:32:53.186105 4652 generic.go:334] "Generic (PLEG): container finished" podID="c5a56f02-a904-4a72-9195-50d31662d559" containerID="4409057387defa6a80a7630189849388b9807430b49f3fd5babdb009652049a2" exitCode=0 Feb 16 17:32:53.186189 master-0 kubenswrapper[4652]: I0216 17:32:53.186154 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"c5a56f02-a904-4a72-9195-50d31662d559","Type":"ContainerDied","Data":"4409057387defa6a80a7630189849388b9807430b49f3fd5babdb009652049a2"} Feb 16 17:32:53.201609 master-0 kubenswrapper[4652]: I0216 17:32:53.200613 4652 scope.go:117] "RemoveContainer" containerID="0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b" Feb 16 17:32:53.217426 master-0 kubenswrapper[4652]: I0216 17:32:53.217383 4652 scope.go:117] "RemoveContainer" containerID="c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc" Feb 16 17:32:53.240376 master-0 kubenswrapper[4652]: I0216 17:32:53.240332 4652 scope.go:117] "RemoveContainer" containerID="95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838" Feb 16 17:32:53.257104 master-0 kubenswrapper[4652]: I0216 17:32:53.256995 4652 scope.go:117] "RemoveContainer" containerID="b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676" Feb 16 17:32:53.257580 master-0 kubenswrapper[4652]: E0216 17:32:53.257540 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676\": container with ID starting with b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676 not found: ID does not exist" containerID="b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676" Feb 16 17:32:53.257667 master-0 kubenswrapper[4652]: I0216 17:32:53.257573 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676"} err="failed to get container status \"b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676\": rpc error: code = NotFound desc = could not find container \"b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676\": container with ID starting with b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676 not found: ID does not exist" Feb 16 17:32:53.257667 master-0 kubenswrapper[4652]: I0216 17:32:53.257598 4652 scope.go:117] "RemoveContainer" containerID="0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b" Feb 16 17:32:53.258000 master-0 kubenswrapper[4652]: E0216 17:32:53.257950 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b\": container with ID starting with 0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b not found: ID does not exist" containerID="0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b" Feb 16 17:32:53.258077 master-0 kubenswrapper[4652]: I0216 17:32:53.258007 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b"} err="failed to get container status 
\"0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b\": rpc error: code = NotFound desc = could not find container \"0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b\": container with ID starting with 0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b not found: ID does not exist" Feb 16 17:32:53.258077 master-0 kubenswrapper[4652]: I0216 17:32:53.258042 4652 scope.go:117] "RemoveContainer" containerID="c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc" Feb 16 17:32:53.258481 master-0 kubenswrapper[4652]: E0216 17:32:53.258437 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc\": container with ID starting with c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc not found: ID does not exist" containerID="c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc" Feb 16 17:32:53.258481 master-0 kubenswrapper[4652]: I0216 17:32:53.258463 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc"} err="failed to get container status \"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc\": rpc error: code = NotFound desc = could not find container \"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc\": container with ID starting with c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc not found: ID does not exist" Feb 16 17:32:53.258481 master-0 kubenswrapper[4652]: I0216 17:32:53.258477 4652 scope.go:117] "RemoveContainer" containerID="95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838" Feb 16 17:32:53.258865 master-0 kubenswrapper[4652]: E0216 17:32:53.258833 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838\": container with ID starting with 95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838 not found: ID does not exist" containerID="95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838" Feb 16 17:32:53.258951 master-0 kubenswrapper[4652]: I0216 17:32:53.258870 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838"} err="failed to get container status \"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838\": rpc error: code = NotFound desc = could not find container \"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838\": container with ID starting with 95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838 not found: ID does not exist" Feb 16 17:32:53.258951 master-0 kubenswrapper[4652]: I0216 17:32:53.258899 4652 scope.go:117] "RemoveContainer" containerID="b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676" Feb 16 17:32:53.259223 master-0 kubenswrapper[4652]: I0216 17:32:53.259182 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676"} err="failed to get container status \"b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676\": rpc error: code = NotFound desc = could not find container \"b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676\": container 
with ID starting with b7dacc84648817df343de49745028abb06419dd6fed484ef51d1ff63bc942676 not found: ID does not exist" Feb 16 17:32:53.259223 master-0 kubenswrapper[4652]: I0216 17:32:53.259218 4652 scope.go:117] "RemoveContainer" containerID="0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b" Feb 16 17:32:53.259530 master-0 kubenswrapper[4652]: I0216 17:32:53.259503 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b"} err="failed to get container status \"0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b\": rpc error: code = NotFound desc = could not find container \"0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b\": container with ID starting with 0b777450b82ef32f087a9da43a997829f093f7fa4784d74b7528b198ad989c4b not found: ID does not exist" Feb 16 17:32:53.259530 master-0 kubenswrapper[4652]: I0216 17:32:53.259523 4652 scope.go:117] "RemoveContainer" containerID="c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc" Feb 16 17:32:53.259748 master-0 kubenswrapper[4652]: I0216 17:32:53.259726 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc"} err="failed to get container status \"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc\": rpc error: code = NotFound desc = could not find container \"c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc\": container with ID starting with c75fa4ecf6988458954826cf05f6b61fe6e9ae7d0edc07d67e6f06f41e48bcdc not found: ID does not exist" Feb 16 17:32:53.259748 master-0 kubenswrapper[4652]: I0216 17:32:53.259742 4652 scope.go:117] "RemoveContainer" containerID="95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838" Feb 16 17:32:53.259992 master-0 kubenswrapper[4652]: I0216 17:32:53.259964 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838"} err="failed to get container status \"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838\": rpc error: code = NotFound desc = could not find container \"95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838\": container with ID starting with 95e6dcc1eaac7663dc235705bd5f762414f0d18a15d00f92c6da3b036fb26838 not found: ID does not exist" Feb 16 17:32:54.197163 master-0 kubenswrapper[4652]: I0216 17:32:54.197100 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5f305fd0e0544b7ae949be70a648f4f7","Type":"ContainerStarted","Data":"cbe696b52c701a95154d7c627e87465ef37a24dcef0fa0a27f9537b62cc823b4"} Feb 16 17:32:54.197163 master-0 kubenswrapper[4652]: I0216 17:32:54.197148 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"5f305fd0e0544b7ae949be70a648f4f7","Type":"ContainerStarted","Data":"15bdf4adb469c8e49b844325cb3d5f079a624d6ed75dcf73bfdeb27983d77e0b"} Feb 16 17:32:54.197163 master-0 kubenswrapper[4652]: I0216 17:32:54.197160 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"5f305fd0e0544b7ae949be70a648f4f7","Type":"ContainerStarted","Data":"3d2828c0d80b4c7afd8ffd37f8c260ee8dedebb2fc940c473cd50dec8ae63210"} Feb 16 17:32:54.231377 master-0 kubenswrapper[4652]: I0216 17:32:54.231239 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.231210824 podStartE2EDuration="2.231210824s" podCreationTimestamp="2026-02-16 17:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:32:54.221943156 +0000 UTC m=+531.610111682" watchObservedRunningTime="2026-02-16 17:32:54.231210824 +0000 UTC m=+531.619379380" Feb 16 17:32:54.518152 master-0 kubenswrapper[4652]: I0216 17:32:54.518101 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 16 17:32:54.671755 master-0 kubenswrapper[4652]: I0216 17:32:54.671693 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-kubelet-dir\") pod \"c5a56f02-a904-4a72-9195-50d31662d559\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " Feb 16 17:32:54.671755 master-0 kubenswrapper[4652]: I0216 17:32:54.671765 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-var-lock\") pod \"c5a56f02-a904-4a72-9195-50d31662d559\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " Feb 16 17:32:54.672203 master-0 kubenswrapper[4652]: I0216 17:32:54.671842 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5a56f02-a904-4a72-9195-50d31662d559-kube-api-access\") pod \"c5a56f02-a904-4a72-9195-50d31662d559\" (UID: \"c5a56f02-a904-4a72-9195-50d31662d559\") " Feb 16 17:32:54.672203 master-0 kubenswrapper[4652]: I0216 17:32:54.671843 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c5a56f02-a904-4a72-9195-50d31662d559" (UID: "c5a56f02-a904-4a72-9195-50d31662d559"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:32:54.672203 master-0 kubenswrapper[4652]: I0216 17:32:54.671906 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-var-lock" (OuterVolumeSpecName: "var-lock") pod "c5a56f02-a904-4a72-9195-50d31662d559" (UID: "c5a56f02-a904-4a72-9195-50d31662d559"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:32:54.672203 master-0 kubenswrapper[4652]: I0216 17:32:54.672144 4652 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:32:54.672203 master-0 kubenswrapper[4652]: I0216 17:32:54.672161 4652 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c5a56f02-a904-4a72-9195-50d31662d559-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:32:54.674458 master-0 kubenswrapper[4652]: I0216 17:32:54.674402 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5a56f02-a904-4a72-9195-50d31662d559-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5a56f02-a904-4a72-9195-50d31662d559" (UID: "c5a56f02-a904-4a72-9195-50d31662d559"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:32:54.774073 master-0 kubenswrapper[4652]: I0216 17:32:54.774024 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5a56f02-a904-4a72-9195-50d31662d559-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:32:55.207609 master-0 kubenswrapper[4652]: I0216 17:32:55.207560 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"c5a56f02-a904-4a72-9195-50d31662d559","Type":"ContainerDied","Data":"05c679210958fbf6666ec49f256a62d3c42825e80c988feb41b444c4f6574e21"} Feb 16 17:32:55.207609 master-0 kubenswrapper[4652]: I0216 17:32:55.207612 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05c679210958fbf6666ec49f256a62d3c42825e80c988feb41b444c4f6574e21" Feb 16 17:32:55.208127 master-0 kubenswrapper[4652]: I0216 17:32:55.207581 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Feb 16 17:33:02.778219 master-0 kubenswrapper[4652]: I0216 17:33:02.778153 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:02.778219 master-0 kubenswrapper[4652]: I0216 17:33:02.778217 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:02.779033 master-0 kubenswrapper[4652]: I0216 17:33:02.778264 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:02.779033 master-0 kubenswrapper[4652]: I0216 17:33:02.778279 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:02.782576 master-0 kubenswrapper[4652]: I0216 17:33:02.782550 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:02.782797 master-0 kubenswrapper[4652]: I0216 17:33:02.782767 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:03.004188 master-0 kubenswrapper[4652]: E0216 17:33:03.004096 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f\": container with ID starting with 67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f not found: ID does not exist" containerID="67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f" Feb 16 17:33:03.004353 master-0 kubenswrapper[4652]: I0216 17:33:03.004205 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f" err="rpc error: code = NotFound desc = could not find container \"67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f\": container with ID starting with 67636bc611814bbf34e6bb9093e3c3fce5ce2b828a2dd05d2b7fdd2dd015348f not found: ID does not exist" Feb 16 17:33:03.004831 master-0 kubenswrapper[4652]: E0216 17:33:03.004786 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4\": container with ID starting with aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4 not found: ID does not exist" containerID="aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4" Feb 16 17:33:03.004893 master-0 kubenswrapper[4652]: I0216 17:33:03.004850 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4" err="rpc error: code = NotFound desc = could not find container \"aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4\": container with ID starting with aab010443e5953fae2765f398f189cc7072cddd5f5db6fb4755e40a70cbb95c4 not found: ID does not exist" Feb 16 17:33:03.005565 master-0 kubenswrapper[4652]: E0216 17:33:03.005527 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123\": container with ID starting with 62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123 not found: ID does not exist" containerID="62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123" Feb 16 17:33:03.005628 master-0 kubenswrapper[4652]: I0216 17:33:03.005576 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123" err="rpc error: code = NotFound desc = could not find container \"62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123\": container with ID starting with 62478ab5c23f3ea9a0e6eac2c1335867a9ed27280579a87a6023cc6f8b882123 not found: ID does not exist" Feb 16 17:33:03.006080 master-0 kubenswrapper[4652]: E0216 17:33:03.006028 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d\": container with ID starting with 64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d not found: ID does not exist" containerID="64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d" Feb 16 17:33:03.006160 master-0 kubenswrapper[4652]: I0216 17:33:03.006097 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d" err="rpc error: code = NotFound desc = could not find container \"64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d\": container with ID starting with 64282a4ce180de12ca5dda82666544a85bcd78785d5b9841fd753f40e066bf7d not found: ID does not exist" Feb 16 17:33:03.006645 master-0 kubenswrapper[4652]: E0216 17:33:03.006606 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e\": container with ID starting with d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e not found: ID does not exist" containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" Feb 16 17:33:03.006719 master-0 kubenswrapper[4652]: I0216 17:33:03.006654 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e" err="rpc error: code = NotFound desc = could not find container \"d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e\": container with ID starting with d54435268699b5dc02b3724fd4ebc95d522940a0665dadaaf5801a043f6d163e not found: ID does not exist" Feb 16 17:33:03.268351 master-0 kubenswrapper[4652]: I0216 17:33:03.268299 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:03.269338 master-0 kubenswrapper[4652]: I0216 17:33:03.269302 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:17.246334 master-0 kubenswrapper[4652]: I0216 17:33:17.246216 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 16 17:33:17.247177 master-0 kubenswrapper[4652]: E0216 17:33:17.246537 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5a56f02-a904-4a72-9195-50d31662d559" containerName="installer" Feb 16 
17:33:17.247177 master-0 kubenswrapper[4652]: I0216 17:33:17.246551 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5a56f02-a904-4a72-9195-50d31662d559" containerName="installer" Feb 16 17:33:17.247177 master-0 kubenswrapper[4652]: I0216 17:33:17.246713 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5a56f02-a904-4a72-9195-50d31662d559" containerName="installer" Feb 16 17:33:17.247375 master-0 kubenswrapper[4652]: I0216 17:33:17.247191 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.250699 master-0 kubenswrapper[4652]: I0216 17:33:17.249397 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Feb 16 17:33:17.250998 master-0 kubenswrapper[4652]: I0216 17:33:17.250877 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-cwb2w" Feb 16 17:33:17.295482 master-0 kubenswrapper[4652]: I0216 17:33:17.261546 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 16 17:33:17.329496 master-0 kubenswrapper[4652]: I0216 17:33:17.329435 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d4b0be-15d7-48b3-96ea-975059f378a3-kube-api-access\") pod \"installer-5-master-0\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.329496 master-0 kubenswrapper[4652]: I0216 17:33:17.329502 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.329749 master-0 kubenswrapper[4652]: I0216 17:33:17.329648 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-var-lock\") pod \"installer-5-master-0\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.430984 master-0 kubenswrapper[4652]: I0216 17:33:17.430901 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.431215 master-0 kubenswrapper[4652]: I0216 17:33:17.431026 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-var-lock\") pod \"installer-5-master-0\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.431215 master-0 kubenswrapper[4652]: I0216 17:33:17.431058 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d4b0be-15d7-48b3-96ea-975059f378a3-kube-api-access\") pod \"installer-5-master-0\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " 
pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.431215 master-0 kubenswrapper[4652]: I0216 17:33:17.431178 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.431666 master-0 kubenswrapper[4652]: I0216 17:33:17.431625 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-var-lock\") pod \"installer-5-master-0\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.446715 master-0 kubenswrapper[4652]: I0216 17:33:17.446656 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d4b0be-15d7-48b3-96ea-975059f378a3-kube-api-access\") pod \"installer-5-master-0\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:17.625157 master-0 kubenswrapper[4652]: I0216 17:33:17.624982 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:18.044377 master-0 kubenswrapper[4652]: I0216 17:33:18.043490 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Feb 16 17:33:18.044631 master-0 kubenswrapper[4652]: W0216 17:33:18.044534 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd1d4b0be_15d7_48b3_96ea_975059f378a3.slice/crio-9dba8354ebabf152921d8f0e1c9129df07b06261ec76279b1a1249cda6c18e2d WatchSource:0}: Error finding container 9dba8354ebabf152921d8f0e1c9129df07b06261ec76279b1a1249cda6c18e2d: Status 404 returned error can't find the container with id 9dba8354ebabf152921d8f0e1c9129df07b06261ec76279b1a1249cda6c18e2d Feb 16 17:33:18.362605 master-0 kubenswrapper[4652]: I0216 17:33:18.362551 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"d1d4b0be-15d7-48b3-96ea-975059f378a3","Type":"ContainerStarted","Data":"9dba8354ebabf152921d8f0e1c9129df07b06261ec76279b1a1249cda6c18e2d"} Feb 16 17:33:19.370346 master-0 kubenswrapper[4652]: I0216 17:33:19.370280 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"d1d4b0be-15d7-48b3-96ea-975059f378a3","Type":"ContainerStarted","Data":"2f0840a98335258c62bb457090feca8692c70f33ea42f75b49b4e3b5465d992a"} Feb 16 17:33:19.391454 master-0 kubenswrapper[4652]: I0216 17:33:19.391344 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.391324016 podStartE2EDuration="2.391324016s" podCreationTimestamp="2026-02-16 17:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:33:19.388735046 +0000 UTC m=+556.776903582" watchObservedRunningTime="2026-02-16 17:33:19.391324016 +0000 UTC m=+556.779492532" Feb 16 17:33:23.999703 master-0 kubenswrapper[4652]: I0216 17:33:23.999619 4652 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 16 17:33:24.000946 master-0 kubenswrapper[4652]: I0216 17:33:24.000878 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.003160 master-0 kubenswrapper[4652]: I0216 17:33:24.003106 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 16 17:33:24.003532 master-0 kubenswrapper[4652]: I0216 17:33:24.003494 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-qlqr4" Feb 16 17:33:24.008828 master-0 kubenswrapper[4652]: I0216 17:33:24.008772 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 16 17:33:24.025676 master-0 kubenswrapper[4652]: I0216 17:33:24.025602 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-var-lock\") pod \"installer-4-master-0\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.025904 master-0 kubenswrapper[4652]: I0216 17:33:24.025769 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1de40371-9346-429a-a5c7-48822e50c3c9-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.025966 master-0 kubenswrapper[4652]: I0216 17:33:24.025952 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.127150 master-0 kubenswrapper[4652]: I0216 17:33:24.127077 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.127415 master-0 kubenswrapper[4652]: I0216 17:33:24.127194 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.127415 master-0 kubenswrapper[4652]: I0216 17:33:24.127199 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-var-lock\") pod \"installer-4-master-0\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.127415 master-0 kubenswrapper[4652]: I0216 17:33:24.127273 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-var-lock\") pod \"installer-4-master-0\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.127415 master-0 kubenswrapper[4652]: I0216 17:33:24.127341 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1de40371-9346-429a-a5c7-48822e50c3c9-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.143900 master-0 kubenswrapper[4652]: I0216 17:33:24.143441 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1de40371-9346-429a-a5c7-48822e50c3c9-kube-api-access\") pod \"installer-4-master-0\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.323156 master-0 kubenswrapper[4652]: I0216 17:33:24.323003 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:33:24.707978 master-0 kubenswrapper[4652]: I0216 17:33:24.707924 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Feb 16 17:33:25.418379 master-0 kubenswrapper[4652]: I0216 17:33:25.418233 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"1de40371-9346-429a-a5c7-48822e50c3c9","Type":"ContainerStarted","Data":"92dcb151012081bce137f302f69b010f095fa051dc9b49cb6c2d5c69d6692935"} Feb 16 17:33:25.418379 master-0 kubenswrapper[4652]: I0216 17:33:25.418299 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"1de40371-9346-429a-a5c7-48822e50c3c9","Type":"ContainerStarted","Data":"f1fbde4fae97d1f48771d64d56da072140f137ff0409b1bf3bce89125bc122b2"} Feb 16 17:33:25.435321 master-0 kubenswrapper[4652]: I0216 17:33:25.435211 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=2.435192409 podStartE2EDuration="2.435192409s" podCreationTimestamp="2026-02-16 17:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:33:25.43223005 +0000 UTC m=+562.820398586" watchObservedRunningTime="2026-02-16 17:33:25.435192409 +0000 UTC m=+562.823360935" Feb 16 17:33:32.471977 master-0 kubenswrapper[4652]: I0216 17:33:32.471913 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 16 17:33:32.472877 master-0 kubenswrapper[4652]: I0216 17:33:32.472781 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.474500 master-0 kubenswrapper[4652]: I0216 17:33:32.474453 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-z9qtm" Feb 16 17:33:32.474805 master-0 kubenswrapper[4652]: I0216 17:33:32.474762 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 17:33:32.485854 master-0 kubenswrapper[4652]: I0216 17:33:32.485816 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 16 17:33:32.644006 master-0 kubenswrapper[4652]: I0216 17:33:32.643934 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-var-lock\") pod \"installer-6-master-0\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.644006 master-0 kubenswrapper[4652]: I0216 17:33:32.643993 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.644279 master-0 kubenswrapper[4652]: I0216 17:33:32.644176 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71e06521-316a-4ca0-9e59-b0f196db417e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.745056 master-0 kubenswrapper[4652]: I0216 17:33:32.744869 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-var-lock\") pod \"installer-6-master-0\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.745056 master-0 kubenswrapper[4652]: I0216 17:33:32.744948 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-var-lock\") pod \"installer-6-master-0\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.745056 master-0 kubenswrapper[4652]: I0216 17:33:32.744988 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.745056 master-0 kubenswrapper[4652]: I0216 17:33:32.745033 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71e06521-316a-4ca0-9e59-b0f196db417e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.745544 master-0 kubenswrapper[4652]: I0216 17:33:32.745148 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.764166 master-0 kubenswrapper[4652]: I0216 17:33:32.764095 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71e06521-316a-4ca0-9e59-b0f196db417e-kube-api-access\") pod \"installer-6-master-0\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") " pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:32.798550 master-0 kubenswrapper[4652]: I0216 17:33:32.798471 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:33:33.196144 master-0 kubenswrapper[4652]: I0216 17:33:33.196094 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-6-master-0"] Feb 16 17:33:33.201096 master-0 kubenswrapper[4652]: W0216 17:33:33.201039 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod71e06521_316a_4ca0_9e59_b0f196db417e.slice/crio-aa296c3c9cf3461ccee98dd54df3eeee332318951c91e4ca537e877e1d4ceb6a WatchSource:0}: Error finding container aa296c3c9cf3461ccee98dd54df3eeee332318951c91e4ca537e877e1d4ceb6a: Status 404 returned error can't find the container with id aa296c3c9cf3461ccee98dd54df3eeee332318951c91e4ca537e877e1d4ceb6a Feb 16 17:33:33.469195 master-0 kubenswrapper[4652]: I0216 17:33:33.469055 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"71e06521-316a-4ca0-9e59-b0f196db417e","Type":"ContainerStarted","Data":"aa296c3c9cf3461ccee98dd54df3eeee332318951c91e4ca537e877e1d4ceb6a"} Feb 16 17:33:34.478535 master-0 kubenswrapper[4652]: I0216 17:33:34.478472 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"71e06521-316a-4ca0-9e59-b0f196db417e","Type":"ContainerStarted","Data":"c2a3a5285a191c4f671ab0213ed3b1f5a76bad29f5e3ffec713950b4c9c60599"} Feb 16 17:33:34.503980 master-0 kubenswrapper[4652]: I0216 17:33:34.503884 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-6-master-0" podStartSLOduration=2.503853432 podStartE2EDuration="2.503853432s" podCreationTimestamp="2026-02-16 17:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:33:34.499362462 +0000 UTC m=+571.887530978" watchObservedRunningTime="2026-02-16 17:33:34.503853432 +0000 UTC m=+571.892021948" Feb 16 17:33:36.493580 master-0 kubenswrapper[4652]: I0216 17:33:36.493519 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/4.log" Feb 16 17:33:36.494571 master-0 kubenswrapper[4652]: I0216 17:33:36.494529 4652 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="208e46a3c641476f9960cdd4e77a82fbdec0a87c2f2f91e56dfe5eb0ed0268f8" exitCode=1 Feb 16 17:33:36.494713 master-0 kubenswrapper[4652]: I0216 17:33:36.494574 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerDied","Data":"208e46a3c641476f9960cdd4e77a82fbdec0a87c2f2f91e56dfe5eb0ed0268f8"} Feb 16 17:33:36.495148 master-0 kubenswrapper[4652]: I0216 17:33:36.495113 4652 scope.go:117] "RemoveContainer" containerID="208e46a3c641476f9960cdd4e77a82fbdec0a87c2f2f91e56dfe5eb0ed0268f8" Feb 16 17:33:37.504213 master-0 kubenswrapper[4652]: I0216 17:33:37.504134 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/4.log" Feb 16 17:33:37.505092 master-0 kubenswrapper[4652]: I0216 17:33:37.505048 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"b8fa563c7331931f00ce0006e522f0f1","Type":"ContainerStarted","Data":"edd4c5d0e652b5757bdccf907fa4067f8e355e19438b3b15ad493b63e9b63bb2"} Feb 16 17:33:49.422953 master-0 kubenswrapper[4652]: I0216 17:33:49.422883 4652 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.423286 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-recovery-controller" containerID="cri-o://df0e3c6ca3dd8af42c8e9ac9cdc311a5a319df0fa4ca786ea177a90a6aefea49" gracePeriod=30 Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.423298 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" containerID="cri-o://edd4c5d0e652b5757bdccf907fa4067f8e355e19438b3b15ad493b63e9b63bb2" gracePeriod=30 Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.423223 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" containerID="cri-o://94ea5b6007080bd428cd7b6fe066cc75a9a70841a978304207907aa746d9ac27" gracePeriod=30 Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.424774 4652 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: E0216 17:33:49.425432 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="wait-for-host-port" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.425457 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="wait-for-host-port" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: E0216 17:33:49.425475 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.425487 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: E0216 17:33:49.425519 4652 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.425535 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: E0216 17:33:49.425555 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.425566 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: E0216 17:33:49.425584 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-recovery-controller" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.425596 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-recovery-controller" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.425880 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-recovery-controller" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.425902 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.425922 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler-cert-syncer" Feb 16 17:33:49.426299 master-0 kubenswrapper[4652]: I0216 17:33:49.425953 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8fa563c7331931f00ce0006e522f0f1" containerName="kube-scheduler" Feb 16 17:33:49.575432 master-0 kubenswrapper[4652]: I0216 17:33:49.575370 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/5.log" Feb 16 17:33:49.576487 master-0 kubenswrapper[4652]: I0216 17:33:49.576457 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/4.log" Feb 16 17:33:49.577908 master-0 kubenswrapper[4652]: I0216 17:33:49.577864 4652 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="edd4c5d0e652b5757bdccf907fa4067f8e355e19438b3b15ad493b63e9b63bb2" exitCode=2 Feb 16 17:33:49.577908 master-0 kubenswrapper[4652]: I0216 17:33:49.577890 4652 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="df0e3c6ca3dd8af42c8e9ac9cdc311a5a319df0fa4ca786ea177a90a6aefea49" exitCode=0 Feb 16 17:33:49.577908 master-0 kubenswrapper[4652]: I0216 17:33:49.577898 4652 generic.go:334] "Generic (PLEG): container finished" podID="b8fa563c7331931f00ce0006e522f0f1" containerID="94ea5b6007080bd428cd7b6fe066cc75a9a70841a978304207907aa746d9ac27" exitCode=0 Feb 16 17:33:49.578113 master-0 kubenswrapper[4652]: I0216 17:33:49.577929 4652 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="60ad9673b9da87a543bba4e5a24b9c3c17606af8ac65c311daabcc313339be82" Feb 16 17:33:49.578113 master-0 kubenswrapper[4652]: I0216 17:33:49.577944 4652 scope.go:117] "RemoveContainer" containerID="208e46a3c641476f9960cdd4e77a82fbdec0a87c2f2f91e56dfe5eb0ed0268f8" Feb 16 17:33:49.605355 master-0 kubenswrapper[4652]: I0216 17:33:49.605223 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:33:49.606209 master-0 kubenswrapper[4652]: I0216 17:33:49.606161 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:33:49.699354 master-0 kubenswrapper[4652]: I0216 17:33:49.699226 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/5.log" Feb 16 17:33:49.700940 master-0 kubenswrapper[4652]: I0216 17:33:49.700905 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:33:49.703910 master-0 kubenswrapper[4652]: I0216 17:33:49.703865 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="b8fa563c7331931f00ce0006e522f0f1" podUID="952766c3a88fd12345a552f1277199f9" Feb 16 17:33:49.707662 master-0 kubenswrapper[4652]: I0216 17:33:49.707622 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:33:49.707752 master-0 kubenswrapper[4652]: I0216 17:33:49.707714 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:33:49.707810 master-0 kubenswrapper[4652]: I0216 17:33:49.707772 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:33:49.707867 master-0 kubenswrapper[4652]: I0216 17:33:49.707852 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/952766c3a88fd12345a552f1277199f9-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"952766c3a88fd12345a552f1277199f9\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:33:49.809094 master-0 kubenswrapper[4652]: I0216 17:33:49.809014 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") pod \"b8fa563c7331931f00ce0006e522f0f1\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " Feb 16 17:33:49.809094 master-0 kubenswrapper[4652]: I0216 17:33:49.809085 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") pod \"b8fa563c7331931f00ce0006e522f0f1\" (UID: \"b8fa563c7331931f00ce0006e522f0f1\") " Feb 16 17:33:49.809442 master-0 kubenswrapper[4652]: I0216 17:33:49.809208 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "b8fa563c7331931f00ce0006e522f0f1" (UID: "b8fa563c7331931f00ce0006e522f0f1"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:33:49.809442 master-0 kubenswrapper[4652]: I0216 17:33:49.809352 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b8fa563c7331931f00ce0006e522f0f1" (UID: "b8fa563c7331931f00ce0006e522f0f1"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:33:49.809570 master-0 kubenswrapper[4652]: I0216 17:33:49.809555 4652 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:33:49.809618 master-0 kubenswrapper[4652]: I0216 17:33:49.809571 4652 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b8fa563c7331931f00ce0006e522f0f1-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:33:50.587233 master-0 kubenswrapper[4652]: I0216 17:33:50.587147 4652 generic.go:334] "Generic (PLEG): container finished" podID="d1d4b0be-15d7-48b3-96ea-975059f378a3" containerID="2f0840a98335258c62bb457090feca8692c70f33ea42f75b49b4e3b5465d992a" exitCode=0 Feb 16 17:33:50.587233 master-0 kubenswrapper[4652]: I0216 17:33:50.587223 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"d1d4b0be-15d7-48b3-96ea-975059f378a3","Type":"ContainerDied","Data":"2f0840a98335258c62bb457090feca8692c70f33ea42f75b49b4e3b5465d992a"} Feb 16 17:33:50.589479 master-0 kubenswrapper[4652]: I0216 17:33:50.589422 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_b8fa563c7331931f00ce0006e522f0f1/kube-scheduler-cert-syncer/5.log" Feb 16 17:33:50.590876 master-0 kubenswrapper[4652]: I0216 17:33:50.590834 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:33:50.612688 master-0 kubenswrapper[4652]: I0216 17:33:50.611207 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="b8fa563c7331931f00ce0006e522f0f1" podUID="952766c3a88fd12345a552f1277199f9" Feb 16 17:33:50.619864 master-0 kubenswrapper[4652]: I0216 17:33:50.619808 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="b8fa563c7331931f00ce0006e522f0f1" podUID="952766c3a88fd12345a552f1277199f9" Feb 16 17:33:50.755461 master-0 kubenswrapper[4652]: I0216 17:33:50.755390 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8fa563c7331931f00ce0006e522f0f1" path="/var/lib/kubelet/pods/b8fa563c7331931f00ce0006e522f0f1/volumes" Feb 16 17:33:51.883469 master-0 kubenswrapper[4652]: I0216 17:33:51.883404 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:52.045943 master-0 kubenswrapper[4652]: I0216 17:33:52.045655 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-kubelet-dir\") pod \"d1d4b0be-15d7-48b3-96ea-975059f378a3\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " Feb 16 17:33:52.045943 master-0 kubenswrapper[4652]: I0216 17:33:52.045710 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-var-lock\") pod \"d1d4b0be-15d7-48b3-96ea-975059f378a3\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " Feb 16 17:33:52.045943 master-0 kubenswrapper[4652]: I0216 17:33:52.045740 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d4b0be-15d7-48b3-96ea-975059f378a3-kube-api-access\") pod \"d1d4b0be-15d7-48b3-96ea-975059f378a3\" (UID: \"d1d4b0be-15d7-48b3-96ea-975059f378a3\") " Feb 16 17:33:52.046564 master-0 kubenswrapper[4652]: I0216 17:33:52.046490 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-var-lock" (OuterVolumeSpecName: "var-lock") pod "d1d4b0be-15d7-48b3-96ea-975059f378a3" (UID: "d1d4b0be-15d7-48b3-96ea-975059f378a3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:33:52.046732 master-0 kubenswrapper[4652]: I0216 17:33:52.046708 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d1d4b0be-15d7-48b3-96ea-975059f378a3" (UID: "d1d4b0be-15d7-48b3-96ea-975059f378a3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:33:52.048440 master-0 kubenswrapper[4652]: I0216 17:33:52.048404 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1d4b0be-15d7-48b3-96ea-975059f378a3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d1d4b0be-15d7-48b3-96ea-975059f378a3" (UID: "d1d4b0be-15d7-48b3-96ea-975059f378a3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:33:52.148041 master-0 kubenswrapper[4652]: I0216 17:33:52.147943 4652 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:33:52.148296 master-0 kubenswrapper[4652]: I0216 17:33:52.148282 4652 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/d1d4b0be-15d7-48b3-96ea-975059f378a3-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:33:52.148382 master-0 kubenswrapper[4652]: I0216 17:33:52.148370 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d1d4b0be-15d7-48b3-96ea-975059f378a3-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:33:52.604788 master-0 kubenswrapper[4652]: I0216 17:33:52.604727 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"d1d4b0be-15d7-48b3-96ea-975059f378a3","Type":"ContainerDied","Data":"9dba8354ebabf152921d8f0e1c9129df07b06261ec76279b1a1249cda6c18e2d"} Feb 16 17:33:52.604788 master-0 kubenswrapper[4652]: I0216 17:33:52.604773 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dba8354ebabf152921d8f0e1c9129df07b06261ec76279b1a1249cda6c18e2d" Feb 16 17:33:52.605059 master-0 kubenswrapper[4652]: I0216 17:33:52.604851 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Feb 16 17:33:57.718076 master-0 kubenswrapper[4652]: I0216 17:33:57.717999 4652 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.718382 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager" containerID="cri-o://a48e44f48198e4db49af49e3cef0fd62f710895db4941d8940f6bbcfc9115369" gracePeriod=30 Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.718475 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="cluster-policy-controller" containerID="cri-o://3d2828c0d80b4c7afd8ffd37f8c260ee8dedebb2fc940c473cd50dec8ae63210" gracePeriod=30 Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.718487 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://15bdf4adb469c8e49b844325cb3d5f079a624d6ed75dcf73bfdeb27983d77e0b" gracePeriod=30 Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.718513 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://cbe696b52c701a95154d7c627e87465ef37a24dcef0fa0a27f9537b62cc823b4" gracePeriod=30 Feb 16 17:33:57.721483 
master-0 kubenswrapper[4652]: I0216 17:33:57.720132 4652 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: E0216 17:33:57.720587 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="cluster-policy-controller" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.720610 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="cluster-policy-controller" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: E0216 17:33:57.720631 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1d4b0be-15d7-48b3-96ea-975059f378a3" containerName="installer" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.720643 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1d4b0be-15d7-48b3-96ea-975059f378a3" containerName="installer" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: E0216 17:33:57.720666 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.720677 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: E0216 17:33:57.720700 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager-recovery-controller" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.720713 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager-recovery-controller" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: E0216 17:33:57.720733 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager-cert-syncer" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.720743 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager-cert-syncer" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.720932 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.720959 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager-recovery-controller" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.720979 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="kube-controller-manager-cert-syncer" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.720995 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1d4b0be-15d7-48b3-96ea-975059f378a3" containerName="installer" Feb 16 17:33:57.721483 master-0 kubenswrapper[4652]: I0216 17:33:57.721015 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f305fd0e0544b7ae949be70a648f4f7" containerName="cluster-policy-controller" Feb 16 17:33:57.830906 master-0 kubenswrapper[4652]: I0216 
17:33:57.830793 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/31a8353b1ad9c25fa07fedd5b1af1bb1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"31a8353b1ad9c25fa07fedd5b1af1bb1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:57.831329 master-0 kubenswrapper[4652]: I0216 17:33:57.830954 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/31a8353b1ad9c25fa07fedd5b1af1bb1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"31a8353b1ad9c25fa07fedd5b1af1bb1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:57.913671 master-0 kubenswrapper[4652]: I0216 17:33:57.913599 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5f305fd0e0544b7ae949be70a648f4f7/kube-controller-manager-cert-syncer/0.log" Feb 16 17:33:57.914703 master-0 kubenswrapper[4652]: I0216 17:33:57.914663 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:57.920040 master-0 kubenswrapper[4652]: I0216 17:33:57.919984 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="5f305fd0e0544b7ae949be70a648f4f7" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" Feb 16 17:33:57.934822 master-0 kubenswrapper[4652]: I0216 17:33:57.934757 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/31a8353b1ad9c25fa07fedd5b1af1bb1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"31a8353b1ad9c25fa07fedd5b1af1bb1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:57.934965 master-0 kubenswrapper[4652]: I0216 17:33:57.934849 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/31a8353b1ad9c25fa07fedd5b1af1bb1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"31a8353b1ad9c25fa07fedd5b1af1bb1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:57.935021 master-0 kubenswrapper[4652]: I0216 17:33:57.934974 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/31a8353b1ad9c25fa07fedd5b1af1bb1-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"31a8353b1ad9c25fa07fedd5b1af1bb1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:57.935059 master-0 kubenswrapper[4652]: I0216 17:33:57.935027 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/31a8353b1ad9c25fa07fedd5b1af1bb1-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"31a8353b1ad9c25fa07fedd5b1af1bb1\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:58.035857 master-0 kubenswrapper[4652]: I0216 17:33:58.035704 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-cert-dir\") pod \"5f305fd0e0544b7ae949be70a648f4f7\" (UID: \"5f305fd0e0544b7ae949be70a648f4f7\") " Feb 16 17:33:58.035857 master-0 kubenswrapper[4652]: I0216 17:33:58.035805 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "5f305fd0e0544b7ae949be70a648f4f7" (UID: "5f305fd0e0544b7ae949be70a648f4f7"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:33:58.035857 master-0 kubenswrapper[4652]: I0216 17:33:58.035837 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-resource-dir\") pod \"5f305fd0e0544b7ae949be70a648f4f7\" (UID: \"5f305fd0e0544b7ae949be70a648f4f7\") " Feb 16 17:33:58.036123 master-0 kubenswrapper[4652]: I0216 17:33:58.035911 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "5f305fd0e0544b7ae949be70a648f4f7" (UID: "5f305fd0e0544b7ae949be70a648f4f7"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:33:58.036607 master-0 kubenswrapper[4652]: I0216 17:33:58.036550 4652 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:33:58.036607 master-0 kubenswrapper[4652]: I0216 17:33:58.036571 4652 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/5f305fd0e0544b7ae949be70a648f4f7-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:33:58.648233 master-0 kubenswrapper[4652]: I0216 17:33:58.648169 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_5f305fd0e0544b7ae949be70a648f4f7/kube-controller-manager-cert-syncer/0.log" Feb 16 17:33:58.648947 master-0 kubenswrapper[4652]: I0216 17:33:58.648895 4652 generic.go:334] "Generic (PLEG): container finished" podID="5f305fd0e0544b7ae949be70a648f4f7" containerID="cbe696b52c701a95154d7c627e87465ef37a24dcef0fa0a27f9537b62cc823b4" exitCode=0 Feb 16 17:33:58.648947 master-0 kubenswrapper[4652]: I0216 17:33:58.648937 4652 generic.go:334] "Generic (PLEG): container finished" podID="5f305fd0e0544b7ae949be70a648f4f7" containerID="15bdf4adb469c8e49b844325cb3d5f079a624d6ed75dcf73bfdeb27983d77e0b" exitCode=2 Feb 16 17:33:58.649064 master-0 kubenswrapper[4652]: I0216 17:33:58.648952 4652 generic.go:334] "Generic (PLEG): container finished" podID="5f305fd0e0544b7ae949be70a648f4f7" containerID="3d2828c0d80b4c7afd8ffd37f8c260ee8dedebb2fc940c473cd50dec8ae63210" exitCode=0 Feb 16 17:33:58.649064 master-0 kubenswrapper[4652]: I0216 17:33:58.648961 4652 generic.go:334] "Generic (PLEG): container finished" podID="5f305fd0e0544b7ae949be70a648f4f7" containerID="a48e44f48198e4db49af49e3cef0fd62f710895db4941d8940f6bbcfc9115369" exitCode=0 Feb 16 17:33:58.649064 master-0 kubenswrapper[4652]: I0216 17:33:58.648995 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e179b8e1acf5b0e85eed15165278090e8d9a8bef53967462213684d876e96ca" Feb 16 17:33:58.649575 master-0 kubenswrapper[4652]: I0216 
17:33:58.649527 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:33:58.651282 master-0 kubenswrapper[4652]: I0216 17:33:58.650588 4652 generic.go:334] "Generic (PLEG): container finished" podID="1de40371-9346-429a-a5c7-48822e50c3c9" containerID="92dcb151012081bce137f302f69b010f095fa051dc9b49cb6c2d5c69d6692935" exitCode=0 Feb 16 17:33:58.651521 master-0 kubenswrapper[4652]: I0216 17:33:58.650653 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"1de40371-9346-429a-a5c7-48822e50c3c9","Type":"ContainerDied","Data":"92dcb151012081bce137f302f69b010f095fa051dc9b49cb6c2d5c69d6692935"} Feb 16 17:33:58.653512 master-0 kubenswrapper[4652]: I0216 17:33:58.653472 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="5f305fd0e0544b7ae949be70a648f4f7" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" Feb 16 17:33:58.673351 master-0 kubenswrapper[4652]: I0216 17:33:58.673277 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="5f305fd0e0544b7ae949be70a648f4f7" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" Feb 16 17:33:58.753317 master-0 kubenswrapper[4652]: I0216 17:33:58.753225 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f305fd0e0544b7ae949be70a648f4f7" path="/var/lib/kubelet/pods/5f305fd0e0544b7ae949be70a648f4f7/volumes" Feb 16 17:33:59.984632 master-0 kubenswrapper[4652]: I0216 17:33:59.984565 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:34:00.169557 master-0 kubenswrapper[4652]: I0216 17:34:00.169453 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-var-lock\") pod \"1de40371-9346-429a-a5c7-48822e50c3c9\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " Feb 16 17:34:00.169557 master-0 kubenswrapper[4652]: I0216 17:34:00.169552 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-kubelet-dir\") pod \"1de40371-9346-429a-a5c7-48822e50c3c9\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " Feb 16 17:34:00.170022 master-0 kubenswrapper[4652]: I0216 17:34:00.169592 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-var-lock" (OuterVolumeSpecName: "var-lock") pod "1de40371-9346-429a-a5c7-48822e50c3c9" (UID: "1de40371-9346-429a-a5c7-48822e50c3c9"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:34:00.170022 master-0 kubenswrapper[4652]: I0216 17:34:00.169616 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1de40371-9346-429a-a5c7-48822e50c3c9-kube-api-access\") pod \"1de40371-9346-429a-a5c7-48822e50c3c9\" (UID: \"1de40371-9346-429a-a5c7-48822e50c3c9\") " Feb 16 17:34:00.170022 master-0 kubenswrapper[4652]: I0216 17:34:00.169614 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1de40371-9346-429a-a5c7-48822e50c3c9" (UID: "1de40371-9346-429a-a5c7-48822e50c3c9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:34:00.170022 master-0 kubenswrapper[4652]: I0216 17:34:00.170021 4652 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:34:00.170421 master-0 kubenswrapper[4652]: I0216 17:34:00.170039 4652 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1de40371-9346-429a-a5c7-48822e50c3c9-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:34:00.173471 master-0 kubenswrapper[4652]: I0216 17:34:00.173430 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de40371-9346-429a-a5c7-48822e50c3c9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1de40371-9346-429a-a5c7-48822e50c3c9" (UID: "1de40371-9346-429a-a5c7-48822e50c3c9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:34:00.272626 master-0 kubenswrapper[4652]: I0216 17:34:00.272554 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1de40371-9346-429a-a5c7-48822e50c3c9-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:34:00.668127 master-0 kubenswrapper[4652]: I0216 17:34:00.667976 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"1de40371-9346-429a-a5c7-48822e50c3c9","Type":"ContainerDied","Data":"f1fbde4fae97d1f48771d64d56da072140f137ff0409b1bf3bce89125bc122b2"} Feb 16 17:34:00.668127 master-0 kubenswrapper[4652]: I0216 17:34:00.668055 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1fbde4fae97d1f48771d64d56da072140f137ff0409b1bf3bce89125bc122b2" Feb 16 17:34:00.668127 master-0 kubenswrapper[4652]: I0216 17:34:00.668079 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Feb 16 17:34:01.744835 master-0 kubenswrapper[4652]: I0216 17:34:01.744726 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:34:01.763824 master-0 kubenswrapper[4652]: I0216 17:34:01.763766 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="676e24eb-bc42-4b39-8762-94da3ed718e7" Feb 16 17:34:01.763824 master-0 kubenswrapper[4652]: I0216 17:34:01.763810 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="676e24eb-bc42-4b39-8762-94da3ed718e7" Feb 16 17:34:01.785819 master-0 kubenswrapper[4652]: I0216 17:34:01.785746 4652 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:34:01.793964 master-0 kubenswrapper[4652]: I0216 17:34:01.793889 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 17:34:01.802424 master-0 kubenswrapper[4652]: I0216 17:34:01.802365 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:34:01.806068 master-0 kubenswrapper[4652]: I0216 17:34:01.806045 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 17:34:01.811759 master-0 kubenswrapper[4652]: I0216 17:34:01.811709 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Feb 16 17:34:01.836901 master-0 kubenswrapper[4652]: W0216 17:34:01.836820 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod952766c3a88fd12345a552f1277199f9.slice/crio-e4f805877858f95d3787c2812d20965bc46440066822e446703a6a678725b69d WatchSource:0}: Error finding container e4f805877858f95d3787c2812d20965bc46440066822e446703a6a678725b69d: Status 404 returned error can't find the container with id e4f805877858f95d3787c2812d20965bc46440066822e446703a6a678725b69d Feb 16 17:34:02.684899 master-0 kubenswrapper[4652]: I0216 17:34:02.684834 4652 generic.go:334] "Generic (PLEG): container finished" podID="952766c3a88fd12345a552f1277199f9" containerID="39ef923be9ea23648cefc75f32d4ff9051fdcb46b82cb0bcc4010ef1384b291c" exitCode=0 Feb 16 17:34:02.684899 master-0 kubenswrapper[4652]: I0216 17:34:02.684899 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerDied","Data":"39ef923be9ea23648cefc75f32d4ff9051fdcb46b82cb0bcc4010ef1384b291c"} Feb 16 17:34:02.685368 master-0 kubenswrapper[4652]: I0216 17:34:02.684938 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"e4f805877858f95d3787c2812d20965bc46440066822e446703a6a678725b69d"} Feb 16 17:34:03.017433 master-0 kubenswrapper[4652]: I0216 17:34:03.017387 4652 scope.go:117] "RemoveContainer" containerID="df0e3c6ca3dd8af42c8e9ac9cdc311a5a319df0fa4ca786ea177a90a6aefea49" Feb 16 17:34:03.050771 master-0 kubenswrapper[4652]: I0216 17:34:03.050732 4652 scope.go:117] "RemoveContainer" containerID="94ea5b6007080bd428cd7b6fe066cc75a9a70841a978304207907aa746d9ac27" Feb 16 17:34:03.082315 master-0 kubenswrapper[4652]: I0216 17:34:03.081887 4652 scope.go:117] "RemoveContainer" 
containerID="8418547cd53261f1b77929899a0ab7c7d55cf1c91b349c65456cae4040067db4" Feb 16 17:34:03.116664 master-0 kubenswrapper[4652]: E0216 17:34:03.116607 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d\": container with ID starting with 3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d not found: ID does not exist" containerID="3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d" Feb 16 17:34:03.116878 master-0 kubenswrapper[4652]: I0216 17:34:03.116815 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d" err="rpc error: code = NotFound desc = could not find container \"3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d\": container with ID starting with 3a7438c404beed179198e48ea8b265d5cc7d16f7a97df452ce2fd8035a2ab01d not found: ID does not exist" Feb 16 17:34:03.117393 master-0 kubenswrapper[4652]: E0216 17:34:03.117351 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3\": container with ID starting with 5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3 not found: ID does not exist" containerID="5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3" Feb 16 17:34:03.117481 master-0 kubenswrapper[4652]: I0216 17:34:03.117391 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3" err="rpc error: code = NotFound desc = could not find container \"5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3\": container with ID starting with 5da187904655b0c19cc34caacbc09e4445fb954c92e48787fb496fa93fb862d3 not found: ID does not exist" Feb 16 17:34:03.117844 master-0 kubenswrapper[4652]: E0216 17:34:03.117711 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56\": container with ID starting with 01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56 not found: ID does not exist" containerID="01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56" Feb 16 17:34:03.117844 master-0 kubenswrapper[4652]: I0216 17:34:03.117743 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56" err="rpc error: code = NotFound desc = could not find container \"01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56\": container with ID starting with 01765c8bc2f28fd305b50bff98604cd983df450cc2ab5cd1fc4e41b470b0da56 not found: ID does not exist" Feb 16 17:34:03.118420 master-0 kubenswrapper[4652]: E0216 17:34:03.118224 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc\": container with ID starting with 3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc not found: ID does not exist" containerID="3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc" Feb 16 17:34:03.118420 master-0 kubenswrapper[4652]: I0216 17:34:03.118271 4652 
kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc" err="rpc error: code = NotFound desc = could not find container \"3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc\": container with ID starting with 3bfeebda3fb13280918301b723ffaeab4eecd5112011293cfc4c1452d31aedfc not found: ID does not exist" Feb 16 17:34:03.119178 master-0 kubenswrapper[4652]: E0216 17:34:03.119125 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79c4f95d84447d2060b3a9bdc40c88a369d3407543549b9827cfbca809475bb\": container with ID starting with f79c4f95d84447d2060b3a9bdc40c88a369d3407543549b9827cfbca809475bb not found: ID does not exist" containerID="f79c4f95d84447d2060b3a9bdc40c88a369d3407543549b9827cfbca809475bb" Feb 16 17:34:03.119364 master-0 kubenswrapper[4652]: I0216 17:34:03.119186 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="f79c4f95d84447d2060b3a9bdc40c88a369d3407543549b9827cfbca809475bb" err="rpc error: code = NotFound desc = could not find container \"f79c4f95d84447d2060b3a9bdc40c88a369d3407543549b9827cfbca809475bb\": container with ID starting with f79c4f95d84447d2060b3a9bdc40c88a369d3407543549b9827cfbca809475bb not found: ID does not exist" Feb 16 17:34:03.119657 master-0 kubenswrapper[4652]: E0216 17:34:03.119597 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a\": container with ID starting with 144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a not found: ID does not exist" containerID="144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a" Feb 16 17:34:03.119657 master-0 kubenswrapper[4652]: I0216 17:34:03.119625 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a" err="rpc error: code = NotFound desc = could not find container \"144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a\": container with ID starting with 144a82e5a5354aec78564d4133ad4c697667355391b76ed74ba7976065eba35a not found: ID does not exist" Feb 16 17:34:03.119975 master-0 kubenswrapper[4652]: E0216 17:34:03.119940 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93\": container with ID starting with 35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93 not found: ID does not exist" containerID="35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93" Feb 16 17:34:03.120066 master-0 kubenswrapper[4652]: I0216 17:34:03.119985 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93" err="rpc error: code = NotFound desc = could not find container \"35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93\": container with ID starting with 35f5cd1992e81a7af4f5f45d4bb9187b72081e16a022193c8645c6082cefaf93 not found: ID does not exist" Feb 16 17:34:03.120390 master-0 kubenswrapper[4652]: E0216 17:34:03.120354 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee\": container with ID starting with 8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee not found: ID does not exist" containerID="8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee" Feb 16 17:34:03.120482 master-0 kubenswrapper[4652]: I0216 17:34:03.120399 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee" err="rpc error: code = NotFound desc = could not find container \"8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee\": container with ID starting with 8774224188f63a305c99868a1a126c4172ed3b7488104d79c4b6d14629a0d4ee not found: ID does not exist" Feb 16 17:34:03.120788 master-0 kubenswrapper[4652]: E0216 17:34:03.120754 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6a60e780e1ec46760e0dc56aa2e2446cbe16e5a5a365f586b3a5546ece26409\": container with ID starting with d6a60e780e1ec46760e0dc56aa2e2446cbe16e5a5a365f586b3a5546ece26409 not found: ID does not exist" containerID="d6a60e780e1ec46760e0dc56aa2e2446cbe16e5a5a365f586b3a5546ece26409" Feb 16 17:34:03.120879 master-0 kubenswrapper[4652]: I0216 17:34:03.120793 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="d6a60e780e1ec46760e0dc56aa2e2446cbe16e5a5a365f586b3a5546ece26409" err="rpc error: code = NotFound desc = could not find container \"d6a60e780e1ec46760e0dc56aa2e2446cbe16e5a5a365f586b3a5546ece26409\": container with ID starting with d6a60e780e1ec46760e0dc56aa2e2446cbe16e5a5a365f586b3a5546ece26409 not found: ID does not exist" Feb 16 17:34:03.710219 master-0 kubenswrapper[4652]: I0216 17:34:03.710076 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"1fe6bc182214240b034574c6a9118ccc124f496667f1addcefbe3959e12ca5b9"} Feb 16 17:34:03.710219 master-0 kubenswrapper[4652]: I0216 17:34:03.710148 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"d0b0245080f99a58095c921b8a63d578f426a9cd31ade94ce5128f0337eeed27"} Feb 16 17:34:03.710219 master-0 kubenswrapper[4652]: I0216 17:34:03.710164 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"952766c3a88fd12345a552f1277199f9","Type":"ContainerStarted","Data":"1ef226ae293d5fb6b9fa1a50eb33bceb036f183303430e740c90d234e186d263"} Feb 16 17:34:03.710219 master-0 kubenswrapper[4652]: I0216 17:34:03.710228 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:34:03.735006 master-0 kubenswrapper[4652]: I0216 17:34:03.734898 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.734879085 podStartE2EDuration="2.734879085s" podCreationTimestamp="2026-02-16 17:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:34:03.732699187 +0000 UTC m=+601.120867723" watchObservedRunningTime="2026-02-16 17:34:03.734879085 +0000 UTC 
Feb 16 17:34:11.317990 master-0 kubenswrapper[4652]: I0216 17:34:11.317907 4652 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 17:34:11.318526 master-0 kubenswrapper[4652]: I0216 17:34:11.318284 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver" containerID="cri-o://f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7" gracePeriod=15
Feb 16 17:34:11.318526 master-0 kubenswrapper[4652]: I0216 17:34:11.318362 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8" gracePeriod=15
Feb 16 17:34:11.318526 master-0 kubenswrapper[4652]: I0216 17:34:11.318370 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878" gracePeriod=15
Feb 16 17:34:11.319961 master-0 kubenswrapper[4652]: I0216 17:34:11.318296 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-check-endpoints" containerID="cri-o://284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325" gracePeriod=15
Feb 16 17:34:11.319961 master-0 kubenswrapper[4652]: I0216 17:34:11.318424 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-syncer" containerID="cri-o://164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903" gracePeriod=15
Feb 16 17:34:11.319961 master-0 kubenswrapper[4652]: I0216 17:34:11.319351 4652 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: E0216 17:34:11.320002 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: I0216 17:34:11.320020 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: E0216 17:34:11.320040 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: I0216 17:34:11.320049 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: E0216 17:34:11.320062 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-insecure-readyz"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: I0216 17:34:11.320070 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-insecure-readyz"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: E0216 17:34:11.320081 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="setup"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: I0216 17:34:11.320088 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="setup"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: E0216 17:34:11.320104 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-syncer"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: I0216 17:34:11.320112 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-syncer"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: E0216 17:34:11.320124 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-check-endpoints"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: I0216 17:34:11.320132 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-check-endpoints"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: E0216 17:34:11.320147 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de40371-9346-429a-a5c7-48822e50c3c9" containerName="installer"
Feb 16 17:34:11.320157 master-0 kubenswrapper[4652]: I0216 17:34:11.320155 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de40371-9346-429a-a5c7-48822e50c3c9" containerName="installer"
Feb 16 17:34:11.320768 master-0 kubenswrapper[4652]: I0216 17:34:11.320317 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-insecure-readyz"
Feb 16 17:34:11.320768 master-0 kubenswrapper[4652]: I0216 17:34:11.320331 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-check-endpoints"
Feb 16 17:34:11.320768 master-0 kubenswrapper[4652]: I0216 17:34:11.320346 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de40371-9346-429a-a5c7-48822e50c3c9" containerName="installer"
Feb 16 17:34:11.320768 master-0 kubenswrapper[4652]: I0216 17:34:11.320361 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-regeneration-controller"
Feb 16 17:34:11.320768 master-0 kubenswrapper[4652]: I0216 17:34:11.320376 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver-cert-syncer"
Feb 16 17:34:11.320768 master-0 kubenswrapper[4652]: I0216 17:34:11.320387 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa15f80078a2bfbe2234a74ab4da87c" containerName="kube-apiserver"
Feb 16 17:34:11.324459 master-0 kubenswrapper[4652]: I0216 17:34:11.324407 4652 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Feb 16 17:34:11.325760 master-0 kubenswrapper[4652]: I0216 17:34:11.325714 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
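
The SyncLoop REMOVE/ADD pair from source="file" above is a static-pod manifest being replaced on disk: the kubelet kills every container of the old kube-apiserver revision with gracePeriod=15, then the cpu_manager and memory_manager drop their stale per-container state before the new revision (and its startup monitor) is admitted. A minimal sketch of the stop-then-escalate pattern, under the assumption of a simplified runtime interface; the real kubelet drives this through CRI, and both method names here are hypothetical:

package stopsketch

import (
	"context"
	"time"
)

// Runtime stands in for the CRI runtime client; these methods are
// illustrative, not the kubelet's actual interface.
type Runtime interface {
	StopContainer(ctx context.Context, id string, timeout time.Duration) error
	KillContainer(ctx context.Context, id string) error
}

// stopWithGracePeriod mirrors the "Killing container with a grace period"
// entries above (gracePeriod=15): give the container that long to exit on
// its own, then escalate to a hard kill if the polite stop did not finish.
func stopWithGracePeriod(rt Runtime, id string, grace time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), grace)
	defer cancel()
	if err := rt.StopContainer(ctx, id, grace); err == nil {
		return nil
	}
	return rt.KillContainer(context.Background(), id)
}
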
Feb 16 17:34:11.340376 master-0 kubenswrapper[4652]: I0216 17:34:11.339198 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="faa15f80078a2bfbe2234a74ab4da87c" podUID="afa8ee25cec0b37c40dad37c52b89d42"
Feb 16 17:34:11.474551 master-0 kubenswrapper[4652]: I0216 17:34:11.474507 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:34:11.474551 master-0 kubenswrapper[4652]: I0216 17:34:11.474557 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.474792 master-0 kubenswrapper[4652]: I0216 17:34:11.474623 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.474792 master-0 kubenswrapper[4652]: I0216 17:34:11.474645 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:34:11.474792 master-0 kubenswrapper[4652]: I0216 17:34:11.474673 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.474792 master-0 kubenswrapper[4652]: I0216 17:34:11.474702 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.475041 master-0 kubenswrapper[4652]: I0216 17:34:11.474788 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.475041 master-0 kubenswrapper[4652]: I0216 17:34:11.474902 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:34:11.576817 master-0 kubenswrapper[4652]: I0216 17:34:11.576677 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.577008 master-0 kubenswrapper[4652]: I0216 17:34:11.576864 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.577008 master-0 kubenswrapper[4652]: I0216 17:34:11.576977 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.577080 master-0 kubenswrapper[4652]: I0216 17:34:11.577049 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:34:11.577146 master-0 kubenswrapper[4652]: I0216 17:34:11.577072 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.577146 master-0 kubenswrapper[4652]: I0216 17:34:11.577124 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:34:11.577233 master-0 kubenswrapper[4652]: I0216 17:34:11.577148 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:34:11.577233 master-0 kubenswrapper[4652]: I0216 17:34:11.577178 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:34:11.577233 master-0 kubenswrapper[4652]: I0216 17:34:11.577207 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.577387 master-0 kubenswrapper[4652]: I0216 17:34:11.577238 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.577387 master-0 kubenswrapper[4652]: I0216 17:34:11.577344 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.577387 master-0 kubenswrapper[4652]: I0216 17:34:11.577386 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:34:11.577485 master-0 kubenswrapper[4652]: I0216 17:34:11.577434 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.577573 master-0 kubenswrapper[4652]: I0216 17:34:11.577535 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.577685 master-0 kubenswrapper[4652]: I0216 17:34:11.577626 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Feb 16 17:34:11.578638 master-0 kubenswrapper[4652]: I0216 17:34:11.578590 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/afa8ee25cec0b37c40dad37c52b89d42-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"afa8ee25cec0b37c40dad37c52b89d42\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Feb 16 17:34:11.745772 master-0 kubenswrapper[4652]: I0216 17:34:11.745677 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 17:34:11.762547 master-0 kubenswrapper[4652]: I0216 17:34:11.762491 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d"
Feb 16 17:34:11.762547 master-0 kubenswrapper[4652]: I0216 17:34:11.762527 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d"
Feb 16 17:34:11.763586 master-0 kubenswrapper[4652]: E0216 17:34:11.763497 4652 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 17:34:11.764558 master-0 kubenswrapper[4652]: I0216 17:34:11.764496 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Feb 16 17:34:11.778124 master-0 kubenswrapper[4652]: I0216 17:34:11.778056 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_faa15f80078a2bfbe2234a74ab4da87c/kube-apiserver-cert-syncer/0.log"
Feb 16 17:34:11.779533 master-0 kubenswrapper[4652]: I0216 17:34:11.779461 4652 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325" exitCode=0
Feb 16 17:34:11.779533 master-0 kubenswrapper[4652]: I0216 17:34:11.779484 4652 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8" exitCode=0
Feb 16 17:34:11.779533 master-0 kubenswrapper[4652]: I0216 17:34:11.779492 4652 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878" exitCode=0
Feb 16 17:34:11.779533 master-0 kubenswrapper[4652]: I0216 17:34:11.779498 4652 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" containerID="164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903" exitCode=2
Feb 16 17:34:11.782028 master-0 kubenswrapper[4652]: I0216 17:34:11.781947 4652 generic.go:334] "Generic (PLEG): container finished" podID="71e06521-316a-4ca0-9e59-b0f196db417e" containerID="c2a3a5285a191c4f671ab0213ed3b1f5a76bad29f5e3ffec713950b4c9c60599" exitCode=0
Feb 16 17:34:11.782028 master-0 kubenswrapper[4652]: I0216 17:34:11.782014 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"71e06521-316a-4ca0-9e59-b0f196db417e","Type":"ContainerDied","Data":"c2a3a5285a191c4f671ab0213ed3b1f5a76bad29f5e3ffec713950b4c9c60599"}
Feb 16 17:34:11.783398 master-0 kubenswrapper[4652]: I0216 17:34:11.783313 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:34:11.799681 master-0 kubenswrapper[4652]: W0216 17:34:11.799609 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31a8353b1ad9c25fa07fedd5b1af1bb1.slice/crio-4721acb340029b0b0a818ba337ade7e139f1109ff3c82ccb6eb96bf75d10d202 WatchSource:0}: Error finding container 4721acb340029b0b0a818ba337ade7e139f1109ff3c82ccb6eb96bf75d10d202: Status 404 returned error can't find the container with id 4721acb340029b0b0a818ba337ade7e139f1109ff3c82ccb6eb96bf75d10d202
Feb 16 17:34:11.803360 master-0 kubenswrapper[4652]: E0216 17:34:11.803211 4652 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1894ca8aff5cce1e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:31a8353b1ad9c25fa07fedd5b1af1bb1,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:34:11.80201731 +0000 UTC m=+609.190185826,LastTimestamp:2026-02-16 17:34:11.80201731 +0000 UTC m=+609.190185826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Feb 16 17:34:12.753739 master-0 kubenswrapper[4652]: I0216 17:34:12.753676 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:34:12.759647 master-0 kubenswrapper[4652]: I0216 17:34:12.754398 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:34:12.788664 master-0 kubenswrapper[4652]: I0216 17:34:12.788603 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"31a8353b1ad9c25fa07fedd5b1af1bb1","Type":"ContainerStarted","Data":"6b9b07417d49e4f508537761df188a7c95b33d2e6c767102326be4da51a7ff7b"}
Feb 16 17:34:12.788664 master-0 kubenswrapper[4652]: I0216 17:34:12.788651 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"31a8353b1ad9c25fa07fedd5b1af1bb1","Type":"ContainerStarted","Data":"b7664730dd1ed69dbf6e8a6e5f3ccfa21d631bbd28803c13074f2d5406736fe0"}
Feb 16 17:34:12.788664 master-0 kubenswrapper[4652]: I0216 17:34:12.788662 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"31a8353b1ad9c25fa07fedd5b1af1bb1","Type":"ContainerStarted","Data":"2aca9a47a9e3d15922bc29f7712de5235c1f59e767130d3b779f08b367c97d92"}
Feb 16 17:34:12.788664 master-0 kubenswrapper[4652]: I0216 17:34:12.788670 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"31a8353b1ad9c25fa07fedd5b1af1bb1","Type":"ContainerStarted","Data":"4721acb340029b0b0a818ba337ade7e139f1109ff3c82ccb6eb96bf75d10d202"}
Feb 16 17:34:13.090989 master-0 kubenswrapper[4652]: I0216 17:34:13.090943 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0"
Feb 16 17:34:13.091763 master-0 kubenswrapper[4652]: I0216 17:34:13.091733 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:34:13.092142 master-0 kubenswrapper[4652]: I0216 17:34:13.092103 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Feb 16 17:34:13.099821 master-0 kubenswrapper[4652]: I0216 17:34:13.099635 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-var-lock\") pod \"71e06521-316a-4ca0-9e59-b0f196db417e\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") "
Feb 16 17:34:13.099987 master-0 kubenswrapper[4652]: I0216 17:34:13.099838 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71e06521-316a-4ca0-9e59-b0f196db417e-kube-api-access\") pod \"71e06521-316a-4ca0-9e59-b0f196db417e\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") "
Feb 16 17:34:13.099987 master-0 kubenswrapper[4652]: I0216 17:34:13.099767 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-var-lock" (OuterVolumeSpecName: "var-lock") pod "71e06521-316a-4ca0-9e59-b0f196db417e" (UID: "71e06521-316a-4ca0-9e59-b0f196db417e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 17:34:13.100538 master-0 kubenswrapper[4652]: I0216 17:34:13.100389 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-kubelet-dir\") pod \"71e06521-316a-4ca0-9e59-b0f196db417e\" (UID: \"71e06521-316a-4ca0-9e59-b0f196db417e\") "
Feb 16 17:34:13.100538 master-0 kubenswrapper[4652]: I0216 17:34:13.100525 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "71e06521-316a-4ca0-9e59-b0f196db417e" (UID: "71e06521-316a-4ca0-9e59-b0f196db417e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:34:13.100804 master-0 kubenswrapper[4652]: I0216 17:34:13.100789 4652 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:34:13.100863 master-0 kubenswrapper[4652]: I0216 17:34:13.100811 4652 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71e06521-316a-4ca0-9e59-b0f196db417e-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:34:13.104685 master-0 kubenswrapper[4652]: I0216 17:34:13.104617 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e06521-316a-4ca0-9e59-b0f196db417e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "71e06521-316a-4ca0-9e59-b0f196db417e" (UID: "71e06521-316a-4ca0-9e59-b0f196db417e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:34:13.201378 master-0 kubenswrapper[4652]: I0216 17:34:13.201322 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71e06521-316a-4ca0-9e59-b0f196db417e-kube-api-access\") on node \"master-0\" DevicePath \"\"" Feb 16 17:34:13.680878 master-0 kubenswrapper[4652]: I0216 17:34:13.680849 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_faa15f80078a2bfbe2234a74ab4da87c/kube-apiserver-cert-syncer/0.log" Feb 16 17:34:13.681692 master-0 kubenswrapper[4652]: I0216 17:34:13.681669 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:13.682871 master-0 kubenswrapper[4652]: I0216 17:34:13.682806 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.683570 master-0 kubenswrapper[4652]: I0216 17:34:13.683533 4652 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.684164 master-0 kubenswrapper[4652]: I0216 17:34:13.684132 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.737585 master-0 kubenswrapper[4652]: I0216 17:34:13.737543 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") pod \"faa15f80078a2bfbe2234a74ab4da87c\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " Feb 16 17:34:13.737784 master-0 kubenswrapper[4652]: I0216 17:34:13.737627 4652 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") pod \"faa15f80078a2bfbe2234a74ab4da87c\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " Feb 16 17:34:13.737784 master-0 kubenswrapper[4652]: I0216 17:34:13.737663 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") pod \"faa15f80078a2bfbe2234a74ab4da87c\" (UID: \"faa15f80078a2bfbe2234a74ab4da87c\") " Feb 16 17:34:13.737855 master-0 kubenswrapper[4652]: I0216 17:34:13.737808 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "faa15f80078a2bfbe2234a74ab4da87c" (UID: "faa15f80078a2bfbe2234a74ab4da87c"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:34:13.737919 master-0 kubenswrapper[4652]: I0216 17:34:13.737885 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "faa15f80078a2bfbe2234a74ab4da87c" (UID: "faa15f80078a2bfbe2234a74ab4da87c"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:34:13.738021 master-0 kubenswrapper[4652]: I0216 17:34:13.738001 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "faa15f80078a2bfbe2234a74ab4da87c" (UID: "faa15f80078a2bfbe2234a74ab4da87c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:34:13.738115 master-0 kubenswrapper[4652]: I0216 17:34:13.738007 4652 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:34:13.738221 master-0 kubenswrapper[4652]: I0216 17:34:13.738206 4652 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-cert-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:34:13.798621 master-0 kubenswrapper[4652]: I0216 17:34:13.798560 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"31a8353b1ad9c25fa07fedd5b1af1bb1","Type":"ContainerStarted","Data":"4f4581f6b1219ae35b6554a4b7c312ba0535532e12c75e94d712911d9e8e04aa"} Feb 16 17:34:13.799144 master-0 kubenswrapper[4652]: I0216 17:34:13.798964 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:13.799144 master-0 kubenswrapper[4652]: I0216 17:34:13.798998 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:13.799552 master-0 kubenswrapper[4652]: E0216 17:34:13.799517 4652 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:13.799734 master-0 kubenswrapper[4652]: I0216 17:34:13.799691 4652 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.800375 master-0 kubenswrapper[4652]: I0216 17:34:13.800350 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.800866 master-0 kubenswrapper[4652]: I0216 17:34:13.800841 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.801506 master-0 kubenswrapper[4652]: I0216 17:34:13.801486 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_faa15f80078a2bfbe2234a74ab4da87c/kube-apiserver-cert-syncer/0.log" Feb 16 17:34:13.802107 master-0 kubenswrapper[4652]: I0216 17:34:13.802078 4652 generic.go:334] "Generic (PLEG): container finished" podID="faa15f80078a2bfbe2234a74ab4da87c" 
containerID="f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7" exitCode=0 Feb 16 17:34:13.802219 master-0 kubenswrapper[4652]: I0216 17:34:13.802191 4652 scope.go:117] "RemoveContainer" containerID="284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325" Feb 16 17:34:13.802374 master-0 kubenswrapper[4652]: I0216 17:34:13.802348 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:13.805369 master-0 kubenswrapper[4652]: I0216 17:34:13.805349 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-6-master-0" event={"ID":"71e06521-316a-4ca0-9e59-b0f196db417e","Type":"ContainerDied","Data":"aa296c3c9cf3461ccee98dd54df3eeee332318951c91e4ca537e877e1d4ceb6a"} Feb 16 17:34:13.805488 master-0 kubenswrapper[4652]: I0216 17:34:13.805471 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa296c3c9cf3461ccee98dd54df3eeee332318951c91e4ca537e877e1d4ceb6a" Feb 16 17:34:13.805577 master-0 kubenswrapper[4652]: I0216 17:34:13.805405 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-6-master-0" Feb 16 17:34:13.819535 master-0 kubenswrapper[4652]: I0216 17:34:13.819501 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.820141 master-0 kubenswrapper[4652]: I0216 17:34:13.820100 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.820765 master-0 kubenswrapper[4652]: I0216 17:34:13.820640 4652 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.820898 master-0 kubenswrapper[4652]: I0216 17:34:13.820871 4652 scope.go:117] "RemoveContainer" containerID="6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8" Feb 16 17:34:13.821631 master-0 kubenswrapper[4652]: I0216 17:34:13.821589 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.822084 master-0 kubenswrapper[4652]: I0216 17:34:13.822053 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.822511 master-0 kubenswrapper[4652]: I0216 17:34:13.822465 4652 status_manager.go:851] "Failed to get status for pod" podUID="faa15f80078a2bfbe2234a74ab4da87c" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:13.836837 master-0 kubenswrapper[4652]: I0216 17:34:13.836715 4652 scope.go:117] "RemoveContainer" containerID="34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878" Feb 16 17:34:13.839882 master-0 kubenswrapper[4652]: I0216 17:34:13.839839 4652 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/faa15f80078a2bfbe2234a74ab4da87c-audit-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:34:13.853658 master-0 kubenswrapper[4652]: I0216 17:34:13.853621 4652 scope.go:117] "RemoveContainer" containerID="164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903" Feb 16 17:34:13.869887 master-0 kubenswrapper[4652]: I0216 17:34:13.869834 4652 scope.go:117] "RemoveContainer" containerID="f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7" Feb 16 17:34:13.897743 master-0 kubenswrapper[4652]: I0216 17:34:13.897687 4652 scope.go:117] "RemoveContainer" containerID="bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce" Feb 16 17:34:13.928797 master-0 kubenswrapper[4652]: I0216 17:34:13.928532 4652 scope.go:117] "RemoveContainer" containerID="284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325" Feb 16 17:34:13.928973 master-0 kubenswrapper[4652]: E0216 17:34:13.928925 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325\": container with ID starting with 284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325 not found: ID does not exist" containerID="284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325" Feb 16 17:34:13.929088 master-0 kubenswrapper[4652]: I0216 17:34:13.928972 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325"} err="failed to get container status \"284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325\": rpc error: code = NotFound desc = could not find container \"284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325\": container with ID starting with 284583b4fbcd96e9535401a9d9fa6c2c190471d2cc1c24ecec6c46f11411d325 not found: ID does not exist" Feb 16 17:34:13.929088 master-0 kubenswrapper[4652]: I0216 17:34:13.929001 4652 scope.go:117] "RemoveContainer" containerID="6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8" Feb 16 17:34:13.929355 master-0 kubenswrapper[4652]: E0216 17:34:13.929314 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8\": container with ID starting with 6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8 not found: ID does not exist" containerID="6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8" Feb 16 17:34:13.929436 master-0 kubenswrapper[4652]: I0216 17:34:13.929354 4652 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8"} err="failed to get container status \"6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8\": rpc error: code = NotFound desc = could not find container \"6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8\": container with ID starting with 6bbe807b9ce4dc7c6e7fcc8d4a47cb1d2dedb5ad58f4702ef0b65f1e8d8e7bb8 not found: ID does not exist" Feb 16 17:34:13.929436 master-0 kubenswrapper[4652]: I0216 17:34:13.929380 4652 scope.go:117] "RemoveContainer" containerID="34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878" Feb 16 17:34:13.929792 master-0 kubenswrapper[4652]: E0216 17:34:13.929758 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878\": container with ID starting with 34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878 not found: ID does not exist" containerID="34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878" Feb 16 17:34:13.929855 master-0 kubenswrapper[4652]: I0216 17:34:13.929786 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878"} err="failed to get container status \"34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878\": rpc error: code = NotFound desc = could not find container \"34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878\": container with ID starting with 34bf25d986801d0b46ede8fde9ead2765c451c45a097605d361e2e2d2b05e878 not found: ID does not exist" Feb 16 17:34:13.929855 master-0 kubenswrapper[4652]: I0216 17:34:13.929803 4652 scope.go:117] "RemoveContainer" containerID="164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903" Feb 16 17:34:13.930062 master-0 kubenswrapper[4652]: E0216 17:34:13.930022 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903\": container with ID starting with 164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903 not found: ID does not exist" containerID="164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903" Feb 16 17:34:13.930129 master-0 kubenswrapper[4652]: I0216 17:34:13.930060 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903"} err="failed to get container status \"164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903\": rpc error: code = NotFound desc = could not find container \"164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903\": container with ID starting with 164b3a54c598ea12b9aa8bd249c91c60806eda6d501871a369e2d5c5b7879903 not found: ID does not exist" Feb 16 17:34:13.930129 master-0 kubenswrapper[4652]: I0216 17:34:13.930080 4652 scope.go:117] "RemoveContainer" containerID="f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7" Feb 16 17:34:13.930383 master-0 kubenswrapper[4652]: E0216 17:34:13.930321 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7\": container with ID starting with 
f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7 not found: ID does not exist" containerID="f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7" Feb 16 17:34:13.930383 master-0 kubenswrapper[4652]: I0216 17:34:13.930351 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7"} err="failed to get container status \"f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7\": rpc error: code = NotFound desc = could not find container \"f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7\": container with ID starting with f6b7271a04cc4daa6d1770f988b8eabff167278bdea4ccbf77e19ff3390112e7 not found: ID does not exist" Feb 16 17:34:13.930383 master-0 kubenswrapper[4652]: I0216 17:34:13.930373 4652 scope.go:117] "RemoveContainer" containerID="bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce" Feb 16 17:34:13.930617 master-0 kubenswrapper[4652]: E0216 17:34:13.930592 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce\": container with ID starting with bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce not found: ID does not exist" containerID="bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce" Feb 16 17:34:13.930683 master-0 kubenswrapper[4652]: I0216 17:34:13.930617 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce"} err="failed to get container status \"bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce\": rpc error: code = NotFound desc = could not find container \"bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce\": container with ID starting with bb0c27734e1f9fd9ebf97aa709cc6f607812f77f245028c8df2c4bb9dec553ce not found: ID does not exist" Feb 16 17:34:14.753825 master-0 kubenswrapper[4652]: I0216 17:34:14.753757 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa15f80078a2bfbe2234a74ab4da87c" path="/var/lib/kubelet/pods/faa15f80078a2bfbe2234a74ab4da87c/volumes" Feb 16 17:34:14.816844 master-0 kubenswrapper[4652]: I0216 17:34:14.816801 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:14.816844 master-0 kubenswrapper[4652]: I0216 17:34:14.816830 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:14.817465 master-0 kubenswrapper[4652]: E0216 17:34:14.817344 4652 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:16.375558 master-0 kubenswrapper[4652]: E0216 17:34:16.375483 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" 
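
The scope.go "RemoveContainer" calls above race against cleanup that already happened: both ContainerStatus and DeleteContainer return NotFound for every old kube-apiserver container, and the kubelet logs the error and moves on, so removal behaves idempotently. A minimal sketch of that convention using gRPC status codes; the remove callback is a hypothetical stand-in for the runtime call, not the kubelet's actual wrapper:

package removesketch

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats NotFound as success: if the runtime no longer
// knows the ID (exactly the DeleteContainer errors above), the desired
// outcome, "container gone", already holds.
func removeContainer(remove func(id string) error, id string) error {
	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
		return err
	}
	return nil
}
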
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:34:16.376130 master-0 kubenswrapper[4652]: I0216 17:34:16.376095 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:34:16.397817 master-0 kubenswrapper[4652]: W0216 17:34:16.397734 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9a2b3a37af32e5d570b82bfd956f250.slice/crio-ad8e5cba0bcaa3969e50e8362adfb7775757bd07eb1021d20ebdfd3c30472c12 WatchSource:0}: Error finding container ad8e5cba0bcaa3969e50e8362adfb7775757bd07eb1021d20ebdfd3c30472c12: Status 404 returned error can't find the container with id ad8e5cba0bcaa3969e50e8362adfb7775757bd07eb1021d20ebdfd3c30472c12 Feb 16 17:34:16.834719 master-0 kubenswrapper[4652]: I0216 17:34:16.834663 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a9a2b3a37af32e5d570b82bfd956f250","Type":"ContainerStarted","Data":"2b736de15d55c5c33cbcc57e1b8dc4d93da4a3c91fa4899a8e98cbfd4b63e35a"} Feb 16 17:34:16.834719 master-0 kubenswrapper[4652]: I0216 17:34:16.834718 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a9a2b3a37af32e5d570b82bfd956f250","Type":"ContainerStarted","Data":"ad8e5cba0bcaa3969e50e8362adfb7775757bd07eb1021d20ebdfd3c30472c12"} Feb 16 17:34:16.836365 master-0 kubenswrapper[4652]: I0216 17:34:16.836317 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:16.836456 master-0 kubenswrapper[4652]: E0216 17:34:16.836386 4652 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:34:16.836856 master-0 kubenswrapper[4652]: I0216 17:34:16.836818 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:17.244925 master-0 kubenswrapper[4652]: E0216 17:34:17.244772 4652 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-master-0.1894ca8aff5cce1e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:31a8353b1ad9c25fa07fedd5b1af1bb1,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6299220482f0e3c3f393e5eda761e3fab67e96ddffbf71a7a77408359401533d\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-02-16 17:34:11.80201731 +0000 UTC m=+609.190185826,LastTimestamp:2026-02-16 17:34:11.80201731 +0000 UTC m=+609.190185826,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Feb 16 17:34:21.422298 master-0 kubenswrapper[4652]: E0216 17:34:21.422191 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:21.423013 master-0 kubenswrapper[4652]: E0216 17:34:21.422840 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:21.423676 master-0 kubenswrapper[4652]: E0216 17:34:21.423373 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:21.423917 master-0 kubenswrapper[4652]: E0216 17:34:21.423876 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:21.424444 master-0 kubenswrapper[4652]: E0216 17:34:21.424408 4652 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:21.424444 master-0 kubenswrapper[4652]: I0216 17:34:21.424438 4652 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 17:34:21.424857 master-0 kubenswrapper[4652]: E0216 17:34:21.424817 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Feb 16 17:34:21.626550 master-0 kubenswrapper[4652]: E0216 17:34:21.626474 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Feb 16 17:34:21.765222 master-0 kubenswrapper[4652]: I0216 17:34:21.765066 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:21.765222 master-0 kubenswrapper[4652]: I0216 17:34:21.765129 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:21.765222 master-0 kubenswrapper[4652]: I0216 17:34:21.765175 4652 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:21.765222 master-0 kubenswrapper[4652]: I0216 17:34:21.765190 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:21.765763 master-0 kubenswrapper[4652]: I0216 17:34:21.765652 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:21.765763 master-0 kubenswrapper[4652]: I0216 17:34:21.765671 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:21.766661 master-0 kubenswrapper[4652]: E0216 17:34:21.766584 4652 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:21.771655 master-0 kubenswrapper[4652]: I0216 17:34:21.771594 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:21.772761 master-0 kubenswrapper[4652]: I0216 17:34:21.772693 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:21.773154 master-0 kubenswrapper[4652]: I0216 17:34:21.773095 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:21.876991 master-0 kubenswrapper[4652]: I0216 17:34:21.876899 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:21.876991 master-0 kubenswrapper[4652]: I0216 17:34:21.876935 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:21.877761 master-0 kubenswrapper[4652]: E0216 17:34:21.877707 4652 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:22.027943 master-0 kubenswrapper[4652]: E0216 17:34:22.027821 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" 
interval="800ms" Feb 16 17:34:22.750513 master-0 kubenswrapper[4652]: I0216 17:34:22.750439 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:22.751317 master-0 kubenswrapper[4652]: I0216 17:34:22.751232 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:22.829880 master-0 kubenswrapper[4652]: E0216 17:34:22.829792 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Feb 16 17:34:24.431812 master-0 kubenswrapper[4652]: E0216 17:34:24.431623 4652 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Feb 16 17:34:24.745616 master-0 kubenswrapper[4652]: I0216 17:34:24.745338 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:24.747364 master-0 kubenswrapper[4652]: I0216 17:34:24.747226 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:24.748755 master-0 kubenswrapper[4652]: I0216 17:34:24.748697 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:24.759348 master-0 kubenswrapper[4652]: I0216 17:34:24.759297 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:24.759348 master-0 kubenswrapper[4652]: I0216 17:34:24.759341 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:24.760552 master-0 kubenswrapper[4652]: E0216 17:34:24.760474 4652 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:24.761297 master-0 kubenswrapper[4652]: I0216 17:34:24.761231 4652 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:24.765409 master-0 kubenswrapper[4652]: I0216 17:34:24.765358 4652 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 17:34:24.765489 master-0 kubenswrapper[4652]: I0216 17:34:24.765435 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 17:34:24.779201 master-0 kubenswrapper[4652]: W0216 17:34:24.778998 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafa8ee25cec0b37c40dad37c52b89d42.slice/crio-abe87d605e53ddd74e895fcced9f9dbfca1131b0d8f45db07433a70d6dcf793e WatchSource:0}: Error finding container abe87d605e53ddd74e895fcced9f9dbfca1131b0d8f45db07433a70d6dcf793e: Status 404 returned error can't find the container with id abe87d605e53ddd74e895fcced9f9dbfca1131b0d8f45db07433a70d6dcf793e Feb 16 17:34:24.896744 master-0 kubenswrapper[4652]: I0216 17:34:24.896654 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"abe87d605e53ddd74e895fcced9f9dbfca1131b0d8f45db07433a70d6dcf793e"} Feb 16 17:34:25.911755 master-0 kubenswrapper[4652]: I0216 17:34:25.911575 4652 generic.go:334] "Generic (PLEG): container finished" podID="afa8ee25cec0b37c40dad37c52b89d42" containerID="94ed6dc09494987714e75a03ea1c07fa3eca9bc5433efc3e01797e7d51df4bdb" exitCode=0 Feb 16 17:34:25.911755 master-0 kubenswrapper[4652]: I0216 17:34:25.911629 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerDied","Data":"94ed6dc09494987714e75a03ea1c07fa3eca9bc5433efc3e01797e7d51df4bdb"} Feb 16 17:34:25.912521 master-0 kubenswrapper[4652]: I0216 17:34:25.911891 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:25.912521 master-0 kubenswrapper[4652]: I0216 17:34:25.911913 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:25.912860 master-0 kubenswrapper[4652]: E0216 17:34:25.912818 4652 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:25.912907 master-0 kubenswrapper[4652]: I0216 17:34:25.912866 4652 status_manager.go:851] "Failed to get status for pod" podUID="31a8353b1ad9c25fa07fedd5b1af1bb1" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:25.914661 master-0 kubenswrapper[4652]: I0216 17:34:25.914513 4652 status_manager.go:851] "Failed to get status for pod" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" pod="openshift-kube-apiserver/installer-6-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Feb 16 17:34:26.929592 master-0 kubenswrapper[4652]: I0216 17:34:26.929539 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"f7a32a39926047270c835ed795f21bb60dab9c1b41f8b8d029cdc19c05a4e97b"} Feb 16 17:34:26.929592 master-0 kubenswrapper[4652]: I0216 17:34:26.929588 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"7c85f25367ceabf1fdbb580431386770114f357b719bdcee875e89ec33fa29f9"} Feb 16 17:34:26.929592 master-0 kubenswrapper[4652]: I0216 17:34:26.929601 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"fe495fdf92a5fb4d41d9fcc633cc4d4097dd4a2695ef404d88090233575e348c"} Feb 16 17:34:26.930244 master-0 kubenswrapper[4652]: I0216 17:34:26.929614 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"bef1150a5cb7f48a2c56f1a01605ae08ff2d78d4976c7fa2896b5d587b445eb8"} Feb 16 17:34:27.941515 master-0 kubenswrapper[4652]: I0216 17:34:27.941468 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"afa8ee25cec0b37c40dad37c52b89d42","Type":"ContainerStarted","Data":"142b5ac3161e4a9235489a29a4a07a9b56ab9749f25d5d53240424bd8485dcd6"} Feb 16 17:34:27.942290 master-0 kubenswrapper[4652]: I0216 17:34:27.942269 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:27.942458 master-0 kubenswrapper[4652]: I0216 17:34:27.942435 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:27.942568 master-0 kubenswrapper[4652]: I0216 17:34:27.942545 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:29.762168 master-0 kubenswrapper[4652]: I0216 17:34:29.762091 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:29.762168 master-0 kubenswrapper[4652]: I0216 17:34:29.762169 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:29.768562 master-0 kubenswrapper[4652]: I0216 17:34:29.768511 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:31.772048 master-0 kubenswrapper[4652]: I0216 17:34:31.771968 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:31.772972 master-0 kubenswrapper[4652]: I0216 17:34:31.772623 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:31.772972 master-0 kubenswrapper[4652]: I0216 17:34:31.772652 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:31.790501 master-0 kubenswrapper[4652]: I0216 17:34:31.789578 4652 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:32.663023 master-0 kubenswrapper[4652]: I0216 17:34:32.662953 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:32.663450 master-0 kubenswrapper[4652]: I0216 17:34:32.663345 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:32.663450 master-0 kubenswrapper[4652]: I0216 17:34:32.663373 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:32.668592 master-0 kubenswrapper[4652]: I0216 17:34:32.668513 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Feb 16 17:34:32.961864 master-0 kubenswrapper[4652]: I0216 17:34:32.961806 4652 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:32.978842 master-0 kubenswrapper[4652]: I0216 17:34:32.978766 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="31a8353b1ad9c25fa07fedd5b1af1bb1" podUID="5552e2ee-30c4-4bde-a282-465b14855dad" Feb 16 17:34:32.980549 master-0 kubenswrapper[4652]: I0216 17:34:32.980514 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:32.980693 master-0 kubenswrapper[4652]: I0216 17:34:32.980678 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="12ba1db0-d19a-4a91-a65d-05840894286d" Feb 16 17:34:32.980833 master-0 kubenswrapper[4652]: I0216 17:34:32.980801 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:32.980833 master-0 kubenswrapper[4652]: I0216 17:34:32.980831 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:32.982125 master-0 kubenswrapper[4652]: I0216 17:34:32.982060 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="afa8ee25cec0b37c40dad37c52b89d42" podUID="23251387-f8e7-45be-93ec-bbf4203dadc1" Feb 16 17:34:32.989271 master-0 kubenswrapper[4652]: I0216 17:34:32.989166 4652 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-master-0" containerID="cri-o://bef1150a5cb7f48a2c56f1a01605ae08ff2d78d4976c7fa2896b5d587b445eb8" Feb 16 17:34:32.989271 master-0 kubenswrapper[4652]: I0216 17:34:32.989237 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:32.991643 master-0 kubenswrapper[4652]: I0216 17:34:32.991601 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="31a8353b1ad9c25fa07fedd5b1af1bb1" podUID="5552e2ee-30c4-4bde-a282-465b14855dad" Feb 16 17:34:33.986262 master-0 kubenswrapper[4652]: I0216 17:34:33.986216 4652 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:33.986762 master-0 kubenswrapper[4652]: I0216 17:34:33.986749 4652 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="e41f299c-b815-4ef7-a901-b7720b1ac5c6" Feb 16 17:34:42.755014 master-0 kubenswrapper[4652]: I0216 17:34:42.754943 4652 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="afa8ee25cec0b37c40dad37c52b89d42" podUID="23251387-f8e7-45be-93ec-bbf4203dadc1" Feb 16 17:34:42.804045 master-0 kubenswrapper[4652]: I0216 17:34:42.803949 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 17:34:42.863102 master-0 kubenswrapper[4652]: I0216 17:34:42.863026 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Feb 16 17:34:43.113787 master-0 kubenswrapper[4652]: I0216 17:34:43.113646 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-x2982" Feb 16 17:34:43.587728 master-0 kubenswrapper[4652]: I0216 17:34:43.587669 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 17:34:43.700721 master-0 kubenswrapper[4652]: I0216 17:34:43.700665 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 17:34:43.982882 master-0 kubenswrapper[4652]: I0216 17:34:43.982819 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 17:34:44.063564 master-0 kubenswrapper[4652]: I0216 17:34:44.063510 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-mzz6s" Feb 16 17:34:44.280846 master-0 kubenswrapper[4652]: I0216 17:34:44.280688 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 17:34:44.314851 master-0 kubenswrapper[4652]: I0216 17:34:44.314766 4652 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 17:34:44.581274 master-0 kubenswrapper[4652]: I0216 17:34:44.581128 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 17:34:44.606392 master-0 kubenswrapper[4652]: I0216 17:34:44.606343 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 17:34:44.625522 master-0 kubenswrapper[4652]: I0216 17:34:44.625461 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Feb 16 17:34:44.642698 master-0 kubenswrapper[4652]: I0216 17:34:44.642628 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-b4rnj" Feb 16 17:34:44.733838 master-0 kubenswrapper[4652]: I0216 17:34:44.733748 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 17:34:44.814814 master-0 kubenswrapper[4652]: I0216 17:34:44.814752 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 17:34:44.820394 master-0 kubenswrapper[4652]: I0216 17:34:44.820340 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Feb 16 17:34:44.833823 master-0 kubenswrapper[4652]: I0216 17:34:44.833716 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 17:34:45.050742 master-0 kubenswrapper[4652]: I0216 17:34:45.050668 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-5lx84" Feb 16 17:34:45.121957 master-0 kubenswrapper[4652]: I0216 17:34:45.121808 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 17:34:45.129065 master-0 kubenswrapper[4652]: I0216 17:34:45.129012 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 17:34:45.247054 master-0 kubenswrapper[4652]: I0216 17:34:45.247009 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-6858s" Feb 16 17:34:45.247332 master-0 kubenswrapper[4652]: I0216 17:34:45.247125 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Feb 16 17:34:45.358912 master-0 kubenswrapper[4652]: I0216 17:34:45.358811 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Feb 16 17:34:45.475137 master-0 kubenswrapper[4652]: I0216 17:34:45.475061 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-qmzhq" Feb 16 17:34:45.578529 master-0 kubenswrapper[4652]: I0216 17:34:45.578429 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-4vdvea1506oin" Feb 16 17:34:45.717146 master-0 kubenswrapper[4652]: I0216 17:34:45.717100 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Feb 16 17:34:45.748894 master-0 kubenswrapper[4652]: I0216 17:34:45.748779 4652 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-6nhmo5tgfmegb" Feb 16 17:34:45.788813 master-0 kubenswrapper[4652]: I0216 17:34:45.788756 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Feb 16 17:34:45.793199 master-0 kubenswrapper[4652]: I0216 17:34:45.793174 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 17:34:45.835597 master-0 kubenswrapper[4652]: I0216 17:34:45.835525 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Feb 16 17:34:45.854403 master-0 kubenswrapper[4652]: I0216 17:34:45.854277 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 17:34:46.008462 master-0 kubenswrapper[4652]: I0216 17:34:46.008296 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-nslxl" Feb 16 17:34:46.155659 master-0 kubenswrapper[4652]: I0216 17:34:46.155548 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Feb 16 17:34:46.258935 master-0 kubenswrapper[4652]: I0216 17:34:46.258758 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 17:34:46.292154 master-0 kubenswrapper[4652]: I0216 17:34:46.292097 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Feb 16 17:34:46.314699 master-0 kubenswrapper[4652]: I0216 17:34:46.314621 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Feb 16 17:34:46.365607 master-0 kubenswrapper[4652]: I0216 17:34:46.365525 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Feb 16 17:34:46.443156 master-0 kubenswrapper[4652]: I0216 17:34:46.443065 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 17:34:46.473167 master-0 kubenswrapper[4652]: I0216 17:34:46.473077 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 17:34:46.506980 master-0 kubenswrapper[4652]: I0216 17:34:46.506926 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 17:34:46.524158 master-0 kubenswrapper[4652]: I0216 17:34:46.524050 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 17:34:46.535690 master-0 kubenswrapper[4652]: I0216 17:34:46.535632 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 17:34:46.586726 master-0 kubenswrapper[4652]: I0216 17:34:46.586677 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 17:34:46.587103 master-0 kubenswrapper[4652]: I0216 17:34:46.586866 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Feb 16 17:34:46.679498 master-0 kubenswrapper[4652]: I0216 17:34:46.679433 4652 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 17:34:46.699359 master-0 kubenswrapper[4652]: I0216 17:34:46.699226 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 17:34:46.701725 master-0 kubenswrapper[4652]: I0216 17:34:46.701696 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Feb 16 17:34:46.768290 master-0 kubenswrapper[4652]: I0216 17:34:46.767879 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-wnnb7" Feb 16 17:34:46.783750 master-0 kubenswrapper[4652]: I0216 17:34:46.783604 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 17:34:46.836667 master-0 kubenswrapper[4652]: I0216 17:34:46.836633 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 17:34:46.840528 master-0 kubenswrapper[4652]: I0216 17:34:46.840508 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Feb 16 17:34:46.878286 master-0 kubenswrapper[4652]: I0216 17:34:46.878213 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 17:34:46.889335 master-0 kubenswrapper[4652]: I0216 17:34:46.889305 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 17:34:46.960770 master-0 kubenswrapper[4652]: I0216 17:34:46.960715 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 17:34:47.017853 master-0 kubenswrapper[4652]: I0216 17:34:47.017791 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Feb 16 17:34:47.026931 master-0 kubenswrapper[4652]: I0216 17:34:47.026877 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 17:34:47.100971 master-0 kubenswrapper[4652]: I0216 17:34:47.100794 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 17:34:47.151826 master-0 kubenswrapper[4652]: I0216 17:34:47.151618 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-nkhdh" Feb 16 17:34:47.208293 master-0 kubenswrapper[4652]: I0216 17:34:47.204839 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:34:47.249129 master-0 kubenswrapper[4652]: I0216 17:34:47.249063 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Feb 16 17:34:47.315546 master-0 kubenswrapper[4652]: I0216 17:34:47.315485 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Feb 16 17:34:47.445871 master-0 kubenswrapper[4652]: I0216 17:34:47.445762 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-gtxjb" Feb 
16 17:34:47.454159 master-0 kubenswrapper[4652]: I0216 17:34:47.454053 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-sk6rc" Feb 16 17:34:47.551159 master-0 kubenswrapper[4652]: I0216 17:34:47.551089 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 17:34:47.598918 master-0 kubenswrapper[4652]: I0216 17:34:47.598855 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 17:34:47.648284 master-0 kubenswrapper[4652]: I0216 17:34:47.648199 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-94r9k" Feb 16 17:34:47.687989 master-0 kubenswrapper[4652]: I0216 17:34:47.687920 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Feb 16 17:34:47.743093 master-0 kubenswrapper[4652]: I0216 17:34:47.742975 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-lcpkn" Feb 16 17:34:47.777076 master-0 kubenswrapper[4652]: I0216 17:34:47.777012 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 17:34:47.873306 master-0 kubenswrapper[4652]: I0216 17:34:47.873212 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 17:34:47.887963 master-0 kubenswrapper[4652]: I0216 17:34:47.887919 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-bstss" Feb 16 17:34:47.960600 master-0 kubenswrapper[4652]: I0216 17:34:47.960542 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 17:34:48.029978 master-0 kubenswrapper[4652]: I0216 17:34:48.029858 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 17:34:48.070268 master-0 kubenswrapper[4652]: I0216 17:34:48.070195 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 17:34:48.075506 master-0 kubenswrapper[4652]: I0216 17:34:48.075461 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Feb 16 17:34:48.107915 master-0 kubenswrapper[4652]: I0216 17:34:48.107860 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Feb 16 17:34:48.146624 master-0 kubenswrapper[4652]: I0216 17:34:48.146569 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Feb 16 17:34:48.266055 master-0 kubenswrapper[4652]: I0216 17:34:48.266017 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 17:34:48.279079 master-0 kubenswrapper[4652]: I0216 17:34:48.279026 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 17:34:48.301189 master-0 kubenswrapper[4652]: I0216 17:34:48.301059 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 
17:34:48.311191 master-0 kubenswrapper[4652]: I0216 17:34:48.311143 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 17:34:48.316589 master-0 kubenswrapper[4652]: I0216 17:34:48.316560 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 17:34:48.349698 master-0 kubenswrapper[4652]: I0216 17:34:48.349650 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 17:34:48.434290 master-0 kubenswrapper[4652]: I0216 17:34:48.434167 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-rzjlw" Feb 16 17:34:48.440271 master-0 kubenswrapper[4652]: I0216 17:34:48.440179 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 17:34:48.513434 master-0 kubenswrapper[4652]: I0216 17:34:48.513377 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 17:34:48.540141 master-0 kubenswrapper[4652]: I0216 17:34:48.540069 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 17:34:48.545030 master-0 kubenswrapper[4652]: I0216 17:34:48.544970 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 17:34:48.594756 master-0 kubenswrapper[4652]: I0216 17:34:48.594638 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Feb 16 17:34:48.629794 master-0 kubenswrapper[4652]: I0216 17:34:48.629748 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 17:34:48.644688 master-0 kubenswrapper[4652]: I0216 17:34:48.644575 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 17:34:48.656313 master-0 kubenswrapper[4652]: I0216 17:34:48.656204 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 17:34:48.667928 master-0 kubenswrapper[4652]: I0216 17:34:48.667895 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 17:34:48.771110 master-0 kubenswrapper[4652]: I0216 17:34:48.771061 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Feb 16 17:34:48.791237 master-0 kubenswrapper[4652]: I0216 17:34:48.791165 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Feb 16 17:34:48.815913 master-0 kubenswrapper[4652]: I0216 17:34:48.815847 4652 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 17:34:48.866391 master-0 kubenswrapper[4652]: I0216 17:34:48.866200 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Feb 16 17:34:48.881777 master-0 kubenswrapper[4652]: I0216 17:34:48.881707 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Feb 16 17:34:49.013919 master-0 
kubenswrapper[4652]: I0216 17:34:49.013881 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 17:34:49.106337 master-0 kubenswrapper[4652]: I0216 17:34:49.106240 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 17:34:49.137473 master-0 kubenswrapper[4652]: I0216 17:34:49.137367 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 17:34:49.162611 master-0 kubenswrapper[4652]: I0216 17:34:49.162547 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 17:34:49.224552 master-0 kubenswrapper[4652]: I0216 17:34:49.224490 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 17:34:49.227810 master-0 kubenswrapper[4652]: I0216 17:34:49.227775 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Feb 16 17:34:49.250588 master-0 kubenswrapper[4652]: I0216 17:34:49.250544 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 17:34:49.255777 master-0 kubenswrapper[4652]: I0216 17:34:49.255742 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 17:34:49.287854 master-0 kubenswrapper[4652]: I0216 17:34:49.287805 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 17:34:49.294382 master-0 kubenswrapper[4652]: I0216 17:34:49.294329 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 17:34:49.324955 master-0 kubenswrapper[4652]: I0216 17:34:49.324904 4652 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 17:34:49.332329 master-0 kubenswrapper[4652]: I0216 17:34:49.332237 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:34:49.332531 master-0 kubenswrapper[4652]: I0216 17:34:49.332353 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-apiserver/kube-apiserver-master-0"] Feb 16 17:34:49.338228 master-0 kubenswrapper[4652]: I0216 17:34:49.338189 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Feb 16 17:34:49.352904 master-0 kubenswrapper[4652]: I0216 17:34:49.352825 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=18.352806412 podStartE2EDuration="18.352806412s" podCreationTimestamp="2026-02-16 17:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:34:49.350595364 +0000 UTC m=+646.738763900" watchObservedRunningTime="2026-02-16 17:34:49.352806412 +0000 UTC m=+646.740974938" Feb 16 17:34:49.375849 master-0 kubenswrapper[4652]: I0216 17:34:49.375771 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=17.375752725 podStartE2EDuration="17.375752725s" podCreationTimestamp="2026-02-16 17:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:34:49.370499817 +0000 UTC m=+646.758668333" watchObservedRunningTime="2026-02-16 17:34:49.375752725 +0000 UTC m=+646.763921241" Feb 16 17:34:49.387368 master-0 kubenswrapper[4652]: I0216 17:34:49.387323 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 17:34:49.391831 master-0 kubenswrapper[4652]: I0216 17:34:49.391753 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 17:34:49.407307 master-0 kubenswrapper[4652]: I0216 17:34:49.407269 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 17:34:49.436262 master-0 kubenswrapper[4652]: I0216 17:34:49.436194 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 17:34:49.455177 master-0 kubenswrapper[4652]: I0216 17:34:49.455125 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 17:34:49.521434 master-0 kubenswrapper[4652]: I0216 17:34:49.521368 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:34:49.522222 master-0 kubenswrapper[4652]: I0216 17:34:49.522185 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 17:34:49.548030 master-0 kubenswrapper[4652]: I0216 17:34:49.547978 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 17:34:49.586369 master-0 kubenswrapper[4652]: I0216 17:34:49.586313 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 17:34:49.651120 master-0 kubenswrapper[4652]: I0216 17:34:49.650992 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 17:34:49.696932 master-0 kubenswrapper[4652]: I0216 17:34:49.696870 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 17:34:49.755405 master-0 kubenswrapper[4652]: I0216 17:34:49.755332 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Feb 16 17:34:49.796231 master-0 kubenswrapper[4652]: I0216 17:34:49.796155 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 17:34:49.807699 master-0 kubenswrapper[4652]: I0216 17:34:49.807636 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Feb 16 17:34:49.834585 master-0 kubenswrapper[4652]: I0216 17:34:49.834468 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 17:34:49.854592 master-0 kubenswrapper[4652]: I0216 17:34:49.854499 4652 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 17:34:49.864686 master-0 kubenswrapper[4652]: I0216 17:34:49.864624 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Feb 16 17:34:49.909224 master-0 kubenswrapper[4652]: I0216 17:34:49.909076 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Feb 16 17:34:49.978466 master-0 kubenswrapper[4652]: I0216 17:34:49.978417 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 17:34:50.028063 master-0 kubenswrapper[4652]: I0216 17:34:50.028008 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 17:34:50.036376 master-0 kubenswrapper[4652]: I0216 17:34:50.036326 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-wxz7g" Feb 16 17:34:50.076614 master-0 kubenswrapper[4652]: I0216 17:34:50.076569 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 17:34:50.155959 master-0 kubenswrapper[4652]: I0216 17:34:50.155881 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 17:34:50.217052 master-0 kubenswrapper[4652]: I0216 17:34:50.216785 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Feb 16 17:34:50.240572 master-0 kubenswrapper[4652]: I0216 17:34:50.240506 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 17:34:50.262699 master-0 kubenswrapper[4652]: I0216 17:34:50.262627 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Feb 16 17:34:50.313017 master-0 kubenswrapper[4652]: I0216 17:34:50.312911 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Feb 16 17:34:50.373428 master-0 kubenswrapper[4652]: I0216 17:34:50.373366 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Feb 16 17:34:50.374416 master-0 kubenswrapper[4652]: I0216 17:34:50.374388 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 17:34:50.379752 master-0 kubenswrapper[4652]: I0216 17:34:50.379698 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 17:34:50.455741 master-0 kubenswrapper[4652]: I0216 17:34:50.455675 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Feb 16 17:34:50.534222 master-0 kubenswrapper[4652]: I0216 17:34:50.534059 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 17:34:50.543593 master-0 kubenswrapper[4652]: I0216 17:34:50.543533 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 17:34:50.544324 master-0 kubenswrapper[4652]: I0216 17:34:50.544206 4652 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 17:34:50.695681 master-0 kubenswrapper[4652]: I0216 17:34:50.695552 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 17:34:50.741431 master-0 kubenswrapper[4652]: I0216 17:34:50.741334 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 17:34:50.851748 master-0 kubenswrapper[4652]: I0216 17:34:50.851633 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 17:34:50.889527 master-0 kubenswrapper[4652]: I0216 17:34:50.889467 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 17:34:50.909501 master-0 kubenswrapper[4652]: I0216 17:34:50.909466 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 17:34:50.915881 master-0 kubenswrapper[4652]: I0216 17:34:50.915858 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 17:34:50.984385 master-0 kubenswrapper[4652]: I0216 17:34:50.984347 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 17:34:51.006996 master-0 kubenswrapper[4652]: I0216 17:34:51.006941 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Feb 16 17:34:51.088446 master-0 kubenswrapper[4652]: I0216 17:34:51.088389 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Feb 16 17:34:51.091425 master-0 kubenswrapper[4652]: I0216 17:34:51.091385 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 17:34:51.198870 master-0 kubenswrapper[4652]: I0216 17:34:51.198817 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 17:34:51.202647 master-0 kubenswrapper[4652]: I0216 17:34:51.202601 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Feb 16 17:34:51.244450 master-0 kubenswrapper[4652]: I0216 17:34:51.244363 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Feb 16 17:34:51.249916 master-0 kubenswrapper[4652]: I0216 17:34:51.249869 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 17:34:51.257601 master-0 kubenswrapper[4652]: I0216 17:34:51.257557 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 17:34:51.325996 master-0 kubenswrapper[4652]: I0216 17:34:51.325915 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Feb 16 17:34:51.384616 master-0 kubenswrapper[4652]: I0216 17:34:51.384565 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-r5p9m" Feb 16 17:34:51.417431 master-0 kubenswrapper[4652]: I0216 17:34:51.417355 4652 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Feb 16 17:34:51.444088 master-0 kubenswrapper[4652]: I0216 17:34:51.444024 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 17:34:51.450698 master-0 kubenswrapper[4652]: I0216 17:34:51.450621 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 17:34:51.480444 master-0 kubenswrapper[4652]: I0216 17:34:51.480370 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 17:34:51.515548 master-0 kubenswrapper[4652]: I0216 17:34:51.515448 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 17:34:51.538689 master-0 kubenswrapper[4652]: I0216 17:34:51.538628 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 17:34:51.559619 master-0 kubenswrapper[4652]: I0216 17:34:51.559561 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 17:34:51.587717 master-0 kubenswrapper[4652]: I0216 17:34:51.587656 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 17:34:51.590281 master-0 kubenswrapper[4652]: I0216 17:34:51.590228 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-q5h8t" Feb 16 17:34:51.647715 master-0 kubenswrapper[4652]: I0216 17:34:51.647673 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Feb 16 17:34:51.673946 master-0 kubenswrapper[4652]: I0216 17:34:51.673897 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Feb 16 17:34:51.737614 master-0 kubenswrapper[4652]: I0216 17:34:51.737512 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 17:34:51.806786 master-0 kubenswrapper[4652]: I0216 17:34:51.806742 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Feb 16 17:34:51.925142 master-0 kubenswrapper[4652]: I0216 17:34:51.925083 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 17:34:51.941792 master-0 kubenswrapper[4652]: I0216 17:34:51.941739 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 17:34:51.997970 master-0 kubenswrapper[4652]: I0216 17:34:51.997795 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 17:34:52.013454 master-0 kubenswrapper[4652]: I0216 17:34:52.013397 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 17:34:52.023210 master-0 kubenswrapper[4652]: I0216 17:34:52.023165 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 17:34:52.095185 master-0 kubenswrapper[4652]: I0216 
17:34:52.095109 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 17:34:52.147103 master-0 kubenswrapper[4652]: I0216 17:34:52.147039 4652 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 17:34:52.207298 master-0 kubenswrapper[4652]: I0216 17:34:52.206773 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-3enh2b6fkpcog" Feb 16 17:34:52.249713 master-0 kubenswrapper[4652]: I0216 17:34:52.249569 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Feb 16 17:34:52.252943 master-0 kubenswrapper[4652]: I0216 17:34:52.252899 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 17:34:52.299985 master-0 kubenswrapper[4652]: I0216 17:34:52.299938 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 17:34:52.347274 master-0 kubenswrapper[4652]: I0216 17:34:52.345109 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:34:52.410147 master-0 kubenswrapper[4652]: I0216 17:34:52.410103 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 17:34:52.434037 master-0 kubenswrapper[4652]: I0216 17:34:52.433989 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Feb 16 17:34:52.497717 master-0 kubenswrapper[4652]: I0216 17:34:52.497671 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 17:34:52.503074 master-0 kubenswrapper[4652]: I0216 17:34:52.502994 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 17:34:52.565539 master-0 kubenswrapper[4652]: I0216 17:34:52.565480 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-skfh4"] Feb 16 17:34:52.565770 master-0 kubenswrapper[4652]: E0216 17:34:52.565745 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" containerName="installer" Feb 16 17:34:52.565770 master-0 kubenswrapper[4652]: I0216 17:34:52.565755 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" containerName="installer" Feb 16 17:34:52.565914 master-0 kubenswrapper[4652]: I0216 17:34:52.565889 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e06521-316a-4ca0-9e59-b0f196db417e" containerName="installer" Feb 16 17:34:52.566349 master-0 kubenswrapper[4652]: I0216 17:34:52.566320 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:52.570235 master-0 kubenswrapper[4652]: W0216 17:34:52.570189 4652 reflector.go:561] object-"sushy-emulator"/"os-client-config": failed to list *v1.Secret: secrets "os-client-config" is forbidden: User "system:node:master-0" cannot list resource "secrets" in API group "" in the namespace "sushy-emulator": no relationship found between node 'master-0' and this object Feb 16 17:34:52.570385 master-0 kubenswrapper[4652]: E0216 17:34:52.570258 4652 reflector.go:158] "Unhandled Error" err="object-\"sushy-emulator\"/\"os-client-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"os-client-config\" is forbidden: User \"system:node:master-0\" cannot list resource \"secrets\" in API group \"\" in the namespace \"sushy-emulator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 16 17:34:52.570385 master-0 kubenswrapper[4652]: W0216 17:34:52.570289 4652 reflector.go:561] object-"sushy-emulator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "sushy-emulator": no relationship found between node 'master-0' and this object Feb 16 17:34:52.570385 master-0 kubenswrapper[4652]: W0216 17:34:52.570293 4652 reflector.go:561] object-"sushy-emulator"/"sushy-emulator-config": failed to list *v1.ConfigMap: configmaps "sushy-emulator-config" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "sushy-emulator": no relationship found between node 'master-0' and this object Feb 16 17:34:52.570385 master-0 kubenswrapper[4652]: E0216 17:34:52.570319 4652 reflector.go:158] "Unhandled Error" err="object-\"sushy-emulator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"sushy-emulator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 16 17:34:52.570385 master-0 kubenswrapper[4652]: E0216 17:34:52.570328 4652 reflector.go:158] "Unhandled Error" err="object-\"sushy-emulator\"/\"sushy-emulator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"sushy-emulator-config\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"sushy-emulator\": no relationship found between node 'master-0' and this object" logger="UnhandledError" Feb 16 17:34:52.570787 master-0 kubenswrapper[4652]: W0216 17:34:52.570755 4652 reflector.go:561] object-"sushy-emulator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:master-0" cannot list resource "configmaps" in API group "" in the namespace "sushy-emulator": no relationship found between node 'master-0' and this object Feb 16 17:34:52.570851 master-0 kubenswrapper[4652]: E0216 17:34:52.570784 4652 reflector.go:158] "Unhandled Error" err="object-\"sushy-emulator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:master-0\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"sushy-emulator\": no relationship found between 
node 'master-0' and this object" logger="UnhandledError" Feb 16 17:34:52.590077 master-0 kubenswrapper[4652]: I0216 17:34:52.590014 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-skfh4"] Feb 16 17:34:52.607308 master-0 kubenswrapper[4652]: I0216 17:34:52.601824 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Feb 16 17:34:52.647105 master-0 kubenswrapper[4652]: I0216 17:34:52.647038 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnkp8\" (UniqueName: \"kubernetes.io/projected/95f052f3-eab9-49a0-b95f-51722af6f1f9-kube-api-access-xnkp8\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:52.647105 master-0 kubenswrapper[4652]: I0216 17:34:52.647106 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/95f052f3-eab9-49a0-b95f-51722af6f1f9-os-client-config\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:52.647337 master-0 kubenswrapper[4652]: I0216 17:34:52.647145 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/95f052f3-eab9-49a0-b95f-51722af6f1f9-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:52.748500 master-0 kubenswrapper[4652]: I0216 17:34:52.748434 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnkp8\" (UniqueName: \"kubernetes.io/projected/95f052f3-eab9-49a0-b95f-51722af6f1f9-kube-api-access-xnkp8\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:52.748750 master-0 kubenswrapper[4652]: I0216 17:34:52.748540 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/95f052f3-eab9-49a0-b95f-51722af6f1f9-os-client-config\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:52.748750 master-0 kubenswrapper[4652]: I0216 17:34:52.748691 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/95f052f3-eab9-49a0-b95f-51722af6f1f9-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:52.759084 master-0 kubenswrapper[4652]: I0216 17:34:52.759016 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 17:34:52.759293 master-0 kubenswrapper[4652]: I0216 17:34:52.759042 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 17:34:52.821762 master-0 kubenswrapper[4652]: I0216 17:34:52.821711 4652 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Feb 16 17:34:52.886589 master-0 kubenswrapper[4652]: I0216 17:34:52.886532 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Feb 16 17:34:52.893452 master-0 kubenswrapper[4652]: I0216 17:34:52.893408 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Feb 16 17:34:52.919331 master-0 kubenswrapper[4652]: I0216 17:34:52.919229 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-gvwqd" Feb 16 17:34:53.013733 master-0 kubenswrapper[4652]: I0216 17:34:53.013599 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Feb 16 17:34:53.027842 master-0 kubenswrapper[4652]: I0216 17:34:53.027795 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 17:34:53.082686 master-0 kubenswrapper[4652]: I0216 17:34:53.082637 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 17:34:53.116094 master-0 kubenswrapper[4652]: I0216 17:34:53.116046 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Feb 16 17:34:53.120074 master-0 kubenswrapper[4652]: I0216 17:34:53.120042 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Feb 16 17:34:53.129088 master-0 kubenswrapper[4652]: I0216 17:34:53.128929 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 17:34:53.134498 master-0 kubenswrapper[4652]: I0216 17:34:53.134458 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 17:34:53.158650 master-0 kubenswrapper[4652]: I0216 17:34:53.158606 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 17:34:53.170649 master-0 kubenswrapper[4652]: I0216 17:34:53.170611 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 17:34:53.267270 master-0 kubenswrapper[4652]: I0216 17:34:53.267147 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 17:34:53.289590 master-0 kubenswrapper[4652]: I0216 17:34:53.289548 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:34:53.291440 master-0 kubenswrapper[4652]: I0216 17:34:53.291419 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 17:34:53.315865 master-0 kubenswrapper[4652]: I0216 17:34:53.315813 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 17:34:53.343268 master-0 kubenswrapper[4652]: I0216 17:34:53.343190 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 17:34:53.414454 master-0 kubenswrapper[4652]: I0216 17:34:53.414396 4652 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Feb 16 17:34:53.503937 master-0 kubenswrapper[4652]: I0216 17:34:53.503855 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-q2gzj" Feb 16 17:34:53.520549 master-0 kubenswrapper[4652]: I0216 17:34:53.520419 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 17:34:53.570118 master-0 kubenswrapper[4652]: I0216 17:34:53.570056 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"kube-root-ca.crt" Feb 16 17:34:53.598563 master-0 kubenswrapper[4652]: I0216 17:34:53.598456 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 17:34:53.606778 master-0 kubenswrapper[4652]: I0216 17:34:53.606699 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Feb 16 17:34:53.642195 master-0 kubenswrapper[4652]: I0216 17:34:53.642122 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Feb 16 17:34:53.660192 master-0 kubenswrapper[4652]: I0216 17:34:53.660142 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 17:34:53.667608 master-0 kubenswrapper[4652]: I0216 17:34:53.667573 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 17:34:53.675837 master-0 kubenswrapper[4652]: I0216 17:34:53.675792 4652 reflector.go:368] Caches populated for *v1.Secret from object-"sushy-emulator"/"os-client-config" Feb 16 17:34:53.682762 master-0 kubenswrapper[4652]: I0216 17:34:53.682715 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/95f052f3-eab9-49a0-b95f-51722af6f1f9-os-client-config\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:53.740841 master-0 kubenswrapper[4652]: I0216 17:34:53.740780 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-b9gfw" Feb 16 17:34:53.749405 master-0 kubenswrapper[4652]: E0216 17:34:53.749342 4652 configmap.go:193] Couldn't get configMap sushy-emulator/sushy-emulator-config: failed to sync configmap cache: timed out waiting for the condition Feb 16 17:34:53.749405 master-0 kubenswrapper[4652]: E0216 17:34:53.749413 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/95f052f3-eab9-49a0-b95f-51722af6f1f9-sushy-emulator-config podName:95f052f3-eab9-49a0-b95f-51722af6f1f9 nodeName:}" failed. No retries permitted until 2026-02-16 17:34:54.24939776 +0000 UTC m=+651.637566276 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "sushy-emulator-config" (UniqueName: "kubernetes.io/configmap/95f052f3-eab9-49a0-b95f-51722af6f1f9-sushy-emulator-config") pod "sushy-emulator-58f4c9b998-skfh4" (UID: "95f052f3-eab9-49a0-b95f-51722af6f1f9") : failed to sync configmap cache: timed out waiting for the condition Feb 16 17:34:53.800262 master-0 kubenswrapper[4652]: I0216 17:34:53.800127 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-7mlbn" Feb 16 17:34:53.853067 master-0 kubenswrapper[4652]: I0216 17:34:53.853009 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 17:34:53.871182 master-0 kubenswrapper[4652]: I0216 17:34:53.871110 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"sushy-emulator-config" Feb 16 17:34:53.872363 master-0 kubenswrapper[4652]: I0216 17:34:53.872317 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Feb 16 17:34:53.904712 master-0 kubenswrapper[4652]: I0216 17:34:53.904662 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 17:34:53.907717 master-0 kubenswrapper[4652]: I0216 17:34:53.907675 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 17:34:54.000472 master-0 kubenswrapper[4652]: I0216 17:34:54.000391 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Feb 16 17:34:54.001836 master-0 kubenswrapper[4652]: I0216 17:34:54.001778 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Feb 16 17:34:54.025542 master-0 kubenswrapper[4652]: I0216 17:34:54.025051 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"sushy-emulator"/"openshift-service-ca.crt" Feb 16 17:34:54.034754 master-0 kubenswrapper[4652]: I0216 17:34:54.034725 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnkp8\" (UniqueName: \"kubernetes.io/projected/95f052f3-eab9-49a0-b95f-51722af6f1f9-kube-api-access-xnkp8\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:54.071970 master-0 kubenswrapper[4652]: I0216 17:34:54.071862 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 17:34:54.151469 master-0 kubenswrapper[4652]: I0216 17:34:54.151432 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 17:34:54.190912 master-0 kubenswrapper[4652]: I0216 17:34:54.190844 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Feb 16 17:34:54.276938 master-0 kubenswrapper[4652]: I0216 17:34:54.276875 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/95f052f3-eab9-49a0-b95f-51722af6f1f9-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: 
\"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:54.278076 master-0 kubenswrapper[4652]: I0216 17:34:54.278043 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/95f052f3-eab9-49a0-b95f-51722af6f1f9-sushy-emulator-config\") pod \"sushy-emulator-58f4c9b998-skfh4\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:54.284806 master-0 kubenswrapper[4652]: I0216 17:34:54.284488 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Feb 16 17:34:54.292598 master-0 kubenswrapper[4652]: I0216 17:34:54.292565 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 17:34:54.307382 master-0 kubenswrapper[4652]: I0216 17:34:54.307343 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 17:34:54.397009 master-0 kubenswrapper[4652]: I0216 17:34:54.396879 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:54.482776 master-0 kubenswrapper[4652]: I0216 17:34:54.482735 4652 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 17:34:54.495940 master-0 kubenswrapper[4652]: I0216 17:34:54.495908 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Feb 16 17:34:54.604788 master-0 kubenswrapper[4652]: I0216 17:34:54.603739 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Feb 16 17:34:54.623214 master-0 kubenswrapper[4652]: I0216 17:34:54.622941 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 17:34:54.649183 master-0 kubenswrapper[4652]: I0216 17:34:54.649076 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Feb 16 17:34:54.669071 master-0 kubenswrapper[4652]: I0216 17:34:54.669010 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 17:34:54.670858 master-0 kubenswrapper[4652]: I0216 17:34:54.670805 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 17:34:54.712658 master-0 kubenswrapper[4652]: I0216 17:34:54.712565 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 17:34:54.735304 master-0 kubenswrapper[4652]: I0216 17:34:54.735219 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 17:34:54.794418 master-0 kubenswrapper[4652]: I0216 17:34:54.794375 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 17:34:54.805612 master-0 kubenswrapper[4652]: I0216 17:34:54.805562 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 17:34:54.810602 master-0 
kubenswrapper[4652]: I0216 17:34:54.810562 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-t46bw" Feb 16 17:34:54.871967 master-0 kubenswrapper[4652]: I0216 17:34:54.871915 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Feb 16 17:34:54.913628 master-0 kubenswrapper[4652]: I0216 17:34:54.913476 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Feb 16 17:34:54.919892 master-0 kubenswrapper[4652]: I0216 17:34:54.919854 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 17:34:54.940824 master-0 kubenswrapper[4652]: I0216 17:34:54.940766 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 17:34:55.067430 master-0 kubenswrapper[4652]: I0216 17:34:55.067364 4652 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 17:34:55.089107 master-0 kubenswrapper[4652]: I0216 17:34:55.089035 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 17:34:55.096973 master-0 kubenswrapper[4652]: I0216 17:34:55.096915 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 17:34:55.125407 master-0 kubenswrapper[4652]: I0216 17:34:55.125352 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Feb 16 17:34:55.185656 master-0 kubenswrapper[4652]: I0216 17:34:55.185611 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 17:34:55.219273 master-0 kubenswrapper[4652]: I0216 17:34:55.219190 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 17:34:55.238949 master-0 kubenswrapper[4652]: I0216 17:34:55.238896 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 17:34:55.281337 master-0 kubenswrapper[4652]: I0216 17:34:55.281274 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 17:34:55.359715 master-0 kubenswrapper[4652]: I0216 17:34:55.359633 4652 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Feb 16 17:34:55.359966 master-0 kubenswrapper[4652]: I0216 17:34:55.359880 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor" containerID="cri-o://2b736de15d55c5c33cbcc57e1b8dc4d93da4a3c91fa4899a8e98cbfd4b63e35a" gracePeriod=5 Feb 16 17:34:55.398214 master-0 kubenswrapper[4652]: I0216 17:34:55.398159 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-ztpz8" Feb 16 17:34:55.459979 master-0 kubenswrapper[4652]: I0216 17:34:55.459874 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 17:34:55.539222 
master-0 kubenswrapper[4652]: I0216 17:34:55.539178 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Feb 16 17:34:55.709153 master-0 kubenswrapper[4652]: I0216 17:34:55.709096 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 17:34:55.709153 master-0 kubenswrapper[4652]: I0216 17:34:55.709115 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Feb 16 17:34:55.737197 master-0 kubenswrapper[4652]: I0216 17:34:55.737043 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 17:34:55.819804 master-0 kubenswrapper[4652]: I0216 17:34:55.819738 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 17:34:55.842265 master-0 kubenswrapper[4652]: I0216 17:34:55.842174 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 17:34:55.857928 master-0 kubenswrapper[4652]: I0216 17:34:55.857869 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 17:34:55.908267 master-0 kubenswrapper[4652]: I0216 17:34:55.908210 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 17:34:55.925287 master-0 kubenswrapper[4652]: I0216 17:34:55.925214 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Feb 16 17:34:56.018525 master-0 kubenswrapper[4652]: I0216 17:34:56.018349 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 17:34:56.172544 master-0 kubenswrapper[4652]: I0216 17:34:56.172449 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Feb 16 17:34:56.206034 master-0 kubenswrapper[4652]: I0216 17:34:56.205976 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Feb 16 17:34:56.243119 master-0 kubenswrapper[4652]: I0216 17:34:56.243053 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Feb 16 17:34:56.485933 master-0 kubenswrapper[4652]: I0216 17:34:56.485880 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Feb 16 17:34:56.529761 master-0 kubenswrapper[4652]: I0216 17:34:56.529699 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 17:34:56.573437 master-0 kubenswrapper[4652]: I0216 17:34:56.573385 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 17:34:56.616607 master-0 kubenswrapper[4652]: I0216 17:34:56.616506 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 17:34:56.627350 master-0 kubenswrapper[4652]: I0216 17:34:56.627236 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Feb 16 
17:34:56.660817 master-0 kubenswrapper[4652]: I0216 17:34:56.660754 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 17:34:56.711077 master-0 kubenswrapper[4652]: I0216 17:34:56.711003 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 17:34:56.800162 master-0 kubenswrapper[4652]: I0216 17:34:56.800050 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-hk5sk" Feb 16 17:34:57.012274 master-0 kubenswrapper[4652]: I0216 17:34:56.990864 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 17:34:57.131282 master-0 kubenswrapper[4652]: I0216 17:34:57.131161 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 17:34:57.163823 master-0 kubenswrapper[4652]: I0216 17:34:57.163360 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 17:34:57.222073 master-0 kubenswrapper[4652]: I0216 17:34:57.222019 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Feb 16 17:34:57.258420 master-0 kubenswrapper[4652]: I0216 17:34:57.258368 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 17:34:57.272159 master-0 kubenswrapper[4652]: I0216 17:34:57.272048 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Feb 16 17:34:57.350640 master-0 kubenswrapper[4652]: I0216 17:34:57.350606 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 17:34:57.352497 master-0 kubenswrapper[4652]: I0216 17:34:57.351457 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-j874l" Feb 16 17:34:57.383527 master-0 kubenswrapper[4652]: I0216 17:34:57.381554 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Feb 16 17:34:57.430004 master-0 kubenswrapper[4652]: I0216 17:34:57.429858 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 17:34:57.436240 master-0 kubenswrapper[4652]: I0216 17:34:57.436199 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 17:34:57.486692 master-0 kubenswrapper[4652]: I0216 17:34:57.482694 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 17:34:57.580358 master-0 kubenswrapper[4652]: I0216 17:34:57.580055 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-kh5s4" Feb 16 17:34:57.589511 master-0 kubenswrapper[4652]: E0216 17:34:57.589455 4652 log.go:32] "RunPodSandbox from runtime service failed" err=< Feb 16 17:34:57.589511 master-0 kubenswrapper[4652]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_sushy-emulator-58f4c9b998-skfh4_sushy-emulator_95f052f3-eab9-49a0-b95f-51722af6f1f9_0(dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77): error adding pod sushy-emulator_sushy-emulator-58f4c9b998-skfh4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77" Netns:"/var/run/netns/476e8a8e-87b7-40c8-8a98-decc8d8d6701" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=sushy-emulator;K8S_POD_NAME=sushy-emulator-58f4c9b998-skfh4;K8S_POD_INFRA_CONTAINER_ID=dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77;K8S_POD_UID=95f052f3-eab9-49a0-b95f-51722af6f1f9" Path:"" ERRORED: error configuring pod [sushy-emulator/sushy-emulator-58f4c9b998-skfh4] networking: Multus: [sushy-emulator/sushy-emulator-58f4c9b998-skfh4/95f052f3-eab9-49a0-b95f-51722af6f1f9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod sushy-emulator-58f4c9b998-skfh4 in out of cluster comm: pod "sushy-emulator-58f4c9b998-skfh4" not found Feb 16 17:34:57.589511 master-0 kubenswrapper[4652]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:34:57.589511 master-0 kubenswrapper[4652]: > Feb 16 17:34:57.589802 master-0 kubenswrapper[4652]: E0216 17:34:57.589539 4652 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Feb 16 17:34:57.589802 master-0 kubenswrapper[4652]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sushy-emulator-58f4c9b998-skfh4_sushy-emulator_95f052f3-eab9-49a0-b95f-51722af6f1f9_0(dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77): error adding pod sushy-emulator_sushy-emulator-58f4c9b998-skfh4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77" Netns:"/var/run/netns/476e8a8e-87b7-40c8-8a98-decc8d8d6701" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=sushy-emulator;K8S_POD_NAME=sushy-emulator-58f4c9b998-skfh4;K8S_POD_INFRA_CONTAINER_ID=dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77;K8S_POD_UID=95f052f3-eab9-49a0-b95f-51722af6f1f9" Path:"" ERRORED: error configuring pod [sushy-emulator/sushy-emulator-58f4c9b998-skfh4] networking: Multus: [sushy-emulator/sushy-emulator-58f4c9b998-skfh4/95f052f3-eab9-49a0-b95f-51722af6f1f9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod sushy-emulator-58f4c9b998-skfh4 in out of cluster comm: pod "sushy-emulator-58f4c9b998-skfh4" not found Feb 16 17:34:57.589802 master-0 kubenswrapper[4652]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:34:57.589802 master-0 kubenswrapper[4652]: > pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:57.589802 master-0 kubenswrapper[4652]: E0216 17:34:57.589566 4652 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Feb 16 17:34:57.589802 master-0 kubenswrapper[4652]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sushy-emulator-58f4c9b998-skfh4_sushy-emulator_95f052f3-eab9-49a0-b95f-51722af6f1f9_0(dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77): error adding pod sushy-emulator_sushy-emulator-58f4c9b998-skfh4 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77" Netns:"/var/run/netns/476e8a8e-87b7-40c8-8a98-decc8d8d6701" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=sushy-emulator;K8S_POD_NAME=sushy-emulator-58f4c9b998-skfh4;K8S_POD_INFRA_CONTAINER_ID=dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77;K8S_POD_UID=95f052f3-eab9-49a0-b95f-51722af6f1f9" Path:"" ERRORED: error configuring pod [sushy-emulator/sushy-emulator-58f4c9b998-skfh4] networking: Multus: [sushy-emulator/sushy-emulator-58f4c9b998-skfh4/95f052f3-eab9-49a0-b95f-51722af6f1f9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod sushy-emulator-58f4c9b998-skfh4 in out of cluster comm: pod "sushy-emulator-58f4c9b998-skfh4" not found Feb 16 17:34:57.589802 master-0 kubenswrapper[4652]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Feb 16 17:34:57.589802 master-0 kubenswrapper[4652]: > pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:57.589802 master-0 kubenswrapper[4652]: E0216 17:34:57.589636 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"sushy-emulator-58f4c9b998-skfh4_sushy-emulator(95f052f3-eab9-49a0-b95f-51722af6f1f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"sushy-emulator-58f4c9b998-skfh4_sushy-emulator(95f052f3-eab9-49a0-b95f-51722af6f1f9)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sushy-emulator-58f4c9b998-skfh4_sushy-emulator_95f052f3-eab9-49a0-b95f-51722af6f1f9_0(dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77): error adding pod sushy-emulator_sushy-emulator-58f4c9b998-skfh4 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77\\\" Netns:\\\"/var/run/netns/476e8a8e-87b7-40c8-8a98-decc8d8d6701\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=sushy-emulator;K8S_POD_NAME=sushy-emulator-58f4c9b998-skfh4;K8S_POD_INFRA_CONTAINER_ID=dbc067f889a0ca2ce319fa8b07329b25170a59e4f88c259920326f9cab57be77;K8S_POD_UID=95f052f3-eab9-49a0-b95f-51722af6f1f9\\\" Path:\\\"\\\" ERRORED: error configuring pod [sushy-emulator/sushy-emulator-58f4c9b998-skfh4] networking: Multus: [sushy-emulator/sushy-emulator-58f4c9b998-skfh4/95f052f3-eab9-49a0-b95f-51722af6f1f9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod sushy-emulator-58f4c9b998-skfh4 in out of cluster comm: pod \\\"sushy-emulator-58f4c9b998-skfh4\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" podUID="95f052f3-eab9-49a0-b95f-51722af6f1f9" Feb 16 17:34:57.623443 master-0 kubenswrapper[4652]: I0216 17:34:57.623385 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 17:34:57.692524 master-0 kubenswrapper[4652]: I0216 17:34:57.692467 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 17:34:57.705411 master-0 kubenswrapper[4652]: I0216 17:34:57.705366 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Feb 16 17:34:57.717002 master-0 kubenswrapper[4652]: I0216 17:34:57.716960 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 17:34:57.824649 master-0 kubenswrapper[4652]: I0216 17:34:57.824594 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-lc8g2" Feb 16 17:34:57.835732 master-0 kubenswrapper[4652]: I0216 17:34:57.835680 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 17:34:57.836457 master-0 kubenswrapper[4652]: I0216 17:34:57.836426 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Feb 16 17:34:57.871796 master-0 kubenswrapper[4652]: I0216 17:34:57.871732 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Feb 16 17:34:57.881104 master-0 kubenswrapper[4652]: I0216 17:34:57.881058 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Feb 16 17:34:57.935921 master-0 kubenswrapper[4652]: I0216 17:34:57.935862 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 17:34:57.997643 master-0 kubenswrapper[4652]: I0216 17:34:57.997529 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 17:34:58.003327 master-0 kubenswrapper[4652]: I0216 17:34:58.003299 4652 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 17:34:58.200602 master-0 kubenswrapper[4652]: I0216 17:34:58.200503 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:58.210526 master-0 kubenswrapper[4652]: I0216 17:34:58.210484 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:34:58.284590 master-0 kubenswrapper[4652]: I0216 17:34:58.278343 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Feb 16 17:34:58.405108 master-0 kubenswrapper[4652]: I0216 17:34:58.405027 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 17:34:58.480451 master-0 kubenswrapper[4652]: I0216 17:34:58.480386 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 17:34:58.563989 master-0 kubenswrapper[4652]: I0216 17:34:58.563877 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Feb 16 17:34:58.616314 master-0 kubenswrapper[4652]: I0216 17:34:58.600560 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-skfh4"] Feb 16 17:34:58.616759 master-0 kubenswrapper[4652]: I0216 17:34:58.609718 4652 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:34:58.683482 master-0 kubenswrapper[4652]: I0216 17:34:58.683417 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 17:34:58.724697 master-0 kubenswrapper[4652]: I0216 17:34:58.724611 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 17:34:58.762456 master-0 kubenswrapper[4652]: I0216 17:34:58.762364 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Feb 16 17:34:58.801174 master-0 kubenswrapper[4652]: I0216 17:34:58.801086 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 17:34:59.209607 master-0 kubenswrapper[4652]: I0216 17:34:59.207433 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" event={"ID":"95f052f3-eab9-49a0-b95f-51722af6f1f9","Type":"ContainerStarted","Data":"a5ef3681793fc53179c226419e2596e4276becca6648d8a7b613a5707a73217b"} Feb 16 17:34:59.325498 master-0 kubenswrapper[4652]: I0216 17:34:59.325423 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Feb 16 17:34:59.382789 master-0 kubenswrapper[4652]: I0216 17:34:59.382736 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 17:34:59.424081 master-0 kubenswrapper[4652]: I0216 17:34:59.424037 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 17:34:59.598212 master-0 kubenswrapper[4652]: I0216 17:34:59.598038 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 
17:35:00.211924 master-0 kubenswrapper[4652]: I0216 17:35:00.211879 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 17:35:01.225923 master-0 kubenswrapper[4652]: I0216 17:35:01.225876 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a9a2b3a37af32e5d570b82bfd956f250/startup-monitor/0.log" Feb 16 17:35:01.226618 master-0 kubenswrapper[4652]: I0216 17:35:01.225945 4652 generic.go:334] "Generic (PLEG): container finished" podID="a9a2b3a37af32e5d570b82bfd956f250" containerID="2b736de15d55c5c33cbcc57e1b8dc4d93da4a3c91fa4899a8e98cbfd4b63e35a" exitCode=137 Feb 16 17:35:01.843093 master-0 kubenswrapper[4652]: I0216 17:35:01.843041 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a9a2b3a37af32e5d570b82bfd956f250/startup-monitor/0.log" Feb 16 17:35:01.843298 master-0 kubenswrapper[4652]: I0216 17:35:01.843110 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:35:01.903937 master-0 kubenswrapper[4652]: I0216 17:35:01.903776 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 17:35:01.903937 master-0 kubenswrapper[4652]: I0216 17:35:01.903860 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 17:35:01.903937 master-0 kubenswrapper[4652]: I0216 17:35:01.903913 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 17:35:01.903937 master-0 kubenswrapper[4652]: I0216 17:35:01.903950 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 17:35:01.904470 master-0 kubenswrapper[4652]: I0216 17:35:01.903967 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") pod \"a9a2b3a37af32e5d570b82bfd956f250\" (UID: \"a9a2b3a37af32e5d570b82bfd956f250\") " Feb 16 17:35:01.904470 master-0 kubenswrapper[4652]: I0216 17:35:01.903984 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests" (OuterVolumeSpecName: "manifests") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:35:01.904470 master-0 kubenswrapper[4652]: I0216 17:35:01.904043 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock" (OuterVolumeSpecName: "var-lock") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:35:01.904470 master-0 kubenswrapper[4652]: I0216 17:35:01.904074 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log" (OuterVolumeSpecName: "var-log") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:35:01.904470 master-0 kubenswrapper[4652]: I0216 17:35:01.904173 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:35:01.904820 master-0 kubenswrapper[4652]: I0216 17:35:01.904565 4652 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-lock\") on node \"master-0\" DevicePath \"\"" Feb 16 17:35:01.904820 master-0 kubenswrapper[4652]: I0216 17:35:01.904595 4652 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-var-log\") on node \"master-0\" DevicePath \"\"" Feb 16 17:35:01.904820 master-0 kubenswrapper[4652]: I0216 17:35:01.904613 4652 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:35:01.904820 master-0 kubenswrapper[4652]: I0216 17:35:01.904634 4652 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-manifests\") on node \"master-0\" DevicePath \"\"" Feb 16 17:35:01.908989 master-0 kubenswrapper[4652]: I0216 17:35:01.908934 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "a9a2b3a37af32e5d570b82bfd956f250" (UID: "a9a2b3a37af32e5d570b82bfd956f250"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:35:02.006512 master-0 kubenswrapper[4652]: I0216 17:35:02.006444 4652 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a9a2b3a37af32e5d570b82bfd956f250-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:35:02.234137 master-0 kubenswrapper[4652]: I0216 17:35:02.234060 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a9a2b3a37af32e5d570b82bfd956f250/startup-monitor/0.log" Feb 16 17:35:02.234137 master-0 kubenswrapper[4652]: I0216 17:35:02.234142 4652 scope.go:117] "RemoveContainer" containerID="2b736de15d55c5c33cbcc57e1b8dc4d93da4a3c91fa4899a8e98cbfd4b63e35a" Feb 16 17:35:02.234758 master-0 kubenswrapper[4652]: I0216 17:35:02.234216 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Feb 16 17:35:02.754465 master-0 kubenswrapper[4652]: I0216 17:35:02.754398 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a2b3a37af32e5d570b82bfd956f250" path="/var/lib/kubelet/pods/a9a2b3a37af32e5d570b82bfd956f250/volumes" Feb 16 17:35:05.255593 master-0 kubenswrapper[4652]: I0216 17:35:05.255458 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" event={"ID":"95f052f3-eab9-49a0-b95f-51722af6f1f9","Type":"ContainerStarted","Data":"6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a"} Feb 16 17:35:05.285625 master-0 kubenswrapper[4652]: I0216 17:35:05.285513 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" podStartSLOduration=7.380467198 podStartE2EDuration="13.285494631s" podCreationTimestamp="2026-02-16 17:34:52 +0000 UTC" firstStartedPulling="2026-02-16 17:34:58.609654135 +0000 UTC m=+655.997822691" lastFinishedPulling="2026-02-16 17:35:04.514681578 +0000 UTC m=+661.902850124" observedRunningTime="2026-02-16 17:35:05.280543931 +0000 UTC m=+662.668712477" watchObservedRunningTime="2026-02-16 17:35:05.285494631 +0000 UTC m=+662.673663157" Feb 16 17:35:13.321785 master-0 kubenswrapper[4652]: I0216 17:35:13.321736 4652 generic.go:334] "Generic (PLEG): container finished" podID="74b2561b-933b-4c58-a63a-7a8c671d0ae9" containerID="e61d25d8c5643c0677cc37f1e09301730ebc6dee19ac41926d08e1447653354f" exitCode=0 Feb 16 17:35:13.321785 master-0 kubenswrapper[4652]: I0216 17:35:13.321782 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerDied","Data":"e61d25d8c5643c0677cc37f1e09301730ebc6dee19ac41926d08e1447653354f"} Feb 16 17:35:13.322489 master-0 kubenswrapper[4652]: I0216 17:35:13.322220 4652 scope.go:117] "RemoveContainer" containerID="e61d25d8c5643c0677cc37f1e09301730ebc6dee19ac41926d08e1447653354f" Feb 16 17:35:14.330805 master-0 kubenswrapper[4652]: I0216 17:35:14.330704 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" event={"ID":"74b2561b-933b-4c58-a63a-7a8c671d0ae9","Type":"ContainerStarted","Data":"32b61cb77b08192018821f29f65906fdf75c29e216a78ddcc5dbd17cec8d2c3f"} Feb 16 17:35:14.331423 master-0 kubenswrapper[4652]: I0216 17:35:14.331103 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:35:14.334752 master-0 kubenswrapper[4652]: I0216 17:35:14.334711 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-6cc5b65c6b-s4gp2" Feb 16 17:35:14.399350 master-0 kubenswrapper[4652]: I0216 17:35:14.397459 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:35:14.399350 master-0 kubenswrapper[4652]: I0216 17:35:14.397555 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:35:14.407312 master-0 kubenswrapper[4652]: I0216 17:35:14.406995 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:35:15.341769 master-0 kubenswrapper[4652]: I0216 17:35:15.341703 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:35:17.514900 master-0 kubenswrapper[4652]: I0216 17:35:17.514815 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/nova-console-poller-59f8d8d555-wcsb7"] Feb 16 17:35:17.515637 master-0 kubenswrapper[4652]: E0216 17:35:17.515280 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor" Feb 16 17:35:17.515637 master-0 kubenswrapper[4652]: I0216 17:35:17.515302 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor" Feb 16 17:35:17.515637 master-0 kubenswrapper[4652]: I0216 17:35:17.515582 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a2b3a37af32e5d570b82bfd956f250" containerName="startup-monitor" Feb 16 17:35:17.517157 master-0 kubenswrapper[4652]: I0216 17:35:17.517103 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" Feb 16 17:35:17.572295 master-0 kubenswrapper[4652]: I0216 17:35:17.528910 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-59f8d8d555-wcsb7"] Feb 16 17:35:17.573026 master-0 kubenswrapper[4652]: I0216 17:35:17.572955 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwxkz\" (UniqueName: \"kubernetes.io/projected/2c5f7933-0e77-443c-a372-369639c9e8ce-kube-api-access-dwxkz\") pod \"nova-console-poller-59f8d8d555-wcsb7\" (UID: \"2c5f7933-0e77-443c-a372-369639c9e8ce\") " pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" Feb 16 17:35:17.573570 master-0 kubenswrapper[4652]: I0216 17:35:17.573509 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2c5f7933-0e77-443c-a372-369639c9e8ce-os-client-config\") pod \"nova-console-poller-59f8d8d555-wcsb7\" (UID: \"2c5f7933-0e77-443c-a372-369639c9e8ce\") " pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" Feb 16 17:35:17.675401 master-0 kubenswrapper[4652]: I0216 17:35:17.675323 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2c5f7933-0e77-443c-a372-369639c9e8ce-os-client-config\") pod \"nova-console-poller-59f8d8d555-wcsb7\" (UID: \"2c5f7933-0e77-443c-a372-369639c9e8ce\") " pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" Feb 16 17:35:17.675676 master-0 kubenswrapper[4652]: I0216 17:35:17.675428 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwxkz\" (UniqueName: \"kubernetes.io/projected/2c5f7933-0e77-443c-a372-369639c9e8ce-kube-api-access-dwxkz\") pod \"nova-console-poller-59f8d8d555-wcsb7\" (UID: \"2c5f7933-0e77-443c-a372-369639c9e8ce\") " pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" Feb 16 17:35:17.679911 master-0 kubenswrapper[4652]: I0216 17:35:17.679853 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/2c5f7933-0e77-443c-a372-369639c9e8ce-os-client-config\") pod \"nova-console-poller-59f8d8d555-wcsb7\" (UID: \"2c5f7933-0e77-443c-a372-369639c9e8ce\") " pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" Feb 16 17:35:17.702850 master-0 kubenswrapper[4652]: I0216 17:35:17.702791 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwxkz\" (UniqueName: \"kubernetes.io/projected/2c5f7933-0e77-443c-a372-369639c9e8ce-kube-api-access-dwxkz\") pod \"nova-console-poller-59f8d8d555-wcsb7\" (UID: \"2c5f7933-0e77-443c-a372-369639c9e8ce\") " pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" Feb 16 17:35:17.889376 master-0 kubenswrapper[4652]: I0216 17:35:17.889226 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" Feb 16 17:35:18.280998 master-0 kubenswrapper[4652]: W0216 17:35:18.280950 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c5f7933_0e77_443c_a372_369639c9e8ce.slice/crio-60e262ad85bb6189f2beb206cbed37195afb47a82dcf03e0a84c7049a582ef1c WatchSource:0}: Error finding container 60e262ad85bb6189f2beb206cbed37195afb47a82dcf03e0a84c7049a582ef1c: Status 404 returned error can't find the container with id 60e262ad85bb6189f2beb206cbed37195afb47a82dcf03e0a84c7049a582ef1c Feb 16 17:35:18.283283 master-0 kubenswrapper[4652]: I0216 17:35:18.283220 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/nova-console-poller-59f8d8d555-wcsb7"] Feb 16 17:35:18.360026 master-0 kubenswrapper[4652]: I0216 17:35:18.359948 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" event={"ID":"2c5f7933-0e77-443c-a372-369639c9e8ce","Type":"ContainerStarted","Data":"60e262ad85bb6189f2beb206cbed37195afb47a82dcf03e0a84c7049a582ef1c"} Feb 16 17:35:23.401329 master-0 kubenswrapper[4652]: I0216 17:35:23.401156 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" event={"ID":"2c5f7933-0e77-443c-a372-369639c9e8ce","Type":"ContainerStarted","Data":"d753e041ca1429c657cef33e25e6e6d1728f35e95c3244bc8c43ce104ab4f0c5"} Feb 16 17:35:24.410294 master-0 kubenswrapper[4652]: I0216 17:35:24.410203 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" event={"ID":"2c5f7933-0e77-443c-a372-369639c9e8ce","Type":"ContainerStarted","Data":"32d7500650b789373b5cdce798a6ab75064a6c4176b34ecd16c4a4687c841fd2"} Feb 16 17:35:24.430637 master-0 kubenswrapper[4652]: I0216 17:35:24.430531 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/nova-console-poller-59f8d8d555-wcsb7" podStartSLOduration=2.0907819500000002 podStartE2EDuration="7.430511406s" podCreationTimestamp="2026-02-16 17:35:17 +0000 UTC" firstStartedPulling="2026-02-16 17:35:18.283329084 +0000 UTC m=+675.671497600" lastFinishedPulling="2026-02-16 17:35:23.62305854 +0000 UTC m=+681.011227056" observedRunningTime="2026-02-16 17:35:24.426380498 +0000 UTC m=+681.814549034" watchObservedRunningTime="2026-02-16 17:35:24.430511406 +0000 UTC m=+681.818679922" Feb 16 17:35:56.285006 master-0 kubenswrapper[4652]: I0216 17:35:56.284923 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch"] Feb 16 17:35:56.288877 master-0 kubenswrapper[4652]: I0216 17:35:56.288827 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.294012 master-0 kubenswrapper[4652]: I0216 17:35:56.293958 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch"] Feb 16 17:35:56.416361 master-0 kubenswrapper[4652]: I0216 17:35:56.416272 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.416361 master-0 kubenswrapper[4652]: I0216 17:35:56.416340 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h56mm\" (UniqueName: \"kubernetes.io/projected/a735fbfb-eb57-414e-b1f5-b568739c6e5b-kube-api-access-h56mm\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.416745 master-0 kubenswrapper[4652]: I0216 17:35:56.416414 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.518127 master-0 kubenswrapper[4652]: I0216 17:35:56.518065 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.518127 master-0 kubenswrapper[4652]: I0216 17:35:56.518135 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h56mm\" (UniqueName: \"kubernetes.io/projected/a735fbfb-eb57-414e-b1f5-b568739c6e5b-kube-api-access-h56mm\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.518424 master-0 kubenswrapper[4652]: I0216 17:35:56.518199 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.518736 master-0 kubenswrapper[4652]: I0216 17:35:56.518702 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-util\") pod 
\"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.518794 master-0 kubenswrapper[4652]: I0216 17:35:56.518728 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.539283 master-0 kubenswrapper[4652]: I0216 17:35:56.539130 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h56mm\" (UniqueName: \"kubernetes.io/projected/a735fbfb-eb57-414e-b1f5-b568739c6e5b-kube-api-access-h56mm\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:56.607237 master-0 kubenswrapper[4652]: I0216 17:35:56.607168 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:35:57.030285 master-0 kubenswrapper[4652]: I0216 17:35:57.030198 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch"] Feb 16 17:35:57.030667 master-0 kubenswrapper[4652]: W0216 17:35:57.030629 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda735fbfb_eb57_414e_b1f5_b568739c6e5b.slice/crio-3e5fb04076fdcde96161bfdeeef523dcf684b5a7397e4ccd2a98e993da374e80 WatchSource:0}: Error finding container 3e5fb04076fdcde96161bfdeeef523dcf684b5a7397e4ccd2a98e993da374e80: Status 404 returned error can't find the container with id 3e5fb04076fdcde96161bfdeeef523dcf684b5a7397e4ccd2a98e993da374e80 Feb 16 17:35:57.631522 master-0 kubenswrapper[4652]: I0216 17:35:57.631464 4652 generic.go:334] "Generic (PLEG): container finished" podID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerID="4ed84abac602ef23f1cd6a30d7223ecac26470fa43766eecd9af6031e0acb3b0" exitCode=0 Feb 16 17:35:57.631522 master-0 kubenswrapper[4652]: I0216 17:35:57.631518 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" event={"ID":"a735fbfb-eb57-414e-b1f5-b568739c6e5b","Type":"ContainerDied","Data":"4ed84abac602ef23f1cd6a30d7223ecac26470fa43766eecd9af6031e0acb3b0"} Feb 16 17:35:57.632460 master-0 kubenswrapper[4652]: I0216 17:35:57.631551 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" event={"ID":"a735fbfb-eb57-414e-b1f5-b568739c6e5b","Type":"ContainerStarted","Data":"3e5fb04076fdcde96161bfdeeef523dcf684b5a7397e4ccd2a98e993da374e80"} Feb 16 17:35:59.649788 master-0 kubenswrapper[4652]: I0216 17:35:59.649709 4652 generic.go:334] "Generic (PLEG): container finished" podID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerID="7c1bbfa46a90fac46acde6e8b60fe78725bd678848aadb84c3e736c5c79fc9fd" exitCode=0 Feb 16 17:35:59.649788 master-0 kubenswrapper[4652]: I0216 17:35:59.649770 4652 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" event={"ID":"a735fbfb-eb57-414e-b1f5-b568739c6e5b","Type":"ContainerDied","Data":"7c1bbfa46a90fac46acde6e8b60fe78725bd678848aadb84c3e736c5c79fc9fd"} Feb 16 17:36:00.662062 master-0 kubenswrapper[4652]: I0216 17:36:00.661995 4652 generic.go:334] "Generic (PLEG): container finished" podID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerID="d101aab38327e1bae0a47368bed0356a80f25f60fd4c63b814013390d3e78e37" exitCode=0 Feb 16 17:36:00.662062 master-0 kubenswrapper[4652]: I0216 17:36:00.662055 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" event={"ID":"a735fbfb-eb57-414e-b1f5-b568739c6e5b","Type":"ContainerDied","Data":"d101aab38327e1bae0a47368bed0356a80f25f60fd4c63b814013390d3e78e37"} Feb 16 17:36:01.984938 master-0 kubenswrapper[4652]: I0216 17:36:01.984742 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:36:02.114911 master-0 kubenswrapper[4652]: I0216 17:36:02.114809 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-bundle\") pod \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " Feb 16 17:36:02.115206 master-0 kubenswrapper[4652]: I0216 17:36:02.114946 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-util\") pod \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " Feb 16 17:36:02.115206 master-0 kubenswrapper[4652]: I0216 17:36:02.114999 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h56mm\" (UniqueName: \"kubernetes.io/projected/a735fbfb-eb57-414e-b1f5-b568739c6e5b-kube-api-access-h56mm\") pod \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\" (UID: \"a735fbfb-eb57-414e-b1f5-b568739c6e5b\") " Feb 16 17:36:02.116727 master-0 kubenswrapper[4652]: I0216 17:36:02.116644 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-bundle" (OuterVolumeSpecName: "bundle") pod "a735fbfb-eb57-414e-b1f5-b568739c6e5b" (UID: "a735fbfb-eb57-414e-b1f5-b568739c6e5b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:02.118324 master-0 kubenswrapper[4652]: I0216 17:36:02.118237 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a735fbfb-eb57-414e-b1f5-b568739c6e5b-kube-api-access-h56mm" (OuterVolumeSpecName: "kube-api-access-h56mm") pod "a735fbfb-eb57-414e-b1f5-b568739c6e5b" (UID: "a735fbfb-eb57-414e-b1f5-b568739c6e5b"). InnerVolumeSpecName "kube-api-access-h56mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:36:02.148313 master-0 kubenswrapper[4652]: I0216 17:36:02.146956 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-util" (OuterVolumeSpecName: "util") pod "a735fbfb-eb57-414e-b1f5-b568739c6e5b" (UID: "a735fbfb-eb57-414e-b1f5-b568739c6e5b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:02.217022 master-0 kubenswrapper[4652]: I0216 17:36:02.216872 4652 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:02.217022 master-0 kubenswrapper[4652]: I0216 17:36:02.216925 4652 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a735fbfb-eb57-414e-b1f5-b568739c6e5b-util\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:02.217022 master-0 kubenswrapper[4652]: I0216 17:36:02.216940 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h56mm\" (UniqueName: \"kubernetes.io/projected/a735fbfb-eb57-414e-b1f5-b568739c6e5b-kube-api-access-h56mm\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:02.679085 master-0 kubenswrapper[4652]: I0216 17:36:02.679016 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" event={"ID":"a735fbfb-eb57-414e-b1f5-b568739c6e5b","Type":"ContainerDied","Data":"3e5fb04076fdcde96161bfdeeef523dcf684b5a7397e4ccd2a98e993da374e80"} Feb 16 17:36:02.679085 master-0 kubenswrapper[4652]: I0216 17:36:02.679068 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e5fb04076fdcde96161bfdeeef523dcf684b5a7397e4ccd2a98e993da374e80" Feb 16 17:36:02.679378 master-0 kubenswrapper[4652]: I0216 17:36:02.679121 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4qrnch" Feb 16 17:36:16.911038 master-0 kubenswrapper[4652]: I0216 17:36:16.910974 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7dbc4567c8-bljw4"] Feb 16 17:36:16.911763 master-0 kubenswrapper[4652]: E0216 17:36:16.911321 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerName="extract" Feb 16 17:36:16.911763 master-0 kubenswrapper[4652]: I0216 17:36:16.911338 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerName="extract" Feb 16 17:36:16.911763 master-0 kubenswrapper[4652]: E0216 17:36:16.911353 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerName="util" Feb 16 17:36:16.911763 master-0 kubenswrapper[4652]: I0216 17:36:16.911363 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerName="util" Feb 16 17:36:16.911763 master-0 kubenswrapper[4652]: E0216 17:36:16.911382 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerName="pull" Feb 16 17:36:16.911763 master-0 kubenswrapper[4652]: I0216 17:36:16.911390 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerName="pull" Feb 16 17:36:16.911763 master-0 kubenswrapper[4652]: I0216 17:36:16.911556 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="a735fbfb-eb57-414e-b1f5-b568739c6e5b" containerName="extract" Feb 16 17:36:16.912074 master-0 kubenswrapper[4652]: I0216 17:36:16.912053 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:16.917892 master-0 kubenswrapper[4652]: I0216 17:36:16.917846 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Feb 16 17:36:16.918107 master-0 kubenswrapper[4652]: I0216 17:36:16.918086 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Feb 16 17:36:16.918200 master-0 kubenswrapper[4652]: I0216 17:36:16.918180 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Feb 16 17:36:16.918349 master-0 kubenswrapper[4652]: I0216 17:36:16.918327 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Feb 16 17:36:16.918485 master-0 kubenswrapper[4652]: I0216 17:36:16.918235 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Feb 16 17:36:16.930180 master-0 kubenswrapper[4652]: I0216 17:36:16.930124 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7dbc4567c8-bljw4"] Feb 16 17:36:17.060474 master-0 kubenswrapper[4652]: I0216 17:36:17.060384 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b294eb5e-6f30-4669-a691-3f811bc8eceb-apiservice-cert\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.060474 master-0 kubenswrapper[4652]: I0216 17:36:17.060452 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b294eb5e-6f30-4669-a691-3f811bc8eceb-metrics-cert\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.060474 master-0 kubenswrapper[4652]: I0216 17:36:17.060484 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxgpl\" (UniqueName: \"kubernetes.io/projected/b294eb5e-6f30-4669-a691-3f811bc8eceb-kube-api-access-cxgpl\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.060873 master-0 kubenswrapper[4652]: I0216 17:36:17.060502 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b294eb5e-6f30-4669-a691-3f811bc8eceb-webhook-cert\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.060873 master-0 kubenswrapper[4652]: I0216 17:36:17.060533 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b294eb5e-6f30-4669-a691-3f811bc8eceb-socket-dir\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.161450 master-0 kubenswrapper[4652]: I0216 17:36:17.161320 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/b294eb5e-6f30-4669-a691-3f811bc8eceb-apiservice-cert\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.161450 master-0 kubenswrapper[4652]: I0216 17:36:17.161392 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b294eb5e-6f30-4669-a691-3f811bc8eceb-metrics-cert\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.161450 master-0 kubenswrapper[4652]: I0216 17:36:17.161427 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxgpl\" (UniqueName: \"kubernetes.io/projected/b294eb5e-6f30-4669-a691-3f811bc8eceb-kube-api-access-cxgpl\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.161450 master-0 kubenswrapper[4652]: I0216 17:36:17.161451 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b294eb5e-6f30-4669-a691-3f811bc8eceb-webhook-cert\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.161751 master-0 kubenswrapper[4652]: I0216 17:36:17.161494 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b294eb5e-6f30-4669-a691-3f811bc8eceb-socket-dir\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.162018 master-0 kubenswrapper[4652]: I0216 17:36:17.161990 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b294eb5e-6f30-4669-a691-3f811bc8eceb-socket-dir\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.165240 master-0 kubenswrapper[4652]: I0216 17:36:17.165206 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b294eb5e-6f30-4669-a691-3f811bc8eceb-metrics-cert\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.165927 master-0 kubenswrapper[4652]: I0216 17:36:17.165867 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b294eb5e-6f30-4669-a691-3f811bc8eceb-webhook-cert\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.165985 master-0 kubenswrapper[4652]: I0216 17:36:17.165947 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b294eb5e-6f30-4669-a691-3f811bc8eceb-apiservice-cert\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.179177 master-0 kubenswrapper[4652]: I0216 17:36:17.179122 
4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxgpl\" (UniqueName: \"kubernetes.io/projected/b294eb5e-6f30-4669-a691-3f811bc8eceb-kube-api-access-cxgpl\") pod \"lvms-operator-7dbc4567c8-bljw4\" (UID: \"b294eb5e-6f30-4669-a691-3f811bc8eceb\") " pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.227779 master-0 kubenswrapper[4652]: I0216 17:36:17.227690 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:17.624682 master-0 kubenswrapper[4652]: I0216 17:36:17.624580 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7dbc4567c8-bljw4"] Feb 16 17:36:17.632347 master-0 kubenswrapper[4652]: W0216 17:36:17.632209 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb294eb5e_6f30_4669_a691_3f811bc8eceb.slice/crio-dac70e0dac368a28c0757a12e842851799d32d4f5ea4e97bb5a47b44a681fd62 WatchSource:0}: Error finding container dac70e0dac368a28c0757a12e842851799d32d4f5ea4e97bb5a47b44a681fd62: Status 404 returned error can't find the container with id dac70e0dac368a28c0757a12e842851799d32d4f5ea4e97bb5a47b44a681fd62 Feb 16 17:36:17.780378 master-0 kubenswrapper[4652]: I0216 17:36:17.780304 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" event={"ID":"b294eb5e-6f30-4669-a691-3f811bc8eceb","Type":"ContainerStarted","Data":"dac70e0dac368a28c0757a12e842851799d32d4f5ea4e97bb5a47b44a681fd62"} Feb 16 17:36:21.832338 master-0 kubenswrapper[4652]: I0216 17:36:21.831174 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" event={"ID":"b294eb5e-6f30-4669-a691-3f811bc8eceb","Type":"ContainerStarted","Data":"94876d8968991ea5f76c5153aa5118c51d7308b259197098980ef8f54f7a7c1c"} Feb 16 17:36:21.832338 master-0 kubenswrapper[4652]: I0216 17:36:21.831492 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:22.848022 master-0 kubenswrapper[4652]: I0216 17:36:22.847951 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" Feb 16 17:36:22.880540 master-0 kubenswrapper[4652]: I0216 17:36:22.880454 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7dbc4567c8-bljw4" podStartSLOduration=2.891023324 podStartE2EDuration="6.880429027s" podCreationTimestamp="2026-02-16 17:36:16 +0000 UTC" firstStartedPulling="2026-02-16 17:36:17.635984614 +0000 UTC m=+735.024153130" lastFinishedPulling="2026-02-16 17:36:21.625390317 +0000 UTC m=+739.013558833" observedRunningTime="2026-02-16 17:36:21.865348209 +0000 UTC m=+739.253516735" watchObservedRunningTime="2026-02-16 17:36:22.880429027 +0000 UTC m=+740.268597873" Feb 16 17:36:25.698266 master-0 kubenswrapper[4652]: I0216 17:36:25.698196 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p"] Feb 16 17:36:25.699921 master-0 kubenswrapper[4652]: I0216 17:36:25.699891 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:25.710462 master-0 kubenswrapper[4652]: I0216 17:36:25.709927 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p"] Feb 16 17:36:25.820502 master-0 kubenswrapper[4652]: I0216 17:36:25.820378 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:25.820502 master-0 kubenswrapper[4652]: I0216 17:36:25.820435 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:25.820502 master-0 kubenswrapper[4652]: I0216 17:36:25.820468 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2qjg\" (UniqueName: \"kubernetes.io/projected/5baef6f4-465f-4023-ab6c-918c2188a9d6-kube-api-access-t2qjg\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:25.922023 master-0 kubenswrapper[4652]: I0216 17:36:25.921940 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:25.922023 master-0 kubenswrapper[4652]: I0216 17:36:25.921997 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:25.922023 master-0 kubenswrapper[4652]: I0216 17:36:25.922024 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2qjg\" (UniqueName: \"kubernetes.io/projected/5baef6f4-465f-4023-ab6c-918c2188a9d6-kube-api-access-t2qjg\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:25.924145 master-0 kubenswrapper[4652]: I0216 17:36:25.922836 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-util\") pod 
\"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:25.924145 master-0 kubenswrapper[4652]: I0216 17:36:25.922960 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:25.943613 master-0 kubenswrapper[4652]: I0216 17:36:25.943547 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2qjg\" (UniqueName: \"kubernetes.io/projected/5baef6f4-465f-4023-ab6c-918c2188a9d6-kube-api-access-t2qjg\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:26.014610 master-0 kubenswrapper[4652]: I0216 17:36:26.014538 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:26.107164 master-0 kubenswrapper[4652]: I0216 17:36:26.107012 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st"] Feb 16 17:36:26.108669 master-0 kubenswrapper[4652]: I0216 17:36:26.108627 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.128338 master-0 kubenswrapper[4652]: I0216 17:36:26.127579 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st"] Feb 16 17:36:26.226871 master-0 kubenswrapper[4652]: I0216 17:36:26.226737 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.226871 master-0 kubenswrapper[4652]: I0216 17:36:26.226853 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.227165 master-0 kubenswrapper[4652]: I0216 17:36:26.226885 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc7j7\" (UniqueName: \"kubernetes.io/projected/744e75b3-9dc7-4605-b1e3-2e20eaecc503-kube-api-access-hc7j7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.328748 master-0 kubenswrapper[4652]: I0216 17:36:26.328679 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.329046 master-0 kubenswrapper[4652]: I0216 17:36:26.328858 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.329046 master-0 kubenswrapper[4652]: I0216 17:36:26.328921 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc7j7\" (UniqueName: \"kubernetes.io/projected/744e75b3-9dc7-4605-b1e3-2e20eaecc503-kube-api-access-hc7j7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.329277 master-0 kubenswrapper[4652]: I0216 17:36:26.329225 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.329347 master-0 kubenswrapper[4652]: I0216 17:36:26.329326 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.346294 master-0 kubenswrapper[4652]: I0216 17:36:26.346239 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc7j7\" (UniqueName: \"kubernetes.io/projected/744e75b3-9dc7-4605-b1e3-2e20eaecc503-kube-api-access-hc7j7\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.428928 master-0 kubenswrapper[4652]: I0216 17:36:26.428869 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:26.465390 master-0 kubenswrapper[4652]: I0216 17:36:26.465339 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p"] Feb 16 17:36:26.467104 master-0 kubenswrapper[4652]: W0216 17:36:26.467043 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5baef6f4_465f_4023_ab6c_918c2188a9d6.slice/crio-974e6a1e650bbd605d4e5de105776631f7baa5da008491429236d862677bae25 WatchSource:0}: Error finding container 974e6a1e650bbd605d4e5de105776631f7baa5da008491429236d862677bae25: Status 404 returned error can't find the container with id 974e6a1e650bbd605d4e5de105776631f7baa5da008491429236d862677bae25 Feb 16 17:36:26.700069 master-0 kubenswrapper[4652]: I0216 17:36:26.700003 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz"] Feb 16 17:36:26.701318 master-0 kubenswrapper[4652]: I0216 17:36:26.701227 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.709696 master-0 kubenswrapper[4652]: I0216 17:36:26.709127 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz"] Feb 16 17:36:26.734494 master-0 kubenswrapper[4652]: I0216 17:36:26.734431 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4t7r\" (UniqueName: \"kubernetes.io/projected/8383da06-c310-4c98-9ba5-989affcae1de-kube-api-access-z4t7r\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.734494 master-0 kubenswrapper[4652]: I0216 17:36:26.734509 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.734829 master-0 kubenswrapper[4652]: I0216 17:36:26.734538 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.836290 master-0 kubenswrapper[4652]: I0216 17:36:26.836235 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4t7r\" (UniqueName: \"kubernetes.io/projected/8383da06-c310-4c98-9ba5-989affcae1de-kube-api-access-z4t7r\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " 
pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.836793 master-0 kubenswrapper[4652]: I0216 17:36:26.836326 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.836793 master-0 kubenswrapper[4652]: I0216 17:36:26.836361 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.836902 master-0 kubenswrapper[4652]: I0216 17:36:26.836835 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.839570 master-0 kubenswrapper[4652]: I0216 17:36:26.839537 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.860445 master-0 kubenswrapper[4652]: I0216 17:36:26.860382 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4t7r\" (UniqueName: \"kubernetes.io/projected/8383da06-c310-4c98-9ba5-989affcae1de-kube-api-access-z4t7r\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:26.864285 master-0 kubenswrapper[4652]: W0216 17:36:26.864235 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod744e75b3_9dc7_4605_b1e3_2e20eaecc503.slice/crio-6f131b24139ffcd44ddfd7b2deeb0e697351e9e1a3678b5224a8c80f619ee01e WatchSource:0}: Error finding container 6f131b24139ffcd44ddfd7b2deeb0e697351e9e1a3678b5224a8c80f619ee01e: Status 404 returned error can't find the container with id 6f131b24139ffcd44ddfd7b2deeb0e697351e9e1a3678b5224a8c80f619ee01e Feb 16 17:36:26.864285 master-0 kubenswrapper[4652]: I0216 17:36:26.864238 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st"] Feb 16 17:36:26.866594 master-0 kubenswrapper[4652]: I0216 17:36:26.866556 4652 generic.go:334] "Generic (PLEG): container finished" podID="5baef6f4-465f-4023-ab6c-918c2188a9d6" containerID="1be6ae05a6512b976234295f0a24a64c24240f096f452394581a024ac07aa89a" exitCode=0 Feb 16 17:36:26.866594 master-0 kubenswrapper[4652]: I0216 17:36:26.866585 4652 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" event={"ID":"5baef6f4-465f-4023-ab6c-918c2188a9d6","Type":"ContainerDied","Data":"1be6ae05a6512b976234295f0a24a64c24240f096f452394581a024ac07aa89a"} Feb 16 17:36:26.866709 master-0 kubenswrapper[4652]: I0216 17:36:26.866603 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" event={"ID":"5baef6f4-465f-4023-ab6c-918c2188a9d6","Type":"ContainerStarted","Data":"974e6a1e650bbd605d4e5de105776631f7baa5da008491429236d862677bae25"} Feb 16 17:36:27.068815 master-0 kubenswrapper[4652]: I0216 17:36:27.068770 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:27.490813 master-0 kubenswrapper[4652]: I0216 17:36:27.490757 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz"] Feb 16 17:36:27.492087 master-0 kubenswrapper[4652]: W0216 17:36:27.492001 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8383da06_c310_4c98_9ba5_989affcae1de.slice/crio-ef02502fed3a28d9811ea8f156cf5f5af7d64374274a038d8f515c702a71fe6a WatchSource:0}: Error finding container ef02502fed3a28d9811ea8f156cf5f5af7d64374274a038d8f515c702a71fe6a: Status 404 returned error can't find the container with id ef02502fed3a28d9811ea8f156cf5f5af7d64374274a038d8f515c702a71fe6a Feb 16 17:36:27.875766 master-0 kubenswrapper[4652]: I0216 17:36:27.875628 4652 generic.go:334] "Generic (PLEG): container finished" podID="8383da06-c310-4c98-9ba5-989affcae1de" containerID="3c6b5c9f89f4a33c6ddaca52b3e9eb24230718ebfd7767dbe2dc8c0d49583946" exitCode=0 Feb 16 17:36:27.875766 master-0 kubenswrapper[4652]: I0216 17:36:27.875701 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" event={"ID":"8383da06-c310-4c98-9ba5-989affcae1de","Type":"ContainerDied","Data":"3c6b5c9f89f4a33c6ddaca52b3e9eb24230718ebfd7767dbe2dc8c0d49583946"} Feb 16 17:36:27.875766 master-0 kubenswrapper[4652]: I0216 17:36:27.875728 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" event={"ID":"8383da06-c310-4c98-9ba5-989affcae1de","Type":"ContainerStarted","Data":"ef02502fed3a28d9811ea8f156cf5f5af7d64374274a038d8f515c702a71fe6a"} Feb 16 17:36:27.877155 master-0 kubenswrapper[4652]: I0216 17:36:27.876961 4652 generic.go:334] "Generic (PLEG): container finished" podID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerID="ecfbd579149f3eea21fe8ab158a3fef23a9475ca3e508162741a2f5b1512fed0" exitCode=0 Feb 16 17:36:27.877155 master-0 kubenswrapper[4652]: I0216 17:36:27.877022 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" event={"ID":"744e75b3-9dc7-4605-b1e3-2e20eaecc503","Type":"ContainerDied","Data":"ecfbd579149f3eea21fe8ab158a3fef23a9475ca3e508162741a2f5b1512fed0"} Feb 16 17:36:27.877155 master-0 kubenswrapper[4652]: I0216 17:36:27.877054 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" event={"ID":"744e75b3-9dc7-4605-b1e3-2e20eaecc503","Type":"ContainerStarted","Data":"6f131b24139ffcd44ddfd7b2deeb0e697351e9e1a3678b5224a8c80f619ee01e"} Feb 16 17:36:28.887473 master-0 kubenswrapper[4652]: I0216 17:36:28.887381 4652 generic.go:334] "Generic (PLEG): container finished" podID="5baef6f4-465f-4023-ab6c-918c2188a9d6" containerID="d79be2e5286039742fa445b33da9b79f475cec6beb0b6c8b045b9ed6776bfab2" exitCode=0 Feb 16 17:36:28.887473 master-0 kubenswrapper[4652]: I0216 17:36:28.887445 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" event={"ID":"5baef6f4-465f-4023-ab6c-918c2188a9d6","Type":"ContainerDied","Data":"d79be2e5286039742fa445b33da9b79f475cec6beb0b6c8b045b9ed6776bfab2"} Feb 16 17:36:29.895717 master-0 kubenswrapper[4652]: I0216 17:36:29.895667 4652 generic.go:334] "Generic (PLEG): container finished" podID="8383da06-c310-4c98-9ba5-989affcae1de" containerID="d113042b5f50b4510a480b6c91e06fd46fa7b2efe89b1087fd56cec8c5ac8116" exitCode=0 Feb 16 17:36:29.896185 master-0 kubenswrapper[4652]: I0216 17:36:29.895746 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" event={"ID":"8383da06-c310-4c98-9ba5-989affcae1de","Type":"ContainerDied","Data":"d113042b5f50b4510a480b6c91e06fd46fa7b2efe89b1087fd56cec8c5ac8116"} Feb 16 17:36:29.898266 master-0 kubenswrapper[4652]: I0216 17:36:29.898220 4652 generic.go:334] "Generic (PLEG): container finished" podID="5baef6f4-465f-4023-ab6c-918c2188a9d6" containerID="8db720bc1ee3ecc26f17d8b6ad7b65066acfaf0ecc3cd62cf2c429b3dbe5c46f" exitCode=0 Feb 16 17:36:29.898335 master-0 kubenswrapper[4652]: I0216 17:36:29.898276 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" event={"ID":"5baef6f4-465f-4023-ab6c-918c2188a9d6","Type":"ContainerDied","Data":"8db720bc1ee3ecc26f17d8b6ad7b65066acfaf0ecc3cd62cf2c429b3dbe5c46f"} Feb 16 17:36:30.908284 master-0 kubenswrapper[4652]: I0216 17:36:30.907678 4652 generic.go:334] "Generic (PLEG): container finished" podID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerID="c0839b26652c282c2eb8b28754c3f93f162d9fb1cc734690ccfa9292db7bd6f1" exitCode=0 Feb 16 17:36:30.908284 master-0 kubenswrapper[4652]: I0216 17:36:30.907742 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" event={"ID":"744e75b3-9dc7-4605-b1e3-2e20eaecc503","Type":"ContainerDied","Data":"c0839b26652c282c2eb8b28754c3f93f162d9fb1cc734690ccfa9292db7bd6f1"} Feb 16 17:36:30.913087 master-0 kubenswrapper[4652]: I0216 17:36:30.912630 4652 generic.go:334] "Generic (PLEG): container finished" podID="8383da06-c310-4c98-9ba5-989affcae1de" containerID="0dcb6531493c8fd109aff8d9bdc9156dde82b56ac931b388f1ad012c36fac18f" exitCode=0 Feb 16 17:36:30.913087 master-0 kubenswrapper[4652]: I0216 17:36:30.912989 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" event={"ID":"8383da06-c310-4c98-9ba5-989affcae1de","Type":"ContainerDied","Data":"0dcb6531493c8fd109aff8d9bdc9156dde82b56ac931b388f1ad012c36fac18f"} Feb 16 17:36:31.309579 master-0 kubenswrapper[4652]: I0216 17:36:31.309327 4652 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:31.422502 master-0 kubenswrapper[4652]: I0216 17:36:31.421556 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2qjg\" (UniqueName: \"kubernetes.io/projected/5baef6f4-465f-4023-ab6c-918c2188a9d6-kube-api-access-t2qjg\") pod \"5baef6f4-465f-4023-ab6c-918c2188a9d6\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " Feb 16 17:36:31.422502 master-0 kubenswrapper[4652]: I0216 17:36:31.421751 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-bundle\") pod \"5baef6f4-465f-4023-ab6c-918c2188a9d6\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " Feb 16 17:36:31.422502 master-0 kubenswrapper[4652]: I0216 17:36:31.421951 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-util\") pod \"5baef6f4-465f-4023-ab6c-918c2188a9d6\" (UID: \"5baef6f4-465f-4023-ab6c-918c2188a9d6\") " Feb 16 17:36:31.423133 master-0 kubenswrapper[4652]: I0216 17:36:31.423061 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-bundle" (OuterVolumeSpecName: "bundle") pod "5baef6f4-465f-4023-ab6c-918c2188a9d6" (UID: "5baef6f4-465f-4023-ab6c-918c2188a9d6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:31.426899 master-0 kubenswrapper[4652]: I0216 17:36:31.426850 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5baef6f4-465f-4023-ab6c-918c2188a9d6-kube-api-access-t2qjg" (OuterVolumeSpecName: "kube-api-access-t2qjg") pod "5baef6f4-465f-4023-ab6c-918c2188a9d6" (UID: "5baef6f4-465f-4023-ab6c-918c2188a9d6"). InnerVolumeSpecName "kube-api-access-t2qjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:36:31.455629 master-0 kubenswrapper[4652]: I0216 17:36:31.455525 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-util" (OuterVolumeSpecName: "util") pod "5baef6f4-465f-4023-ab6c-918c2188a9d6" (UID: "5baef6f4-465f-4023-ab6c-918c2188a9d6"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:31.524183 master-0 kubenswrapper[4652]: I0216 17:36:31.524087 4652 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-util\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:31.524183 master-0 kubenswrapper[4652]: I0216 17:36:31.524139 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2qjg\" (UniqueName: \"kubernetes.io/projected/5baef6f4-465f-4023-ab6c-918c2188a9d6-kube-api-access-t2qjg\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:31.524183 master-0 kubenswrapper[4652]: I0216 17:36:31.524153 4652 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5baef6f4-465f-4023-ab6c-918c2188a9d6-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:31.925268 master-0 kubenswrapper[4652]: I0216 17:36:31.925150 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" event={"ID":"5baef6f4-465f-4023-ab6c-918c2188a9d6","Type":"ContainerDied","Data":"974e6a1e650bbd605d4e5de105776631f7baa5da008491429236d862677bae25"} Feb 16 17:36:31.925268 master-0 kubenswrapper[4652]: I0216 17:36:31.925215 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974e6a1e650bbd605d4e5de105776631f7baa5da008491429236d862677bae25" Feb 16 17:36:31.925268 master-0 kubenswrapper[4652]: I0216 17:36:31.925166 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213m7z5p" Feb 16 17:36:31.928325 master-0 kubenswrapper[4652]: I0216 17:36:31.927963 4652 generic.go:334] "Generic (PLEG): container finished" podID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerID="ff98934055ef26677969c5402add03538b0efff6f613b35d5846ddfee2537376" exitCode=0 Feb 16 17:36:31.928325 master-0 kubenswrapper[4652]: I0216 17:36:31.928002 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" event={"ID":"744e75b3-9dc7-4605-b1e3-2e20eaecc503","Type":"ContainerDied","Data":"ff98934055ef26677969c5402add03538b0efff6f613b35d5846ddfee2537376"} Feb 16 17:36:32.251154 master-0 kubenswrapper[4652]: I0216 17:36:32.251104 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:32.337806 master-0 kubenswrapper[4652]: I0216 17:36:32.337743 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4t7r\" (UniqueName: \"kubernetes.io/projected/8383da06-c310-4c98-9ba5-989affcae1de-kube-api-access-z4t7r\") pod \"8383da06-c310-4c98-9ba5-989affcae1de\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " Feb 16 17:36:32.339233 master-0 kubenswrapper[4652]: I0216 17:36:32.339194 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-util\") pod \"8383da06-c310-4c98-9ba5-989affcae1de\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " Feb 16 17:36:32.339733 master-0 kubenswrapper[4652]: I0216 17:36:32.339697 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-bundle\") pod \"8383da06-c310-4c98-9ba5-989affcae1de\" (UID: \"8383da06-c310-4c98-9ba5-989affcae1de\") " Feb 16 17:36:32.340046 master-0 kubenswrapper[4652]: I0216 17:36:32.340005 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-bundle" (OuterVolumeSpecName: "bundle") pod "8383da06-c310-4c98-9ba5-989affcae1de" (UID: "8383da06-c310-4c98-9ba5-989affcae1de"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:32.345338 master-0 kubenswrapper[4652]: I0216 17:36:32.341727 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8383da06-c310-4c98-9ba5-989affcae1de-kube-api-access-z4t7r" (OuterVolumeSpecName: "kube-api-access-z4t7r") pod "8383da06-c310-4c98-9ba5-989affcae1de" (UID: "8383da06-c310-4c98-9ba5-989affcae1de"). InnerVolumeSpecName "kube-api-access-z4t7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:36:32.345922 master-0 kubenswrapper[4652]: I0216 17:36:32.345873 4652 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:32.346018 master-0 kubenswrapper[4652]: I0216 17:36:32.346001 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4t7r\" (UniqueName: \"kubernetes.io/projected/8383da06-c310-4c98-9ba5-989affcae1de-kube-api-access-z4t7r\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:32.355745 master-0 kubenswrapper[4652]: I0216 17:36:32.355678 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-util" (OuterVolumeSpecName: "util") pod "8383da06-c310-4c98-9ba5-989affcae1de" (UID: "8383da06-c310-4c98-9ba5-989affcae1de"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:32.447476 master-0 kubenswrapper[4652]: I0216 17:36:32.447384 4652 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8383da06-c310-4c98-9ba5-989affcae1de-util\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:32.939118 master-0 kubenswrapper[4652]: I0216 17:36:32.939016 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" Feb 16 17:36:32.939118 master-0 kubenswrapper[4652]: I0216 17:36:32.938992 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323eca8v8lz" event={"ID":"8383da06-c310-4c98-9ba5-989affcae1de","Type":"ContainerDied","Data":"ef02502fed3a28d9811ea8f156cf5f5af7d64374274a038d8f515c702a71fe6a"} Feb 16 17:36:32.939118 master-0 kubenswrapper[4652]: I0216 17:36:32.939093 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef02502fed3a28d9811ea8f156cf5f5af7d64374274a038d8f515c702a71fe6a" Feb 16 17:36:33.225644 master-0 kubenswrapper[4652]: I0216 17:36:33.224373 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:33.362125 master-0 kubenswrapper[4652]: I0216 17:36:33.362048 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-bundle\") pod \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " Feb 16 17:36:33.362125 master-0 kubenswrapper[4652]: I0216 17:36:33.362136 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc7j7\" (UniqueName: \"kubernetes.io/projected/744e75b3-9dc7-4605-b1e3-2e20eaecc503-kube-api-access-hc7j7\") pod \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " Feb 16 17:36:33.362548 master-0 kubenswrapper[4652]: I0216 17:36:33.362163 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-util\") pod \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\" (UID: \"744e75b3-9dc7-4605-b1e3-2e20eaecc503\") " Feb 16 17:36:33.363383 master-0 kubenswrapper[4652]: I0216 17:36:33.363321 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-bundle" (OuterVolumeSpecName: "bundle") pod "744e75b3-9dc7-4605-b1e3-2e20eaecc503" (UID: "744e75b3-9dc7-4605-b1e3-2e20eaecc503"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:33.366742 master-0 kubenswrapper[4652]: I0216 17:36:33.366685 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744e75b3-9dc7-4605-b1e3-2e20eaecc503-kube-api-access-hc7j7" (OuterVolumeSpecName: "kube-api-access-hc7j7") pod "744e75b3-9dc7-4605-b1e3-2e20eaecc503" (UID: "744e75b3-9dc7-4605-b1e3-2e20eaecc503"). InnerVolumeSpecName "kube-api-access-hc7j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:36:33.377638 master-0 kubenswrapper[4652]: I0216 17:36:33.377566 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-util" (OuterVolumeSpecName: "util") pod "744e75b3-9dc7-4605-b1e3-2e20eaecc503" (UID: "744e75b3-9dc7-4605-b1e3-2e20eaecc503"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:33.464243 master-0 kubenswrapper[4652]: I0216 17:36:33.464056 4652 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:33.464243 master-0 kubenswrapper[4652]: I0216 17:36:33.464108 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc7j7\" (UniqueName: \"kubernetes.io/projected/744e75b3-9dc7-4605-b1e3-2e20eaecc503-kube-api-access-hc7j7\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:33.464243 master-0 kubenswrapper[4652]: I0216 17:36:33.464127 4652 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/744e75b3-9dc7-4605-b1e3-2e20eaecc503-util\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:33.952609 master-0 kubenswrapper[4652]: I0216 17:36:33.952555 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" event={"ID":"744e75b3-9dc7-4605-b1e3-2e20eaecc503","Type":"ContainerDied","Data":"6f131b24139ffcd44ddfd7b2deeb0e697351e9e1a3678b5224a8c80f619ee01e"} Feb 16 17:36:33.952609 master-0 kubenswrapper[4652]: I0216 17:36:33.952615 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f131b24139ffcd44ddfd7b2deeb0e697351e9e1a3678b5224a8c80f619ee01e" Feb 16 17:36:33.953324 master-0 kubenswrapper[4652]: I0216 17:36:33.952663 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5p77st" Feb 16 17:36:34.524986 master-0 kubenswrapper[4652]: I0216 17:36:34.524893 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj"] Feb 16 17:36:34.525395 master-0 kubenswrapper[4652]: E0216 17:36:34.525349 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerName="util" Feb 16 17:36:34.525395 master-0 kubenswrapper[4652]: I0216 17:36:34.525383 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerName="util" Feb 16 17:36:34.525506 master-0 kubenswrapper[4652]: E0216 17:36:34.525406 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8383da06-c310-4c98-9ba5-989affcae1de" containerName="extract" Feb 16 17:36:34.525506 master-0 kubenswrapper[4652]: I0216 17:36:34.525417 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="8383da06-c310-4c98-9ba5-989affcae1de" containerName="extract" Feb 16 17:36:34.525506 master-0 kubenswrapper[4652]: E0216 17:36:34.525433 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8383da06-c310-4c98-9ba5-989affcae1de" containerName="util" Feb 16 17:36:34.525506 master-0 kubenswrapper[4652]: I0216 17:36:34.525444 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="8383da06-c310-4c98-9ba5-989affcae1de" containerName="util" Feb 16 17:36:34.525506 master-0 kubenswrapper[4652]: E0216 17:36:34.525460 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baef6f4-465f-4023-ab6c-918c2188a9d6" containerName="pull" Feb 16 17:36:34.525506 master-0 kubenswrapper[4652]: I0216 17:36:34.525470 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baef6f4-465f-4023-ab6c-918c2188a9d6" 
containerName="pull" Feb 16 17:36:34.525506 master-0 kubenswrapper[4652]: E0216 17:36:34.525486 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baef6f4-465f-4023-ab6c-918c2188a9d6" containerName="extract" Feb 16 17:36:34.525506 master-0 kubenswrapper[4652]: I0216 17:36:34.525497 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baef6f4-465f-4023-ab6c-918c2188a9d6" containerName="extract" Feb 16 17:36:34.525506 master-0 kubenswrapper[4652]: E0216 17:36:34.525513 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5baef6f4-465f-4023-ab6c-918c2188a9d6" containerName="util" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: I0216 17:36:34.525525 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="5baef6f4-465f-4023-ab6c-918c2188a9d6" containerName="util" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: E0216 17:36:34.525547 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerName="pull" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: I0216 17:36:34.525559 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerName="pull" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: E0216 17:36:34.525576 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerName="extract" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: I0216 17:36:34.525586 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerName="extract" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: E0216 17:36:34.525605 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8383da06-c310-4c98-9ba5-989affcae1de" containerName="pull" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: I0216 17:36:34.525616 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="8383da06-c310-4c98-9ba5-989affcae1de" containerName="pull" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: I0216 17:36:34.525808 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="744e75b3-9dc7-4605-b1e3-2e20eaecc503" containerName="extract" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: I0216 17:36:34.525834 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="8383da06-c310-4c98-9ba5-989affcae1de" containerName="extract" Feb 16 17:36:34.525921 master-0 kubenswrapper[4652]: I0216 17:36:34.525853 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="5baef6f4-465f-4023-ab6c-918c2188a9d6" containerName="extract" Feb 16 17:36:34.527764 master-0 kubenswrapper[4652]: I0216 17:36:34.527705 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.587081 master-0 kubenswrapper[4652]: I0216 17:36:34.586380 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf6bf\" (UniqueName: \"kubernetes.io/projected/86bd64bd-f611-4775-ae8c-96b2370a2a43-kube-api-access-qf6bf\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.587081 master-0 kubenswrapper[4652]: I0216 17:36:34.586741 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.589914 master-0 kubenswrapper[4652]: I0216 17:36:34.589860 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.633490 master-0 kubenswrapper[4652]: I0216 17:36:34.631318 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj"] Feb 16 17:36:34.691571 master-0 kubenswrapper[4652]: I0216 17:36:34.691528 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.691835 master-0 kubenswrapper[4652]: I0216 17:36:34.691820 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf6bf\" (UniqueName: \"kubernetes.io/projected/86bd64bd-f611-4775-ae8c-96b2370a2a43-kube-api-access-qf6bf\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.691983 master-0 kubenswrapper[4652]: I0216 17:36:34.691967 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.693717 master-0 kubenswrapper[4652]: I0216 17:36:34.692485 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-bundle\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.693717 master-0 kubenswrapper[4652]: I0216 17:36:34.692503 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.718834 master-0 kubenswrapper[4652]: I0216 17:36:34.718761 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf6bf\" (UniqueName: \"kubernetes.io/projected/86bd64bd-f611-4775-ae8c-96b2370a2a43-kube-api-access-qf6bf\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:34.848520 master-0 kubenswrapper[4652]: I0216 17:36:34.848371 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:35.268317 master-0 kubenswrapper[4652]: I0216 17:36:35.268196 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj"] Feb 16 17:36:35.973963 master-0 kubenswrapper[4652]: I0216 17:36:35.973863 4652 generic.go:334] "Generic (PLEG): container finished" podID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerID="0195d1d6e838ab00a9b13e0238e21978f210e74e15e7b0f7c1a2c7c51700d3c3" exitCode=0 Feb 16 17:36:35.973963 master-0 kubenswrapper[4652]: I0216 17:36:35.973924 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" event={"ID":"86bd64bd-f611-4775-ae8c-96b2370a2a43","Type":"ContainerDied","Data":"0195d1d6e838ab00a9b13e0238e21978f210e74e15e7b0f7c1a2c7c51700d3c3"} Feb 16 17:36:35.973963 master-0 kubenswrapper[4652]: I0216 17:36:35.973962 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" event={"ID":"86bd64bd-f611-4775-ae8c-96b2370a2a43","Type":"ContainerStarted","Data":"abc23d9d901bd339ef688644488dcf8b388ac022ef0c11986d34bddfa13b476c"} Feb 16 17:36:37.995570 master-0 kubenswrapper[4652]: I0216 17:36:37.995512 4652 generic.go:334] "Generic (PLEG): container finished" podID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerID="5cef57f1fa35a69f6fe768b204ce5cf02eb582c784aebf515fe0d5637ce0144e" exitCode=0 Feb 16 17:36:37.995570 master-0 kubenswrapper[4652]: I0216 17:36:37.995559 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" event={"ID":"86bd64bd-f611-4775-ae8c-96b2370a2a43","Type":"ContainerDied","Data":"5cef57f1fa35a69f6fe768b204ce5cf02eb582c784aebf515fe0d5637ce0144e"} Feb 16 17:36:39.005049 master-0 kubenswrapper[4652]: I0216 17:36:39.004974 4652 generic.go:334] "Generic (PLEG): container finished" podID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerID="51ad4d4312e1ec89e700ac7bcd40c70355893b54c290278ab4e0166c318375f3" exitCode=0 Feb 16 
17:36:39.005049 master-0 kubenswrapper[4652]: I0216 17:36:39.005035 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" event={"ID":"86bd64bd-f611-4775-ae8c-96b2370a2a43","Type":"ContainerDied","Data":"51ad4d4312e1ec89e700ac7bcd40c70355893b54c290278ab4e0166c318375f3"} Feb 16 17:36:39.219845 master-0 kubenswrapper[4652]: I0216 17:36:39.219770 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn"] Feb 16 17:36:39.220963 master-0 kubenswrapper[4652]: I0216 17:36:39.220939 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" Feb 16 17:36:39.223079 master-0 kubenswrapper[4652]: I0216 17:36:39.223040 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Feb 16 17:36:39.223435 master-0 kubenswrapper[4652]: I0216 17:36:39.223394 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Feb 16 17:36:39.244692 master-0 kubenswrapper[4652]: I0216 17:36:39.244636 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn"] Feb 16 17:36:39.371849 master-0 kubenswrapper[4652]: I0216 17:36:39.371714 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjr5r\" (UniqueName: \"kubernetes.io/projected/d046120c-7314-4b1a-96fb-be92086a9bab-kube-api-access-rjr5r\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g95bn\" (UID: \"d046120c-7314-4b1a-96fb-be92086a9bab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" Feb 16 17:36:39.372046 master-0 kubenswrapper[4652]: I0216 17:36:39.371926 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d046120c-7314-4b1a-96fb-be92086a9bab-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g95bn\" (UID: \"d046120c-7314-4b1a-96fb-be92086a9bab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" Feb 16 17:36:39.473157 master-0 kubenswrapper[4652]: I0216 17:36:39.473068 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjr5r\" (UniqueName: \"kubernetes.io/projected/d046120c-7314-4b1a-96fb-be92086a9bab-kube-api-access-rjr5r\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g95bn\" (UID: \"d046120c-7314-4b1a-96fb-be92086a9bab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" Feb 16 17:36:39.473422 master-0 kubenswrapper[4652]: I0216 17:36:39.473222 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d046120c-7314-4b1a-96fb-be92086a9bab-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g95bn\" (UID: \"d046120c-7314-4b1a-96fb-be92086a9bab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" Feb 16 17:36:39.473966 master-0 kubenswrapper[4652]: I0216 17:36:39.473930 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/d046120c-7314-4b1a-96fb-be92086a9bab-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g95bn\" (UID: \"d046120c-7314-4b1a-96fb-be92086a9bab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" Feb 16 17:36:39.492195 master-0 kubenswrapper[4652]: I0216 17:36:39.492118 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjr5r\" (UniqueName: \"kubernetes.io/projected/d046120c-7314-4b1a-96fb-be92086a9bab-kube-api-access-rjr5r\") pod \"cert-manager-operator-controller-manager-66c8bdd694-g95bn\" (UID: \"d046120c-7314-4b1a-96fb-be92086a9bab\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" Feb 16 17:36:39.537385 master-0 kubenswrapper[4652]: I0216 17:36:39.537317 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" Feb 16 17:36:40.252334 master-0 kubenswrapper[4652]: I0216 17:36:40.252276 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn"] Feb 16 17:36:40.253830 master-0 kubenswrapper[4652]: W0216 17:36:40.253769 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd046120c_7314_4b1a_96fb_be92086a9bab.slice/crio-2fdafa833250210883bcdf8d47ed0d0fe22a9fcf97a00bd9672ab0b2bcd65328 WatchSource:0}: Error finding container 2fdafa833250210883bcdf8d47ed0d0fe22a9fcf97a00bd9672ab0b2bcd65328: Status 404 returned error can't find the container with id 2fdafa833250210883bcdf8d47ed0d0fe22a9fcf97a00bd9672ab0b2bcd65328 Feb 16 17:36:40.364577 master-0 kubenswrapper[4652]: I0216 17:36:40.364549 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:40.500129 master-0 kubenswrapper[4652]: I0216 17:36:40.500061 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-util\") pod \"86bd64bd-f611-4775-ae8c-96b2370a2a43\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " Feb 16 17:36:40.500431 master-0 kubenswrapper[4652]: I0216 17:36:40.500211 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qf6bf\" (UniqueName: \"kubernetes.io/projected/86bd64bd-f611-4775-ae8c-96b2370a2a43-kube-api-access-qf6bf\") pod \"86bd64bd-f611-4775-ae8c-96b2370a2a43\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " Feb 16 17:36:40.500431 master-0 kubenswrapper[4652]: I0216 17:36:40.500319 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-bundle\") pod \"86bd64bd-f611-4775-ae8c-96b2370a2a43\" (UID: \"86bd64bd-f611-4775-ae8c-96b2370a2a43\") " Feb 16 17:36:40.503757 master-0 kubenswrapper[4652]: I0216 17:36:40.503710 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-bundle" (OuterVolumeSpecName: "bundle") pod "86bd64bd-f611-4775-ae8c-96b2370a2a43" (UID: "86bd64bd-f611-4775-ae8c-96b2370a2a43"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:40.509351 master-0 kubenswrapper[4652]: I0216 17:36:40.509277 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86bd64bd-f611-4775-ae8c-96b2370a2a43-kube-api-access-qf6bf" (OuterVolumeSpecName: "kube-api-access-qf6bf") pod "86bd64bd-f611-4775-ae8c-96b2370a2a43" (UID: "86bd64bd-f611-4775-ae8c-96b2370a2a43"). InnerVolumeSpecName "kube-api-access-qf6bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:36:40.535429 master-0 kubenswrapper[4652]: I0216 17:36:40.535383 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-util" (OuterVolumeSpecName: "util") pod "86bd64bd-f611-4775-ae8c-96b2370a2a43" (UID: "86bd64bd-f611-4775-ae8c-96b2370a2a43"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:36:40.603206 master-0 kubenswrapper[4652]: I0216 17:36:40.603144 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qf6bf\" (UniqueName: \"kubernetes.io/projected/86bd64bd-f611-4775-ae8c-96b2370a2a43-kube-api-access-qf6bf\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:40.603206 master-0 kubenswrapper[4652]: I0216 17:36:40.603191 4652 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:40.603206 master-0 kubenswrapper[4652]: I0216 17:36:40.603202 4652 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86bd64bd-f611-4775-ae8c-96b2370a2a43-util\") on node \"master-0\" DevicePath \"\"" Feb 16 17:36:41.037413 master-0 kubenswrapper[4652]: I0216 17:36:41.037072 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" event={"ID":"d046120c-7314-4b1a-96fb-be92086a9bab","Type":"ContainerStarted","Data":"2fdafa833250210883bcdf8d47ed0d0fe22a9fcf97a00bd9672ab0b2bcd65328"} Feb 16 17:36:41.042497 master-0 kubenswrapper[4652]: I0216 17:36:41.042428 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" event={"ID":"86bd64bd-f611-4775-ae8c-96b2370a2a43","Type":"ContainerDied","Data":"abc23d9d901bd339ef688644488dcf8b388ac022ef0c11986d34bddfa13b476c"} Feb 16 17:36:41.042497 master-0 kubenswrapper[4652]: I0216 17:36:41.042482 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abc23d9d901bd339ef688644488dcf8b388ac022ef0c11986d34bddfa13b476c" Feb 16 17:36:41.042757 master-0 kubenswrapper[4652]: I0216 17:36:41.042497 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mk5lj" Feb 16 17:36:44.068171 master-0 kubenswrapper[4652]: I0216 17:36:44.068075 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" event={"ID":"d046120c-7314-4b1a-96fb-be92086a9bab","Type":"ContainerStarted","Data":"eca4ea4077be0039b51c19240f5b878650ace08ae0d4798caa7034d8d6b7f3d5"} Feb 16 17:36:44.095535 master-0 kubenswrapper[4652]: I0216 17:36:44.095447 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-g95bn" podStartSLOduration=1.9433901 podStartE2EDuration="5.095426222s" podCreationTimestamp="2026-02-16 17:36:39 +0000 UTC" firstStartedPulling="2026-02-16 17:36:40.263129745 +0000 UTC m=+757.651298261" lastFinishedPulling="2026-02-16 17:36:43.415165867 +0000 UTC m=+760.803334383" observedRunningTime="2026-02-16 17:36:44.089049385 +0000 UTC m=+761.477217981" watchObservedRunningTime="2026-02-16 17:36:44.095426222 +0000 UTC m=+761.483594738" Feb 16 17:36:47.710431 master-0 kubenswrapper[4652]: I0216 17:36:47.710367 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-sgsht"] Feb 16 17:36:47.711165 master-0 kubenswrapper[4652]: E0216 17:36:47.710638 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerName="pull" Feb 16 17:36:47.711165 master-0 kubenswrapper[4652]: I0216 17:36:47.710649 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerName="pull" Feb 16 17:36:47.711165 master-0 kubenswrapper[4652]: E0216 17:36:47.710660 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerName="util" Feb 16 17:36:47.711165 master-0 kubenswrapper[4652]: I0216 17:36:47.710666 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerName="util" Feb 16 17:36:47.711165 master-0 kubenswrapper[4652]: E0216 17:36:47.710674 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerName="extract" Feb 16 17:36:47.711165 master-0 kubenswrapper[4652]: I0216 17:36:47.710681 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerName="extract" Feb 16 17:36:47.711165 master-0 kubenswrapper[4652]: I0216 17:36:47.710823 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="86bd64bd-f611-4775-ae8c-96b2370a2a43" containerName="extract" Feb 16 17:36:47.711511 master-0 kubenswrapper[4652]: I0216 17:36:47.711217 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:36:47.713316 master-0 kubenswrapper[4652]: I0216 17:36:47.713242 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 17:36:47.713491 master-0 kubenswrapper[4652]: I0216 17:36:47.713450 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 17:36:47.736908 master-0 kubenswrapper[4652]: I0216 17:36:47.736843 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-sgsht"] Feb 16 17:36:47.826548 master-0 kubenswrapper[4652]: I0216 17:36:47.826482 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6gtm\" (UniqueName: \"kubernetes.io/projected/130129b7-7474-4a06-9039-44a9f59de076-kube-api-access-d6gtm\") pod \"cert-manager-webhook-6888856db4-sgsht\" (UID: \"130129b7-7474-4a06-9039-44a9f59de076\") " pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:36:47.826548 master-0 kubenswrapper[4652]: I0216 17:36:47.826550 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/130129b7-7474-4a06-9039-44a9f59de076-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-sgsht\" (UID: \"130129b7-7474-4a06-9039-44a9f59de076\") " pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:36:47.928542 master-0 kubenswrapper[4652]: I0216 17:36:47.928447 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6gtm\" (UniqueName: \"kubernetes.io/projected/130129b7-7474-4a06-9039-44a9f59de076-kube-api-access-d6gtm\") pod \"cert-manager-webhook-6888856db4-sgsht\" (UID: \"130129b7-7474-4a06-9039-44a9f59de076\") " pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:36:47.928542 master-0 kubenswrapper[4652]: I0216 17:36:47.928542 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/130129b7-7474-4a06-9039-44a9f59de076-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-sgsht\" (UID: \"130129b7-7474-4a06-9039-44a9f59de076\") " pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:36:47.947718 master-0 kubenswrapper[4652]: I0216 17:36:47.947676 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6gtm\" (UniqueName: \"kubernetes.io/projected/130129b7-7474-4a06-9039-44a9f59de076-kube-api-access-d6gtm\") pod \"cert-manager-webhook-6888856db4-sgsht\" (UID: \"130129b7-7474-4a06-9039-44a9f59de076\") " pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:36:47.952522 master-0 kubenswrapper[4652]: I0216 17:36:47.952429 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/130129b7-7474-4a06-9039-44a9f59de076-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-sgsht\" (UID: \"130129b7-7474-4a06-9039-44a9f59de076\") " pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:36:48.031716 master-0 kubenswrapper[4652]: I0216 17:36:48.031584 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:36:48.545769 master-0 kubenswrapper[4652]: I0216 17:36:48.545693 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-sgsht"] Feb 16 17:36:49.114543 master-0 kubenswrapper[4652]: I0216 17:36:49.114487 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" event={"ID":"130129b7-7474-4a06-9039-44a9f59de076","Type":"ContainerStarted","Data":"a992acc903389e37850a32c0d976789bafe13b6ec3286806ced107688acd648c"} Feb 16 17:36:49.265062 master-0 kubenswrapper[4652]: I0216 17:36:49.264995 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-nlt6j"] Feb 16 17:36:49.265886 master-0 kubenswrapper[4652]: I0216 17:36:49.265851 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" Feb 16 17:36:49.280174 master-0 kubenswrapper[4652]: I0216 17:36:49.280121 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-nlt6j"] Feb 16 17:36:49.377286 master-0 kubenswrapper[4652]: I0216 17:36:49.377109 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9555e47a-3578-4236-aac0-14a06ac3e5b0-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-nlt6j\" (UID: \"9555e47a-3578-4236-aac0-14a06ac3e5b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" Feb 16 17:36:49.377286 master-0 kubenswrapper[4652]: I0216 17:36:49.377198 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47r2d\" (UniqueName: \"kubernetes.io/projected/9555e47a-3578-4236-aac0-14a06ac3e5b0-kube-api-access-47r2d\") pod \"cert-manager-cainjector-5545bd876-nlt6j\" (UID: \"9555e47a-3578-4236-aac0-14a06ac3e5b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" Feb 16 17:36:49.478203 master-0 kubenswrapper[4652]: I0216 17:36:49.478152 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9555e47a-3578-4236-aac0-14a06ac3e5b0-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-nlt6j\" (UID: \"9555e47a-3578-4236-aac0-14a06ac3e5b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" Feb 16 17:36:49.478513 master-0 kubenswrapper[4652]: I0216 17:36:49.478494 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47r2d\" (UniqueName: \"kubernetes.io/projected/9555e47a-3578-4236-aac0-14a06ac3e5b0-kube-api-access-47r2d\") pod \"cert-manager-cainjector-5545bd876-nlt6j\" (UID: \"9555e47a-3578-4236-aac0-14a06ac3e5b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" Feb 16 17:36:49.501384 master-0 kubenswrapper[4652]: I0216 17:36:49.501335 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47r2d\" (UniqueName: \"kubernetes.io/projected/9555e47a-3578-4236-aac0-14a06ac3e5b0-kube-api-access-47r2d\") pod \"cert-manager-cainjector-5545bd876-nlt6j\" (UID: \"9555e47a-3578-4236-aac0-14a06ac3e5b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" Feb 16 17:36:49.502136 master-0 kubenswrapper[4652]: I0216 17:36:49.502084 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9555e47a-3578-4236-aac0-14a06ac3e5b0-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-nlt6j\" (UID: \"9555e47a-3578-4236-aac0-14a06ac3e5b0\") " pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" Feb 16 17:36:49.590897 master-0 kubenswrapper[4652]: I0216 17:36:49.590836 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" Feb 16 17:36:50.035594 master-0 kubenswrapper[4652]: I0216 17:36:50.035540 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-nlt6j"] Feb 16 17:36:50.122826 master-0 kubenswrapper[4652]: I0216 17:36:50.122763 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" event={"ID":"9555e47a-3578-4236-aac0-14a06ac3e5b0","Type":"ContainerStarted","Data":"c44ab14b19d5d53525797c0bd80ff12c7f2b05844a4682e582a6c81703530100"} Feb 16 17:36:52.239353 master-0 kubenswrapper[4652]: I0216 17:36:52.239283 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-wgbrw"] Feb 16 17:36:52.240283 master-0 kubenswrapper[4652]: I0216 17:36:52.240245 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-wgbrw" Feb 16 17:36:52.242805 master-0 kubenswrapper[4652]: I0216 17:36:52.242781 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 17:36:52.243093 master-0 kubenswrapper[4652]: I0216 17:36:52.243049 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 17:36:52.253819 master-0 kubenswrapper[4652]: I0216 17:36:52.253758 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-wgbrw"] Feb 16 17:36:52.338089 master-0 kubenswrapper[4652]: I0216 17:36:52.338018 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkqg5\" (UniqueName: \"kubernetes.io/projected/a49613eb-7dcb-4f5d-9f77-eb36f7929112-kube-api-access-hkqg5\") pod \"nmstate-operator-694c9596b7-wgbrw\" (UID: \"a49613eb-7dcb-4f5d-9f77-eb36f7929112\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-wgbrw" Feb 16 17:36:52.444587 master-0 kubenswrapper[4652]: I0216 17:36:52.444531 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkqg5\" (UniqueName: \"kubernetes.io/projected/a49613eb-7dcb-4f5d-9f77-eb36f7929112-kube-api-access-hkqg5\") pod \"nmstate-operator-694c9596b7-wgbrw\" (UID: \"a49613eb-7dcb-4f5d-9f77-eb36f7929112\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-wgbrw" Feb 16 17:36:52.460576 master-0 kubenswrapper[4652]: I0216 17:36:52.460519 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkqg5\" (UniqueName: \"kubernetes.io/projected/a49613eb-7dcb-4f5d-9f77-eb36f7929112-kube-api-access-hkqg5\") pod \"nmstate-operator-694c9596b7-wgbrw\" (UID: \"a49613eb-7dcb-4f5d-9f77-eb36f7929112\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-wgbrw" Feb 16 17:36:52.568306 master-0 kubenswrapper[4652]: I0216 17:36:52.568181 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-wgbrw" Feb 16 17:36:53.133689 master-0 kubenswrapper[4652]: I0216 17:36:53.133630 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-wgbrw"] Feb 16 17:36:53.135009 master-0 kubenswrapper[4652]: W0216 17:36:53.134955 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda49613eb_7dcb_4f5d_9f77_eb36f7929112.slice/crio-9f3db97f27b3a658c4d3cccf3ec620010d5ad7d735ffe42a83d5bec4f33ea964 WatchSource:0}: Error finding container 9f3db97f27b3a658c4d3cccf3ec620010d5ad7d735ffe42a83d5bec4f33ea964: Status 404 returned error can't find the container with id 9f3db97f27b3a658c4d3cccf3ec620010d5ad7d735ffe42a83d5bec4f33ea964 Feb 16 17:36:53.147158 master-0 kubenswrapper[4652]: I0216 17:36:53.146292 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" event={"ID":"130129b7-7474-4a06-9039-44a9f59de076","Type":"ContainerStarted","Data":"42ceec23e8d1c7ca89008dccd4e79d021b1e967f2289b4e59acc5ec7ff73e434"} Feb 16 17:36:53.147158 master-0 kubenswrapper[4652]: I0216 17:36:53.146503 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:36:53.147512 master-0 kubenswrapper[4652]: I0216 17:36:53.147465 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-wgbrw" event={"ID":"a49613eb-7dcb-4f5d-9f77-eb36f7929112","Type":"ContainerStarted","Data":"9f3db97f27b3a658c4d3cccf3ec620010d5ad7d735ffe42a83d5bec4f33ea964"} Feb 16 17:36:53.149318 master-0 kubenswrapper[4652]: I0216 17:36:53.149268 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" event={"ID":"9555e47a-3578-4236-aac0-14a06ac3e5b0","Type":"ContainerStarted","Data":"6e501f1e2219bf914d54bf3b93589da2b9c6738efb1b927acee17c71f04c220e"} Feb 16 17:36:53.179606 master-0 kubenswrapper[4652]: I0216 17:36:53.179507 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" podStartSLOduration=2.005204236 podStartE2EDuration="6.179480115s" podCreationTimestamp="2026-02-16 17:36:47 +0000 UTC" firstStartedPulling="2026-02-16 17:36:48.562476769 +0000 UTC m=+765.950645285" lastFinishedPulling="2026-02-16 17:36:52.736752638 +0000 UTC m=+770.124921164" observedRunningTime="2026-02-16 17:36:53.172124132 +0000 UTC m=+770.560292648" watchObservedRunningTime="2026-02-16 17:36:53.179480115 +0000 UTC m=+770.567648641" Feb 16 17:36:53.188885 master-0 kubenswrapper[4652]: I0216 17:36:53.188807 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-nlt6j" podStartSLOduration=1.502794398 podStartE2EDuration="4.1887858s" podCreationTimestamp="2026-02-16 17:36:49 +0000 UTC" firstStartedPulling="2026-02-16 17:36:50.041120113 +0000 UTC m=+767.429288629" lastFinishedPulling="2026-02-16 17:36:52.727111525 +0000 UTC m=+770.115280031" observedRunningTime="2026-02-16 17:36:53.187050524 +0000 UTC m=+770.575219060" watchObservedRunningTime="2026-02-16 17:36:53.1887858 +0000 UTC m=+770.576954326" Feb 16 17:36:55.777349 master-0 kubenswrapper[4652]: I0216 17:36:55.776649 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59"] Feb 16 
17:36:55.777894 master-0 kubenswrapper[4652]: I0216 17:36:55.777513 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:55.783560 master-0 kubenswrapper[4652]: I0216 17:36:55.783518 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 16 17:36:55.783560 master-0 kubenswrapper[4652]: I0216 17:36:55.783558 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 16 17:36:55.783837 master-0 kubenswrapper[4652]: I0216 17:36:55.783793 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 16 17:36:55.783995 master-0 kubenswrapper[4652]: I0216 17:36:55.783913 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 16 17:36:55.798781 master-0 kubenswrapper[4652]: I0216 17:36:55.798222 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59"] Feb 16 17:36:55.824001 master-0 kubenswrapper[4652]: I0216 17:36:55.823940 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06583659-0f37-4c29-9336-6cb46626bfd1-webhook-cert\") pod \"metallb-operator-controller-manager-85cbb58865-c6k59\" (UID: \"06583659-0f37-4c29-9336-6cb46626bfd1\") " pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:55.824229 master-0 kubenswrapper[4652]: I0216 17:36:55.824040 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06583659-0f37-4c29-9336-6cb46626bfd1-apiservice-cert\") pod \"metallb-operator-controller-manager-85cbb58865-c6k59\" (UID: \"06583659-0f37-4c29-9336-6cb46626bfd1\") " pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:55.824229 master-0 kubenswrapper[4652]: I0216 17:36:55.824057 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbm56\" (UniqueName: \"kubernetes.io/projected/06583659-0f37-4c29-9336-6cb46626bfd1-kube-api-access-fbm56\") pod \"metallb-operator-controller-manager-85cbb58865-c6k59\" (UID: \"06583659-0f37-4c29-9336-6cb46626bfd1\") " pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:55.927736 master-0 kubenswrapper[4652]: I0216 17:36:55.924859 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06583659-0f37-4c29-9336-6cb46626bfd1-apiservice-cert\") pod \"metallb-operator-controller-manager-85cbb58865-c6k59\" (UID: \"06583659-0f37-4c29-9336-6cb46626bfd1\") " pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:55.927736 master-0 kubenswrapper[4652]: I0216 17:36:55.924906 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbm56\" (UniqueName: \"kubernetes.io/projected/06583659-0f37-4c29-9336-6cb46626bfd1-kube-api-access-fbm56\") pod \"metallb-operator-controller-manager-85cbb58865-c6k59\" (UID: \"06583659-0f37-4c29-9336-6cb46626bfd1\") " pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 
17:36:55.927736 master-0 kubenswrapper[4652]: I0216 17:36:55.924999 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06583659-0f37-4c29-9336-6cb46626bfd1-webhook-cert\") pod \"metallb-operator-controller-manager-85cbb58865-c6k59\" (UID: \"06583659-0f37-4c29-9336-6cb46626bfd1\") " pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:55.928673 master-0 kubenswrapper[4652]: I0216 17:36:55.928600 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/06583659-0f37-4c29-9336-6cb46626bfd1-webhook-cert\") pod \"metallb-operator-controller-manager-85cbb58865-c6k59\" (UID: \"06583659-0f37-4c29-9336-6cb46626bfd1\") " pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:55.928877 master-0 kubenswrapper[4652]: I0216 17:36:55.928835 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/06583659-0f37-4c29-9336-6cb46626bfd1-apiservice-cert\") pod \"metallb-operator-controller-manager-85cbb58865-c6k59\" (UID: \"06583659-0f37-4c29-9336-6cb46626bfd1\") " pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:55.951606 master-0 kubenswrapper[4652]: I0216 17:36:55.951560 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbm56\" (UniqueName: \"kubernetes.io/projected/06583659-0f37-4c29-9336-6cb46626bfd1-kube-api-access-fbm56\") pod \"metallb-operator-controller-manager-85cbb58865-c6k59\" (UID: \"06583659-0f37-4c29-9336-6cb46626bfd1\") " pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:56.056845 master-0 kubenswrapper[4652]: I0216 17:36:56.056725 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp"] Feb 16 17:36:56.057810 master-0 kubenswrapper[4652]: I0216 17:36:56.057787 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.059810 master-0 kubenswrapper[4652]: I0216 17:36:56.059772 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 17:36:56.060120 master-0 kubenswrapper[4652]: I0216 17:36:56.060093 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Feb 16 17:36:56.076228 master-0 kubenswrapper[4652]: I0216 17:36:56.076180 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp"] Feb 16 17:36:56.101837 master-0 kubenswrapper[4652]: I0216 17:36:56.097771 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:36:56.152283 master-0 kubenswrapper[4652]: I0216 17:36:56.147048 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5-apiservice-cert\") pod \"metallb-operator-webhook-server-674d8b687-qj4fp\" (UID: \"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5\") " pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.152283 master-0 kubenswrapper[4652]: I0216 17:36:56.147132 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5-webhook-cert\") pod \"metallb-operator-webhook-server-674d8b687-qj4fp\" (UID: \"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5\") " pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.152283 master-0 kubenswrapper[4652]: I0216 17:36:56.147202 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbwwx\" (UniqueName: \"kubernetes.io/projected/9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5-kube-api-access-bbwwx\") pod \"metallb-operator-webhook-server-674d8b687-qj4fp\" (UID: \"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5\") " pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.248390 master-0 kubenswrapper[4652]: I0216 17:36:56.248331 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5-webhook-cert\") pod \"metallb-operator-webhook-server-674d8b687-qj4fp\" (UID: \"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5\") " pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.248582 master-0 kubenswrapper[4652]: I0216 17:36:56.248428 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbwwx\" (UniqueName: \"kubernetes.io/projected/9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5-kube-api-access-bbwwx\") pod \"metallb-operator-webhook-server-674d8b687-qj4fp\" (UID: \"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5\") " pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.248582 master-0 kubenswrapper[4652]: I0216 17:36:56.248493 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5-apiservice-cert\") pod \"metallb-operator-webhook-server-674d8b687-qj4fp\" (UID: \"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5\") " pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.256127 master-0 kubenswrapper[4652]: I0216 17:36:56.256043 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5-webhook-cert\") pod \"metallb-operator-webhook-server-674d8b687-qj4fp\" (UID: \"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5\") " pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.256416 master-0 kubenswrapper[4652]: I0216 17:36:56.256135 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5-apiservice-cert\") pod 
\"metallb-operator-webhook-server-674d8b687-qj4fp\" (UID: \"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5\") " pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.313351 master-0 kubenswrapper[4652]: I0216 17:36:56.307206 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbwwx\" (UniqueName: \"kubernetes.io/projected/9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5-kube-api-access-bbwwx\") pod \"metallb-operator-webhook-server-674d8b687-qj4fp\" (UID: \"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5\") " pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.405707 master-0 kubenswrapper[4652]: I0216 17:36:56.399407 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:36:56.597838 master-0 kubenswrapper[4652]: I0216 17:36:56.597600 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59"] Feb 16 17:36:57.044398 master-0 kubenswrapper[4652]: I0216 17:36:57.044025 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp"] Feb 16 17:36:57.206377 master-0 kubenswrapper[4652]: I0216 17:36:57.206318 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" event={"ID":"06583659-0f37-4c29-9336-6cb46626bfd1","Type":"ContainerStarted","Data":"72d0e8599a7197ca2dde3d539db24d4e0c48b4c54f611ec087d2b22d1f441bae"} Feb 16 17:36:57.207859 master-0 kubenswrapper[4652]: I0216 17:36:57.207828 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" event={"ID":"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5","Type":"ContainerStarted","Data":"31676b9fa84acbebefcda9e39372eac4cad5346f544408fc1f5b2b2697b6e5b7"} Feb 16 17:36:57.209863 master-0 kubenswrapper[4652]: I0216 17:36:57.209827 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-wgbrw" event={"ID":"a49613eb-7dcb-4f5d-9f77-eb36f7929112","Type":"ContainerStarted","Data":"3c76891ff48ae1de1d35d77231e8f854066468ee6d7c8cae7ecf874f64f7b43d"} Feb 16 17:36:57.244491 master-0 kubenswrapper[4652]: I0216 17:36:57.244398 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-wgbrw" podStartSLOduration=2.303077514 podStartE2EDuration="5.244375331s" podCreationTimestamp="2026-02-16 17:36:52 +0000 UTC" firstStartedPulling="2026-02-16 17:36:53.137361639 +0000 UTC m=+770.525530155" lastFinishedPulling="2026-02-16 17:36:56.078659456 +0000 UTC m=+773.466827972" observedRunningTime="2026-02-16 17:36:57.244055632 +0000 UTC m=+774.632224138" watchObservedRunningTime="2026-02-16 17:36:57.244375331 +0000 UTC m=+774.632543867" Feb 16 17:36:58.039580 master-0 kubenswrapper[4652]: I0216 17:36:58.039516 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-sgsht" Feb 16 17:37:04.271797 master-0 kubenswrapper[4652]: I0216 17:37:04.271599 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" event={"ID":"06583659-0f37-4c29-9336-6cb46626bfd1","Type":"ContainerStarted","Data":"6a5ce446debd5946fe6178b50f113cd0f41c0b3fa288937372270f29d9dceb70"} Feb 16 17:37:04.273027 master-0 kubenswrapper[4652]: 
I0216 17:37:04.273002 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:37:04.274598 master-0 kubenswrapper[4652]: I0216 17:37:04.274573 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" event={"ID":"9cdd5392-44dd-46c2-8f1a-e63b7be5d0a5","Type":"ContainerStarted","Data":"7104afa143e19c3473405fd120b5b5ad7ff037fcb55efca815c05594f3522715"} Feb 16 17:37:04.275104 master-0 kubenswrapper[4652]: I0216 17:37:04.275077 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:37:04.299518 master-0 kubenswrapper[4652]: I0216 17:37:04.299417 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" podStartSLOduration=1.97781432 podStartE2EDuration="9.299392965s" podCreationTimestamp="2026-02-16 17:36:55 +0000 UTC" firstStartedPulling="2026-02-16 17:36:56.61683233 +0000 UTC m=+774.005000846" lastFinishedPulling="2026-02-16 17:37:03.938410975 +0000 UTC m=+781.326579491" observedRunningTime="2026-02-16 17:37:04.295914584 +0000 UTC m=+781.684083110" watchObservedRunningTime="2026-02-16 17:37:04.299392965 +0000 UTC m=+781.687561501" Feb 16 17:37:04.325075 master-0 kubenswrapper[4652]: I0216 17:37:04.324972 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" podStartSLOduration=1.392057549 podStartE2EDuration="8.324947826s" podCreationTimestamp="2026-02-16 17:36:56 +0000 UTC" firstStartedPulling="2026-02-16 17:36:57.039578742 +0000 UTC m=+774.427747248" lastFinishedPulling="2026-02-16 17:37:03.972469009 +0000 UTC m=+781.360637525" observedRunningTime="2026-02-16 17:37:04.321167357 +0000 UTC m=+781.709335893" watchObservedRunningTime="2026-02-16 17:37:04.324947826 +0000 UTC m=+781.713116342" Feb 16 17:37:05.815134 master-0 kubenswrapper[4652]: I0216 17:37:05.815055 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw"] Feb 16 17:37:05.816048 master-0 kubenswrapper[4652]: I0216 17:37:05.815979 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw" Feb 16 17:37:05.819220 master-0 kubenswrapper[4652]: I0216 17:37:05.819179 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Feb 16 17:37:05.819488 master-0 kubenswrapper[4652]: I0216 17:37:05.819468 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Feb 16 17:37:05.833853 master-0 kubenswrapper[4652]: I0216 17:37:05.833804 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw"] Feb 16 17:37:05.840094 master-0 kubenswrapper[4652]: I0216 17:37:05.840046 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-68nwt"] Feb 16 17:37:05.840904 master-0 kubenswrapper[4652]: I0216 17:37:05.840880 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-68nwt" Feb 16 17:37:05.878196 master-0 kubenswrapper[4652]: I0216 17:37:05.878112 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-68nwt"] Feb 16 17:37:05.961504 master-0 kubenswrapper[4652]: I0216 17:37:05.961432 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94bfa5c1-29cd-4e98-ad5d-223b8c374721-bound-sa-token\") pod \"cert-manager-545d4d4674-68nwt\" (UID: \"94bfa5c1-29cd-4e98-ad5d-223b8c374721\") " pod="cert-manager/cert-manager-545d4d4674-68nwt" Feb 16 17:37:05.961782 master-0 kubenswrapper[4652]: I0216 17:37:05.961736 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh6ln\" (UniqueName: \"kubernetes.io/projected/c88c8498-871b-4e56-9cc0-2e2d2b15121f-kube-api-access-vh6ln\") pod \"obo-prometheus-operator-68bc856cb9-8w2jw\" (UID: \"c88c8498-871b-4e56-9cc0-2e2d2b15121f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw" Feb 16 17:37:05.961846 master-0 kubenswrapper[4652]: I0216 17:37:05.961832 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92cvs\" (UniqueName: \"kubernetes.io/projected/94bfa5c1-29cd-4e98-ad5d-223b8c374721-kube-api-access-92cvs\") pod \"cert-manager-545d4d4674-68nwt\" (UID: \"94bfa5c1-29cd-4e98-ad5d-223b8c374721\") " pod="cert-manager/cert-manager-545d4d4674-68nwt" Feb 16 17:37:06.038023 master-0 kubenswrapper[4652]: I0216 17:37:06.037956 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m"] Feb 16 17:37:06.040987 master-0 kubenswrapper[4652]: I0216 17:37:06.040925 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" Feb 16 17:37:06.063382 master-0 kubenswrapper[4652]: I0216 17:37:06.063326 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92cvs\" (UniqueName: \"kubernetes.io/projected/94bfa5c1-29cd-4e98-ad5d-223b8c374721-kube-api-access-92cvs\") pod \"cert-manager-545d4d4674-68nwt\" (UID: \"94bfa5c1-29cd-4e98-ad5d-223b8c374721\") " pod="cert-manager/cert-manager-545d4d4674-68nwt" Feb 16 17:37:06.063382 master-0 kubenswrapper[4652]: I0216 17:37:06.063389 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94bfa5c1-29cd-4e98-ad5d-223b8c374721-bound-sa-token\") pod \"cert-manager-545d4d4674-68nwt\" (UID: \"94bfa5c1-29cd-4e98-ad5d-223b8c374721\") " pod="cert-manager/cert-manager-545d4d4674-68nwt" Feb 16 17:37:06.063779 master-0 kubenswrapper[4652]: I0216 17:37:06.063535 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh6ln\" (UniqueName: \"kubernetes.io/projected/c88c8498-871b-4e56-9cc0-2e2d2b15121f-kube-api-access-vh6ln\") pod \"obo-prometheus-operator-68bc856cb9-8w2jw\" (UID: \"c88c8498-871b-4e56-9cc0-2e2d2b15121f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw" Feb 16 17:37:06.070619 master-0 kubenswrapper[4652]: I0216 17:37:06.070499 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Feb 16 17:37:06.076857 master-0 kubenswrapper[4652]: I0216 17:37:06.076761 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4"] Feb 16 17:37:06.077927 master-0 kubenswrapper[4652]: I0216 17:37:06.077890 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" Feb 16 17:37:06.099918 master-0 kubenswrapper[4652]: I0216 17:37:06.094490 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh6ln\" (UniqueName: \"kubernetes.io/projected/c88c8498-871b-4e56-9cc0-2e2d2b15121f-kube-api-access-vh6ln\") pod \"obo-prometheus-operator-68bc856cb9-8w2jw\" (UID: \"c88c8498-871b-4e56-9cc0-2e2d2b15121f\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw" Feb 16 17:37:06.109075 master-0 kubenswrapper[4652]: I0216 17:37:06.109015 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m"] Feb 16 17:37:06.118945 master-0 kubenswrapper[4652]: I0216 17:37:06.118911 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/94bfa5c1-29cd-4e98-ad5d-223b8c374721-bound-sa-token\") pod \"cert-manager-545d4d4674-68nwt\" (UID: \"94bfa5c1-29cd-4e98-ad5d-223b8c374721\") " pod="cert-manager/cert-manager-545d4d4674-68nwt" Feb 16 17:37:06.126057 master-0 kubenswrapper[4652]: I0216 17:37:06.125994 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4"] Feb 16 17:37:06.131022 master-0 kubenswrapper[4652]: I0216 17:37:06.130570 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw" Feb 16 17:37:06.160270 master-0 kubenswrapper[4652]: I0216 17:37:06.160119 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92cvs\" (UniqueName: \"kubernetes.io/projected/94bfa5c1-29cd-4e98-ad5d-223b8c374721-kube-api-access-92cvs\") pod \"cert-manager-545d4d4674-68nwt\" (UID: \"94bfa5c1-29cd-4e98-ad5d-223b8c374721\") " pod="cert-manager/cert-manager-545d4d4674-68nwt" Feb 16 17:37:06.175769 master-0 kubenswrapper[4652]: I0216 17:37:06.175724 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-68nwt" Feb 16 17:37:06.178331 master-0 kubenswrapper[4652]: I0216 17:37:06.177960 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f41d32fc-081f-42e3-b36c-0fc722a925a0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-nlht4\" (UID: \"f41d32fc-081f-42e3-b36c-0fc722a925a0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" Feb 16 17:37:06.178331 master-0 kubenswrapper[4652]: I0216 17:37:06.178026 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b85e7917-0774-4fe3-87cf-b7a57d4d186e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-f2v6m\" (UID: \"b85e7917-0774-4fe3-87cf-b7a57d4d186e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" Feb 16 17:37:06.178331 master-0 kubenswrapper[4652]: I0216 17:37:06.178078 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f41d32fc-081f-42e3-b36c-0fc722a925a0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-nlht4\" (UID: \"f41d32fc-081f-42e3-b36c-0fc722a925a0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" Feb 16 17:37:06.178331 master-0 kubenswrapper[4652]: I0216 17:37:06.178118 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b85e7917-0774-4fe3-87cf-b7a57d4d186e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-f2v6m\" (UID: \"b85e7917-0774-4fe3-87cf-b7a57d4d186e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" Feb 16 17:37:06.279972 master-0 kubenswrapper[4652]: I0216 17:37:06.279098 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b85e7917-0774-4fe3-87cf-b7a57d4d186e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-f2v6m\" (UID: \"b85e7917-0774-4fe3-87cf-b7a57d4d186e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" Feb 16 17:37:06.279972 master-0 kubenswrapper[4652]: I0216 17:37:06.279231 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f41d32fc-081f-42e3-b36c-0fc722a925a0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-nlht4\" (UID: \"f41d32fc-081f-42e3-b36c-0fc722a925a0\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" Feb 16 17:37:06.279972 master-0 kubenswrapper[4652]: I0216 17:37:06.279290 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b85e7917-0774-4fe3-87cf-b7a57d4d186e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-f2v6m\" (UID: \"b85e7917-0774-4fe3-87cf-b7a57d4d186e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" Feb 16 17:37:06.279972 master-0 kubenswrapper[4652]: I0216 17:37:06.279347 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f41d32fc-081f-42e3-b36c-0fc722a925a0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-nlht4\" (UID: \"f41d32fc-081f-42e3-b36c-0fc722a925a0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" Feb 16 17:37:06.282803 master-0 kubenswrapper[4652]: I0216 17:37:06.282736 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f41d32fc-081f-42e3-b36c-0fc722a925a0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-nlht4\" (UID: \"f41d32fc-081f-42e3-b36c-0fc722a925a0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" Feb 16 17:37:06.284693 master-0 kubenswrapper[4652]: I0216 17:37:06.284647 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f41d32fc-081f-42e3-b36c-0fc722a925a0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-nlht4\" (UID: \"f41d32fc-081f-42e3-b36c-0fc722a925a0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" Feb 16 17:37:06.287506 master-0 kubenswrapper[4652]: I0216 17:37:06.287445 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6h6dn"] Feb 16 17:37:06.288640 master-0 kubenswrapper[4652]: I0216 17:37:06.288605 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:06.293189 master-0 kubenswrapper[4652]: I0216 17:37:06.293137 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Feb 16 17:37:06.293369 master-0 kubenswrapper[4652]: I0216 17:37:06.293319 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b85e7917-0774-4fe3-87cf-b7a57d4d186e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-f2v6m\" (UID: \"b85e7917-0774-4fe3-87cf-b7a57d4d186e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" Feb 16 17:37:06.293436 master-0 kubenswrapper[4652]: I0216 17:37:06.293324 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b85e7917-0774-4fe3-87cf-b7a57d4d186e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-cf968959d-f2v6m\" (UID: \"b85e7917-0774-4fe3-87cf-b7a57d4d186e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" Feb 16 17:37:06.358037 master-0 kubenswrapper[4652]: I0216 17:37:06.357354 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6h6dn"] Feb 16 17:37:06.358037 master-0 kubenswrapper[4652]: I0216 17:37:06.357944 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" Feb 16 17:37:06.380715 master-0 kubenswrapper[4652]: I0216 17:37:06.380651 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47fgh\" (UniqueName: \"kubernetes.io/projected/66172e50-2e69-4ef3-a473-77370e8fc5a3-kube-api-access-47fgh\") pod \"observability-operator-59bdc8b94-6h6dn\" (UID: \"66172e50-2e69-4ef3-a473-77370e8fc5a3\") " pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:06.380715 master-0 kubenswrapper[4652]: I0216 17:37:06.380712 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/66172e50-2e69-4ef3-a473-77370e8fc5a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6h6dn\" (UID: \"66172e50-2e69-4ef3-a473-77370e8fc5a3\") " pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:06.478990 master-0 kubenswrapper[4652]: I0216 17:37:06.477442 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l95mf"] Feb 16 17:37:06.478990 master-0 kubenswrapper[4652]: I0216 17:37:06.478416 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" Feb 16 17:37:06.478990 master-0 kubenswrapper[4652]: I0216 17:37:06.478641 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:06.485520 master-0 kubenswrapper[4652]: I0216 17:37:06.482694 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47fgh\" (UniqueName: \"kubernetes.io/projected/66172e50-2e69-4ef3-a473-77370e8fc5a3-kube-api-access-47fgh\") pod \"observability-operator-59bdc8b94-6h6dn\" (UID: \"66172e50-2e69-4ef3-a473-77370e8fc5a3\") " pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:06.485520 master-0 kubenswrapper[4652]: I0216 17:37:06.482729 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/66172e50-2e69-4ef3-a473-77370e8fc5a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6h6dn\" (UID: \"66172e50-2e69-4ef3-a473-77370e8fc5a3\") " pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:06.486136 master-0 kubenswrapper[4652]: I0216 17:37:06.486100 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/66172e50-2e69-4ef3-a473-77370e8fc5a3-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6h6dn\" (UID: \"66172e50-2e69-4ef3-a473-77370e8fc5a3\") " pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:06.492935 master-0 kubenswrapper[4652]: I0216 17:37:06.492876 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l95mf"] Feb 16 17:37:06.515483 master-0 kubenswrapper[4652]: I0216 17:37:06.515437 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47fgh\" (UniqueName: \"kubernetes.io/projected/66172e50-2e69-4ef3-a473-77370e8fc5a3-kube-api-access-47fgh\") pod \"observability-operator-59bdc8b94-6h6dn\" (UID: \"66172e50-2e69-4ef3-a473-77370e8fc5a3\") " pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:06.584503 master-0 kubenswrapper[4652]: I0216 17:37:06.584387 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1b0b713f-4437-4538-9d0d-3414e13cab1c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l95mf\" (UID: \"1b0b713f-4437-4538-9d0d-3414e13cab1c\") " pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:06.584503 master-0 kubenswrapper[4652]: I0216 17:37:06.584430 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt7vn\" (UniqueName: \"kubernetes.io/projected/1b0b713f-4437-4538-9d0d-3414e13cab1c-kube-api-access-kt7vn\") pod \"perses-operator-5bf474d74f-l95mf\" (UID: \"1b0b713f-4437-4538-9d0d-3414e13cab1c\") " pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:06.635390 master-0 kubenswrapper[4652]: I0216 17:37:06.635335 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:06.688907 master-0 kubenswrapper[4652]: I0216 17:37:06.688749 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1b0b713f-4437-4538-9d0d-3414e13cab1c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l95mf\" (UID: \"1b0b713f-4437-4538-9d0d-3414e13cab1c\") " pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:06.688907 master-0 kubenswrapper[4652]: I0216 17:37:06.688802 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt7vn\" (UniqueName: \"kubernetes.io/projected/1b0b713f-4437-4538-9d0d-3414e13cab1c-kube-api-access-kt7vn\") pod \"perses-operator-5bf474d74f-l95mf\" (UID: \"1b0b713f-4437-4538-9d0d-3414e13cab1c\") " pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:06.690121 master-0 kubenswrapper[4652]: I0216 17:37:06.690086 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/1b0b713f-4437-4538-9d0d-3414e13cab1c-openshift-service-ca\") pod \"perses-operator-5bf474d74f-l95mf\" (UID: \"1b0b713f-4437-4538-9d0d-3414e13cab1c\") " pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:06.710863 master-0 kubenswrapper[4652]: I0216 17:37:06.710815 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt7vn\" (UniqueName: \"kubernetes.io/projected/1b0b713f-4437-4538-9d0d-3414e13cab1c-kube-api-access-kt7vn\") pod \"perses-operator-5bf474d74f-l95mf\" (UID: \"1b0b713f-4437-4538-9d0d-3414e13cab1c\") " pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:06.753880 master-0 kubenswrapper[4652]: W0216 17:37:06.753822 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc88c8498_871b_4e56_9cc0_2e2d2b15121f.slice/crio-1c80a92b20a9c8365b52fa7c51f8bf424dadd03f8073110506a94970ebb44ec2 WatchSource:0}: Error finding container 1c80a92b20a9c8365b52fa7c51f8bf424dadd03f8073110506a94970ebb44ec2: Status 404 returned error can't find the container with id 1c80a92b20a9c8365b52fa7c51f8bf424dadd03f8073110506a94970ebb44ec2 Feb 16 17:37:06.765818 master-0 kubenswrapper[4652]: I0216 17:37:06.764675 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw"] Feb 16 17:37:06.798400 master-0 kubenswrapper[4652]: I0216 17:37:06.798345 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:07.304679 master-0 kubenswrapper[4652]: I0216 17:37:07.304396 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw" event={"ID":"c88c8498-871b-4e56-9cc0-2e2d2b15121f","Type":"ContainerStarted","Data":"1c80a92b20a9c8365b52fa7c51f8bf424dadd03f8073110506a94970ebb44ec2"} Feb 16 17:37:07.619121 master-0 kubenswrapper[4652]: W0216 17:37:07.619072 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66172e50_2e69_4ef3_a473_77370e8fc5a3.slice/crio-fb8a1c32784feede0b99240fef4adcf228a90421571f5877621d9cb2edc535d9 WatchSource:0}: Error finding container fb8a1c32784feede0b99240fef4adcf228a90421571f5877621d9cb2edc535d9: Status 404 returned error can't find the container with id fb8a1c32784feede0b99240fef4adcf228a90421571f5877621d9cb2edc535d9 Feb 16 17:37:07.620945 master-0 kubenswrapper[4652]: W0216 17:37:07.620891 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94bfa5c1_29cd_4e98_ad5d_223b8c374721.slice/crio-cddf582f21b8541f385d24fb45e27e442796ab0ea62efad716eadd0f17f2898d WatchSource:0}: Error finding container cddf582f21b8541f385d24fb45e27e442796ab0ea62efad716eadd0f17f2898d: Status 404 returned error can't find the container with id cddf582f21b8541f385d24fb45e27e442796ab0ea62efad716eadd0f17f2898d Feb 16 17:37:07.624842 master-0 kubenswrapper[4652]: I0216 17:37:07.624448 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-68nwt"] Feb 16 17:37:07.631355 master-0 kubenswrapper[4652]: I0216 17:37:07.631311 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m"] Feb 16 17:37:07.637081 master-0 kubenswrapper[4652]: I0216 17:37:07.636972 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4"] Feb 16 17:37:07.642601 master-0 kubenswrapper[4652]: I0216 17:37:07.642559 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6h6dn"] Feb 16 17:37:07.653546 master-0 kubenswrapper[4652]: W0216 17:37:07.653503 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf41d32fc_081f_42e3_b36c_0fc722a925a0.slice/crio-74c535882beb045e03197d6375733d9a037254d75be934dfaacc56b17e1605c6 WatchSource:0}: Error finding container 74c535882beb045e03197d6375733d9a037254d75be934dfaacc56b17e1605c6: Status 404 returned error can't find the container with id 74c535882beb045e03197d6375733d9a037254d75be934dfaacc56b17e1605c6 Feb 16 17:37:07.762054 master-0 kubenswrapper[4652]: I0216 17:37:07.761988 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-l95mf"] Feb 16 17:37:07.784759 master-0 kubenswrapper[4652]: W0216 17:37:07.784503 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b0b713f_4437_4538_9d0d_3414e13cab1c.slice/crio-1a617741330744482ac11035d8fb0f9bc0d1a38c31549a532e798aeaf2142ed9 WatchSource:0}: Error finding container 1a617741330744482ac11035d8fb0f9bc0d1a38c31549a532e798aeaf2142ed9: Status 404 returned error can't find the 
container with id 1a617741330744482ac11035d8fb0f9bc0d1a38c31549a532e798aeaf2142ed9 Feb 16 17:37:08.320102 master-0 kubenswrapper[4652]: I0216 17:37:08.320029 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" event={"ID":"f41d32fc-081f-42e3-b36c-0fc722a925a0","Type":"ContainerStarted","Data":"74c535882beb045e03197d6375733d9a037254d75be934dfaacc56b17e1605c6"} Feb 16 17:37:08.332675 master-0 kubenswrapper[4652]: I0216 17:37:08.332617 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l95mf" event={"ID":"1b0b713f-4437-4538-9d0d-3414e13cab1c","Type":"ContainerStarted","Data":"1a617741330744482ac11035d8fb0f9bc0d1a38c31549a532e798aeaf2142ed9"} Feb 16 17:37:08.340276 master-0 kubenswrapper[4652]: I0216 17:37:08.338475 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" event={"ID":"b85e7917-0774-4fe3-87cf-b7a57d4d186e","Type":"ContainerStarted","Data":"fab45ffab18bd54283ee46a3828abaabea44eafe425cf261a7cdc3ffc566bf96"} Feb 16 17:37:08.349784 master-0 kubenswrapper[4652]: I0216 17:37:08.349726 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-68nwt" event={"ID":"94bfa5c1-29cd-4e98-ad5d-223b8c374721","Type":"ContainerStarted","Data":"6e4e940ad5103a13a5bce945ea79fba39eb60f4192a38b2bf4c76804efb77725"} Feb 16 17:37:08.349784 master-0 kubenswrapper[4652]: I0216 17:37:08.349786 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-68nwt" event={"ID":"94bfa5c1-29cd-4e98-ad5d-223b8c374721","Type":"ContainerStarted","Data":"cddf582f21b8541f385d24fb45e27e442796ab0ea62efad716eadd0f17f2898d"} Feb 16 17:37:08.358268 master-0 kubenswrapper[4652]: I0216 17:37:08.354372 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" event={"ID":"66172e50-2e69-4ef3-a473-77370e8fc5a3","Type":"ContainerStarted","Data":"fb8a1c32784feede0b99240fef4adcf228a90421571f5877621d9cb2edc535d9"} Feb 16 17:37:08.393987 master-0 kubenswrapper[4652]: I0216 17:37:08.388785 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-68nwt" podStartSLOduration=3.388763164 podStartE2EDuration="3.388763164s" podCreationTimestamp="2026-02-16 17:37:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:37:08.386482764 +0000 UTC m=+785.774651290" watchObservedRunningTime="2026-02-16 17:37:08.388763164 +0000 UTC m=+785.776931680" Feb 16 17:37:16.405130 master-0 kubenswrapper[4652]: I0216 17:37:16.405079 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-674d8b687-qj4fp" Feb 16 17:37:20.467703 master-0 kubenswrapper[4652]: I0216 17:37:20.467635 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" event={"ID":"66172e50-2e69-4ef3-a473-77370e8fc5a3","Type":"ContainerStarted","Data":"bd5a6eb5a129a6a0f73800609f2979f84a2480e52544e4a0cb7c954c9d1568c5"} Feb 16 17:37:20.468376 master-0 kubenswrapper[4652]: I0216 17:37:20.467900 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:20.469386 master-0 
kubenswrapper[4652]: I0216 17:37:20.469355 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw" event={"ID":"c88c8498-871b-4e56-9cc0-2e2d2b15121f","Type":"ContainerStarted","Data":"4587c246fe6b14463283bc41c8fc510353e8a2129033c5b8dbb6bca4ef7b280e"} Feb 16 17:37:20.471034 master-0 kubenswrapper[4652]: I0216 17:37:20.470997 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" event={"ID":"f41d32fc-081f-42e3-b36c-0fc722a925a0","Type":"ContainerStarted","Data":"2e5f895d02140d3ff3145b4be0f3cfca1749d9f9c5c73573bdb5d3c1c33b5de2"} Feb 16 17:37:20.472714 master-0 kubenswrapper[4652]: I0216 17:37:20.472688 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-l95mf" event={"ID":"1b0b713f-4437-4538-9d0d-3414e13cab1c","Type":"ContainerStarted","Data":"0720fd365e15ed686bcf86ec9bf920f7eab4bbe5970922bf61ffe7129a3ec5b7"} Feb 16 17:37:20.472829 master-0 kubenswrapper[4652]: I0216 17:37:20.472819 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:20.474450 master-0 kubenswrapper[4652]: I0216 17:37:20.474417 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" event={"ID":"b85e7917-0774-4fe3-87cf-b7a57d4d186e","Type":"ContainerStarted","Data":"e990f51a484a412c76499a301e9ae404f27d3882acdf60e0f8b61c839be2ce28"} Feb 16 17:37:20.493783 master-0 kubenswrapper[4652]: I0216 17:37:20.493678 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" podStartSLOduration=2.962774328 podStartE2EDuration="14.493659301s" podCreationTimestamp="2026-02-16 17:37:06 +0000 UTC" firstStartedPulling="2026-02-16 17:37:07.621532564 +0000 UTC m=+785.009701080" lastFinishedPulling="2026-02-16 17:37:19.152417537 +0000 UTC m=+796.540586053" observedRunningTime="2026-02-16 17:37:20.487000966 +0000 UTC m=+797.875169502" watchObservedRunningTime="2026-02-16 17:37:20.493659301 +0000 UTC m=+797.881827817" Feb 16 17:37:20.501412 master-0 kubenswrapper[4652]: I0216 17:37:20.501324 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-6h6dn" Feb 16 17:37:20.524501 master-0 kubenswrapper[4652]: I0216 17:37:20.524401 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-8w2jw" podStartSLOduration=3.300238908 podStartE2EDuration="15.524381168s" podCreationTimestamp="2026-02-16 17:37:05 +0000 UTC" firstStartedPulling="2026-02-16 17:37:06.757746548 +0000 UTC m=+784.145915084" lastFinishedPulling="2026-02-16 17:37:18.981888828 +0000 UTC m=+796.370057344" observedRunningTime="2026-02-16 17:37:20.518785001 +0000 UTC m=+797.906953517" watchObservedRunningTime="2026-02-16 17:37:20.524381168 +0000 UTC m=+797.912549684" Feb 16 17:37:20.550869 master-0 kubenswrapper[4652]: I0216 17:37:20.550782 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-f2v6m" podStartSLOduration=3.088800038 podStartE2EDuration="14.550761331s" podCreationTimestamp="2026-02-16 17:37:06 +0000 UTC" firstStartedPulling="2026-02-16 17:37:07.649408206 +0000 UTC m=+785.037576722" 
lastFinishedPulling="2026-02-16 17:37:19.111369489 +0000 UTC m=+796.499538015" observedRunningTime="2026-02-16 17:37:20.545741219 +0000 UTC m=+797.933909765" watchObservedRunningTime="2026-02-16 17:37:20.550761331 +0000 UTC m=+797.938929867" Feb 16 17:37:20.592738 master-0 kubenswrapper[4652]: I0216 17:37:20.582879 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-cf968959d-nlht4" podStartSLOduration=3.222877789 podStartE2EDuration="14.582859874s" podCreationTimestamp="2026-02-16 17:37:06 +0000 UTC" firstStartedPulling="2026-02-16 17:37:07.655808224 +0000 UTC m=+785.043976740" lastFinishedPulling="2026-02-16 17:37:19.015790309 +0000 UTC m=+796.403958825" observedRunningTime="2026-02-16 17:37:20.574915675 +0000 UTC m=+797.963084211" watchObservedRunningTime="2026-02-16 17:37:20.582859874 +0000 UTC m=+797.971028390" Feb 16 17:37:20.611434 master-0 kubenswrapper[4652]: I0216 17:37:20.611194 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-l95mf" podStartSLOduration=3.385033287 podStartE2EDuration="14.611176787s" podCreationTimestamp="2026-02-16 17:37:06 +0000 UTC" firstStartedPulling="2026-02-16 17:37:07.788457748 +0000 UTC m=+785.176626264" lastFinishedPulling="2026-02-16 17:37:19.014601248 +0000 UTC m=+796.402769764" observedRunningTime="2026-02-16 17:37:20.61013514 +0000 UTC m=+797.998303656" watchObservedRunningTime="2026-02-16 17:37:20.611176787 +0000 UTC m=+797.999345303" Feb 16 17:37:26.804549 master-0 kubenswrapper[4652]: I0216 17:37:26.804480 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-l95mf" Feb 16 17:37:36.103034 master-0 kubenswrapper[4652]: I0216 17:37:36.102978 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-85cbb58865-c6k59" Feb 16 17:37:45.212316 master-0 kubenswrapper[4652]: I0216 17:37:45.212204 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh"] Feb 16 17:37:45.226904 master-0 kubenswrapper[4652]: I0216 17:37:45.226732 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" Feb 16 17:37:45.229209 master-0 kubenswrapper[4652]: I0216 17:37:45.229169 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 16 17:37:45.237200 master-0 kubenswrapper[4652]: I0216 17:37:45.237161 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-tldzg"] Feb 16 17:37:45.245828 master-0 kubenswrapper[4652]: I0216 17:37:45.245642 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.247403 master-0 kubenswrapper[4652]: I0216 17:37:45.247366 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Feb 16 17:37:45.247680 master-0 kubenswrapper[4652]: I0216 17:37:45.247644 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Feb 16 17:37:45.251292 master-0 kubenswrapper[4652]: I0216 17:37:45.251218 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh"] Feb 16 17:37:45.282092 master-0 kubenswrapper[4652]: I0216 17:37:45.282034 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-frr-conf\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.282359 master-0 kubenswrapper[4652]: I0216 17:37:45.282098 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-frr-startup\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.282359 master-0 kubenswrapper[4652]: I0216 17:37:45.282233 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq8w4\" (UniqueName: \"kubernetes.io/projected/532504ab-9d7a-4b85-8e34-b3d69ddb3931-kube-api-access-xq8w4\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9dfh\" (UID: \"532504ab-9d7a-4b85-8e34-b3d69ddb3931\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" Feb 16 17:37:45.282442 master-0 kubenswrapper[4652]: I0216 17:37:45.282363 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/532504ab-9d7a-4b85-8e34-b3d69ddb3931-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9dfh\" (UID: \"532504ab-9d7a-4b85-8e34-b3d69ddb3931\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" Feb 16 17:37:45.282442 master-0 kubenswrapper[4652]: I0216 17:37:45.282402 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzsrw\" (UniqueName: \"kubernetes.io/projected/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-kube-api-access-wzsrw\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.282516 master-0 kubenswrapper[4652]: I0216 17:37:45.282472 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-reloader\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.282516 master-0 kubenswrapper[4652]: I0216 17:37:45.282509 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-metrics\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.282579 master-0 kubenswrapper[4652]: I0216 17:37:45.282551 4652 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-frr-sockets\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.282624 master-0 kubenswrapper[4652]: I0216 17:37:45.282612 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-metrics-certs\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.384139 master-0 kubenswrapper[4652]: I0216 17:37:45.384067 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-reloader\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.384139 master-0 kubenswrapper[4652]: I0216 17:37:45.384129 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-metrics\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.384470 master-0 kubenswrapper[4652]: I0216 17:37:45.384403 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-frr-sockets\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.384532 master-0 kubenswrapper[4652]: I0216 17:37:45.384483 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-metrics-certs\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.384532 master-0 kubenswrapper[4652]: I0216 17:37:45.384523 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-frr-conf\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.384628 master-0 kubenswrapper[4652]: I0216 17:37:45.384570 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-frr-startup\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.384628 master-0 kubenswrapper[4652]: E0216 17:37:45.384593 4652 secret.go:189] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 16 17:37:45.384720 master-0 kubenswrapper[4652]: E0216 17:37:45.384666 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-metrics-certs podName:e6c3fe44-4380-4dbc-8e61-6f85a1820c82 nodeName:}" failed. No retries permitted until 2026-02-16 17:37:45.884649138 +0000 UTC m=+823.272817654 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-metrics-certs") pod "frr-k8s-tldzg" (UID: "e6c3fe44-4380-4dbc-8e61-6f85a1820c82") : secret "frr-k8s-certs-secret" not found Feb 16 17:37:45.384720 master-0 kubenswrapper[4652]: I0216 17:37:45.384693 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq8w4\" (UniqueName: \"kubernetes.io/projected/532504ab-9d7a-4b85-8e34-b3d69ddb3931-kube-api-access-xq8w4\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9dfh\" (UID: \"532504ab-9d7a-4b85-8e34-b3d69ddb3931\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" Feb 16 17:37:45.384823 master-0 kubenswrapper[4652]: I0216 17:37:45.384769 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/532504ab-9d7a-4b85-8e34-b3d69ddb3931-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9dfh\" (UID: \"532504ab-9d7a-4b85-8e34-b3d69ddb3931\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" Feb 16 17:37:45.384823 master-0 kubenswrapper[4652]: I0216 17:37:45.384811 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzsrw\" (UniqueName: \"kubernetes.io/projected/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-kube-api-access-wzsrw\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.385915 master-0 kubenswrapper[4652]: I0216 17:37:45.385880 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-frr-sockets\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.385915 master-0 kubenswrapper[4652]: I0216 17:37:45.385862 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-reloader\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.386035 master-0 kubenswrapper[4652]: I0216 17:37:45.385973 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-metrics\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.386172 master-0 kubenswrapper[4652]: I0216 17:37:45.386141 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-frr-conf\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.386914 master-0 kubenswrapper[4652]: I0216 17:37:45.386881 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-frr-startup\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.388270 master-0 kubenswrapper[4652]: I0216 17:37:45.388214 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/532504ab-9d7a-4b85-8e34-b3d69ddb3931-cert\") pod 
\"frr-k8s-webhook-server-78b44bf5bb-h9dfh\" (UID: \"532504ab-9d7a-4b85-8e34-b3d69ddb3931\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" Feb 16 17:37:45.493378 master-0 kubenswrapper[4652]: I0216 17:37:45.490420 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzsrw\" (UniqueName: \"kubernetes.io/projected/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-kube-api-access-wzsrw\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg" Feb 16 17:37:45.493378 master-0 kubenswrapper[4652]: I0216 17:37:45.491704 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq8w4\" (UniqueName: \"kubernetes.io/projected/532504ab-9d7a-4b85-8e34-b3d69ddb3931-kube-api-access-xq8w4\") pod \"frr-k8s-webhook-server-78b44bf5bb-h9dfh\" (UID: \"532504ab-9d7a-4b85-8e34-b3d69ddb3931\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" Feb 16 17:37:45.541424 master-0 kubenswrapper[4652]: I0216 17:37:45.541371 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" Feb 16 17:37:45.545468 master-0 kubenswrapper[4652]: I0216 17:37:45.545417 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fcwq4"] Feb 16 17:37:45.547017 master-0 kubenswrapper[4652]: I0216 17:37:45.546978 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fcwq4" Feb 16 17:37:45.551269 master-0 kubenswrapper[4652]: I0216 17:37:45.550878 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 16 17:37:45.551269 master-0 kubenswrapper[4652]: I0216 17:37:45.551141 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 16 17:37:45.556786 master-0 kubenswrapper[4652]: I0216 17:37:45.554084 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 16 17:37:45.583319 master-0 kubenswrapper[4652]: I0216 17:37:45.583275 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-th2nx"] Feb 16 17:37:45.587822 master-0 kubenswrapper[4652]: I0216 17:37:45.587770 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-th2nx" Feb 16 17:37:45.589774 master-0 kubenswrapper[4652]: I0216 17:37:45.589734 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 16 17:37:45.591573 master-0 kubenswrapper[4652]: I0216 17:37:45.591517 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0cd1cce4-1306-4329-bf75-80e1b3667809-metallb-excludel2\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4" Feb 16 17:37:45.591644 master-0 kubenswrapper[4652]: I0216 17:37:45.591608 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-metrics-certs\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4" Feb 16 17:37:45.591688 master-0 kubenswrapper[4652]: I0216 17:37:45.591673 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-memberlist\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4" Feb 16 17:37:45.591725 master-0 kubenswrapper[4652]: I0216 17:37:45.591699 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcvkp\" (UniqueName: \"kubernetes.io/projected/0cd1cce4-1306-4329-bf75-80e1b3667809-kube-api-access-mcvkp\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4" Feb 16 17:37:45.599293 master-0 kubenswrapper[4652]: I0216 17:37:45.599197 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-th2nx"] Feb 16 17:37:45.693699 master-0 kubenswrapper[4652]: I0216 17:37:45.693655 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-memberlist\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4" Feb 16 17:37:45.694314 master-0 kubenswrapper[4652]: E0216 17:37:45.693858 4652 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 17:37:45.694399 master-0 kubenswrapper[4652]: I0216 17:37:45.694230 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcvkp\" (UniqueName: \"kubernetes.io/projected/0cd1cce4-1306-4329-bf75-80e1b3667809-kube-api-access-mcvkp\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4" Feb 16 17:37:45.694488 master-0 kubenswrapper[4652]: E0216 17:37:45.694461 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-memberlist podName:0cd1cce4-1306-4329-bf75-80e1b3667809 nodeName:}" failed. No retries permitted until 2026-02-16 17:37:46.194348359 +0000 UTC m=+823.582516865 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-memberlist") pod "speaker-fcwq4" (UID: "0cd1cce4-1306-4329-bf75-80e1b3667809") : secret "metallb-memberlist" not found
Feb 16 17:37:45.696147 master-0 kubenswrapper[4652]: I0216 17:37:45.695758 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/514984df-7910-433f-ad1e-b5761b23473f-metrics-certs\") pod \"controller-69bbfbf88f-th2nx\" (UID: \"514984df-7910-433f-ad1e-b5761b23473f\") " pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.696147 master-0 kubenswrapper[4652]: I0216 17:37:45.695944 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0cd1cce4-1306-4329-bf75-80e1b3667809-metallb-excludel2\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4"
Feb 16 17:37:45.696147 master-0 kubenswrapper[4652]: I0216 17:37:45.696019 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8xl8\" (UniqueName: \"kubernetes.io/projected/514984df-7910-433f-ad1e-b5761b23473f-kube-api-access-x8xl8\") pod \"controller-69bbfbf88f-th2nx\" (UID: \"514984df-7910-433f-ad1e-b5761b23473f\") " pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.696147 master-0 kubenswrapper[4652]: I0216 17:37:45.696066 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-metrics-certs\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4"
Feb 16 17:37:45.696529 master-0 kubenswrapper[4652]: I0216 17:37:45.696511 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/514984df-7910-433f-ad1e-b5761b23473f-cert\") pod \"controller-69bbfbf88f-th2nx\" (UID: \"514984df-7910-433f-ad1e-b5761b23473f\") " pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.697807 master-0 kubenswrapper[4652]: I0216 17:37:45.697772 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/0cd1cce4-1306-4329-bf75-80e1b3667809-metallb-excludel2\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4"
Feb 16 17:37:45.700497 master-0 kubenswrapper[4652]: I0216 17:37:45.700434 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-metrics-certs\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4"
Feb 16 17:37:45.712176 master-0 kubenswrapper[4652]: I0216 17:37:45.712114 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcvkp\" (UniqueName: \"kubernetes.io/projected/0cd1cce4-1306-4329-bf75-80e1b3667809-kube-api-access-mcvkp\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4"
Feb 16 17:37:45.798467 master-0 kubenswrapper[4652]: I0216 17:37:45.798371 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/514984df-7910-433f-ad1e-b5761b23473f-cert\") pod \"controller-69bbfbf88f-th2nx\" (UID: \"514984df-7910-433f-ad1e-b5761b23473f\") " pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.798824 master-0 kubenswrapper[4652]: I0216 17:37:45.798800 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/514984df-7910-433f-ad1e-b5761b23473f-metrics-certs\") pod \"controller-69bbfbf88f-th2nx\" (UID: \"514984df-7910-433f-ad1e-b5761b23473f\") " pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.799026 master-0 kubenswrapper[4652]: I0216 17:37:45.799007 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8xl8\" (UniqueName: \"kubernetes.io/projected/514984df-7910-433f-ad1e-b5761b23473f-kube-api-access-x8xl8\") pod \"controller-69bbfbf88f-th2nx\" (UID: \"514984df-7910-433f-ad1e-b5761b23473f\") " pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.799989 master-0 kubenswrapper[4652]: I0216 17:37:45.799921 4652 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 17:37:45.802628 master-0 kubenswrapper[4652]: I0216 17:37:45.802598 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/514984df-7910-433f-ad1e-b5761b23473f-metrics-certs\") pod \"controller-69bbfbf88f-th2nx\" (UID: \"514984df-7910-433f-ad1e-b5761b23473f\") " pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.810824 master-0 kubenswrapper[4652]: I0216 17:37:45.810781 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/514984df-7910-433f-ad1e-b5761b23473f-cert\") pod \"controller-69bbfbf88f-th2nx\" (UID: \"514984df-7910-433f-ad1e-b5761b23473f\") " pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.813604 master-0 kubenswrapper[4652]: I0216 17:37:45.813577 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8xl8\" (UniqueName: \"kubernetes.io/projected/514984df-7910-433f-ad1e-b5761b23473f-kube-api-access-x8xl8\") pod \"controller-69bbfbf88f-th2nx\" (UID: \"514984df-7910-433f-ad1e-b5761b23473f\") " pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.900640 master-0 kubenswrapper[4652]: I0216 17:37:45.900524 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-metrics-certs\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg"
Feb 16 17:37:45.903443 master-0 kubenswrapper[4652]: I0216 17:37:45.903410 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6c3fe44-4380-4dbc-8e61-6f85a1820c82-metrics-certs\") pod \"frr-k8s-tldzg\" (UID: \"e6c3fe44-4380-4dbc-8e61-6f85a1820c82\") " pod="metallb-system/frr-k8s-tldzg"
Feb 16 17:37:45.939548 master-0 kubenswrapper[4652]: I0216 17:37:45.939425 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:37:45.983271 master-0 kubenswrapper[4652]: I0216 17:37:45.983194 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh"]
Feb 16 17:37:45.983895 master-0 kubenswrapper[4652]: W0216 17:37:45.983856 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod532504ab_9d7a_4b85_8e34_b3d69ddb3931.slice/crio-2a274d060c0523b2dcab017fe0c272f674ddc68a1cfbf43d655d1b356f20c5f3 WatchSource:0}: Error finding container 2a274d060c0523b2dcab017fe0c272f674ddc68a1cfbf43d655d1b356f20c5f3: Status 404 returned error can't find the container with id 2a274d060c0523b2dcab017fe0c272f674ddc68a1cfbf43d655d1b356f20c5f3
Feb 16 17:37:46.162541 master-0 kubenswrapper[4652]: I0216 17:37:46.162487 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-tldzg"
Feb 16 17:37:46.205622 master-0 kubenswrapper[4652]: I0216 17:37:46.205557 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-memberlist\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4"
Feb 16 17:37:46.205843 master-0 kubenswrapper[4652]: E0216 17:37:46.205726 4652 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Feb 16 17:37:46.205843 master-0 kubenswrapper[4652]: E0216 17:37:46.205790 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-memberlist podName:0cd1cce4-1306-4329-bf75-80e1b3667809 nodeName:}" failed. No retries permitted until 2026-02-16 17:37:47.205772653 +0000 UTC m=+824.593941169 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-memberlist") pod "speaker-fcwq4" (UID: "0cd1cce4-1306-4329-bf75-80e1b3667809") : secret "metallb-memberlist" not found
Feb 16 17:37:46.420886 master-0 kubenswrapper[4652]: I0216 17:37:46.419680 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-th2nx"]
Feb 16 17:37:46.425688 master-0 kubenswrapper[4652]: W0216 17:37:46.425535 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod514984df_7910_433f_ad1e_b5761b23473f.slice/crio-5aa447803897aa3d6c264fc0b316f8b396b7aa6660282cd58b1b4640c8a16b69 WatchSource:0}: Error finding container 5aa447803897aa3d6c264fc0b316f8b396b7aa6660282cd58b1b4640c8a16b69: Status 404 returned error can't find the container with id 5aa447803897aa3d6c264fc0b316f8b396b7aa6660282cd58b1b4640c8a16b69
Feb 16 17:37:46.690101 master-0 kubenswrapper[4652]: I0216 17:37:46.689965 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerStarted","Data":"9021381c9f1ebfca9c0496675b9c46c1904f021299c1abd42da3934439e3544a"}
Feb 16 17:37:46.691394 master-0 kubenswrapper[4652]: I0216 17:37:46.691373 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-th2nx" event={"ID":"514984df-7910-433f-ad1e-b5761b23473f","Type":"ContainerStarted","Data":"2843def630d9bac1414215132a53a29e729a57d5f2a8b7c0693c225089ea207b"}
Feb 16 17:37:46.691496 master-0 kubenswrapper[4652]: I0216 17:37:46.691399 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-th2nx" event={"ID":"514984df-7910-433f-ad1e-b5761b23473f","Type":"ContainerStarted","Data":"5aa447803897aa3d6c264fc0b316f8b396b7aa6660282cd58b1b4640c8a16b69"}
Feb 16 17:37:46.692688 master-0 kubenswrapper[4652]: I0216 17:37:46.692643 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" event={"ID":"532504ab-9d7a-4b85-8e34-b3d69ddb3931","Type":"ContainerStarted","Data":"2a274d060c0523b2dcab017fe0c272f674ddc68a1cfbf43d655d1b356f20c5f3"}
Feb 16 17:37:47.226513 master-0 kubenswrapper[4652]: I0216 17:37:47.223627 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-memberlist\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4"
Feb 16 17:37:47.227476 master-0 kubenswrapper[4652]: I0216 17:37:47.227444 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/0cd1cce4-1306-4329-bf75-80e1b3667809-memberlist\") pod \"speaker-fcwq4\" (UID: \"0cd1cce4-1306-4329-bf75-80e1b3667809\") " pod="metallb-system/speaker-fcwq4"
Feb 16 17:37:47.426277 master-0 kubenswrapper[4652]: I0216 17:37:47.424097 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fcwq4"
Feb 16 17:37:47.473141 master-0 kubenswrapper[4652]: I0216 17:37:47.472497 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-xsplv"]
Feb 16 17:37:47.475889 master-0 kubenswrapper[4652]: I0216 17:37:47.474207 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv"
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv" Feb 16 17:37:47.481409 master-0 kubenswrapper[4652]: I0216 17:37:47.480569 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9"] Feb 16 17:37:47.484806 master-0 kubenswrapper[4652]: I0216 17:37:47.484304 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" Feb 16 17:37:47.486617 master-0 kubenswrapper[4652]: I0216 17:37:47.485973 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 17:37:47.527176 master-0 kubenswrapper[4652]: I0216 17:37:47.527123 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-xsplv"] Feb 16 17:37:47.529721 master-0 kubenswrapper[4652]: I0216 17:37:47.529677 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/847810b1-5d52-414e-8c6e-46bfca98393a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-jhjp9\" (UID: \"847810b1-5d52-414e-8c6e-46bfca98393a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" Feb 16 17:37:47.529803 master-0 kubenswrapper[4652]: I0216 17:37:47.529750 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ptpv\" (UniqueName: \"kubernetes.io/projected/e6ac7b0a-388f-45dc-b367-4067ea181a77-kube-api-access-2ptpv\") pod \"nmstate-metrics-58c85c668d-xsplv\" (UID: \"e6ac7b0a-388f-45dc-b367-4067ea181a77\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv" Feb 16 17:37:47.529803 master-0 kubenswrapper[4652]: I0216 17:37:47.529776 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74fbm\" (UniqueName: \"kubernetes.io/projected/847810b1-5d52-414e-8c6e-46bfca98393a-kube-api-access-74fbm\") pod \"nmstate-webhook-866bcb46dc-jhjp9\" (UID: \"847810b1-5d52-414e-8c6e-46bfca98393a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" Feb 16 17:37:47.540613 master-0 kubenswrapper[4652]: I0216 17:37:47.540553 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-pwpz5"] Feb 16 17:37:47.542346 master-0 kubenswrapper[4652]: I0216 17:37:47.541896 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.551786 master-0 kubenswrapper[4652]: I0216 17:37:47.551670 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9"] Feb 16 17:37:47.631971 master-0 kubenswrapper[4652]: I0216 17:37:47.631703 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wtxl\" (UniqueName: \"kubernetes.io/projected/3e3aaef8-af2b-403e-b884-e9052dc6642a-kube-api-access-4wtxl\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.631971 master-0 kubenswrapper[4652]: I0216 17:37:47.631762 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/847810b1-5d52-414e-8c6e-46bfca98393a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-jhjp9\" (UID: \"847810b1-5d52-414e-8c6e-46bfca98393a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" Feb 16 17:37:47.631971 master-0 kubenswrapper[4652]: I0216 17:37:47.631785 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3e3aaef8-af2b-403e-b884-e9052dc6642a-dbus-socket\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.631971 master-0 kubenswrapper[4652]: I0216 17:37:47.631808 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3e3aaef8-af2b-403e-b884-e9052dc6642a-nmstate-lock\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.631971 master-0 kubenswrapper[4652]: I0216 17:37:47.631828 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3e3aaef8-af2b-403e-b884-e9052dc6642a-ovs-socket\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.631971 master-0 kubenswrapper[4652]: I0216 17:37:47.631849 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ptpv\" (UniqueName: \"kubernetes.io/projected/e6ac7b0a-388f-45dc-b367-4067ea181a77-kube-api-access-2ptpv\") pod \"nmstate-metrics-58c85c668d-xsplv\" (UID: \"e6ac7b0a-388f-45dc-b367-4067ea181a77\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv" Feb 16 17:37:47.631971 master-0 kubenswrapper[4652]: I0216 17:37:47.631867 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74fbm\" (UniqueName: \"kubernetes.io/projected/847810b1-5d52-414e-8c6e-46bfca98393a-kube-api-access-74fbm\") pod \"nmstate-webhook-866bcb46dc-jhjp9\" (UID: \"847810b1-5d52-414e-8c6e-46bfca98393a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" Feb 16 17:37:47.637539 master-0 kubenswrapper[4652]: I0216 17:37:47.636204 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/847810b1-5d52-414e-8c6e-46bfca98393a-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-jhjp9\" (UID: \"847810b1-5d52-414e-8c6e-46bfca98393a\") " 
pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" Feb 16 17:37:47.674930 master-0 kubenswrapper[4652]: I0216 17:37:47.674882 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74fbm\" (UniqueName: \"kubernetes.io/projected/847810b1-5d52-414e-8c6e-46bfca98393a-kube-api-access-74fbm\") pod \"nmstate-webhook-866bcb46dc-jhjp9\" (UID: \"847810b1-5d52-414e-8c6e-46bfca98393a\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" Feb 16 17:37:47.675168 master-0 kubenswrapper[4652]: I0216 17:37:47.674962 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm"] Feb 16 17:37:47.676674 master-0 kubenswrapper[4652]: I0216 17:37:47.676447 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.693036 master-0 kubenswrapper[4652]: I0216 17:37:47.692938 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 17:37:47.693382 master-0 kubenswrapper[4652]: I0216 17:37:47.693361 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 17:37:47.703347 master-0 kubenswrapper[4652]: I0216 17:37:47.703288 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ptpv\" (UniqueName: \"kubernetes.io/projected/e6ac7b0a-388f-45dc-b367-4067ea181a77-kube-api-access-2ptpv\") pod \"nmstate-metrics-58c85c668d-xsplv\" (UID: \"e6ac7b0a-388f-45dc-b367-4067ea181a77\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv" Feb 16 17:37:47.720308 master-0 kubenswrapper[4652]: I0216 17:37:47.720235 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fcwq4" event={"ID":"0cd1cce4-1306-4329-bf75-80e1b3667809","Type":"ContainerStarted","Data":"4d6ce0dc12fa6776ff86f9d05cca23660d37f6ca47f92cc4965dcaf1fd2333ad"} Feb 16 17:37:47.722268 master-0 kubenswrapper[4652]: I0216 17:37:47.722229 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm"] Feb 16 17:37:47.733415 master-0 kubenswrapper[4652]: I0216 17:37:47.733285 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wtxl\" (UniqueName: \"kubernetes.io/projected/3e3aaef8-af2b-403e-b884-e9052dc6642a-kube-api-access-4wtxl\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.733415 master-0 kubenswrapper[4652]: I0216 17:37:47.733342 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/723775ad-ae81-4016-b1df-4cb8d44df7fa-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-l25gm\" (UID: \"723775ad-ae81-4016-b1df-4cb8d44df7fa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.733787 master-0 kubenswrapper[4652]: I0216 17:37:47.733745 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3e3aaef8-af2b-403e-b884-e9052dc6642a-dbus-socket\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.733856 master-0 kubenswrapper[4652]: I0216 17:37:47.733821 4652 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3e3aaef8-af2b-403e-b884-e9052dc6642a-nmstate-lock\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.733916 master-0 kubenswrapper[4652]: I0216 17:37:47.733862 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3e3aaef8-af2b-403e-b884-e9052dc6642a-ovs-socket\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.733968 master-0 kubenswrapper[4652]: I0216 17:37:47.733919 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/723775ad-ae81-4016-b1df-4cb8d44df7fa-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-l25gm\" (UID: \"723775ad-ae81-4016-b1df-4cb8d44df7fa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.734022 master-0 kubenswrapper[4652]: I0216 17:37:47.734007 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcgdd\" (UniqueName: \"kubernetes.io/projected/723775ad-ae81-4016-b1df-4cb8d44df7fa-kube-api-access-vcgdd\") pod \"nmstate-console-plugin-5c78fc5d65-l25gm\" (UID: \"723775ad-ae81-4016-b1df-4cb8d44df7fa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.734575 master-0 kubenswrapper[4652]: I0216 17:37:47.734551 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3e3aaef8-af2b-403e-b884-e9052dc6642a-dbus-socket\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.734646 master-0 kubenswrapper[4652]: I0216 17:37:47.734613 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3e3aaef8-af2b-403e-b884-e9052dc6642a-nmstate-lock\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.734646 master-0 kubenswrapper[4652]: I0216 17:37:47.734641 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3e3aaef8-af2b-403e-b884-e9052dc6642a-ovs-socket\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.760594 master-0 kubenswrapper[4652]: I0216 17:37:47.760551 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wtxl\" (UniqueName: \"kubernetes.io/projected/3e3aaef8-af2b-403e-b884-e9052dc6642a-kube-api-access-4wtxl\") pod \"nmstate-handler-pwpz5\" (UID: \"3e3aaef8-af2b-403e-b884-e9052dc6642a\") " pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:47.835759 master-0 kubenswrapper[4652]: I0216 17:37:47.835708 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/723775ad-ae81-4016-b1df-4cb8d44df7fa-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-l25gm\" (UID: \"723775ad-ae81-4016-b1df-4cb8d44df7fa\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.836003 master-0 kubenswrapper[4652]: I0216 17:37:47.835909 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/723775ad-ae81-4016-b1df-4cb8d44df7fa-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-l25gm\" (UID: \"723775ad-ae81-4016-b1df-4cb8d44df7fa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.836003 master-0 kubenswrapper[4652]: I0216 17:37:47.835972 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcgdd\" (UniqueName: \"kubernetes.io/projected/723775ad-ae81-4016-b1df-4cb8d44df7fa-kube-api-access-vcgdd\") pod \"nmstate-console-plugin-5c78fc5d65-l25gm\" (UID: \"723775ad-ae81-4016-b1df-4cb8d44df7fa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.838030 master-0 kubenswrapper[4652]: I0216 17:37:47.837987 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/723775ad-ae81-4016-b1df-4cb8d44df7fa-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-l25gm\" (UID: \"723775ad-ae81-4016-b1df-4cb8d44df7fa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.850735 master-0 kubenswrapper[4652]: I0216 17:37:47.847307 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/723775ad-ae81-4016-b1df-4cb8d44df7fa-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-l25gm\" (UID: \"723775ad-ae81-4016-b1df-4cb8d44df7fa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.868790 master-0 kubenswrapper[4652]: I0216 17:37:47.868734 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-857c4d8798-hz7wp"] Feb 16 17:37:47.879552 master-0 kubenswrapper[4652]: I0216 17:37:47.876917 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:47.882072 master-0 kubenswrapper[4652]: I0216 17:37:47.881510 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcgdd\" (UniqueName: \"kubernetes.io/projected/723775ad-ae81-4016-b1df-4cb8d44df7fa-kube-api-access-vcgdd\") pod \"nmstate-console-plugin-5c78fc5d65-l25gm\" (UID: \"723775ad-ae81-4016-b1df-4cb8d44df7fa\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:47.884869 master-0 kubenswrapper[4652]: I0216 17:37:47.884810 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv" Feb 16 17:37:47.885750 master-0 kubenswrapper[4652]: I0216 17:37:47.885688 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-857c4d8798-hz7wp"] Feb 16 17:37:47.898367 master-0 kubenswrapper[4652]: I0216 17:37:47.898316 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" Feb 16 17:37:47.942268 master-0 kubenswrapper[4652]: I0216 17:37:47.937291 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-service-ca\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:47.942268 master-0 kubenswrapper[4652]: I0216 17:37:47.937383 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-console-config\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:47.942268 master-0 kubenswrapper[4652]: I0216 17:37:47.937407 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15455066-5878-4fc0-afb9-e94fbb57028d-console-oauth-config\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:47.942268 master-0 kubenswrapper[4652]: I0216 17:37:47.937503 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15455066-5878-4fc0-afb9-e94fbb57028d-console-serving-cert\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:47.942268 master-0 kubenswrapper[4652]: I0216 17:37:47.937527 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t9b9\" (UniqueName: \"kubernetes.io/projected/15455066-5878-4fc0-afb9-e94fbb57028d-kube-api-access-4t9b9\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:47.942268 master-0 kubenswrapper[4652]: I0216 17:37:47.937550 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-oauth-serving-cert\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:47.942268 master-0 kubenswrapper[4652]: I0216 17:37:47.937639 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-trusted-ca-bundle\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:47.942268 master-0 kubenswrapper[4652]: I0216 17:37:47.939818 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.028657 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.038582 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-trusted-ca-bundle\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.038636 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-service-ca\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.038661 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-console-config\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.038679 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15455066-5878-4fc0-afb9-e94fbb57028d-console-oauth-config\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.038741 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15455066-5878-4fc0-afb9-e94fbb57028d-console-serving-cert\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.038765 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t9b9\" (UniqueName: \"kubernetes.io/projected/15455066-5878-4fc0-afb9-e94fbb57028d-kube-api-access-4t9b9\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.038786 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-oauth-serving-cert\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.039669 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-oauth-serving-cert\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.041187 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-trusted-ca-bundle\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.041754 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-console-config\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.041838 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15455066-5878-4fc0-afb9-e94fbb57028d-service-ca\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.046947 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/15455066-5878-4fc0-afb9-e94fbb57028d-console-oauth-config\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.047241 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/15455066-5878-4fc0-afb9-e94fbb57028d-console-serving-cert\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.108510 master-0 kubenswrapper[4652]: I0216 17:37:48.058364 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t9b9\" (UniqueName: \"kubernetes.io/projected/15455066-5878-4fc0-afb9-e94fbb57028d-kube-api-access-4t9b9\") pod \"console-857c4d8798-hz7wp\" (UID: \"15455066-5878-4fc0-afb9-e94fbb57028d\") " pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.206631 master-0 kubenswrapper[4652]: I0216 17:37:48.199022 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-857c4d8798-hz7wp" Feb 16 17:37:48.411023 master-0 kubenswrapper[4652]: I0216 17:37:48.410953 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-xsplv"] Feb 16 17:37:48.426645 master-0 kubenswrapper[4652]: W0216 17:37:48.426581 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6ac7b0a_388f_45dc_b367_4067ea181a77.slice/crio-ed83fd3ebb35ee3f679859d7e5b7f35380f6ef1e1ef1cd709f16a54e39c0f44c WatchSource:0}: Error finding container ed83fd3ebb35ee3f679859d7e5b7f35380f6ef1e1ef1cd709f16a54e39c0f44c: Status 404 returned error can't find the container with id ed83fd3ebb35ee3f679859d7e5b7f35380f6ef1e1ef1cd709f16a54e39c0f44c Feb 16 17:37:48.493007 master-0 kubenswrapper[4652]: I0216 17:37:48.492956 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9"] Feb 16 17:37:48.494458 master-0 kubenswrapper[4652]: W0216 17:37:48.494411 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod847810b1_5d52_414e_8c6e_46bfca98393a.slice/crio-f893ebd17668266737bc861ec927ddb7193c9d752add30a200afb5168d601877 WatchSource:0}: Error finding container f893ebd17668266737bc861ec927ddb7193c9d752add30a200afb5168d601877: Status 404 returned error can't find the container with id f893ebd17668266737bc861ec927ddb7193c9d752add30a200afb5168d601877 Feb 16 17:37:48.590616 master-0 kubenswrapper[4652]: W0216 17:37:48.590543 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod723775ad_ae81_4016_b1df_4cb8d44df7fa.slice/crio-984c2dbf72ef674bafce6e3a0480a51599ee987cebd6a093a9a0ecde4d318429 WatchSource:0}: Error finding container 984c2dbf72ef674bafce6e3a0480a51599ee987cebd6a093a9a0ecde4d318429: Status 404 returned error can't find the container with id 984c2dbf72ef674bafce6e3a0480a51599ee987cebd6a093a9a0ecde4d318429 Feb 16 17:37:48.591792 master-0 kubenswrapper[4652]: I0216 17:37:48.591183 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm"] Feb 16 17:37:48.700925 master-0 kubenswrapper[4652]: I0216 17:37:48.700871 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-857c4d8798-hz7wp"] Feb 16 17:37:48.703222 master-0 kubenswrapper[4652]: W0216 17:37:48.703181 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15455066_5878_4fc0_afb9_e94fbb57028d.slice/crio-dc78162464db70bcd06ffb1b3d41a929c73e6c032abc93de4282a6958e5a18ea WatchSource:0}: Error finding container dc78162464db70bcd06ffb1b3d41a929c73e6c032abc93de4282a6958e5a18ea: Status 404 returned error can't find the container with id dc78162464db70bcd06ffb1b3d41a929c73e6c032abc93de4282a6958e5a18ea Feb 16 17:37:48.729146 master-0 kubenswrapper[4652]: I0216 17:37:48.729076 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pwpz5" event={"ID":"3e3aaef8-af2b-403e-b884-e9052dc6642a","Type":"ContainerStarted","Data":"9446367909b04d35eddcfb81ea2e2163f3196a3b0452fe4ccbbb052cd518da21"} Feb 16 17:37:48.730878 master-0 kubenswrapper[4652]: I0216 17:37:48.730835 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" 
event={"ID":"723775ad-ae81-4016-b1df-4cb8d44df7fa","Type":"ContainerStarted","Data":"984c2dbf72ef674bafce6e3a0480a51599ee987cebd6a093a9a0ecde4d318429"} Feb 16 17:37:48.732477 master-0 kubenswrapper[4652]: I0216 17:37:48.732324 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" event={"ID":"847810b1-5d52-414e-8c6e-46bfca98393a","Type":"ContainerStarted","Data":"f893ebd17668266737bc861ec927ddb7193c9d752add30a200afb5168d601877"} Feb 16 17:37:48.734371 master-0 kubenswrapper[4652]: I0216 17:37:48.734339 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fcwq4" event={"ID":"0cd1cce4-1306-4329-bf75-80e1b3667809","Type":"ContainerStarted","Data":"04e68c09e80c2aada3fd7a47848e6dfb01e9d103c64cb02fe20b3c14ae76f820"} Feb 16 17:37:48.735710 master-0 kubenswrapper[4652]: I0216 17:37:48.735676 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv" event={"ID":"e6ac7b0a-388f-45dc-b367-4067ea181a77","Type":"ContainerStarted","Data":"ed83fd3ebb35ee3f679859d7e5b7f35380f6ef1e1ef1cd709f16a54e39c0f44c"} Feb 16 17:37:48.736888 master-0 kubenswrapper[4652]: I0216 17:37:48.736856 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-857c4d8798-hz7wp" event={"ID":"15455066-5878-4fc0-afb9-e94fbb57028d","Type":"ContainerStarted","Data":"dc78162464db70bcd06ffb1b3d41a929c73e6c032abc93de4282a6958e5a18ea"} Feb 16 17:37:49.744553 master-0 kubenswrapper[4652]: I0216 17:37:49.744476 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-857c4d8798-hz7wp" event={"ID":"15455066-5878-4fc0-afb9-e94fbb57028d","Type":"ContainerStarted","Data":"13aaff67667c2d05ac840e8a5d6e1a05f1935fe8b325ed975f3e45bd5a59ef80"} Feb 16 17:37:49.782663 master-0 kubenswrapper[4652]: I0216 17:37:49.782556 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-857c4d8798-hz7wp" podStartSLOduration=2.782532913 podStartE2EDuration="2.782532913s" podCreationTimestamp="2026-02-16 17:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:37:49.774767679 +0000 UTC m=+827.162936205" watchObservedRunningTime="2026-02-16 17:37:49.782532913 +0000 UTC m=+827.170701429" Feb 16 17:37:50.793303 master-0 kubenswrapper[4652]: I0216 17:37:50.793224 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fcwq4" event={"ID":"0cd1cce4-1306-4329-bf75-80e1b3667809","Type":"ContainerStarted","Data":"c3b2614eb12be02c187ad80d0c73e67c69165a38afde5fa4168b6b3aa1c59533"} Feb 16 17:37:50.793976 master-0 kubenswrapper[4652]: I0216 17:37:50.793296 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-th2nx" event={"ID":"514984df-7910-433f-ad1e-b5761b23473f","Type":"ContainerStarted","Data":"5016818f44dc0f9ed2d29cd64a53f6b30fb1f5a2b906286d8dfc7f3f2a283853"} Feb 16 17:37:50.793976 master-0 kubenswrapper[4652]: I0216 17:37:50.793341 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fcwq4" Feb 16 17:37:50.793976 master-0 kubenswrapper[4652]: I0216 17:37:50.793354 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-th2nx" Feb 16 17:37:50.806908 master-0 kubenswrapper[4652]: I0216 17:37:50.804214 4652 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="metallb-system/speaker-fcwq4" podStartSLOduration=4.1194655000000004 podStartE2EDuration="5.80419412s" podCreationTimestamp="2026-02-16 17:37:45 +0000 UTC" firstStartedPulling="2026-02-16 17:37:48.047677701 +0000 UTC m=+825.435846217" lastFinishedPulling="2026-02-16 17:37:49.732406321 +0000 UTC m=+827.120574837" observedRunningTime="2026-02-16 17:37:50.797461112 +0000 UTC m=+828.185629628" watchObservedRunningTime="2026-02-16 17:37:50.80419412 +0000 UTC m=+828.192362636" Feb 16 17:37:50.827614 master-0 kubenswrapper[4652]: I0216 17:37:50.826418 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-th2nx" podStartSLOduration=2.693481185 podStartE2EDuration="5.826394515s" podCreationTimestamp="2026-02-16 17:37:45 +0000 UTC" firstStartedPulling="2026-02-16 17:37:46.599495941 +0000 UTC m=+823.987664457" lastFinishedPulling="2026-02-16 17:37:49.732409271 +0000 UTC m=+827.120577787" observedRunningTime="2026-02-16 17:37:50.819943435 +0000 UTC m=+828.208111961" watchObservedRunningTime="2026-02-16 17:37:50.826394515 +0000 UTC m=+828.214563031" Feb 16 17:37:54.828854 master-0 kubenswrapper[4652]: I0216 17:37:54.828785 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv" event={"ID":"e6ac7b0a-388f-45dc-b367-4067ea181a77","Type":"ContainerStarted","Data":"6d047dae5d4706874c4575824586ee336604e68c655cc1462bb9024963fe6abf"} Feb 16 17:37:54.828854 master-0 kubenswrapper[4652]: I0216 17:37:54.828845 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv" event={"ID":"e6ac7b0a-388f-45dc-b367-4067ea181a77","Type":"ContainerStarted","Data":"6fab419e69a3177a8be83a42366603d20b4fea1f859a9738c04c13cb00b725f9"} Feb 16 17:37:54.830428 master-0 kubenswrapper[4652]: I0216 17:37:54.830400 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-pwpz5" event={"ID":"3e3aaef8-af2b-403e-b884-e9052dc6642a","Type":"ContainerStarted","Data":"268ec266f7dc5aa67942d05146d401f5d69c2db99af9e85d74a247f82484cdf6"} Feb 16 17:37:54.830662 master-0 kubenswrapper[4652]: I0216 17:37:54.830609 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-pwpz5" Feb 16 17:37:54.832104 master-0 kubenswrapper[4652]: I0216 17:37:54.832029 4652 generic.go:334] "Generic (PLEG): container finished" podID="e6c3fe44-4380-4dbc-8e61-6f85a1820c82" containerID="09d7413649bc3b0eebc83e291b459bcf48827907e46044dec5169871fd46efd1" exitCode=0 Feb 16 17:37:54.832184 master-0 kubenswrapper[4652]: I0216 17:37:54.832102 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerDied","Data":"09d7413649bc3b0eebc83e291b459bcf48827907e46044dec5169871fd46efd1"} Feb 16 17:37:54.836357 master-0 kubenswrapper[4652]: I0216 17:37:54.835677 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" event={"ID":"723775ad-ae81-4016-b1df-4cb8d44df7fa","Type":"ContainerStarted","Data":"a17b128b12d84c41323932e50ee336379e6f916e50d7ffeca0b728eee2574ecf"} Feb 16 17:37:54.837610 master-0 kubenswrapper[4652]: I0216 17:37:54.837578 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" 
event={"ID":"847810b1-5d52-414e-8c6e-46bfca98393a","Type":"ContainerStarted","Data":"808bbbf5264b2b36fa182638b349251aee3c00f08f5132105988fbfa12ebe4a6"} Feb 16 17:37:54.837714 master-0 kubenswrapper[4652]: I0216 17:37:54.837628 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" Feb 16 17:37:54.840706 master-0 kubenswrapper[4652]: I0216 17:37:54.840664 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" event={"ID":"532504ab-9d7a-4b85-8e34-b3d69ddb3931","Type":"ContainerStarted","Data":"7ba90acb78bfc6b577c841ae43241fc12753413f6f00da61ea0f4127fed5714b"} Feb 16 17:37:54.841013 master-0 kubenswrapper[4652]: I0216 17:37:54.840886 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" Feb 16 17:37:54.864209 master-0 kubenswrapper[4652]: I0216 17:37:54.864094 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-xsplv" podStartSLOduration=2.72210103 podStartE2EDuration="7.864070127s" podCreationTimestamp="2026-02-16 17:37:47 +0000 UTC" firstStartedPulling="2026-02-16 17:37:48.430124292 +0000 UTC m=+825.818292808" lastFinishedPulling="2026-02-16 17:37:53.572093389 +0000 UTC m=+830.960261905" observedRunningTime="2026-02-16 17:37:54.856445566 +0000 UTC m=+832.244614102" watchObservedRunningTime="2026-02-16 17:37:54.864070127 +0000 UTC m=+832.252238663" Feb 16 17:37:54.887297 master-0 kubenswrapper[4652]: I0216 17:37:54.887008 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh" podStartSLOduration=2.307717677 podStartE2EDuration="9.886988282s" podCreationTimestamp="2026-02-16 17:37:45 +0000 UTC" firstStartedPulling="2026-02-16 17:37:45.985930702 +0000 UTC m=+823.374099218" lastFinishedPulling="2026-02-16 17:37:53.565201307 +0000 UTC m=+830.953369823" observedRunningTime="2026-02-16 17:37:54.880165692 +0000 UTC m=+832.268334208" watchObservedRunningTime="2026-02-16 17:37:54.886988282 +0000 UTC m=+832.275156798" Feb 16 17:37:55.284617 master-0 kubenswrapper[4652]: I0216 17:37:55.284533 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-pwpz5" podStartSLOduration=2.637691232 podStartE2EDuration="8.28451404s" podCreationTimestamp="2026-02-16 17:37:47 +0000 UTC" firstStartedPulling="2026-02-16 17:37:47.966285973 +0000 UTC m=+825.354454489" lastFinishedPulling="2026-02-16 17:37:53.613108781 +0000 UTC m=+831.001277297" observedRunningTime="2026-02-16 17:37:55.280751841 +0000 UTC m=+832.668920357" watchObservedRunningTime="2026-02-16 17:37:55.28451404 +0000 UTC m=+832.672682566" Feb 16 17:37:55.307706 master-0 kubenswrapper[4652]: I0216 17:37:55.307622 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9" podStartSLOduration=3.225504612 podStartE2EDuration="8.30760627s" podCreationTimestamp="2026-02-16 17:37:47 +0000 UTC" firstStartedPulling="2026-02-16 17:37:48.496314438 +0000 UTC m=+825.884482954" lastFinishedPulling="2026-02-16 17:37:53.578416096 +0000 UTC m=+830.966584612" observedRunningTime="2026-02-16 17:37:55.303630525 +0000 UTC m=+832.691799061" watchObservedRunningTime="2026-02-16 17:37:55.30760627 +0000 UTC m=+832.695774786" Feb 16 17:37:55.850267 master-0 kubenswrapper[4652]: I0216 17:37:55.850181 4652 
Feb 16 17:37:55.850267 master-0 kubenswrapper[4652]: I0216 17:37:55.850230 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerDied","Data":"a44f4d59cc99902beba73f8d7883f4f61b67d662ec1f6791c34b5cd904a87845"}
Feb 16 17:37:55.874320 master-0 kubenswrapper[4652]: I0216 17:37:55.874227 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-l25gm" podStartSLOduration=3.906991882 podStartE2EDuration="8.874204158s" podCreationTimestamp="2026-02-16 17:37:47 +0000 UTC" firstStartedPulling="2026-02-16 17:37:48.593956694 +0000 UTC m=+825.982125210" lastFinishedPulling="2026-02-16 17:37:53.56116897 +0000 UTC m=+830.949337486" observedRunningTime="2026-02-16 17:37:55.352455083 +0000 UTC m=+832.740623609" watchObservedRunningTime="2026-02-16 17:37:55.874204158 +0000 UTC m=+833.262372684"
Feb 16 17:37:56.862773 master-0 kubenswrapper[4652]: I0216 17:37:56.862690 4652 generic.go:334] "Generic (PLEG): container finished" podID="e6c3fe44-4380-4dbc-8e61-6f85a1820c82" containerID="90ae0732c9169d0e69e036d650c1c6cca02bf24fa87fcecc89e5b27d3ffcbf5d" exitCode=0
Feb 16 17:37:56.863386 master-0 kubenswrapper[4652]: I0216 17:37:56.862772 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerDied","Data":"90ae0732c9169d0e69e036d650c1c6cca02bf24fa87fcecc89e5b27d3ffcbf5d"}
Feb 16 17:37:57.878644 master-0 kubenswrapper[4652]: I0216 17:37:57.878511 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerStarted","Data":"3c75da5f347b6ea2a3782a906786e26b19236889aba2d3093e243ad5d90db7b3"}
Feb 16 17:37:57.878644 master-0 kubenswrapper[4652]: I0216 17:37:57.878551 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerStarted","Data":"aa524f007978f90ce3f623cc95cf39bae434999f5a9ef4ad82cd52a0f80f2326"}
Feb 16 17:37:57.878644 master-0 kubenswrapper[4652]: I0216 17:37:57.878562 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerStarted","Data":"4ff4e68409dcd5cc133eebcfb2a9c89ab9d0005028adfb410f77f4b95ac831d3"}
Feb 16 17:37:57.878644 master-0 kubenswrapper[4652]: I0216 17:37:57.878575 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerStarted","Data":"479fb3de481ca62f56ef8e27571b0e6db57d8e71ac2e621d07aec78dfa21e4a5"}
Feb 16 17:37:57.878644 master-0 kubenswrapper[4652]: I0216 17:37:57.878583 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerStarted","Data":"ba71ef1919b72ed417b09d2b22de4c51291120e0f9f67559bb97d38f14f83a2d"}
Feb 16 17:37:58.201345 master-0 kubenswrapper[4652]: I0216 17:37:58.199113 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-857c4d8798-hz7wp"
Feb 16 17:37:58.201345 master-0 kubenswrapper[4652]: I0216 17:37:58.200102 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-857c4d8798-hz7wp"
Feb 16 17:37:58.204759 master-0 kubenswrapper[4652]: I0216 17:37:58.204708 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-857c4d8798-hz7wp"
Feb 16 17:37:58.890443 master-0 kubenswrapper[4652]: I0216 17:37:58.890393 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tldzg" event={"ID":"e6c3fe44-4380-4dbc-8e61-6f85a1820c82","Type":"ContainerStarted","Data":"4cd220c2242773f8681ce817c68deca0c891b34ab8dd757038a58e3b86b4d513"}
Feb 16 17:37:58.894704 master-0 kubenswrapper[4652]: I0216 17:37:58.894663 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-857c4d8798-hz7wp"
Feb 16 17:37:58.914939 master-0 kubenswrapper[4652]: I0216 17:37:58.914849 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-tldzg" podStartSLOduration=6.63916614 podStartE2EDuration="13.914830374s" podCreationTimestamp="2026-02-16 17:37:45 +0000 UTC" firstStartedPulling="2026-02-16 17:37:46.337495138 +0000 UTC m=+823.725663654" lastFinishedPulling="2026-02-16 17:37:53.613159372 +0000 UTC m=+831.001327888" observedRunningTime="2026-02-16 17:37:58.911144317 +0000 UTC m=+836.299312833" watchObservedRunningTime="2026-02-16 17:37:58.914830374 +0000 UTC m=+836.302998890"
Feb 16 17:37:59.011432 master-0 kubenswrapper[4652]: I0216 17:37:59.011369 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-846d98f6c-cnjjz"]
Feb 16 17:37:59.898027 master-0 kubenswrapper[4652]: I0216 17:37:59.897958 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-tldzg"
Feb 16 17:38:01.163641 master-0 kubenswrapper[4652]: I0216 17:38:01.163489 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-tldzg"
Feb 16 17:38:01.198922 master-0 kubenswrapper[4652]: I0216 17:38:01.198821 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-tldzg"
Feb 16 17:38:02.967680 master-0 kubenswrapper[4652]: I0216 17:38:02.967616 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-pwpz5"
Feb 16 17:38:05.546381 master-0 kubenswrapper[4652]: I0216 17:38:05.546326 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-h9dfh"
Feb 16 17:38:05.943970 master-0 kubenswrapper[4652]: I0216 17:38:05.943886 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-th2nx"
Feb 16 17:38:07.429343 master-0 kubenswrapper[4652]: I0216 17:38:07.429293 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fcwq4"
Feb 16 17:38:07.906556 master-0 kubenswrapper[4652]: I0216 17:38:07.906425 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-jhjp9"
Feb 16 17:38:13.676607 master-0 kubenswrapper[4652]: I0216 17:38:13.676462 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-qvcqr"]
Feb 16 17:38:13.677665 master-0 kubenswrapper[4652]: I0216 17:38:13.677632 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.683180 master-0 kubenswrapper[4652]: I0216 17:38:13.683122 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Feb 16 17:38:13.688646 master-0 kubenswrapper[4652]: I0216 17:38:13.688597 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6lf\" (UniqueName: \"kubernetes.io/projected/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-kube-api-access-dz6lf\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.688876 master-0 kubenswrapper[4652]: I0216 17:38:13.688816 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-csi-plugin-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.688965 master-0 kubenswrapper[4652]: I0216 17:38:13.688937 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-pod-volumes-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.689027 master-0 kubenswrapper[4652]: I0216 17:38:13.689008 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-registration-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.689075 master-0 kubenswrapper[4652]: I0216 17:38:13.689041 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-file-lock-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.689156 master-0 kubenswrapper[4652]: I0216 17:38:13.689129 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-node-plugin-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.689196 master-0 kubenswrapper[4652]: I0216 17:38:13.689157 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-device-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.689261 master-0 kubenswrapper[4652]: I0216 17:38:13.689229 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-run-udev\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.689303 master-0 kubenswrapper[4652]: I0216 17:38:13.689277 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-sys\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.689340 master-0 kubenswrapper[4652]: I0216 17:38:13.689315 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-metrics-cert\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.689374 master-0 kubenswrapper[4652]: I0216 17:38:13.689345 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-lvmd-config\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.691926 master-0 kubenswrapper[4652]: I0216 17:38:13.691802 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-qvcqr"]
Feb 16 17:38:13.790642 master-0 kubenswrapper[4652]: I0216 17:38:13.790575 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-csi-plugin-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.790931 master-0 kubenswrapper[4652]: I0216 17:38:13.790668 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-pod-volumes-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.790931 master-0 kubenswrapper[4652]: I0216 17:38:13.790877 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-pod-volumes-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.790931 master-0 kubenswrapper[4652]: I0216 17:38:13.790892 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-registration-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791090 master-0 kubenswrapper[4652]: I0216 17:38:13.790960 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-file-lock-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791223 master-0 kubenswrapper[4652]: I0216 17:38:13.791194 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-registration-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791350 master-0 kubenswrapper[4652]: I0216 17:38:13.791310 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-node-plugin-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791350 master-0 kubenswrapper[4652]: I0216 17:38:13.791333 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-device-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791481 master-0 kubenswrapper[4652]: I0216 17:38:13.791393 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-run-udev\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791481 master-0 kubenswrapper[4652]: I0216 17:38:13.791418 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-sys\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791481 master-0 kubenswrapper[4652]: I0216 17:38:13.791411 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-csi-plugin-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791631 master-0 kubenswrapper[4652]: I0216 17:38:13.791524 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-device-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791631 master-0 kubenswrapper[4652]: I0216 17:38:13.791548 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-file-lock-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791631 master-0 kubenswrapper[4652]: I0216 17:38:13.791591 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-sys\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791812 master-0 kubenswrapper[4652]: I0216 17:38:13.791643 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-metrics-cert\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791812 master-0 kubenswrapper[4652]: I0216 17:38:13.791722 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-node-plugin-dir\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791812 master-0 kubenswrapper[4652]: I0216 17:38:13.791737 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-lvmd-config\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791998 master-0 kubenswrapper[4652]: I0216 17:38:13.791859 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6lf\" (UniqueName: \"kubernetes.io/projected/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-kube-api-access-dz6lf\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.791998 master-0 kubenswrapper[4652]: I0216 17:38:13.791916 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-lvmd-config\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.792136 master-0 kubenswrapper[4652]: I0216 17:38:13.792101 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-run-udev\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.796801 master-0 kubenswrapper[4652]: I0216 17:38:13.796764 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-metrics-cert\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:13.815098 master-0 kubenswrapper[4652]: I0216 17:38:13.815045 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6lf\" (UniqueName: \"kubernetes.io/projected/0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef-kube-api-access-dz6lf\") pod \"vg-manager-qvcqr\" (UID: \"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef\") " pod="openshift-storage/vg-manager-qvcqr"
Feb 16 17:38:14.059906 master-0 kubenswrapper[4652]: I0216 17:38:14.059691 4652 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-storage/vg-manager-qvcqr" Feb 16 17:38:14.520672 master-0 kubenswrapper[4652]: I0216 17:38:14.520606 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-qvcqr"] Feb 16 17:38:14.522416 master-0 kubenswrapper[4652]: W0216 17:38:14.522357 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b5496e7_7c4c_442e_bfd3_fb2f2dbda0ef.slice/crio-3236d35bd21c99ca66866c208ac028d017f027a5a71e03cb44e29478ef44ec47 WatchSource:0}: Error finding container 3236d35bd21c99ca66866c208ac028d017f027a5a71e03cb44e29478ef44ec47: Status 404 returned error can't find the container with id 3236d35bd21c99ca66866c208ac028d017f027a5a71e03cb44e29478ef44ec47 Feb 16 17:38:15.033271 master-0 kubenswrapper[4652]: I0216 17:38:15.033190 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-qvcqr" event={"ID":"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef","Type":"ContainerStarted","Data":"ba8111a22cc633ca95ff654b56c5205c74f43b68e0225f5f2cce8700fabe4897"} Feb 16 17:38:15.033271 master-0 kubenswrapper[4652]: I0216 17:38:15.033237 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-qvcqr" event={"ID":"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef","Type":"ContainerStarted","Data":"3236d35bd21c99ca66866c208ac028d017f027a5a71e03cb44e29478ef44ec47"} Feb 16 17:38:15.061463 master-0 kubenswrapper[4652]: I0216 17:38:15.061383 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-qvcqr" podStartSLOduration=2.061361354 podStartE2EDuration="2.061361354s" podCreationTimestamp="2026-02-16 17:38:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:38:15.051242717 +0000 UTC m=+852.439411233" watchObservedRunningTime="2026-02-16 17:38:15.061361354 +0000 UTC m=+852.449529870" Feb 16 17:38:16.166755 master-0 kubenswrapper[4652]: I0216 17:38:16.166689 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-tldzg" Feb 16 17:38:17.051061 master-0 kubenswrapper[4652]: I0216 17:38:17.051016 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-qvcqr_0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef/vg-manager/0.log" Feb 16 17:38:17.051312 master-0 kubenswrapper[4652]: I0216 17:38:17.051082 4652 generic.go:334] "Generic (PLEG): container finished" podID="0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef" containerID="ba8111a22cc633ca95ff654b56c5205c74f43b68e0225f5f2cce8700fabe4897" exitCode=1 Feb 16 17:38:17.051312 master-0 kubenswrapper[4652]: I0216 17:38:17.051119 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-qvcqr" event={"ID":"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef","Type":"ContainerDied","Data":"ba8111a22cc633ca95ff654b56c5205c74f43b68e0225f5f2cce8700fabe4897"} Feb 16 17:38:17.051739 master-0 kubenswrapper[4652]: I0216 17:38:17.051695 4652 scope.go:117] "RemoveContainer" containerID="ba8111a22cc633ca95ff654b56c5205c74f43b68e0225f5f2cce8700fabe4897" Feb 16 17:38:17.414201 master-0 kubenswrapper[4652]: I0216 17:38:17.414067 4652 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Feb 16 17:38:17.640902 master-0 kubenswrapper[4652]: I0216 17:38:17.640751 4652 reconciler.go:161] 
"OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-02-16T17:38:17.414098439Z","Handler":null,"Name":""} Feb 16 17:38:17.642550 master-0 kubenswrapper[4652]: I0216 17:38:17.642510 4652 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Feb 16 17:38:17.642550 master-0 kubenswrapper[4652]: I0216 17:38:17.642544 4652 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Feb 16 17:38:18.064338 master-0 kubenswrapper[4652]: I0216 17:38:18.064280 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-qvcqr_0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef/vg-manager/0.log" Feb 16 17:38:18.064338 master-0 kubenswrapper[4652]: I0216 17:38:18.064326 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-qvcqr" event={"ID":"0b5496e7-7c4c-442e-bfd3-fb2f2dbda0ef","Type":"ContainerStarted","Data":"4b6216957b5fbf0bbd4a4bffc46399c39c45532dac0842c139f14ccc5c0939f6"} Feb 16 17:38:20.476159 master-0 kubenswrapper[4652]: I0216 17:38:20.476097 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n2twb"] Feb 16 17:38:20.479937 master-0 kubenswrapper[4652]: I0216 17:38:20.477404 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n2twb" Feb 16 17:38:20.489316 master-0 kubenswrapper[4652]: I0216 17:38:20.481929 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 17:38:20.489316 master-0 kubenswrapper[4652]: I0216 17:38:20.487405 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 17:38:20.491446 master-0 kubenswrapper[4652]: I0216 17:38:20.491407 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n2twb"] Feb 16 17:38:20.543235 master-0 kubenswrapper[4652]: I0216 17:38:20.542093 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd8r6\" (UniqueName: \"kubernetes.io/projected/42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c-kube-api-access-dd8r6\") pod \"openstack-operator-index-n2twb\" (UID: \"42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c\") " pod="openstack-operators/openstack-operator-index-n2twb" Feb 16 17:38:20.649761 master-0 kubenswrapper[4652]: I0216 17:38:20.649702 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd8r6\" (UniqueName: \"kubernetes.io/projected/42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c-kube-api-access-dd8r6\") pod \"openstack-operator-index-n2twb\" (UID: \"42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c\") " pod="openstack-operators/openstack-operator-index-n2twb" Feb 16 17:38:20.669377 master-0 kubenswrapper[4652]: I0216 17:38:20.669338 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd8r6\" (UniqueName: \"kubernetes.io/projected/42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c-kube-api-access-dd8r6\") pod \"openstack-operator-index-n2twb\" (UID: \"42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c\") " pod="openstack-operators/openstack-operator-index-n2twb" Feb 16 17:38:20.820373 master-0 kubenswrapper[4652]: I0216 
17:38:20.820224 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n2twb" Feb 16 17:38:21.357314 master-0 kubenswrapper[4652]: I0216 17:38:21.357199 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n2twb"] Feb 16 17:38:22.151979 master-0 kubenswrapper[4652]: I0216 17:38:22.151658 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n2twb" event={"ID":"42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c","Type":"ContainerStarted","Data":"5d04fedc9bf7ef348f0a30911205ce3274b72957acacd2c1551ece9de2e7dd3e"} Feb 16 17:38:23.160818 master-0 kubenswrapper[4652]: I0216 17:38:23.160775 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n2twb" event={"ID":"42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c","Type":"ContainerStarted","Data":"b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2"} Feb 16 17:38:24.051680 master-0 kubenswrapper[4652]: I0216 17:38:24.051581 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-846d98f6c-cnjjz" podUID="d959a347-b11d-4a51-9729-26b1b7842cc9" containerName="console" containerID="cri-o://b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd" gracePeriod=15 Feb 16 17:38:24.062388 master-0 kubenswrapper[4652]: I0216 17:38:24.062335 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-qvcqr" Feb 16 17:38:24.065429 master-0 kubenswrapper[4652]: I0216 17:38:24.065343 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-qvcqr" Feb 16 17:38:24.108994 master-0 kubenswrapper[4652]: I0216 17:38:24.108900 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-n2twb" podStartSLOduration=2.840897953 podStartE2EDuration="4.108878718s" podCreationTimestamp="2026-02-16 17:38:20 +0000 UTC" firstStartedPulling="2026-02-16 17:38:21.358467209 +0000 UTC m=+858.746635725" lastFinishedPulling="2026-02-16 17:38:22.626447964 +0000 UTC m=+860.014616490" observedRunningTime="2026-02-16 17:38:23.192390436 +0000 UTC m=+860.580558942" watchObservedRunningTime="2026-02-16 17:38:24.108878718 +0000 UTC m=+861.497047254" Feb 16 17:38:24.167872 master-0 kubenswrapper[4652]: I0216 17:38:24.167817 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-qvcqr" Feb 16 17:38:24.169455 master-0 kubenswrapper[4652]: I0216 17:38:24.169413 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-qvcqr" Feb 16 17:38:24.250908 master-0 kubenswrapper[4652]: I0216 17:38:24.250851 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n2twb"] Feb 16 17:38:24.549183 master-0 kubenswrapper[4652]: I0216 17:38:24.549148 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-846d98f6c-cnjjz_d959a347-b11d-4a51-9729-26b1b7842cc9/console/0.log" Feb 16 17:38:24.549515 master-0 kubenswrapper[4652]: I0216 17:38:24.549496 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:38:24.633067 master-0 kubenswrapper[4652]: I0216 17:38:24.633016 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-service-ca\") pod \"d959a347-b11d-4a51-9729-26b1b7842cc9\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " Feb 16 17:38:24.633493 master-0 kubenswrapper[4652]: I0216 17:38:24.633471 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-trusted-ca-bundle\") pod \"d959a347-b11d-4a51-9729-26b1b7842cc9\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " Feb 16 17:38:24.633632 master-0 kubenswrapper[4652]: I0216 17:38:24.633613 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-serving-cert\") pod \"d959a347-b11d-4a51-9729-26b1b7842cc9\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " Feb 16 17:38:24.633819 master-0 kubenswrapper[4652]: I0216 17:38:24.633801 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-oauth-config\") pod \"d959a347-b11d-4a51-9729-26b1b7842cc9\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " Feb 16 17:38:24.633943 master-0 kubenswrapper[4652]: I0216 17:38:24.633927 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-console-config\") pod \"d959a347-b11d-4a51-9729-26b1b7842cc9\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " Feb 16 17:38:24.634147 master-0 kubenswrapper[4652]: I0216 17:38:24.634129 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-oauth-serving-cert\") pod \"d959a347-b11d-4a51-9729-26b1b7842cc9\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " Feb 16 17:38:24.634286 master-0 kubenswrapper[4652]: I0216 17:38:24.634263 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4gvm\" (UniqueName: \"kubernetes.io/projected/d959a347-b11d-4a51-9729-26b1b7842cc9-kube-api-access-p4gvm\") pod \"d959a347-b11d-4a51-9729-26b1b7842cc9\" (UID: \"d959a347-b11d-4a51-9729-26b1b7842cc9\") " Feb 16 17:38:24.634531 master-0 kubenswrapper[4652]: I0216 17:38:24.633626 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-service-ca" (OuterVolumeSpecName: "service-ca") pod "d959a347-b11d-4a51-9729-26b1b7842cc9" (UID: "d959a347-b11d-4a51-9729-26b1b7842cc9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:38:24.634680 master-0 kubenswrapper[4652]: I0216 17:38:24.634637 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d959a347-b11d-4a51-9729-26b1b7842cc9" (UID: "d959a347-b11d-4a51-9729-26b1b7842cc9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:38:24.634780 master-0 kubenswrapper[4652]: I0216 17:38:24.634732 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d959a347-b11d-4a51-9729-26b1b7842cc9" (UID: "d959a347-b11d-4a51-9729-26b1b7842cc9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:38:24.634849 master-0 kubenswrapper[4652]: I0216 17:38:24.634798 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-console-config" (OuterVolumeSpecName: "console-config") pod "d959a347-b11d-4a51-9729-26b1b7842cc9" (UID: "d959a347-b11d-4a51-9729-26b1b7842cc9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:38:24.635143 master-0 kubenswrapper[4652]: I0216 17:38:24.635120 4652 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-console-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:24.635242 master-0 kubenswrapper[4652]: I0216 17:38:24.635226 4652 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:24.635363 master-0 kubenswrapper[4652]: I0216 17:38:24.635348 4652 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-service-ca\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:24.635457 master-0 kubenswrapper[4652]: I0216 17:38:24.635443 4652 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d959a347-b11d-4a51-9729-26b1b7842cc9-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:24.637760 master-0 kubenswrapper[4652]: I0216 17:38:24.637701 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d959a347-b11d-4a51-9729-26b1b7842cc9" (UID: "d959a347-b11d-4a51-9729-26b1b7842cc9"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:38:24.637981 master-0 kubenswrapper[4652]: I0216 17:38:24.637933 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d959a347-b11d-4a51-9729-26b1b7842cc9" (UID: "d959a347-b11d-4a51-9729-26b1b7842cc9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:38:24.638830 master-0 kubenswrapper[4652]: I0216 17:38:24.638724 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d959a347-b11d-4a51-9729-26b1b7842cc9-kube-api-access-p4gvm" (OuterVolumeSpecName: "kube-api-access-p4gvm") pod "d959a347-b11d-4a51-9729-26b1b7842cc9" (UID: "d959a347-b11d-4a51-9729-26b1b7842cc9"). InnerVolumeSpecName "kube-api-access-p4gvm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:38:24.736777 master-0 kubenswrapper[4652]: I0216 17:38:24.736627 4652 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:24.736777 master-0 kubenswrapper[4652]: I0216 17:38:24.736678 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4gvm\" (UniqueName: \"kubernetes.io/projected/d959a347-b11d-4a51-9729-26b1b7842cc9-kube-api-access-p4gvm\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:24.736777 master-0 kubenswrapper[4652]: I0216 17:38:24.736692 4652 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d959a347-b11d-4a51-9729-26b1b7842cc9-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:24.840453 master-0 kubenswrapper[4652]: I0216 17:38:24.840409 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7xrz7"] Feb 16 17:38:24.840776 master-0 kubenswrapper[4652]: E0216 17:38:24.840755 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d959a347-b11d-4a51-9729-26b1b7842cc9" containerName="console" Feb 16 17:38:24.840776 master-0 kubenswrapper[4652]: I0216 17:38:24.840770 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="d959a347-b11d-4a51-9729-26b1b7842cc9" containerName="console" Feb 16 17:38:24.840933 master-0 kubenswrapper[4652]: I0216 17:38:24.840916 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="d959a347-b11d-4a51-9729-26b1b7842cc9" containerName="console" Feb 16 17:38:24.842031 master-0 kubenswrapper[4652]: I0216 17:38:24.842008 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7xrz7" Feb 16 17:38:24.848410 master-0 kubenswrapper[4652]: I0216 17:38:24.848364 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7xrz7"] Feb 16 17:38:24.928926 master-0 kubenswrapper[4652]: E0216 17:38:24.928861 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd959a347_b11d_4a51_9729_26b1b7842cc9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd959a347_b11d_4a51_9729_26b1b7842cc9.slice/crio-7762b4372e40eae8aadbad57c09e3fe8177bcb14ae84ec8f433a960b14f37a7c\": RecentStats: unable to find data in memory cache]" Feb 16 17:38:24.941057 master-0 kubenswrapper[4652]: I0216 17:38:24.940987 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtnr7\" (UniqueName: \"kubernetes.io/projected/81d4a1ff-ac55-4e48-8199-83c00d5d771b-kube-api-access-dtnr7\") pod \"openstack-operator-index-7xrz7\" (UID: \"81d4a1ff-ac55-4e48-8199-83c00d5d771b\") " pod="openstack-operators/openstack-operator-index-7xrz7" Feb 16 17:38:25.043114 master-0 kubenswrapper[4652]: I0216 17:38:25.042944 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtnr7\" (UniqueName: \"kubernetes.io/projected/81d4a1ff-ac55-4e48-8199-83c00d5d771b-kube-api-access-dtnr7\") pod \"openstack-operator-index-7xrz7\" (UID: \"81d4a1ff-ac55-4e48-8199-83c00d5d771b\") " pod="openstack-operators/openstack-operator-index-7xrz7" Feb 16 17:38:25.067720 master-0 kubenswrapper[4652]: I0216 17:38:25.067641 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtnr7\" (UniqueName: \"kubernetes.io/projected/81d4a1ff-ac55-4e48-8199-83c00d5d771b-kube-api-access-dtnr7\") pod \"openstack-operator-index-7xrz7\" (UID: \"81d4a1ff-ac55-4e48-8199-83c00d5d771b\") " pod="openstack-operators/openstack-operator-index-7xrz7" Feb 16 17:38:25.172375 master-0 kubenswrapper[4652]: I0216 17:38:25.172311 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7xrz7" Feb 16 17:38:25.179995 master-0 kubenswrapper[4652]: I0216 17:38:25.179942 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-846d98f6c-cnjjz_d959a347-b11d-4a51-9729-26b1b7842cc9/console/0.log" Feb 16 17:38:25.180117 master-0 kubenswrapper[4652]: I0216 17:38:25.180026 4652 generic.go:334] "Generic (PLEG): container finished" podID="d959a347-b11d-4a51-9729-26b1b7842cc9" containerID="b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd" exitCode=2 Feb 16 17:38:25.180171 master-0 kubenswrapper[4652]: I0216 17:38:25.180124 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846d98f6c-cnjjz" event={"ID":"d959a347-b11d-4a51-9729-26b1b7842cc9","Type":"ContainerDied","Data":"b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd"} Feb 16 17:38:25.180213 master-0 kubenswrapper[4652]: I0216 17:38:25.180160 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-846d98f6c-cnjjz" Feb 16 17:38:25.180213 master-0 kubenswrapper[4652]: I0216 17:38:25.180201 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846d98f6c-cnjjz" event={"ID":"d959a347-b11d-4a51-9729-26b1b7842cc9","Type":"ContainerDied","Data":"7762b4372e40eae8aadbad57c09e3fe8177bcb14ae84ec8f433a960b14f37a7c"} Feb 16 17:38:25.180305 master-0 kubenswrapper[4652]: I0216 17:38:25.180265 4652 scope.go:117] "RemoveContainer" containerID="b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd" Feb 16 17:38:25.181464 master-0 kubenswrapper[4652]: I0216 17:38:25.181429 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-n2twb" podUID="42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c" containerName="registry-server" containerID="cri-o://b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2" gracePeriod=2 Feb 16 17:38:25.404637 master-0 kubenswrapper[4652]: I0216 17:38:25.404582 4652 scope.go:117] "RemoveContainer" containerID="b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd" Feb 16 17:38:25.405302 master-0 kubenswrapper[4652]: E0216 17:38:25.405215 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd\": container with ID starting with b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd not found: ID does not exist" containerID="b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd" Feb 16 17:38:25.405302 master-0 kubenswrapper[4652]: I0216 17:38:25.405286 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd"} err="failed to get container status \"b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd\": rpc error: code = NotFound desc = could not find container \"b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd\": container with ID starting with b42d4c974fa1c61b26c543260c776911d99feda72bf82a421d07a2ff2bb063dd not found: ID does not exist" Feb 16 17:38:25.434504 master-0 kubenswrapper[4652]: I0216 17:38:25.434418 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-846d98f6c-cnjjz"] Feb 16 17:38:25.450171 master-0 kubenswrapper[4652]: I0216 17:38:25.450086 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-846d98f6c-cnjjz"] Feb 16 17:38:25.693188 master-0 kubenswrapper[4652]: I0216 17:38:25.692968 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7xrz7"] Feb 16 17:38:25.744885 master-0 kubenswrapper[4652]: I0216 17:38:25.744853 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n2twb" Feb 16 17:38:25.876597 master-0 kubenswrapper[4652]: I0216 17:38:25.876455 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd8r6\" (UniqueName: \"kubernetes.io/projected/42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c-kube-api-access-dd8r6\") pod \"42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c\" (UID: \"42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c\") " Feb 16 17:38:25.881874 master-0 kubenswrapper[4652]: I0216 17:38:25.881812 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c-kube-api-access-dd8r6" (OuterVolumeSpecName: "kube-api-access-dd8r6") pod "42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c" (UID: "42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c"). InnerVolumeSpecName "kube-api-access-dd8r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:38:25.978841 master-0 kubenswrapper[4652]: I0216 17:38:25.978707 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd8r6\" (UniqueName: \"kubernetes.io/projected/42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c-kube-api-access-dd8r6\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:26.192743 master-0 kubenswrapper[4652]: I0216 17:38:26.192697 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7xrz7" event={"ID":"81d4a1ff-ac55-4e48-8199-83c00d5d771b","Type":"ContainerStarted","Data":"22d3bd9b6abc2e19006bebafec993d8b456c656e601bf76995cb95b9cc528c67"} Feb 16 17:38:26.194614 master-0 kubenswrapper[4652]: I0216 17:38:26.194574 4652 generic.go:334] "Generic (PLEG): container finished" podID="42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c" containerID="b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2" exitCode=0 Feb 16 17:38:26.194665 master-0 kubenswrapper[4652]: I0216 17:38:26.194639 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n2twb" Feb 16 17:38:26.194701 master-0 kubenswrapper[4652]: I0216 17:38:26.194673 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n2twb" event={"ID":"42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c","Type":"ContainerDied","Data":"b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2"} Feb 16 17:38:26.194747 master-0 kubenswrapper[4652]: I0216 17:38:26.194716 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n2twb" event={"ID":"42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c","Type":"ContainerDied","Data":"5d04fedc9bf7ef348f0a30911205ce3274b72957acacd2c1551ece9de2e7dd3e"} Feb 16 17:38:26.194789 master-0 kubenswrapper[4652]: I0216 17:38:26.194748 4652 scope.go:117] "RemoveContainer" containerID="b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2" Feb 16 17:38:26.228700 master-0 kubenswrapper[4652]: I0216 17:38:26.228662 4652 scope.go:117] "RemoveContainer" containerID="b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2" Feb 16 17:38:26.229342 master-0 kubenswrapper[4652]: E0216 17:38:26.229241 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2\": container with ID starting with b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2 not found: ID does not exist" containerID="b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2" Feb 16 17:38:26.229401 master-0 kubenswrapper[4652]: I0216 17:38:26.229318 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2"} err="failed to get container status \"b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2\": rpc error: code = NotFound desc = could not find container \"b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2\": container with ID starting with b746b76fd9f0710cba072eff1562eb232555af1fa5f1562fe3fcfd3b7a582db2 not found: ID does not exist" Feb 16 17:38:26.255740 master-0 kubenswrapper[4652]: I0216 17:38:26.255682 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n2twb"] Feb 16 17:38:26.263033 master-0 kubenswrapper[4652]: I0216 17:38:26.262986 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-n2twb"] Feb 16 17:38:26.762441 master-0 kubenswrapper[4652]: I0216 17:38:26.762372 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c" path="/var/lib/kubelet/pods/42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c/volumes" Feb 16 17:38:26.763132 master-0 kubenswrapper[4652]: I0216 17:38:26.763091 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d959a347-b11d-4a51-9729-26b1b7842cc9" path="/var/lib/kubelet/pods/d959a347-b11d-4a51-9729-26b1b7842cc9/volumes" Feb 16 17:38:27.205954 master-0 kubenswrapper[4652]: I0216 17:38:27.205900 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7xrz7" event={"ID":"81d4a1ff-ac55-4e48-8199-83c00d5d771b","Type":"ContainerStarted","Data":"2b25fc48a6a50b3940661dfbed80617a0e5e055e1b4b34796fe21688a51028a7"} Feb 16 17:38:27.227824 master-0 kubenswrapper[4652]: I0216 17:38:27.227739 4652 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7xrz7" podStartSLOduration=2.803533784 podStartE2EDuration="3.227712836s" podCreationTimestamp="2026-02-16 17:38:24 +0000 UTC" firstStartedPulling="2026-02-16 17:38:25.704753943 +0000 UTC m=+863.092922469" lastFinishedPulling="2026-02-16 17:38:26.128933005 +0000 UTC m=+863.517101521" observedRunningTime="2026-02-16 17:38:27.222790696 +0000 UTC m=+864.610959232" watchObservedRunningTime="2026-02-16 17:38:27.227712836 +0000 UTC m=+864.615881392" Feb 16 17:38:35.173755 master-0 kubenswrapper[4652]: I0216 17:38:35.173696 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-7xrz7" Feb 16 17:38:35.175192 master-0 kubenswrapper[4652]: I0216 17:38:35.174398 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-7xrz7" Feb 16 17:38:35.239155 master-0 kubenswrapper[4652]: I0216 17:38:35.239092 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-7xrz7" Feb 16 17:38:35.322504 master-0 kubenswrapper[4652]: I0216 17:38:35.322436 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-7xrz7" Feb 16 17:38:36.490401 master-0 kubenswrapper[4652]: I0216 17:38:36.490289 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79"] Feb 16 17:38:36.491070 master-0 kubenswrapper[4652]: E0216 17:38:36.491038 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c" containerName="registry-server" Feb 16 17:38:36.491147 master-0 kubenswrapper[4652]: I0216 17:38:36.491072 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c" containerName="registry-server" Feb 16 17:38:36.491340 master-0 kubenswrapper[4652]: I0216 17:38:36.491307 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f89e40-dc6e-4c59-b4f3-b97bf5b3ee4c" containerName="registry-server" Feb 16 17:38:36.493150 master-0 kubenswrapper[4652]: I0216 17:38:36.493120 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.506726 master-0 kubenswrapper[4652]: I0216 17:38:36.506675 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79"] Feb 16 17:38:36.571395 master-0 kubenswrapper[4652]: I0216 17:38:36.571299 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.571395 master-0 kubenswrapper[4652]: I0216 17:38:36.571393 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn2ml\" (UniqueName: \"kubernetes.io/projected/7a978c5b-cb93-4371-bc07-821619bae305-kube-api-access-gn2ml\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.571702 master-0 kubenswrapper[4652]: I0216 17:38:36.571533 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.673088 master-0 kubenswrapper[4652]: I0216 17:38:36.672998 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.673437 master-0 kubenswrapper[4652]: I0216 17:38:36.673207 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.673437 master-0 kubenswrapper[4652]: I0216 17:38:36.673269 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn2ml\" (UniqueName: \"kubernetes.io/projected/7a978c5b-cb93-4371-bc07-821619bae305-kube-api-access-gn2ml\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.673669 master-0 kubenswrapper[4652]: I0216 17:38:36.673628 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-util\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79\" (UID: 
\"7a978c5b-cb93-4371-bc07-821619bae305\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.673768 master-0 kubenswrapper[4652]: I0216 17:38:36.673732 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-bundle\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.693807 master-0 kubenswrapper[4652]: I0216 17:38:36.693750 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn2ml\" (UniqueName: \"kubernetes.io/projected/7a978c5b-cb93-4371-bc07-821619bae305-kube-api-access-gn2ml\") pod \"4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:36.811049 master-0 kubenswrapper[4652]: I0216 17:38:36.810916 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:37.220956 master-0 kubenswrapper[4652]: I0216 17:38:37.220900 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79"] Feb 16 17:38:37.227638 master-0 kubenswrapper[4652]: W0216 17:38:37.227584 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a978c5b_cb93_4371_bc07_821619bae305.slice/crio-af1bb45daa6779bae9ab2e19f11fefcb9e656e7c53350dc52a467acf231d40ee WatchSource:0}: Error finding container af1bb45daa6779bae9ab2e19f11fefcb9e656e7c53350dc52a467acf231d40ee: Status 404 returned error can't find the container with id af1bb45daa6779bae9ab2e19f11fefcb9e656e7c53350dc52a467acf231d40ee Feb 16 17:38:37.297512 master-0 kubenswrapper[4652]: I0216 17:38:37.297026 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" event={"ID":"7a978c5b-cb93-4371-bc07-821619bae305","Type":"ContainerStarted","Data":"af1bb45daa6779bae9ab2e19f11fefcb9e656e7c53350dc52a467acf231d40ee"} Feb 16 17:38:38.310057 master-0 kubenswrapper[4652]: I0216 17:38:38.310006 4652 generic.go:334] "Generic (PLEG): container finished" podID="7a978c5b-cb93-4371-bc07-821619bae305" containerID="5c61cbb4c9f77fed8242c5fae8771becb72a4bb9c2813ccc6b4860c8388b66cc" exitCode=0 Feb 16 17:38:38.310057 master-0 kubenswrapper[4652]: I0216 17:38:38.310054 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" event={"ID":"7a978c5b-cb93-4371-bc07-821619bae305","Type":"ContainerDied","Data":"5c61cbb4c9f77fed8242c5fae8771becb72a4bb9c2813ccc6b4860c8388b66cc"} Feb 16 17:38:40.345001 master-0 kubenswrapper[4652]: I0216 17:38:40.344916 4652 generic.go:334] "Generic (PLEG): container finished" podID="7a978c5b-cb93-4371-bc07-821619bae305" containerID="c9b8fb9ab1554381a9677e73a018bab31f0a3df19852f8939f1737545f941cfa" exitCode=0 Feb 16 17:38:40.345001 master-0 kubenswrapper[4652]: I0216 17:38:40.344980 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" event={"ID":"7a978c5b-cb93-4371-bc07-821619bae305","Type":"ContainerDied","Data":"c9b8fb9ab1554381a9677e73a018bab31f0a3df19852f8939f1737545f941cfa"} Feb 16 17:38:41.354959 master-0 kubenswrapper[4652]: I0216 17:38:41.354917 4652 generic.go:334] "Generic (PLEG): container finished" podID="7a978c5b-cb93-4371-bc07-821619bae305" containerID="3106ddc33528907e3631c42a34b01d8407034a15a2db4f89a49bc4740d3d5d66" exitCode=0 Feb 16 17:38:41.355593 master-0 kubenswrapper[4652]: I0216 17:38:41.355038 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" event={"ID":"7a978c5b-cb93-4371-bc07-821619bae305","Type":"ContainerDied","Data":"3106ddc33528907e3631c42a34b01d8407034a15a2db4f89a49bc4740d3d5d66"} Feb 16 17:38:42.728971 master-0 kubenswrapper[4652]: I0216 17:38:42.728890 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:42.875548 master-0 kubenswrapper[4652]: I0216 17:38:42.875499 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-bundle\") pod \"7a978c5b-cb93-4371-bc07-821619bae305\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " Feb 16 17:38:42.875789 master-0 kubenswrapper[4652]: I0216 17:38:42.875604 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn2ml\" (UniqueName: \"kubernetes.io/projected/7a978c5b-cb93-4371-bc07-821619bae305-kube-api-access-gn2ml\") pod \"7a978c5b-cb93-4371-bc07-821619bae305\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " Feb 16 17:38:42.875789 master-0 kubenswrapper[4652]: I0216 17:38:42.875750 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-util\") pod \"7a978c5b-cb93-4371-bc07-821619bae305\" (UID: \"7a978c5b-cb93-4371-bc07-821619bae305\") " Feb 16 17:38:42.876103 master-0 kubenswrapper[4652]: I0216 17:38:42.876081 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-bundle" (OuterVolumeSpecName: "bundle") pod "7a978c5b-cb93-4371-bc07-821619bae305" (UID: "7a978c5b-cb93-4371-bc07-821619bae305"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:38:42.876386 master-0 kubenswrapper[4652]: I0216 17:38:42.876359 4652 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:42.881107 master-0 kubenswrapper[4652]: I0216 17:38:42.881064 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a978c5b-cb93-4371-bc07-821619bae305-kube-api-access-gn2ml" (OuterVolumeSpecName: "kube-api-access-gn2ml") pod "7a978c5b-cb93-4371-bc07-821619bae305" (UID: "7a978c5b-cb93-4371-bc07-821619bae305"). InnerVolumeSpecName "kube-api-access-gn2ml". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:38:42.902198 master-0 kubenswrapper[4652]: I0216 17:38:42.902131 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-util" (OuterVolumeSpecName: "util") pod "7a978c5b-cb93-4371-bc07-821619bae305" (UID: "7a978c5b-cb93-4371-bc07-821619bae305"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:38:42.977797 master-0 kubenswrapper[4652]: I0216 17:38:42.977732 4652 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a978c5b-cb93-4371-bc07-821619bae305-util\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:42.977797 master-0 kubenswrapper[4652]: I0216 17:38:42.977790 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn2ml\" (UniqueName: \"kubernetes.io/projected/7a978c5b-cb93-4371-bc07-821619bae305-kube-api-access-gn2ml\") on node \"master-0\" DevicePath \"\"" Feb 16 17:38:43.375544 master-0 kubenswrapper[4652]: I0216 17:38:43.375438 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" event={"ID":"7a978c5b-cb93-4371-bc07-821619bae305","Type":"ContainerDied","Data":"af1bb45daa6779bae9ab2e19f11fefcb9e656e7c53350dc52a467acf231d40ee"} Feb 16 17:38:43.375772 master-0 kubenswrapper[4652]: I0216 17:38:43.375757 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af1bb45daa6779bae9ab2e19f11fefcb9e656e7c53350dc52a467acf231d40ee" Feb 16 17:38:43.375852 master-0 kubenswrapper[4652]: I0216 17:38:43.375532 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4bf6be8fe88744fb8c7a45482d50861896e90ebf8f05f0c089b9c27c21jwb79" Feb 16 17:38:49.263887 master-0 kubenswrapper[4652]: I0216 17:38:49.263819 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl"] Feb 16 17:38:49.264760 master-0 kubenswrapper[4652]: E0216 17:38:49.264191 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a978c5b-cb93-4371-bc07-821619bae305" containerName="pull" Feb 16 17:38:49.264760 master-0 kubenswrapper[4652]: I0216 17:38:49.264206 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a978c5b-cb93-4371-bc07-821619bae305" containerName="pull" Feb 16 17:38:49.264760 master-0 kubenswrapper[4652]: E0216 17:38:49.264237 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a978c5b-cb93-4371-bc07-821619bae305" containerName="extract" Feb 16 17:38:49.264760 master-0 kubenswrapper[4652]: I0216 17:38:49.264260 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a978c5b-cb93-4371-bc07-821619bae305" containerName="extract" Feb 16 17:38:49.264760 master-0 kubenswrapper[4652]: E0216 17:38:49.264281 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a978c5b-cb93-4371-bc07-821619bae305" containerName="util" Feb 16 17:38:49.264760 master-0 kubenswrapper[4652]: I0216 17:38:49.264289 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a978c5b-cb93-4371-bc07-821619bae305" containerName="util" Feb 16 17:38:49.264760 master-0 kubenswrapper[4652]: I0216 17:38:49.264499 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a978c5b-cb93-4371-bc07-821619bae305" containerName="extract" Feb 16 17:38:49.265098 master-0 
kubenswrapper[4652]: I0216 17:38:49.265087 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" Feb 16 17:38:49.294004 master-0 kubenswrapper[4652]: I0216 17:38:49.293949 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl"] Feb 16 17:38:49.391076 master-0 kubenswrapper[4652]: I0216 17:38:49.391008 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl626\" (UniqueName: \"kubernetes.io/projected/89596a49-765f-4891-a357-4368d1b06b40-kube-api-access-kl626\") pod \"openstack-operator-controller-init-7f8db498b4-v8ltl\" (UID: \"89596a49-765f-4891-a357-4368d1b06b40\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" Feb 16 17:38:49.499456 master-0 kubenswrapper[4652]: I0216 17:38:49.499235 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl626\" (UniqueName: \"kubernetes.io/projected/89596a49-765f-4891-a357-4368d1b06b40-kube-api-access-kl626\") pod \"openstack-operator-controller-init-7f8db498b4-v8ltl\" (UID: \"89596a49-765f-4891-a357-4368d1b06b40\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" Feb 16 17:38:49.573287 master-0 kubenswrapper[4652]: I0216 17:38:49.573185 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl626\" (UniqueName: \"kubernetes.io/projected/89596a49-765f-4891-a357-4368d1b06b40-kube-api-access-kl626\") pod \"openstack-operator-controller-init-7f8db498b4-v8ltl\" (UID: \"89596a49-765f-4891-a357-4368d1b06b40\") " pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" Feb 16 17:38:49.587190 master-0 kubenswrapper[4652]: I0216 17:38:49.587137 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" Feb 16 17:38:50.100884 master-0 kubenswrapper[4652]: I0216 17:38:50.100852 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl"] Feb 16 17:38:50.103632 master-0 kubenswrapper[4652]: W0216 17:38:50.103559 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89596a49_765f_4891_a357_4368d1b06b40.slice/crio-57d252c84774c2735ff9b668626fb5ba70ad39c1b14e48ba2953fcea1a96ca99 WatchSource:0}: Error finding container 57d252c84774c2735ff9b668626fb5ba70ad39c1b14e48ba2953fcea1a96ca99: Status 404 returned error can't find the container with id 57d252c84774c2735ff9b668626fb5ba70ad39c1b14e48ba2953fcea1a96ca99 Feb 16 17:38:50.434232 master-0 kubenswrapper[4652]: I0216 17:38:50.434172 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" event={"ID":"89596a49-765f-4891-a357-4368d1b06b40","Type":"ContainerStarted","Data":"57d252c84774c2735ff9b668626fb5ba70ad39c1b14e48ba2953fcea1a96ca99"} Feb 16 17:38:54.470107 master-0 kubenswrapper[4652]: I0216 17:38:54.469981 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" event={"ID":"89596a49-765f-4891-a357-4368d1b06b40","Type":"ContainerStarted","Data":"d0fd1ac5c385cfa0283af0da595aa71c0445aef1ae7f6c62eb05e3f467740d9f"} Feb 16 17:38:54.470826 master-0 kubenswrapper[4652]: I0216 17:38:54.470800 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" Feb 16 17:38:54.506159 master-0 kubenswrapper[4652]: I0216 17:38:54.506073 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" podStartSLOduration=1.446908778 podStartE2EDuration="5.506053567s" podCreationTimestamp="2026-02-16 17:38:49 +0000 UTC" firstStartedPulling="2026-02-16 17:38:50.115232007 +0000 UTC m=+887.503400563" lastFinishedPulling="2026-02-16 17:38:54.174376836 +0000 UTC m=+891.562545352" observedRunningTime="2026-02-16 17:38:54.502757758 +0000 UTC m=+891.890926284" watchObservedRunningTime="2026-02-16 17:38:54.506053567 +0000 UTC m=+891.894222083" Feb 16 17:38:59.590850 master-0 kubenswrapper[4652]: I0216 17:38:59.590793 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7f8db498b4-v8ltl" Feb 16 17:39:04.550802 master-0 kubenswrapper[4652]: I0216 17:39:04.550720 4652 scope.go:117] "RemoveContainer" containerID="15bdf4adb469c8e49b844325cb3d5f079a624d6ed75dcf73bfdeb27983d77e0b" Feb 16 17:39:04.573758 master-0 kubenswrapper[4652]: I0216 17:39:04.573683 4652 scope.go:117] "RemoveContainer" containerID="a48e44f48198e4db49af49e3cef0fd62f710895db4941d8940f6bbcfc9115369" Feb 16 17:39:04.598075 master-0 kubenswrapper[4652]: I0216 17:39:04.598029 4652 scope.go:117] "RemoveContainer" containerID="cbe696b52c701a95154d7c627e87465ef37a24dcef0fa0a27f9537b62cc823b4" Feb 16 17:39:04.613504 master-0 kubenswrapper[4652]: I0216 17:39:04.613001 4652 scope.go:117] "RemoveContainer" containerID="3d2828c0d80b4c7afd8ffd37f8c260ee8dedebb2fc940c473cd50dec8ae63210" Feb 16 17:39:20.466998 master-0 kubenswrapper[4652]: I0216 17:39:20.466625 4652 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq"] Feb 16 17:39:20.468141 master-0 kubenswrapper[4652]: I0216 17:39:20.468076 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" Feb 16 17:39:20.477277 master-0 kubenswrapper[4652]: I0216 17:39:20.477172 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx"] Feb 16 17:39:20.478573 master-0 kubenswrapper[4652]: I0216 17:39:20.478542 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" Feb 16 17:39:20.489590 master-0 kubenswrapper[4652]: I0216 17:39:20.489184 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq"] Feb 16 17:39:20.497123 master-0 kubenswrapper[4652]: I0216 17:39:20.495825 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx"] Feb 16 17:39:20.528285 master-0 kubenswrapper[4652]: I0216 17:39:20.521050 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr"] Feb 16 17:39:20.528285 master-0 kubenswrapper[4652]: I0216 17:39:20.523341 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" Feb 16 17:39:20.547550 master-0 kubenswrapper[4652]: I0216 17:39:20.545625 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr"] Feb 16 17:39:20.567183 master-0 kubenswrapper[4652]: I0216 17:39:20.563723 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj"] Feb 16 17:39:20.594622 master-0 kubenswrapper[4652]: I0216 17:39:20.587103 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gzmc\" (UniqueName: \"kubernetes.io/projected/482c56ed-0576-46a4-b961-15030e811005-kube-api-access-7gzmc\") pod \"cinder-operator-controller-manager-5d946d989d-8ppjx\" (UID: \"482c56ed-0576-46a4-b961-15030e811005\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" Feb 16 17:39:20.594622 master-0 kubenswrapper[4652]: I0216 17:39:20.587261 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrsbp\" (UniqueName: \"kubernetes.io/projected/1c903b0b-4f9c-4739-8a74-cec124ddf2b8-kube-api-access-rrsbp\") pod \"barbican-operator-controller-manager-868647ff47-jmqqq\" (UID: \"1c903b0b-4f9c-4739-8a74-cec124ddf2b8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" Feb 16 17:39:20.594622 master-0 kubenswrapper[4652]: I0216 17:39:20.587285 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45h6z\" (UniqueName: \"kubernetes.io/projected/73d82bf6-e57d-4909-b293-39b33f8c142e-kube-api-access-45h6z\") pod \"designate-operator-controller-manager-6d8bf5c495-pddtr\" (UID: \"73d82bf6-e57d-4909-b293-39b33f8c142e\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" 
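
The "Observed pod startup duration" entry just above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (17:38:54.506053567 - 17:38:49 = 5.506053567s), and podStartSLOduration is that figure minus the image-pull window. Subtracting the monotonic m=+ offsets (891.562545352 - 887.503400563 = 4.059144789s) reproduces the logged 1.446908778s exactly. The earlier "Failed to process watch event ... 404" warning is commonly a harmless race: the pod's new cgroup shows up before the container is registered in the runtime cache, so the first lookup by id misses. A minimal Go sketch that redoes the SLO arithmetic from the wall-clock timestamps in the entry (it lands about 40ns off the logged value precisely because the tracker uses the monotonic offsets):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the pod_startup_latency_tracker entry above.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2026-02-16 17:38:49 +0000 UTC")             // podCreationTimestamp
        firstPull := parse("2026-02-16 17:38:50.115232007 +0000 UTC") // firstStartedPulling
        lastPull := parse("2026-02-16 17:38:54.174376836 +0000 UTC")  // lastFinishedPulling
        observed := parse("2026-02-16 17:38:54.506053567 +0000 UTC")  // watchObservedRunningTime

        e2e := observed.Sub(created)    // 5.506053567s, the logged podStartE2EDuration
        pull := lastPull.Sub(firstPull) // image-pull window
        // Prints 5.506053567s and 1.446908738s; the logged 1.446908778s
        // comes from the m=+ monotonic offsets, ~40ns apart from these stamps.
        fmt.Println(e2e, e2e-pull)
    }
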
Feb 16 17:39:20.594622 master-0 kubenswrapper[4652]: I0216 17:39:20.587810 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l"] Feb 16 17:39:20.597491 master-0 kubenswrapper[4652]: I0216 17:39:20.597467 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" Feb 16 17:39:20.600662 master-0 kubenswrapper[4652]: I0216 17:39:20.600616 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" Feb 16 17:39:20.668623 master-0 kubenswrapper[4652]: I0216 17:39:20.666702 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj"] Feb 16 17:39:20.695284 master-0 kubenswrapper[4652]: I0216 17:39:20.692709 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pt99\" (UniqueName: \"kubernetes.io/projected/0a763571-3fe3-4093-9032-5137713d66a2-kube-api-access-4pt99\") pod \"heat-operator-controller-manager-69f49c598c-xv27l\" (UID: \"0a763571-3fe3-4093-9032-5137713d66a2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" Feb 16 17:39:20.695284 master-0 kubenswrapper[4652]: I0216 17:39:20.692831 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qckqb\" (UniqueName: \"kubernetes.io/projected/7dcd9841-b2ee-4485-91fb-47e8ecbce567-kube-api-access-qckqb\") pod \"glance-operator-controller-manager-77987464f4-sv8qj\" (UID: \"7dcd9841-b2ee-4485-91fb-47e8ecbce567\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" Feb 16 17:39:20.695284 master-0 kubenswrapper[4652]: I0216 17:39:20.692871 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrsbp\" (UniqueName: \"kubernetes.io/projected/1c903b0b-4f9c-4739-8a74-cec124ddf2b8-kube-api-access-rrsbp\") pod \"barbican-operator-controller-manager-868647ff47-jmqqq\" (UID: \"1c903b0b-4f9c-4739-8a74-cec124ddf2b8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" Feb 16 17:39:20.695284 master-0 kubenswrapper[4652]: I0216 17:39:20.693015 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45h6z\" (UniqueName: \"kubernetes.io/projected/73d82bf6-e57d-4909-b293-39b33f8c142e-kube-api-access-45h6z\") pod \"designate-operator-controller-manager-6d8bf5c495-pddtr\" (UID: \"73d82bf6-e57d-4909-b293-39b33f8c142e\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" Feb 16 17:39:20.695284 master-0 kubenswrapper[4652]: I0216 17:39:20.693117 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gzmc\" (UniqueName: \"kubernetes.io/projected/482c56ed-0576-46a4-b961-15030e811005-kube-api-access-7gzmc\") pod \"cinder-operator-controller-manager-5d946d989d-8ppjx\" (UID: \"482c56ed-0576-46a4-b961-15030e811005\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" Feb 16 17:39:20.716495 master-0 kubenswrapper[4652]: I0216 17:39:20.716458 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45h6z\" (UniqueName: \"kubernetes.io/projected/73d82bf6-e57d-4909-b293-39b33f8c142e-kube-api-access-45h6z\") pod 
\"designate-operator-controller-manager-6d8bf5c495-pddtr\" (UID: \"73d82bf6-e57d-4909-b293-39b33f8c142e\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" Feb 16 17:39:20.719554 master-0 kubenswrapper[4652]: I0216 17:39:20.719487 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gzmc\" (UniqueName: \"kubernetes.io/projected/482c56ed-0576-46a4-b961-15030e811005-kube-api-access-7gzmc\") pod \"cinder-operator-controller-manager-5d946d989d-8ppjx\" (UID: \"482c56ed-0576-46a4-b961-15030e811005\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" Feb 16 17:39:20.738258 master-0 kubenswrapper[4652]: I0216 17:39:20.738182 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" Feb 16 17:39:20.790293 master-0 kubenswrapper[4652]: I0216 17:39:20.785891 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrsbp\" (UniqueName: \"kubernetes.io/projected/1c903b0b-4f9c-4739-8a74-cec124ddf2b8-kube-api-access-rrsbp\") pod \"barbican-operator-controller-manager-868647ff47-jmqqq\" (UID: \"1c903b0b-4f9c-4739-8a74-cec124ddf2b8\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" Feb 16 17:39:20.804195 master-0 kubenswrapper[4652]: I0216 17:39:20.803193 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qckqb\" (UniqueName: \"kubernetes.io/projected/7dcd9841-b2ee-4485-91fb-47e8ecbce567-kube-api-access-qckqb\") pod \"glance-operator-controller-manager-77987464f4-sv8qj\" (UID: \"7dcd9841-b2ee-4485-91fb-47e8ecbce567\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" Feb 16 17:39:20.804195 master-0 kubenswrapper[4652]: I0216 17:39:20.803312 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pt99\" (UniqueName: \"kubernetes.io/projected/0a763571-3fe3-4093-9032-5137713d66a2-kube-api-access-4pt99\") pod \"heat-operator-controller-manager-69f49c598c-xv27l\" (UID: \"0a763571-3fe3-4093-9032-5137713d66a2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" Feb 16 17:39:20.816826 master-0 kubenswrapper[4652]: I0216 17:39:20.809823 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l"] Feb 16 17:39:20.822351 master-0 kubenswrapper[4652]: I0216 17:39:20.822295 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" Feb 16 17:39:20.828024 master-0 kubenswrapper[4652]: I0216 17:39:20.825747 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t"] Feb 16 17:39:20.828024 master-0 kubenswrapper[4652]: I0216 17:39:20.827015 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" Feb 16 17:39:20.836274 master-0 kubenswrapper[4652]: I0216 17:39:20.833391 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t"] Feb 16 17:39:20.844567 master-0 kubenswrapper[4652]: I0216 17:39:20.844519 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pt99\" (UniqueName: \"kubernetes.io/projected/0a763571-3fe3-4093-9032-5137713d66a2-kube-api-access-4pt99\") pod \"heat-operator-controller-manager-69f49c598c-xv27l\" (UID: \"0a763571-3fe3-4093-9032-5137713d66a2\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" Feb 16 17:39:20.865036 master-0 kubenswrapper[4652]: I0216 17:39:20.863525 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q"] Feb 16 17:39:20.865036 master-0 kubenswrapper[4652]: I0216 17:39:20.864976 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:20.868167 master-0 kubenswrapper[4652]: I0216 17:39:20.866833 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qckqb\" (UniqueName: \"kubernetes.io/projected/7dcd9841-b2ee-4485-91fb-47e8ecbce567-kube-api-access-qckqb\") pod \"glance-operator-controller-manager-77987464f4-sv8qj\" (UID: \"7dcd9841-b2ee-4485-91fb-47e8ecbce567\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" Feb 16 17:39:20.868348 master-0 kubenswrapper[4652]: I0216 17:39:20.868180 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 17:39:20.899350 master-0 kubenswrapper[4652]: I0216 17:39:20.897368 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9"] Feb 16 17:39:20.899350 master-0 kubenswrapper[4652]: I0216 17:39:20.898334 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" Feb 16 17:39:20.907041 master-0 kubenswrapper[4652]: I0216 17:39:20.904760 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzcw5\" (UniqueName: \"kubernetes.io/projected/93f5b04d-01a1-4a05-9c52-d0e12e388ecb-kube-api-access-gzcw5\") pod \"horizon-operator-controller-manager-5b9b8895d5-n4s9t\" (UID: \"93f5b04d-01a1-4a05-9c52-d0e12e388ecb\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" Feb 16 17:39:20.923052 master-0 kubenswrapper[4652]: I0216 17:39:20.922317 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q"] Feb 16 17:39:20.939329 master-0 kubenswrapper[4652]: I0216 17:39:20.937709 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k"] Feb 16 17:39:20.939329 master-0 kubenswrapper[4652]: I0216 17:39:20.938619 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" Feb 16 17:39:20.962880 master-0 kubenswrapper[4652]: I0216 17:39:20.956693 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" Feb 16 17:39:20.962880 master-0 kubenswrapper[4652]: I0216 17:39:20.957990 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9"] Feb 16 17:39:20.965884 master-0 kubenswrapper[4652]: I0216 17:39:20.965838 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k"] Feb 16 17:39:20.980706 master-0 kubenswrapper[4652]: I0216 17:39:20.980581 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9"] Feb 16 17:39:20.982397 master-0 kubenswrapper[4652]: I0216 17:39:20.981437 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" Feb 16 17:39:20.983241 master-0 kubenswrapper[4652]: I0216 17:39:20.983069 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" Feb 16 17:39:21.010620 master-0 kubenswrapper[4652]: I0216 17:39:21.006278 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzcw5\" (UniqueName: \"kubernetes.io/projected/93f5b04d-01a1-4a05-9c52-d0e12e388ecb-kube-api-access-gzcw5\") pod \"horizon-operator-controller-manager-5b9b8895d5-n4s9t\" (UID: \"93f5b04d-01a1-4a05-9c52-d0e12e388ecb\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" Feb 16 17:39:21.010620 master-0 kubenswrapper[4652]: I0216 17:39:21.006381 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:21.010620 master-0 kubenswrapper[4652]: I0216 17:39:21.006493 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsps2\" (UniqueName: \"kubernetes.io/projected/b8fca871-c716-40c2-9e17-d85b8c71cca0-kube-api-access-xsps2\") pod \"ironic-operator-controller-manager-554564d7fc-sggd9\" (UID: \"b8fca871-c716-40c2-9e17-d85b8c71cca0\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" Feb 16 17:39:21.010620 master-0 kubenswrapper[4652]: I0216 17:39:21.006527 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bgvg\" (UniqueName: \"kubernetes.io/projected/68e37ed4-9304-4842-a4b5-9bc380c92262-kube-api-access-8bgvg\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:21.025568 master-0 kubenswrapper[4652]: I0216 17:39:21.024495 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" Feb 16 17:39:21.068363 master-0 kubenswrapper[4652]: I0216 17:39:21.067978 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzcw5\" (UniqueName: \"kubernetes.io/projected/93f5b04d-01a1-4a05-9c52-d0e12e388ecb-kube-api-access-gzcw5\") pod \"horizon-operator-controller-manager-5b9b8895d5-n4s9t\" (UID: \"93f5b04d-01a1-4a05-9c52-d0e12e388ecb\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" Feb 16 17:39:21.084859 master-0 kubenswrapper[4652]: I0216 17:39:21.084798 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9"] Feb 16 17:39:21.108766 master-0 kubenswrapper[4652]: I0216 17:39:21.108704 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsps2\" (UniqueName: \"kubernetes.io/projected/b8fca871-c716-40c2-9e17-d85b8c71cca0-kube-api-access-xsps2\") pod \"ironic-operator-controller-manager-554564d7fc-sggd9\" (UID: \"b8fca871-c716-40c2-9e17-d85b8c71cca0\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" Feb 16 17:39:21.108766 master-0 kubenswrapper[4652]: I0216 17:39:21.108773 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bgvg\" (UniqueName: \"kubernetes.io/projected/68e37ed4-9304-4842-a4b5-9bc380c92262-kube-api-access-8bgvg\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:21.109028 master-0 kubenswrapper[4652]: I0216 17:39:21.108880 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:21.109028 master-0 kubenswrapper[4652]: I0216 17:39:21.108919 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w85bg\" (UniqueName: \"kubernetes.io/projected/cb821b42-d977-4380-9dcd-ac2684aa0ebd-kube-api-access-w85bg\") pod \"keystone-operator-controller-manager-b4d948c87-swv4k\" (UID: \"cb821b42-d977-4380-9dcd-ac2684aa0ebd\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" Feb 16 17:39:21.109028 master-0 kubenswrapper[4652]: I0216 17:39:21.108970 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbxcq\" (UniqueName: \"kubernetes.io/projected/e53ab543-bdf3-44fc-adb4-8f1823a53e23-kube-api-access-cbxcq\") pod \"manila-operator-controller-manager-54f6768c69-rcsk9\" (UID: \"e53ab543-bdf3-44fc-adb4-8f1823a53e23\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" Feb 16 17:39:21.109270 master-0 kubenswrapper[4652]: E0216 17:39:21.109195 4652 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:21.109417 master-0 kubenswrapper[4652]: E0216 17:39:21.109395 4652 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert podName:68e37ed4-9304-4842-a4b5-9bc380c92262 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:21.609365063 +0000 UTC m=+918.997533579 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert") pod "infra-operator-controller-manager-5f879c76b6-f4x7q" (UID: "68e37ed4-9304-4842-a4b5-9bc380c92262") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:21.111165 master-0 kubenswrapper[4652]: I0216 17:39:21.109886 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq"] Feb 16 17:39:21.120657 master-0 kubenswrapper[4652]: I0216 17:39:21.117161 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq"] Feb 16 17:39:21.120657 master-0 kubenswrapper[4652]: I0216 17:39:21.117303 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" Feb 16 17:39:21.130629 master-0 kubenswrapper[4652]: I0216 17:39:21.128583 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" Feb 16 17:39:21.133602 master-0 kubenswrapper[4652]: I0216 17:39:21.133451 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bgvg\" (UniqueName: \"kubernetes.io/projected/68e37ed4-9304-4842-a4b5-9bc380c92262-kube-api-access-8bgvg\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:21.135462 master-0 kubenswrapper[4652]: I0216 17:39:21.135415 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm"] Feb 16 17:39:21.141335 master-0 kubenswrapper[4652]: I0216 17:39:21.136889 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" Feb 16 17:39:21.162605 master-0 kubenswrapper[4652]: I0216 17:39:21.151361 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm"] Feb 16 17:39:21.164605 master-0 kubenswrapper[4652]: I0216 17:39:21.163654 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj"] Feb 16 17:39:21.165556 master-0 kubenswrapper[4652]: I0216 17:39:21.165008 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" Feb 16 17:39:21.180357 master-0 kubenswrapper[4652]: I0216 17:39:21.174695 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsps2\" (UniqueName: \"kubernetes.io/projected/b8fca871-c716-40c2-9e17-d85b8c71cca0-kube-api-access-xsps2\") pod \"ironic-operator-controller-manager-554564d7fc-sggd9\" (UID: \"b8fca871-c716-40c2-9e17-d85b8c71cca0\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" Feb 16 17:39:21.195930 master-0 kubenswrapper[4652]: I0216 17:39:21.193525 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" Feb 16 17:39:21.211590 master-0 kubenswrapper[4652]: I0216 17:39:21.211528 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbxcq\" (UniqueName: \"kubernetes.io/projected/e53ab543-bdf3-44fc-adb4-8f1823a53e23-kube-api-access-cbxcq\") pod \"manila-operator-controller-manager-54f6768c69-rcsk9\" (UID: \"e53ab543-bdf3-44fc-adb4-8f1823a53e23\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" Feb 16 17:39:21.212144 master-0 kubenswrapper[4652]: I0216 17:39:21.212087 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwz76\" (UniqueName: \"kubernetes.io/projected/25fc3b27-12ca-453a-866d-ae8f312e3fce-kube-api-access-fwz76\") pod \"mariadb-operator-controller-manager-6994f66f48-lqjrq\" (UID: \"25fc3b27-12ca-453a-866d-ae8f312e3fce\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" Feb 16 17:39:21.212346 master-0 kubenswrapper[4652]: I0216 17:39:21.212313 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxpz\" (UniqueName: \"kubernetes.io/projected/8c805d7e-74a8-405c-a828-3d93851ce223-kube-api-access-nfxpz\") pod \"neutron-operator-controller-manager-64ddbf8bb-4sgzm\" (UID: \"8c805d7e-74a8-405c-a828-3d93851ce223\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" Feb 16 17:39:21.212759 master-0 kubenswrapper[4652]: I0216 17:39:21.212712 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w85bg\" (UniqueName: \"kubernetes.io/projected/cb821b42-d977-4380-9dcd-ac2684aa0ebd-kube-api-access-w85bg\") pod \"keystone-operator-controller-manager-b4d948c87-swv4k\" (UID: \"cb821b42-d977-4380-9dcd-ac2684aa0ebd\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" Feb 16 17:39:21.258160 master-0 kubenswrapper[4652]: I0216 17:39:21.235637 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj"] Feb 16 17:39:21.258160 master-0 kubenswrapper[4652]: I0216 17:39:21.237450 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w85bg\" (UniqueName: \"kubernetes.io/projected/cb821b42-d977-4380-9dcd-ac2684aa0ebd-kube-api-access-w85bg\") pod \"keystone-operator-controller-manager-b4d948c87-swv4k\" (UID: \"cb821b42-d977-4380-9dcd-ac2684aa0ebd\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" Feb 16 17:39:21.258160 master-0 kubenswrapper[4652]: I0216 17:39:21.242864 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbxcq\" (UniqueName: \"kubernetes.io/projected/e53ab543-bdf3-44fc-adb4-8f1823a53e23-kube-api-access-cbxcq\") pod \"manila-operator-controller-manager-54f6768c69-rcsk9\" (UID: \"e53ab543-bdf3-44fc-adb4-8f1823a53e23\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" Feb 16 17:39:21.267646 master-0 kubenswrapper[4652]: I0216 17:39:21.267316 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs"] Feb 16 17:39:21.268784 master-0 kubenswrapper[4652]: I0216 17:39:21.268746 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" Feb 16 17:39:21.289444 master-0 kubenswrapper[4652]: I0216 17:39:21.289367 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs"] Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.302161 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j"] Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.303466 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.320331 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm"] Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.321242 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j"] Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.321333 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.322032 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj"] Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.322706 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.326371 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwz76\" (UniqueName: \"kubernetes.io/projected/25fc3b27-12ca-453a-866d-ae8f312e3fce-kube-api-access-fwz76\") pod \"mariadb-operator-controller-manager-6994f66f48-lqjrq\" (UID: \"25fc3b27-12ca-453a-866d-ae8f312e3fce\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.326425 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4mfd\" (UniqueName: \"kubernetes.io/projected/c48f8618-dc72-4a9a-9929-9bb841ad3a4b-kube-api-access-f4mfd\") pod \"nova-operator-controller-manager-567668f5cf-gcmjj\" (UID: \"c48f8618-dc72-4a9a-9929-9bb841ad3a4b\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.326459 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfxpz\" (UniqueName: \"kubernetes.io/projected/8c805d7e-74a8-405c-a828-3d93851ce223-kube-api-access-nfxpz\") pod \"neutron-operator-controller-manager-64ddbf8bb-4sgzm\" (UID: \"8c805d7e-74a8-405c-a828-3d93851ce223\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 17:39:21.329739 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 17:39:21.357721 master-0 kubenswrapper[4652]: I0216 
17:39:21.341996 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" Feb 16 17:39:21.359772 master-0 kubenswrapper[4652]: I0216 17:39:21.359739 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwz76\" (UniqueName: \"kubernetes.io/projected/25fc3b27-12ca-453a-866d-ae8f312e3fce-kube-api-access-fwz76\") pod \"mariadb-operator-controller-manager-6994f66f48-lqjrq\" (UID: \"25fc3b27-12ca-453a-866d-ae8f312e3fce\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" Feb 16 17:39:21.383667 master-0 kubenswrapper[4652]: I0216 17:39:21.378399 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj"] Feb 16 17:39:21.383667 master-0 kubenswrapper[4652]: I0216 17:39:21.379326 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfxpz\" (UniqueName: \"kubernetes.io/projected/8c805d7e-74a8-405c-a828-3d93851ce223-kube-api-access-nfxpz\") pod \"neutron-operator-controller-manager-64ddbf8bb-4sgzm\" (UID: \"8c805d7e-74a8-405c-a828-3d93851ce223\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" Feb 16 17:39:21.383667 master-0 kubenswrapper[4652]: I0216 17:39:21.382994 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm"] Feb 16 17:39:21.409850 master-0 kubenswrapper[4652]: I0216 17:39:21.409812 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" Feb 16 17:39:21.411395 master-0 kubenswrapper[4652]: I0216 17:39:21.411380 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6"] Feb 16 17:39:21.412443 master-0 kubenswrapper[4652]: I0216 17:39:21.412427 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" Feb 16 17:39:21.413644 master-0 kubenswrapper[4652]: I0216 17:39:21.413612 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6"] Feb 16 17:39:21.423679 master-0 kubenswrapper[4652]: I0216 17:39:21.423095 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" Feb 16 17:39:21.442620 master-0 kubenswrapper[4652]: I0216 17:39:21.428337 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrpt4\" (UniqueName: \"kubernetes.io/projected/313fc27d-cc30-4146-b820-47d332777114-kube-api-access-vrpt4\") pod \"ovn-operator-controller-manager-d44cf6b75-tmx4j\" (UID: \"313fc27d-cc30-4146-b820-47d332777114\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" Feb 16 17:39:21.442620 master-0 kubenswrapper[4652]: I0216 17:39:21.428409 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6vwf\" (UniqueName: \"kubernetes.io/projected/d2031c5c-58d9-49ee-a82e-da48fba84526-kube-api-access-z6vwf\") pod \"placement-operator-controller-manager-8497b45c89-pkhcj\" (UID: \"d2031c5c-58d9-49ee-a82e-da48fba84526\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" Feb 16 17:39:21.442620 master-0 kubenswrapper[4652]: I0216 17:39:21.428482 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nzgx\" (UniqueName: \"kubernetes.io/projected/a44a1b70-8302-4276-a164-ab83b4a46945-kube-api-access-5nzgx\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:21.442620 master-0 kubenswrapper[4652]: I0216 17:39:21.428526 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4mfd\" (UniqueName: \"kubernetes.io/projected/c48f8618-dc72-4a9a-9929-9bb841ad3a4b-kube-api-access-f4mfd\") pod \"nova-operator-controller-manager-567668f5cf-gcmjj\" (UID: \"c48f8618-dc72-4a9a-9929-9bb841ad3a4b\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" Feb 16 17:39:21.442620 master-0 kubenswrapper[4652]: I0216 17:39:21.428776 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xjgb\" (UniqueName: \"kubernetes.io/projected/71c16787-8d47-4258-85be-938b26e3d7e7-kube-api-access-7xjgb\") pod \"octavia-operator-controller-manager-69f8888797-xv2qs\" (UID: \"71c16787-8d47-4258-85be-938b26e3d7e7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" Feb 16 17:39:21.442620 master-0 kubenswrapper[4652]: I0216 17:39:21.428823 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:21.442620 master-0 kubenswrapper[4652]: I0216 17:39:21.431520 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8"] Feb 16 17:39:21.455426 master-0 kubenswrapper[4652]: I0216 17:39:21.453923 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" Feb 16 17:39:21.465814 master-0 kubenswrapper[4652]: I0216 17:39:21.465746 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4mfd\" (UniqueName: \"kubernetes.io/projected/c48f8618-dc72-4a9a-9929-9bb841ad3a4b-kube-api-access-f4mfd\") pod \"nova-operator-controller-manager-567668f5cf-gcmjj\" (UID: \"c48f8618-dc72-4a9a-9929-9bb841ad3a4b\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" Feb 16 17:39:21.482893 master-0 kubenswrapper[4652]: I0216 17:39:21.477308 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7c6b4"] Feb 16 17:39:21.482893 master-0 kubenswrapper[4652]: I0216 17:39:21.478450 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" Feb 16 17:39:21.484930 master-0 kubenswrapper[4652]: I0216 17:39:21.484415 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8"] Feb 16 17:39:21.501351 master-0 kubenswrapper[4652]: I0216 17:39:21.499205 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7c6b4"] Feb 16 17:39:21.519430 master-0 kubenswrapper[4652]: I0216 17:39:21.518981 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" Feb 16 17:39:21.531090 master-0 kubenswrapper[4652]: I0216 17:39:21.531031 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrpt4\" (UniqueName: \"kubernetes.io/projected/313fc27d-cc30-4146-b820-47d332777114-kube-api-access-vrpt4\") pod \"ovn-operator-controller-manager-d44cf6b75-tmx4j\" (UID: \"313fc27d-cc30-4146-b820-47d332777114\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" Feb 16 17:39:21.531090 master-0 kubenswrapper[4652]: I0216 17:39:21.531086 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6vwf\" (UniqueName: \"kubernetes.io/projected/d2031c5c-58d9-49ee-a82e-da48fba84526-kube-api-access-z6vwf\") pod \"placement-operator-controller-manager-8497b45c89-pkhcj\" (UID: \"d2031c5c-58d9-49ee-a82e-da48fba84526\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" Feb 16 17:39:21.531399 master-0 kubenswrapper[4652]: I0216 17:39:21.531153 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nzgx\" (UniqueName: \"kubernetes.io/projected/a44a1b70-8302-4276-a164-ab83b4a46945-kube-api-access-5nzgx\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:21.531399 master-0 kubenswrapper[4652]: I0216 17:39:21.531207 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcfj9\" (UniqueName: \"kubernetes.io/projected/37380c64-5bcd-4446-9a7e-e44745a24096-kube-api-access-hcfj9\") pod \"swift-operator-controller-manager-68f46476f-bhcg6\" (UID: \"37380c64-5bcd-4446-9a7e-e44745a24096\") " 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" Feb 16 17:39:21.531399 master-0 kubenswrapper[4652]: I0216 17:39:21.531286 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xjgb\" (UniqueName: \"kubernetes.io/projected/71c16787-8d47-4258-85be-938b26e3d7e7-kube-api-access-7xjgb\") pod \"octavia-operator-controller-manager-69f8888797-xv2qs\" (UID: \"71c16787-8d47-4258-85be-938b26e3d7e7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" Feb 16 17:39:21.531399 master-0 kubenswrapper[4652]: I0216 17:39:21.531339 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:21.531399 master-0 kubenswrapper[4652]: I0216 17:39:21.531399 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw47w\" (UniqueName: \"kubernetes.io/projected/d806807a-49f5-4a93-b423-724c8ca48c84-kube-api-access-gw47w\") pod \"telemetry-operator-controller-manager-7f45b4ff68-wsws8\" (UID: \"d806807a-49f5-4a93-b423-724c8ca48c84\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" Feb 16 17:39:21.532185 master-0 kubenswrapper[4652]: E0216 17:39:21.531874 4652 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:21.532185 master-0 kubenswrapper[4652]: E0216 17:39:21.531916 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert podName:a44a1b70-8302-4276-a164-ab83b4a46945 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:22.031900719 +0000 UTC m=+919.420069235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" (UID: "a44a1b70-8302-4276-a164-ab83b4a46945") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:21.542149 master-0 kubenswrapper[4652]: I0216 17:39:21.542075 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc"] Feb 16 17:39:21.543674 master-0 kubenswrapper[4652]: I0216 17:39:21.543645 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" Feb 16 17:39:21.547492 master-0 kubenswrapper[4652]: I0216 17:39:21.547441 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nzgx\" (UniqueName: \"kubernetes.io/projected/a44a1b70-8302-4276-a164-ab83b4a46945-kube-api-access-5nzgx\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:21.548292 master-0 kubenswrapper[4652]: I0216 17:39:21.548237 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrpt4\" (UniqueName: \"kubernetes.io/projected/313fc27d-cc30-4146-b820-47d332777114-kube-api-access-vrpt4\") pod \"ovn-operator-controller-manager-d44cf6b75-tmx4j\" (UID: \"313fc27d-cc30-4146-b820-47d332777114\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" Feb 16 17:39:21.552831 master-0 kubenswrapper[4652]: I0216 17:39:21.552786 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc"] Feb 16 17:39:21.553448 master-0 kubenswrapper[4652]: I0216 17:39:21.553423 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6vwf\" (UniqueName: \"kubernetes.io/projected/d2031c5c-58d9-49ee-a82e-da48fba84526-kube-api-access-z6vwf\") pod \"placement-operator-controller-manager-8497b45c89-pkhcj\" (UID: \"d2031c5c-58d9-49ee-a82e-da48fba84526\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" Feb 16 17:39:21.555146 master-0 kubenswrapper[4652]: I0216 17:39:21.555074 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" Feb 16 17:39:21.567618 master-0 kubenswrapper[4652]: I0216 17:39:21.567576 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xjgb\" (UniqueName: \"kubernetes.io/projected/71c16787-8d47-4258-85be-938b26e3d7e7-kube-api-access-7xjgb\") pod \"octavia-operator-controller-manager-69f8888797-xv2qs\" (UID: \"71c16787-8d47-4258-85be-938b26e3d7e7\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" Feb 16 17:39:21.578517 master-0 kubenswrapper[4652]: I0216 17:39:21.578484 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" Feb 16 17:39:21.613304 master-0 kubenswrapper[4652]: I0216 17:39:21.612958 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96"] Feb 16 17:39:21.615159 master-0 kubenswrapper[4652]: I0216 17:39:21.615008 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:21.617106 master-0 kubenswrapper[4652]: I0216 17:39:21.616075 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" Feb 16 17:39:21.618031 master-0 kubenswrapper[4652]: I0216 17:39:21.617940 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 17:39:21.618386 master-0 kubenswrapper[4652]: I0216 17:39:21.618150 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 17:39:21.634118 master-0 kubenswrapper[4652]: I0216 17:39:21.633015 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96"] Feb 16 17:39:21.637573 master-0 kubenswrapper[4652]: I0216 17:39:21.637541 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcfj9\" (UniqueName: \"kubernetes.io/projected/37380c64-5bcd-4446-9a7e-e44745a24096-kube-api-access-hcfj9\") pod \"swift-operator-controller-manager-68f46476f-bhcg6\" (UID: \"37380c64-5bcd-4446-9a7e-e44745a24096\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" Feb 16 17:39:21.638237 master-0 kubenswrapper[4652]: I0216 17:39:21.638180 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjqsw\" (UniqueName: \"kubernetes.io/projected/7e497b24-455b-402b-b2d1-36ad774dec94-kube-api-access-cjqsw\") pod \"watcher-operator-controller-manager-5db88f68c-tmbxc\" (UID: \"7e497b24-455b-402b-b2d1-36ad774dec94\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" Feb 16 17:39:21.639714 master-0 kubenswrapper[4652]: I0216 17:39:21.639690 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:21.639892 master-0 kubenswrapper[4652]: I0216 17:39:21.639872 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62v9r\" (UniqueName: \"kubernetes.io/projected/2f1a42b6-5584-411c-b499-5d9c71d3e9f1-kube-api-access-62v9r\") pod \"test-operator-controller-manager-7866795846-7c6b4\" (UID: \"2f1a42b6-5584-411c-b499-5d9c71d3e9f1\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" Feb 16 17:39:21.640010 master-0 kubenswrapper[4652]: I0216 17:39:21.639993 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw47w\" (UniqueName: \"kubernetes.io/projected/d806807a-49f5-4a93-b423-724c8ca48c84-kube-api-access-gw47w\") pod \"telemetry-operator-controller-manager-7f45b4ff68-wsws8\" (UID: \"d806807a-49f5-4a93-b423-724c8ca48c84\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" Feb 16 17:39:21.640514 master-0 kubenswrapper[4652]: E0216 17:39:21.640496 4652 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:21.640655 master-0 kubenswrapper[4652]: E0216 17:39:21.640643 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert podName:68e37ed4-9304-4842-a4b5-9bc380c92262 nodeName:}" failed. 
No retries permitted until 2026-02-16 17:39:22.640625954 +0000 UTC m=+920.028794470 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert") pod "infra-operator-controller-manager-5f879c76b6-f4x7q" (UID: "68e37ed4-9304-4842-a4b5-9bc380c92262") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:21.661015 master-0 kubenswrapper[4652]: I0216 17:39:21.660969 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcfj9\" (UniqueName: \"kubernetes.io/projected/37380c64-5bcd-4446-9a7e-e44745a24096-kube-api-access-hcfj9\") pod \"swift-operator-controller-manager-68f46476f-bhcg6\" (UID: \"37380c64-5bcd-4446-9a7e-e44745a24096\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" Feb 16 17:39:21.661520 master-0 kubenswrapper[4652]: I0216 17:39:21.661488 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc"] Feb 16 17:39:21.661606 master-0 kubenswrapper[4652]: I0216 17:39:21.661584 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw47w\" (UniqueName: \"kubernetes.io/projected/d806807a-49f5-4a93-b423-724c8ca48c84-kube-api-access-gw47w\") pod \"telemetry-operator-controller-manager-7f45b4ff68-wsws8\" (UID: \"d806807a-49f5-4a93-b423-724c8ca48c84\") " pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" Feb 16 17:39:21.664608 master-0 kubenswrapper[4652]: I0216 17:39:21.664301 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc" Feb 16 17:39:21.671288 master-0 kubenswrapper[4652]: I0216 17:39:21.671184 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" Feb 16 17:39:21.679956 master-0 kubenswrapper[4652]: I0216 17:39:21.679891 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc"] Feb 16 17:39:21.713608 master-0 kubenswrapper[4652]: I0216 17:39:21.712508 4652 util.go:30] "No sandbox for pod can be found. 
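
The two "cert" mount failures above for infra-operator-controller-manager-5f879c76b6-f4x7q show the kubelet's per-volume exponential backoff at work: the infra-operator-webhook-server-cert secret does not exist yet, so nestedpendingoperations fails the operation at 17:39:21.109 with durationBeforeRetry 500ms, then again at 17:39:21.640 with durationBeforeRetry 1s, doubling each time. A sketch of that schedule (the 500ms start and the doubling match the log; the 2m2s ceiling is an assumption, not visible in this excerpt):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond         // first durationBeforeRetry in the log
        maxDelay := 2*time.Minute + 2*time.Second // assumed cap
        for failures := 1; failures <= 8; failures++ {
            fmt.Printf("after failure %d: retry in %v\n", failures, delay)
            delay *= 2 // the log shows 500ms -> 1s for consecutive failures
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
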
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" Feb 16 17:39:21.742229 master-0 kubenswrapper[4652]: I0216 17:39:21.741692 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:21.742993 master-0 kubenswrapper[4652]: I0216 17:39:21.742691 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjqsw\" (UniqueName: \"kubernetes.io/projected/7e497b24-455b-402b-b2d1-36ad774dec94-kube-api-access-cjqsw\") pod \"watcher-operator-controller-manager-5db88f68c-tmbxc\" (UID: \"7e497b24-455b-402b-b2d1-36ad774dec94\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" Feb 16 17:39:21.743305 master-0 kubenswrapper[4652]: I0216 17:39:21.743283 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nnlz\" (UniqueName: \"kubernetes.io/projected/a0b757aa-3fd1-48e0-9405-0008e2df1011-kube-api-access-6nnlz\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:21.743574 master-0 kubenswrapper[4652]: I0216 17:39:21.743554 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62v9r\" (UniqueName: \"kubernetes.io/projected/2f1a42b6-5584-411c-b499-5d9c71d3e9f1-kube-api-access-62v9r\") pod \"test-operator-controller-manager-7866795846-7c6b4\" (UID: \"2f1a42b6-5584-411c-b499-5d9c71d3e9f1\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" Feb 16 17:39:21.743781 master-0 kubenswrapper[4652]: I0216 17:39:21.743764 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:21.781355 master-0 kubenswrapper[4652]: I0216 17:39:21.776271 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq"] Feb 16 17:39:21.781355 master-0 kubenswrapper[4652]: I0216 17:39:21.781013 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjqsw\" (UniqueName: \"kubernetes.io/projected/7e497b24-455b-402b-b2d1-36ad774dec94-kube-api-access-cjqsw\") pod \"watcher-operator-controller-manager-5db88f68c-tmbxc\" (UID: \"7e497b24-455b-402b-b2d1-36ad774dec94\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" Feb 16 17:39:21.786282 master-0 kubenswrapper[4652]: I0216 17:39:21.786205 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" event={"ID":"1c903b0b-4f9c-4739-8a74-cec124ddf2b8","Type":"ContainerStarted","Data":"dbcf78b8c388a56c198e2d4429533e56094187bf9f53ae99314a7609223040c2"} Feb 16 17:39:21.791569 
master-0 kubenswrapper[4652]: I0216 17:39:21.791533 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62v9r\" (UniqueName: \"kubernetes.io/projected/2f1a42b6-5584-411c-b499-5d9c71d3e9f1-kube-api-access-62v9r\") pod \"test-operator-controller-manager-7866795846-7c6b4\" (UID: \"2f1a42b6-5584-411c-b499-5d9c71d3e9f1\") " pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" Feb 16 17:39:21.846455 master-0 kubenswrapper[4652]: I0216 17:39:21.846390 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gnf\" (UniqueName: \"kubernetes.io/projected/f337e3a4-350f-4d2e-80b2-215c172d2a68-kube-api-access-h4gnf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7m8kc\" (UID: \"f337e3a4-350f-4d2e-80b2-215c172d2a68\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc" Feb 16 17:39:21.846455 master-0 kubenswrapper[4652]: I0216 17:39:21.846456 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:21.846745 master-0 kubenswrapper[4652]: I0216 17:39:21.846586 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nnlz\" (UniqueName: \"kubernetes.io/projected/a0b757aa-3fd1-48e0-9405-0008e2df1011-kube-api-access-6nnlz\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:21.846745 master-0 kubenswrapper[4652]: I0216 17:39:21.846644 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:21.847095 master-0 kubenswrapper[4652]: E0216 17:39:21.847074 4652 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:39:21.847170 master-0 kubenswrapper[4652]: E0216 17:39:21.847120 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:22.347107028 +0000 UTC m=+919.735275544 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "webhook-server-cert" not found Feb 16 17:39:21.849575 master-0 kubenswrapper[4652]: E0216 17:39:21.849540 4652 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:39:21.849676 master-0 kubenswrapper[4652]: E0216 17:39:21.849588 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:22.349575754 +0000 UTC m=+919.737744270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "metrics-server-cert" not found Feb 16 17:39:21.852045 master-0 kubenswrapper[4652]: I0216 17:39:21.852009 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" Feb 16 17:39:21.870811 master-0 kubenswrapper[4652]: I0216 17:39:21.870765 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nnlz\" (UniqueName: \"kubernetes.io/projected/a0b757aa-3fd1-48e0-9405-0008e2df1011-kube-api-access-6nnlz\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:21.957359 master-0 kubenswrapper[4652]: I0216 17:39:21.956827 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gnf\" (UniqueName: \"kubernetes.io/projected/f337e3a4-350f-4d2e-80b2-215c172d2a68-kube-api-access-h4gnf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7m8kc\" (UID: \"f337e3a4-350f-4d2e-80b2-215c172d2a68\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc" Feb 16 17:39:21.963454 master-0 kubenswrapper[4652]: I0216 17:39:21.963416 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx"] Feb 16 17:39:21.974775 master-0 kubenswrapper[4652]: W0216 17:39:21.970333 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod482c56ed_0576_46a4_b961_15030e811005.slice/crio-06590738f800037fb9ea48fb42a70e9aa45a204d2891dc3119f5da884e7fc9b2 WatchSource:0}: Error finding container 06590738f800037fb9ea48fb42a70e9aa45a204d2891dc3119f5da884e7fc9b2: Status 404 returned error can't find the container with id 06590738f800037fb9ea48fb42a70e9aa45a204d2891dc3119f5da884e7fc9b2 Feb 16 17:39:21.981328 master-0 kubenswrapper[4652]: I0216 17:39:21.981282 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gnf\" (UniqueName: \"kubernetes.io/projected/f337e3a4-350f-4d2e-80b2-215c172d2a68-kube-api-access-h4gnf\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7m8kc\" (UID: \"f337e3a4-350f-4d2e-80b2-215c172d2a68\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc" Feb 16 17:39:21.986549 master-0 kubenswrapper[4652]: W0216 17:39:21.981850 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dcd9841_b2ee_4485_91fb_47e8ecbce567.slice/crio-14d8d0afa0e0ae3ac45f1ecb7f438ce3fe512cb55507ebd3200a33246bfac0d9 WatchSource:0}: Error finding container 14d8d0afa0e0ae3ac45f1ecb7f438ce3fe512cb55507ebd3200a33246bfac0d9: Status 404 returned error can't find the container with id 14d8d0afa0e0ae3ac45f1ecb7f438ce3fe512cb55507ebd3200a33246bfac0d9 Feb 16 17:39:21.997057 master-0 kubenswrapper[4652]: I0216 17:39:21.997017 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" Feb 16 17:39:21.997483 master-0 kubenswrapper[4652]: I0216 17:39:21.997458 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr"] Feb 16 17:39:22.026240 master-0 kubenswrapper[4652]: I0216 17:39:22.016836 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" Feb 16 17:39:22.046468 master-0 kubenswrapper[4652]: I0216 17:39:22.046403 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj"] Feb 16 17:39:22.069908 master-0 kubenswrapper[4652]: I0216 17:39:22.058183 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc" Feb 16 17:39:22.069908 master-0 kubenswrapper[4652]: I0216 17:39:22.062603 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:22.069908 master-0 kubenswrapper[4652]: E0216 17:39:22.062791 4652 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:22.069908 master-0 kubenswrapper[4652]: E0216 17:39:22.062843 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert podName:a44a1b70-8302-4276-a164-ab83b4a46945 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:23.06282534 +0000 UTC m=+920.450993856 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" (UID: "a44a1b70-8302-4276-a164-ab83b4a46945") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:22.079306 master-0 kubenswrapper[4652]: I0216 17:39:22.072746 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l"] Feb 16 17:39:22.092108 master-0 kubenswrapper[4652]: W0216 17:39:22.090350 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a763571_3fe3_4093_9032_5137713d66a2.slice/crio-24aee97ac53c1a974cfa6ec2a3c17bbb7f59200b5f41d8e4ee166147621f25de WatchSource:0}: Error finding container 24aee97ac53c1a974cfa6ec2a3c17bbb7f59200b5f41d8e4ee166147621f25de: Status 404 returned error can't find the container with id 24aee97ac53c1a974cfa6ec2a3c17bbb7f59200b5f41d8e4ee166147621f25de Feb 16 17:39:22.368169 master-0 kubenswrapper[4652]: I0216 17:39:22.368027 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:22.368387 master-0 kubenswrapper[4652]: E0216 17:39:22.368224 4652 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:39:22.368387 master-0 kubenswrapper[4652]: I0216 17:39:22.368309 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:22.368387 master-0 kubenswrapper[4652]: E0216 17:39:22.368332 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:23.368311219 +0000 UTC m=+920.756479825 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "webhook-server-cert" not found Feb 16 17:39:22.370710 master-0 kubenswrapper[4652]: E0216 17:39:22.370677 4652 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:39:22.370800 master-0 kubenswrapper[4652]: E0216 17:39:22.370731 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:23.370718964 +0000 UTC m=+920.758887480 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "metrics-server-cert" not found Feb 16 17:39:22.676403 master-0 kubenswrapper[4652]: I0216 17:39:22.676346 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:22.677833 master-0 kubenswrapper[4652]: E0216 17:39:22.677802 4652 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:22.677833 master-0 kubenswrapper[4652]: E0216 17:39:22.677880 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert podName:68e37ed4-9304-4842-a4b5-9bc380c92262 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:24.677860877 +0000 UTC m=+922.066029403 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert") pod "infra-operator-controller-manager-5f879c76b6-f4x7q" (UID: "68e37ed4-9304-4842-a4b5-9bc380c92262") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:22.722930 master-0 kubenswrapper[4652]: I0216 17:39:22.720459 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq"] Feb 16 17:39:22.740764 master-0 kubenswrapper[4652]: I0216 17:39:22.736455 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t"] Feb 16 17:39:22.782296 master-0 kubenswrapper[4652]: W0216 17:39:22.779766 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc48f8618_dc72_4a9a_9929_9bb841ad3a4b.slice/crio-4ff1421578169918e10fa42e8e30acf88858197335a6cef3c8878f4b72f1e55a WatchSource:0}: Error finding container 4ff1421578169918e10fa42e8e30acf88858197335a6cef3c8878f4b72f1e55a: Status 404 returned error can't find the container with id 4ff1421578169918e10fa42e8e30acf88858197335a6cef3c8878f4b72f1e55a Feb 16 17:39:22.790827 master-0 kubenswrapper[4652]: I0216 17:39:22.789434 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9"] Feb 16 17:39:22.795173 master-0 kubenswrapper[4652]: W0216 17:39:22.794037 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb821b42_d977_4380_9dcd_ac2684aa0ebd.slice/crio-c43740b8342ed34c063ed9e5a74ad24b55bd0ccbffb49d8a9de1a42d55828577 WatchSource:0}: Error finding container c43740b8342ed34c063ed9e5a74ad24b55bd0ccbffb49d8a9de1a42d55828577: Status 404 returned error can't find the container with id c43740b8342ed34c063ed9e5a74ad24b55bd0ccbffb49d8a9de1a42d55828577 Feb 16 17:39:22.798480 master-0 kubenswrapper[4652]: I0216 17:39:22.798439 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" event={"ID":"93f5b04d-01a1-4a05-9c52-d0e12e388ecb","Type":"ContainerStarted","Data":"68a535e6d4f750af190b2c125d1ba04d9ccc00b32a862f18a7b72ff33b328b28"} Feb 16 17:39:22.802259 master-0 kubenswrapper[4652]: I0216 17:39:22.802005 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" event={"ID":"e53ab543-bdf3-44fc-adb4-8f1823a53e23","Type":"ContainerStarted","Data":"0d4a737d65905a7adce56047dd8bc4c05fc473a999b504a1347a5a7ada58d061"} Feb 16 17:39:22.802259 master-0 kubenswrapper[4652]: I0216 17:39:22.802183 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9"] Feb 16 17:39:22.804290 master-0 kubenswrapper[4652]: I0216 17:39:22.804232 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" event={"ID":"7dcd9841-b2ee-4485-91fb-47e8ecbce567","Type":"ContainerStarted","Data":"14d8d0afa0e0ae3ac45f1ecb7f438ce3fe512cb55507ebd3200a33246bfac0d9"} Feb 16 17:39:22.805611 master-0 kubenswrapper[4652]: I0216 17:39:22.805579 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" event={"ID":"25fc3b27-12ca-453a-866d-ae8f312e3fce","Type":"ContainerStarted","Data":"0392acc474e035ee3e7598f885b5c293448fa091197b34237e104018bdf96b77"} Feb 16 17:39:22.807401 master-0 kubenswrapper[4652]: I0216 17:39:22.807294 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" event={"ID":"c48f8618-dc72-4a9a-9929-9bb841ad3a4b","Type":"ContainerStarted","Data":"4ff1421578169918e10fa42e8e30acf88858197335a6cef3c8878f4b72f1e55a"} Feb 16 17:39:22.809899 master-0 kubenswrapper[4652]: I0216 17:39:22.809851 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" event={"ID":"0a763571-3fe3-4093-9032-5137713d66a2","Type":"ContainerStarted","Data":"24aee97ac53c1a974cfa6ec2a3c17bbb7f59200b5f41d8e4ee166147621f25de"} Feb 16 17:39:22.812106 master-0 kubenswrapper[4652]: I0216 17:39:22.812070 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" event={"ID":"73d82bf6-e57d-4909-b293-39b33f8c142e","Type":"ContainerStarted","Data":"0730e0a1c6a2d1bc33e834c745d8e17ddfdd64808dc68e17d55c1f25bc6c447b"} Feb 16 17:39:22.813613 master-0 kubenswrapper[4652]: I0216 17:39:22.813318 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" event={"ID":"b8fca871-c716-40c2-9e17-d85b8c71cca0","Type":"ContainerStarted","Data":"1f045e448f9403d0218efc746c3fde6ffe3c6fc772547524b8ca68984da0aa71"} Feb 16 17:39:22.816338 master-0 kubenswrapper[4652]: I0216 17:39:22.814449 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k"] Feb 16 17:39:22.818898 master-0 kubenswrapper[4652]: I0216 17:39:22.818338 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" event={"ID":"482c56ed-0576-46a4-b961-15030e811005","Type":"ContainerStarted","Data":"06590738f800037fb9ea48fb42a70e9aa45a204d2891dc3119f5da884e7fc9b2"} Feb 16 17:39:22.830287 
master-0 kubenswrapper[4652]: I0216 17:39:22.826070 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj"] Feb 16 17:39:22.843403 master-0 kubenswrapper[4652]: W0216 17:39:22.842980 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2031c5c_58d9_49ee_a82e_da48fba84526.slice/crio-5593127e9cd803da6123f7a73c2f88c063ded79ff82dffa8f1183a4b8940dac4 WatchSource:0}: Error finding container 5593127e9cd803da6123f7a73c2f88c063ded79ff82dffa8f1183a4b8940dac4: Status 404 returned error can't find the container with id 5593127e9cd803da6123f7a73c2f88c063ded79ff82dffa8f1183a4b8940dac4 Feb 16 17:39:22.843403 master-0 kubenswrapper[4652]: I0216 17:39:22.843334 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm"] Feb 16 17:39:22.863981 master-0 kubenswrapper[4652]: I0216 17:39:22.863922 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj"] Feb 16 17:39:23.089973 master-0 kubenswrapper[4652]: I0216 17:39:23.086058 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:23.089973 master-0 kubenswrapper[4652]: E0216 17:39:23.086219 4652 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:23.089973 master-0 kubenswrapper[4652]: E0216 17:39:23.086278 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert podName:a44a1b70-8302-4276-a164-ab83b4a46945 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:25.086262235 +0000 UTC m=+922.474430751 (durationBeforeRetry 2s). 
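[Annotation] Each "SyncLoop (PLEG): event for pod" entry above carries a ContainerStarted event whose Data field is a container ID; for a pod that just logged "No sandbox for pod can be found. Need to start a new one", the first such event appears to be the newly created pod sandbox. A short sketch for pulling pod/event/ID triples out of a journal like this one follows; the file name is hypothetical and the regex is tailored to the exact format shown here, not a kubelet interface.

```go
// Extract (pod, event type, container ID) from "SyncLoop (PLEG)" entries in a
// kubelet journal saved to a file. The pattern matches the rendering in this
// log; adjust it if your journal escapes quotes differently.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var pleg = regexp.MustCompile(`"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=\{"ID":"([^"]+)","Type":"([^"]+)","Data":"([0-9a-f]+)"\}`)

func main() {
	f, err := os.Open("kubelet.log") // hypothetical file name
	if err != nil {
		panic(err)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		// A single journal line may hold several entries, so match repeatedly.
		for _, m := range pleg.FindAllStringSubmatch(sc.Text(), -1) {
			fmt.Printf("%s %s %s\n", m[1], m[3], m[4])
		}
	}
}
```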
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" (UID: "a44a1b70-8302-4276-a164-ab83b4a46945") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:23.407995 master-0 kubenswrapper[4652]: I0216 17:39:23.407891 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:23.408194 master-0 kubenswrapper[4652]: I0216 17:39:23.408036 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:23.408194 master-0 kubenswrapper[4652]: E0216 17:39:23.408127 4652 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:39:23.413312 master-0 kubenswrapper[4652]: E0216 17:39:23.408196 4652 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:39:23.413312 master-0 kubenswrapper[4652]: E0216 17:39:23.408223 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:25.408199685 +0000 UTC m=+922.796368201 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "metrics-server-cert" not found Feb 16 17:39:23.413312 master-0 kubenswrapper[4652]: E0216 17:39:23.408272 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:25.408261516 +0000 UTC m=+922.796430132 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "webhook-server-cert" not found Feb 16 17:39:23.431858 master-0 kubenswrapper[4652]: I0216 17:39:23.431818 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc"] Feb 16 17:39:23.487522 master-0 kubenswrapper[4652]: I0216 17:39:23.487435 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j"] Feb 16 17:39:23.512453 master-0 kubenswrapper[4652]: I0216 17:39:23.512371 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6"] Feb 16 17:39:23.536464 master-0 kubenswrapper[4652]: I0216 17:39:23.536054 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc"] Feb 16 17:39:23.606999 master-0 kubenswrapper[4652]: I0216 17:39:23.606880 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs"] Feb 16 17:39:23.630906 master-0 kubenswrapper[4652]: I0216 17:39:23.630839 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8"] Feb 16 17:39:23.655537 master-0 kubenswrapper[4652]: I0216 17:39:23.655455 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-7c6b4"] Feb 16 17:39:23.830847 master-0 kubenswrapper[4652]: I0216 17:39:23.830795 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" event={"ID":"8c805d7e-74a8-405c-a828-3d93851ce223","Type":"ContainerStarted","Data":"99d623d87e6402439ff323487769a73b8c496add12bc9e4aa6c146e078d0e4cd"} Feb 16 17:39:23.833853 master-0 kubenswrapper[4652]: I0216 17:39:23.833808 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" event={"ID":"d2031c5c-58d9-49ee-a82e-da48fba84526","Type":"ContainerStarted","Data":"5593127e9cd803da6123f7a73c2f88c063ded79ff82dffa8f1183a4b8940dac4"} Feb 16 17:39:23.835795 master-0 kubenswrapper[4652]: I0216 17:39:23.835761 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" event={"ID":"cb821b42-d977-4380-9dcd-ac2684aa0ebd","Type":"ContainerStarted","Data":"c43740b8342ed34c063ed9e5a74ad24b55bd0ccbffb49d8a9de1a42d55828577"} Feb 16 17:39:24.730698 master-0 kubenswrapper[4652]: I0216 17:39:24.730648 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:24.730933 master-0 kubenswrapper[4652]: E0216 17:39:24.730837 4652 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:24.730933 master-0 kubenswrapper[4652]: E0216 
17:39:24.730912 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert podName:68e37ed4-9304-4842-a4b5-9bc380c92262 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:28.730891141 +0000 UTC m=+926.119059657 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert") pod "infra-operator-controller-manager-5f879c76b6-f4x7q" (UID: "68e37ed4-9304-4842-a4b5-9bc380c92262") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:24.987975 master-0 kubenswrapper[4652]: W0216 17:39:24.987087 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e497b24_455b_402b_b2d1_36ad774dec94.slice/crio-65632c7c2ca481cc83138d486343bc7f8f65386c6e640ced72e819dbb2be04bf WatchSource:0}: Error finding container 65632c7c2ca481cc83138d486343bc7f8f65386c6e640ced72e819dbb2be04bf: Status 404 returned error can't find the container with id 65632c7c2ca481cc83138d486343bc7f8f65386c6e640ced72e819dbb2be04bf Feb 16 17:39:25.136661 master-0 kubenswrapper[4652]: I0216 17:39:25.136492 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:25.136879 master-0 kubenswrapper[4652]: E0216 17:39:25.136710 4652 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:25.136879 master-0 kubenswrapper[4652]: E0216 17:39:25.136833 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert podName:a44a1b70-8302-4276-a164-ab83b4a46945 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:29.136801702 +0000 UTC m=+926.524970228 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" (UID: "a44a1b70-8302-4276-a164-ab83b4a46945") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:25.442885 master-0 kubenswrapper[4652]: I0216 17:39:25.442798 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:25.443146 master-0 kubenswrapper[4652]: I0216 17:39:25.442984 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:25.443193 master-0 kubenswrapper[4652]: E0216 17:39:25.443164 4652 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:39:25.443272 master-0 kubenswrapper[4652]: E0216 17:39:25.443223 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:29.443204914 +0000 UTC m=+926.831373430 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "metrics-server-cert" not found Feb 16 17:39:25.443723 master-0 kubenswrapper[4652]: E0216 17:39:25.443689 4652 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:39:25.443776 master-0 kubenswrapper[4652]: E0216 17:39:25.443731 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:29.443720868 +0000 UTC m=+926.831889384 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "webhook-server-cert" not found Feb 16 17:39:25.781296 master-0 kubenswrapper[4652]: W0216 17:39:25.781115 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71c16787_8d47_4258_85be_938b26e3d7e7.slice/crio-69fe0a74b200d8b5fd19c7c80f97b2b090d8592c72f8cd6b4d4e4a87a5f4cb56 WatchSource:0}: Error finding container 69fe0a74b200d8b5fd19c7c80f97b2b090d8592c72f8cd6b4d4e4a87a5f4cb56: Status 404 returned error can't find the container with id 69fe0a74b200d8b5fd19c7c80f97b2b090d8592c72f8cd6b4d4e4a87a5f4cb56 Feb 16 17:39:25.854201 master-0 kubenswrapper[4652]: I0216 17:39:25.854135 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" event={"ID":"d806807a-49f5-4a93-b423-724c8ca48c84","Type":"ContainerStarted","Data":"167f313e5facfd7b71c40f1ac78a78de41f4f6642aa1b966df0ddd67c0f55713"} Feb 16 17:39:25.856318 master-0 kubenswrapper[4652]: I0216 17:39:25.856275 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" event={"ID":"7e497b24-455b-402b-b2d1-36ad774dec94","Type":"ContainerStarted","Data":"65632c7c2ca481cc83138d486343bc7f8f65386c6e640ced72e819dbb2be04bf"} Feb 16 17:39:25.858313 master-0 kubenswrapper[4652]: I0216 17:39:25.858272 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" event={"ID":"71c16787-8d47-4258-85be-938b26e3d7e7","Type":"ContainerStarted","Data":"69fe0a74b200d8b5fd19c7c80f97b2b090d8592c72f8cd6b4d4e4a87a5f4cb56"} Feb 16 17:39:26.257393 master-0 kubenswrapper[4652]: W0216 17:39:26.257330 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f1a42b6_5584_411c_b499_5d9c71d3e9f1.slice/crio-47b03f1502f1a9a47e7d5da776c5e03c430ac41cfba2b12de79adc5654bfefea WatchSource:0}: Error finding container 47b03f1502f1a9a47e7d5da776c5e03c430ac41cfba2b12de79adc5654bfefea: Status 404 returned error can't find the container with id 47b03f1502f1a9a47e7d5da776c5e03c430ac41cfba2b12de79adc5654bfefea Feb 16 17:39:26.260821 master-0 kubenswrapper[4652]: W0216 17:39:26.260748 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod313fc27d_cc30_4146_b820_47d332777114.slice/crio-a1c3cceb15769086ad3816b37401abed9cd0785a469bbc9fe660bdcad63bb48e WatchSource:0}: Error finding container a1c3cceb15769086ad3816b37401abed9cd0785a469bbc9fe660bdcad63bb48e: Status 404 returned error can't find the container with id a1c3cceb15769086ad3816b37401abed9cd0785a469bbc9fe660bdcad63bb48e Feb 16 17:39:26.263605 master-0 kubenswrapper[4652]: W0216 17:39:26.263530 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf337e3a4_350f_4d2e_80b2_215c172d2a68.slice/crio-9692b3fe813e884124c854dd65795c411fb280ce739f0651c32b0d82be808390 WatchSource:0}: Error finding container 9692b3fe813e884124c854dd65795c411fb280ce739f0651c32b0d82be808390: Status 404 returned error can't find the container with id 
9692b3fe813e884124c854dd65795c411fb280ce739f0651c32b0d82be808390 Feb 16 17:39:26.266858 master-0 kubenswrapper[4652]: W0216 17:39:26.266819 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37380c64_5bcd_4446_9a7e_e44745a24096.slice/crio-8bae40f3fb484bae88e3d6936e101b9c438a3626facab7ebfd9379fb07baf62d WatchSource:0}: Error finding container 8bae40f3fb484bae88e3d6936e101b9c438a3626facab7ebfd9379fb07baf62d: Status 404 returned error can't find the container with id 8bae40f3fb484bae88e3d6936e101b9c438a3626facab7ebfd9379fb07baf62d Feb 16 17:39:26.872112 master-0 kubenswrapper[4652]: I0216 17:39:26.872038 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" event={"ID":"37380c64-5bcd-4446-9a7e-e44745a24096","Type":"ContainerStarted","Data":"8bae40f3fb484bae88e3d6936e101b9c438a3626facab7ebfd9379fb07baf62d"} Feb 16 17:39:26.874296 master-0 kubenswrapper[4652]: I0216 17:39:26.874265 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc" event={"ID":"f337e3a4-350f-4d2e-80b2-215c172d2a68","Type":"ContainerStarted","Data":"9692b3fe813e884124c854dd65795c411fb280ce739f0651c32b0d82be808390"} Feb 16 17:39:26.875851 master-0 kubenswrapper[4652]: I0216 17:39:26.875805 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" event={"ID":"2f1a42b6-5584-411c-b499-5d9c71d3e9f1","Type":"ContainerStarted","Data":"47b03f1502f1a9a47e7d5da776c5e03c430ac41cfba2b12de79adc5654bfefea"} Feb 16 17:39:26.878918 master-0 kubenswrapper[4652]: I0216 17:39:26.878885 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" event={"ID":"313fc27d-cc30-4146-b820-47d332777114","Type":"ContainerStarted","Data":"a1c3cceb15769086ad3816b37401abed9cd0785a469bbc9fe660bdcad63bb48e"} Feb 16 17:39:28.807753 master-0 kubenswrapper[4652]: I0216 17:39:28.806969 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:28.807753 master-0 kubenswrapper[4652]: E0216 17:39:28.807128 4652 secret.go:189] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:28.807753 master-0 kubenswrapper[4652]: E0216 17:39:28.807176 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert podName:68e37ed4-9304-4842-a4b5-9bc380c92262 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:36.807162909 +0000 UTC m=+934.195331425 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert") pod "infra-operator-controller-manager-5f879c76b6-f4x7q" (UID: "68e37ed4-9304-4842-a4b5-9bc380c92262") : secret "infra-operator-webhook-server-cert" not found Feb 16 17:39:29.219451 master-0 kubenswrapper[4652]: I0216 17:39:29.219409 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:29.219820 master-0 kubenswrapper[4652]: E0216 17:39:29.219585 4652 secret.go:189] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:29.219879 master-0 kubenswrapper[4652]: E0216 17:39:29.219865 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert podName:a44a1b70-8302-4276-a164-ab83b4a46945 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:37.219844521 +0000 UTC m=+934.608013037 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert") pod "openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" (UID: "a44a1b70-8302-4276-a164-ab83b4a46945") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 17:39:29.524872 master-0 kubenswrapper[4652]: I0216 17:39:29.524769 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:29.525191 master-0 kubenswrapper[4652]: E0216 17:39:29.524968 4652 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:39:29.525274 master-0 kubenswrapper[4652]: E0216 17:39:29.525239 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:37.525220207 +0000 UTC m=+934.913388723 (durationBeforeRetry 8s). 
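[Annotation] Read across the failures for any one volume and the durationBeforeRetry values show the kubelet's per-operation exponential backoff: 500ms, 1s, 2s, 4s, 8s, and 16s in the entries just below. A minimal sketch of that doubling-with-cap pattern follows; the initial delay matches the log, while the cap is the kubelet default to the best of my knowledge and is not shown in this excerpt, so treat it as an assumption.

```go
// Doubling-with-cap retry delays, the pattern behind the durationBeforeRetry
// values in the surrounding entries (500ms, 1s, 2s, 4s, 8s, 16s, ...).
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond          // first retry seen in the log
	maxDelay     = 2*time.Minute + 2*time.Second   // assumed cap, not visible in this excerpt
)

func next(prev time.Duration) time.Duration {
	if prev == 0 {
		return initialDelay
	}
	if d := prev * 2; d < maxDelay {
		return d
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for i := 0; i < 8; i++ {
		d = next(d)
		fmt.Println(d) // 500ms 1s 2s 4s 8s 16s 32s 1m4s
	}
}
```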
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "webhook-server-cert" not found Feb 16 17:39:29.525397 master-0 kubenswrapper[4652]: I0216 17:39:29.525381 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:29.525671 master-0 kubenswrapper[4652]: E0216 17:39:29.525606 4652 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:39:29.525783 master-0 kubenswrapper[4652]: E0216 17:39:29.525727 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:37.525700529 +0000 UTC m=+934.913869095 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "metrics-server-cert" not found Feb 16 17:39:36.871018 master-0 kubenswrapper[4652]: I0216 17:39:36.870872 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:36.874851 master-0 kubenswrapper[4652]: I0216 17:39:36.874802 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/68e37ed4-9304-4842-a4b5-9bc380c92262-cert\") pod \"infra-operator-controller-manager-5f879c76b6-f4x7q\" (UID: \"68e37ed4-9304-4842-a4b5-9bc380c92262\") " pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:36.943271 master-0 kubenswrapper[4652]: I0216 17:39:36.943181 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:37.279223 master-0 kubenswrapper[4652]: I0216 17:39:37.279155 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:37.283203 master-0 kubenswrapper[4652]: I0216 17:39:37.283165 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a44a1b70-8302-4276-a164-ab83b4a46945-cert\") pod \"openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm\" (UID: \"a44a1b70-8302-4276-a164-ab83b4a46945\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:37.554416 master-0 kubenswrapper[4652]: I0216 17:39:37.554366 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:37.583346 master-0 kubenswrapper[4652]: I0216 17:39:37.583298 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:37.583591 master-0 kubenswrapper[4652]: I0216 17:39:37.583399 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:37.583591 master-0 kubenswrapper[4652]: E0216 17:39:37.583465 4652 secret.go:189] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 17:39:37.583591 master-0 kubenswrapper[4652]: E0216 17:39:37.583517 4652 secret.go:189] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 17:39:37.583591 master-0 kubenswrapper[4652]: E0216 17:39:37.583531 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. No retries permitted until 2026-02-16 17:39:53.583513846 +0000 UTC m=+950.971682362 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "metrics-server-cert" not found Feb 16 17:39:37.583591 master-0 kubenswrapper[4652]: E0216 17:39:37.583564 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs podName:a0b757aa-3fd1-48e0-9405-0008e2df1011 nodeName:}" failed. 
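[Annotation] For the pods that have started, the pod_startup_latency_tracker entries a few lines below break startup time down; their fields are related by podStartE2EDuration = watchObservedRunningTime − podCreationTimestamp, and podStartSLOduration = podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling). The sketch below checks this against the designate-operator entry that follows, with its timestamps hard-coded from the log.

```go
// Verify the startup-duration arithmetic using the designate-operator values
// from the log below: E2E = watchObservedRunningTime - creation,
// SLO = E2E - (lastFinishedPulling - firstStartedPulling).
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2026-02-16 17:39:20 +0000 UTC")
	running := parse("2026-02-16 17:39:39.026455456 +0000 UTC")   // watchObservedRunningTime
	pullStart := parse("2026-02-16 17:39:21.971532003 +0000 UTC") // firstStartedPulling
	pullEnd := parse("2026-02-16 17:39:34.260559381 +0000 UTC")   // lastFinishedPulling

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e) // 19.026455456s, matching podStartE2EDuration
	fmt.Println(slo) // 6.737428078s, matching podStartSLOduration
}
```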
No retries permitted until 2026-02-16 17:39:53.583550227 +0000 UTC m=+950.971718743 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs") pod "openstack-operator-controller-manager-74d597bfd6-mlz96" (UID: "a0b757aa-3fd1-48e0-9405-0008e2df1011") : secret "webhook-server-cert" not found Feb 16 17:39:38.987906 master-0 kubenswrapper[4652]: I0216 17:39:38.987378 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" event={"ID":"73d82bf6-e57d-4909-b293-39b33f8c142e","Type":"ContainerStarted","Data":"37024e8f9fc9ceaea74a97dc950af7568702188af4e4e2bd77644a0bb8e68d7f"} Feb 16 17:39:38.988569 master-0 kubenswrapper[4652]: I0216 17:39:38.988545 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" Feb 16 17:39:38.990936 master-0 kubenswrapper[4652]: I0216 17:39:38.990890 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" event={"ID":"b8fca871-c716-40c2-9e17-d85b8c71cca0","Type":"ContainerStarted","Data":"1100d5499a6b706d982471b59f385a8253b78eef3a9d2e4e333e1792c9f30a08"} Feb 16 17:39:38.991228 master-0 kubenswrapper[4652]: I0216 17:39:38.991203 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" Feb 16 17:39:39.005600 master-0 kubenswrapper[4652]: I0216 17:39:39.002899 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q"] Feb 16 17:39:39.022343 master-0 kubenswrapper[4652]: I0216 17:39:39.007895 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" event={"ID":"482c56ed-0576-46a4-b961-15030e811005","Type":"ContainerStarted","Data":"246bd4a51b348ed5a0957557ad0d10447ef665366791ea855e41f1be5ef0f791"} Feb 16 17:39:39.022343 master-0 kubenswrapper[4652]: I0216 17:39:39.008557 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" Feb 16 17:39:39.022343 master-0 kubenswrapper[4652]: I0216 17:39:39.012605 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" event={"ID":"7dcd9841-b2ee-4485-91fb-47e8ecbce567","Type":"ContainerStarted","Data":"00b0efce5f54e3b4a34796271d16123a41a27342b11651ae57bb34d88f370234"} Feb 16 17:39:39.022343 master-0 kubenswrapper[4652]: I0216 17:39:39.013125 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" Feb 16 17:39:39.022343 master-0 kubenswrapper[4652]: W0216 17:39:39.018114 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68e37ed4_9304_4842_a4b5_9bc380c92262.slice/crio-04f899f06aba4ae9a12e205c1184d423b04a47af820edb1090c2264d43090a65 WatchSource:0}: Error finding container 04f899f06aba4ae9a12e205c1184d423b04a47af820edb1090c2264d43090a65: Status 404 returned error can't find the container with id 04f899f06aba4ae9a12e205c1184d423b04a47af820edb1090c2264d43090a65 Feb 16 17:39:39.022343 master-0 kubenswrapper[4652]: 
I0216 17:39:39.019995 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" event={"ID":"1c903b0b-4f9c-4739-8a74-cec124ddf2b8","Type":"ContainerStarted","Data":"37eb558885fad2ec8fea491c91228894d455d20127d364afee7fa162eff05f80"} Feb 16 17:39:39.022343 master-0 kubenswrapper[4652]: I0216 17:39:39.020021 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" Feb 16 17:39:39.027366 master-0 kubenswrapper[4652]: I0216 17:39:39.026467 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" podStartSLOduration=6.737428078 podStartE2EDuration="19.026455456s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:21.971532003 +0000 UTC m=+919.359700519" lastFinishedPulling="2026-02-16 17:39:34.260559381 +0000 UTC m=+931.648727897" observedRunningTime="2026-02-16 17:39:39.006513961 +0000 UTC m=+936.394682477" watchObservedRunningTime="2026-02-16 17:39:39.026455456 +0000 UTC m=+936.414623972" Feb 16 17:39:39.027366 master-0 kubenswrapper[4652]: I0216 17:39:39.026972 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" event={"ID":"313fc27d-cc30-4146-b820-47d332777114","Type":"ContainerStarted","Data":"e0ac922d3dcd2a7edd9f8a65670a0cdd9d3e2e1ea474780882bd242cbba031b8"} Feb 16 17:39:39.027641 master-0 kubenswrapper[4652]: I0216 17:39:39.027545 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" Feb 16 17:39:39.052312 master-0 kubenswrapper[4652]: I0216 17:39:39.051462 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" podStartSLOduration=5.827494945 podStartE2EDuration="19.051445075s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:22.770963053 +0000 UTC m=+920.159131569" lastFinishedPulling="2026-02-16 17:39:35.994913183 +0000 UTC m=+933.383081699" observedRunningTime="2026-02-16 17:39:39.029148818 +0000 UTC m=+936.417317324" watchObservedRunningTime="2026-02-16 17:39:39.051445075 +0000 UTC m=+936.439613591" Feb 16 17:39:39.058627 master-0 kubenswrapper[4652]: I0216 17:39:39.057834 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" event={"ID":"0a763571-3fe3-4093-9032-5137713d66a2","Type":"ContainerStarted","Data":"64210c002fa294e4931741cdc916a29ef70ec4acafa421431c746fe1a285d060"} Feb 16 17:39:39.058627 master-0 kubenswrapper[4652]: I0216 17:39:39.058589 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" Feb 16 17:39:39.073614 master-0 kubenswrapper[4652]: I0216 17:39:39.073511 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" podStartSLOduration=6.807333862 podStartE2EDuration="19.073310562s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:26.262663631 +0000 UTC m=+923.650832167" lastFinishedPulling="2026-02-16 17:39:38.528640351 +0000 UTC m=+935.916808867" 
observedRunningTime="2026-02-16 17:39:39.048537748 +0000 UTC m=+936.436706264" watchObservedRunningTime="2026-02-16 17:39:39.073310562 +0000 UTC m=+936.461479118" Feb 16 17:39:39.086373 master-0 kubenswrapper[4652]: I0216 17:39:39.086285 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" podStartSLOduration=7.369390079 podStartE2EDuration="19.086234018s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:21.608638546 +0000 UTC m=+918.996807052" lastFinishedPulling="2026-02-16 17:39:33.325482475 +0000 UTC m=+930.713650991" observedRunningTime="2026-02-16 17:39:39.068147423 +0000 UTC m=+936.456315939" watchObservedRunningTime="2026-02-16 17:39:39.086234018 +0000 UTC m=+936.474402524" Feb 16 17:39:39.100536 master-0 kubenswrapper[4652]: I0216 17:39:39.100442 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" podStartSLOduration=7.75967978 podStartE2EDuration="19.100407248s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:21.984727957 +0000 UTC m=+919.372896473" lastFinishedPulling="2026-02-16 17:39:33.325455425 +0000 UTC m=+930.713623941" observedRunningTime="2026-02-16 17:39:39.089774233 +0000 UTC m=+936.477942749" watchObservedRunningTime="2026-02-16 17:39:39.100407248 +0000 UTC m=+936.488575764" Feb 16 17:39:39.166885 master-0 kubenswrapper[4652]: I0216 17:39:39.166812 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" podStartSLOduration=7.792632234 podStartE2EDuration="19.146242987s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:21.971817431 +0000 UTC m=+919.359985947" lastFinishedPulling="2026-02-16 17:39:33.325428184 +0000 UTC m=+930.713596700" observedRunningTime="2026-02-16 17:39:39.107238171 +0000 UTC m=+936.495406687" watchObservedRunningTime="2026-02-16 17:39:39.146242987 +0000 UTC m=+936.534411503" Feb 16 17:39:39.168570 master-0 kubenswrapper[4652]: W0216 17:39:39.168525 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda44a1b70_8302_4276_a164_ab83b4a46945.slice/crio-b5cb55e5a793458525b1348b9cbe73d5b6daaaa4b58985676a56ad927099bf60 WatchSource:0}: Error finding container b5cb55e5a793458525b1348b9cbe73d5b6daaaa4b58985676a56ad927099bf60: Status 404 returned error can't find the container with id b5cb55e5a793458525b1348b9cbe73d5b6daaaa4b58985676a56ad927099bf60 Feb 16 17:39:39.198118 master-0 kubenswrapper[4652]: I0216 17:39:39.198075 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm"] Feb 16 17:39:39.199949 master-0 kubenswrapper[4652]: I0216 17:39:39.199888 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" podStartSLOduration=7.967554622 podStartE2EDuration="19.199868874s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:22.093118722 +0000 UTC m=+919.481287228" lastFinishedPulling="2026-02-16 17:39:33.325432964 +0000 UTC m=+930.713601480" observedRunningTime="2026-02-16 17:39:39.128636085 +0000 UTC m=+936.516804601" 
watchObservedRunningTime="2026-02-16 17:39:39.199868874 +0000 UTC m=+936.588037390" Feb 16 17:39:40.098331 master-0 kubenswrapper[4652]: I0216 17:39:40.097576 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" event={"ID":"a44a1b70-8302-4276-a164-ab83b4a46945","Type":"ContainerStarted","Data":"b5cb55e5a793458525b1348b9cbe73d5b6daaaa4b58985676a56ad927099bf60"} Feb 16 17:39:40.127269 master-0 kubenswrapper[4652]: I0216 17:39:40.126810 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" event={"ID":"d2031c5c-58d9-49ee-a82e-da48fba84526","Type":"ContainerStarted","Data":"526e414afc4d2609cf07c7cdc14cd7a71357af8abab3b017be7eecf9cae1bf51"} Feb 16 17:39:40.131893 master-0 kubenswrapper[4652]: I0216 17:39:40.128363 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" Feb 16 17:39:40.140274 master-0 kubenswrapper[4652]: I0216 17:39:40.130235 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" event={"ID":"71c16787-8d47-4258-85be-938b26e3d7e7","Type":"ContainerStarted","Data":"704b2d54d7f814c487fb9e7fbeecd5fc0ae99db0e4f853551bf78b6ebf5ceedb"} Feb 16 17:39:40.140274 master-0 kubenswrapper[4652]: I0216 17:39:40.139477 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" Feb 16 17:39:40.171029 master-0 kubenswrapper[4652]: I0216 17:39:40.170570 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" event={"ID":"e53ab543-bdf3-44fc-adb4-8f1823a53e23","Type":"ContainerStarted","Data":"ec99c06349d1aafbe6020f333bb89e4f3e37ba9c82959e7eb72c021df5ebcb74"} Feb 16 17:39:40.171269 master-0 kubenswrapper[4652]: I0216 17:39:40.171160 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" Feb 16 17:39:40.173016 master-0 kubenswrapper[4652]: I0216 17:39:40.171884 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" podStartSLOduration=4.497252087 podStartE2EDuration="20.171871169s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:22.845965593 +0000 UTC m=+920.234134109" lastFinishedPulling="2026-02-16 17:39:38.520584675 +0000 UTC m=+935.908753191" observedRunningTime="2026-02-16 17:39:40.169556427 +0000 UTC m=+937.557724943" watchObservedRunningTime="2026-02-16 17:39:40.171871169 +0000 UTC m=+937.560039685" Feb 16 17:39:40.187217 master-0 kubenswrapper[4652]: I0216 17:39:40.185052 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc" event={"ID":"f337e3a4-350f-4d2e-80b2-215c172d2a68","Type":"ContainerStarted","Data":"737c0f0f6fd67fcd258b06eeae36d743f73a9f425f1179241524fdf5cfea01bd"} Feb 16 17:39:40.199456 master-0 kubenswrapper[4652]: I0216 17:39:40.197442 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" 
event={"ID":"8c805d7e-74a8-405c-a828-3d93851ce223","Type":"ContainerStarted","Data":"43e24ad7f4b82820a0ee3c48ef1f023c68a61da13bdf80f71237456db1ef49f1"} Feb 16 17:39:40.199456 master-0 kubenswrapper[4652]: I0216 17:39:40.198179 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" Feb 16 17:39:40.237800 master-0 kubenswrapper[4652]: I0216 17:39:40.235717 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" event={"ID":"7e497b24-455b-402b-b2d1-36ad774dec94","Type":"ContainerStarted","Data":"5498cdf6360029e33934f434d1aeb8a487473df85e27c8b3ad3cda29e60abfbf"} Feb 16 17:39:40.237800 master-0 kubenswrapper[4652]: I0216 17:39:40.236656 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" Feb 16 17:39:40.249210 master-0 kubenswrapper[4652]: I0216 17:39:40.248514 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" event={"ID":"68e37ed4-9304-4842-a4b5-9bc380c92262","Type":"ContainerStarted","Data":"04f899f06aba4ae9a12e205c1184d423b04a47af820edb1090c2264d43090a65"} Feb 16 17:39:40.275286 master-0 kubenswrapper[4652]: I0216 17:39:40.273209 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" podStartSLOduration=7.593746792 podStartE2EDuration="20.273187635s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:25.783542238 +0000 UTC m=+923.171710764" lastFinishedPulling="2026-02-16 17:39:38.462983091 +0000 UTC m=+935.851151607" observedRunningTime="2026-02-16 17:39:40.205323256 +0000 UTC m=+937.593491782" watchObservedRunningTime="2026-02-16 17:39:40.273187635 +0000 UTC m=+937.661356151" Feb 16 17:39:40.275286 master-0 kubenswrapper[4652]: I0216 17:39:40.273263 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" event={"ID":"cb821b42-d977-4380-9dcd-ac2684aa0ebd","Type":"ContainerStarted","Data":"da726d6e80b07bd06169509c12604591e930a1a2d57543c7c61e165046fd384b"} Feb 16 17:39:40.275286 master-0 kubenswrapper[4652]: I0216 17:39:40.274342 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" Feb 16 17:39:40.292204 master-0 kubenswrapper[4652]: I0216 17:39:40.292114 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7m8kc" podStartSLOduration=6.875128815 podStartE2EDuration="19.292094662s" podCreationTimestamp="2026-02-16 17:39:21 +0000 UTC" firstStartedPulling="2026-02-16 17:39:26.268401575 +0000 UTC m=+923.656570111" lastFinishedPulling="2026-02-16 17:39:38.685367442 +0000 UTC m=+936.073535958" observedRunningTime="2026-02-16 17:39:40.281761215 +0000 UTC m=+937.669929741" watchObservedRunningTime="2026-02-16 17:39:40.292094662 +0000 UTC m=+937.680263178" Feb 16 17:39:40.314284 master-0 kubenswrapper[4652]: I0216 17:39:40.303564 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" 
event={"ID":"25fc3b27-12ca-453a-866d-ae8f312e3fce","Type":"ContainerStarted","Data":"8a4c4ec89220b497ba74e0bb237b41b7f4f89eaa6ca4e5c77b997d6943df3ca4"} Feb 16 17:39:40.314284 master-0 kubenswrapper[4652]: I0216 17:39:40.304477 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" Feb 16 17:39:40.323281 master-0 kubenswrapper[4652]: I0216 17:39:40.319527 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" podStartSLOduration=4.683698695 podStartE2EDuration="20.319502386s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:22.829615965 +0000 UTC m=+920.217784481" lastFinishedPulling="2026-02-16 17:39:38.465419656 +0000 UTC m=+935.853588172" observedRunningTime="2026-02-16 17:39:40.247731242 +0000 UTC m=+937.635899758" watchObservedRunningTime="2026-02-16 17:39:40.319502386 +0000 UTC m=+937.707670902" Feb 16 17:39:40.346638 master-0 kubenswrapper[4652]: I0216 17:39:40.346563 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" event={"ID":"c48f8618-dc72-4a9a-9929-9bb841ad3a4b","Type":"ContainerStarted","Data":"876257b03baf35518b6a9e1275a8fa1e082db8e3795314780f1f55988d10a319"} Feb 16 17:39:40.352380 master-0 kubenswrapper[4652]: I0216 17:39:40.347451 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" Feb 16 17:39:40.371736 master-0 kubenswrapper[4652]: I0216 17:39:40.371661 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" event={"ID":"93f5b04d-01a1-4a05-9c52-d0e12e388ecb","Type":"ContainerStarted","Data":"31a7fb7a4d3662ac60207cb1ca560224c7b15404d8271b9350f0d333876d44cb"} Feb 16 17:39:40.376271 master-0 kubenswrapper[4652]: I0216 17:39:40.372517 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" Feb 16 17:39:40.391449 master-0 kubenswrapper[4652]: I0216 17:39:40.391024 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" podStartSLOduration=4.685613167 podStartE2EDuration="20.391001343s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:22.757559694 +0000 UTC m=+920.145728210" lastFinishedPulling="2026-02-16 17:39:38.46294787 +0000 UTC m=+935.851116386" observedRunningTime="2026-02-16 17:39:40.349618744 +0000 UTC m=+937.737787280" watchObservedRunningTime="2026-02-16 17:39:40.391001343 +0000 UTC m=+937.779169859" Feb 16 17:39:40.402289 master-0 kubenswrapper[4652]: I0216 17:39:40.392336 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" event={"ID":"37380c64-5bcd-4446-9a7e-e44745a24096","Type":"ContainerStarted","Data":"3fb5cada24c5cbac7d2edebc8dea42bdc66e122abfa37633870de0c8b3ec2d5c"} Feb 16 17:39:40.402289 master-0 kubenswrapper[4652]: I0216 17:39:40.393211 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" Feb 16 17:39:40.422655 master-0 kubenswrapper[4652]: I0216 17:39:40.417494 4652 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" event={"ID":"2f1a42b6-5584-411c-b499-5d9c71d3e9f1","Type":"ContainerStarted","Data":"9d09c93ca24f526633d11ceddb42c8a207c53b874681ce7a67744e157adc711f"} Feb 16 17:39:40.422655 master-0 kubenswrapper[4652]: I0216 17:39:40.418342 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" Feb 16 17:39:40.447532 master-0 kubenswrapper[4652]: I0216 17:39:40.447486 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" event={"ID":"d806807a-49f5-4a93-b423-724c8ca48c84","Type":"ContainerStarted","Data":"e8c891495efdfc4e5607f285d9010207e25c8ceb6761b2655d2a0909209ab694"} Feb 16 17:39:40.451509 master-0 kubenswrapper[4652]: I0216 17:39:40.450012 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" Feb 16 17:39:40.461078 master-0 kubenswrapper[4652]: I0216 17:39:40.461012 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" podStartSLOduration=4.738660858 podStartE2EDuration="20.460989009s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:22.740685361 +0000 UTC m=+920.128853877" lastFinishedPulling="2026-02-16 17:39:38.463013502 +0000 UTC m=+935.851182028" observedRunningTime="2026-02-16 17:39:40.378772455 +0000 UTC m=+937.766940961" watchObservedRunningTime="2026-02-16 17:39:40.460989009 +0000 UTC m=+937.849157525" Feb 16 17:39:40.465234 master-0 kubenswrapper[4652]: I0216 17:39:40.465199 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" podStartSLOduration=4.72677089 podStartE2EDuration="20.465187742s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:22.802116018 +0000 UTC m=+920.190284534" lastFinishedPulling="2026-02-16 17:39:38.54053287 +0000 UTC m=+935.928701386" observedRunningTime="2026-02-16 17:39:40.410374242 +0000 UTC m=+937.798542758" watchObservedRunningTime="2026-02-16 17:39:40.465187742 +0000 UTC m=+937.853356258" Feb 16 17:39:40.503268 master-0 kubenswrapper[4652]: I0216 17:39:40.503178 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" podStartSLOduration=6.037604254 podStartE2EDuration="19.503157499s" podCreationTimestamp="2026-02-16 17:39:21 +0000 UTC" firstStartedPulling="2026-02-16 17:39:24.999913873 +0000 UTC m=+922.388082389" lastFinishedPulling="2026-02-16 17:39:38.465467118 +0000 UTC m=+935.853635634" observedRunningTime="2026-02-16 17:39:40.447418555 +0000 UTC m=+937.835587071" watchObservedRunningTime="2026-02-16 17:39:40.503157499 +0000 UTC m=+937.891326015" Feb 16 17:39:40.530278 master-0 kubenswrapper[4652]: I0216 17:39:40.522258 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" podStartSLOduration=4.8389941279999995 podStartE2EDuration="20.522229031s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:22.782301347 +0000 UTC m=+920.170469863" lastFinishedPulling="2026-02-16 17:39:38.46553623 
+0000 UTC m=+935.853704766" observedRunningTime="2026-02-16 17:39:40.471772678 +0000 UTC m=+937.859941194" watchObservedRunningTime="2026-02-16 17:39:40.522229031 +0000 UTC m=+937.910397547" Feb 16 17:39:40.548272 master-0 kubenswrapper[4652]: I0216 17:39:40.536752 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" podStartSLOduration=7.010060456 podStartE2EDuration="20.53673636s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:25.005074111 +0000 UTC m=+922.393242627" lastFinishedPulling="2026-02-16 17:39:38.531750015 +0000 UTC m=+935.919918531" observedRunningTime="2026-02-16 17:39:40.497178349 +0000 UTC m=+937.885346865" watchObservedRunningTime="2026-02-16 17:39:40.53673636 +0000 UTC m=+937.924904876" Feb 16 17:39:40.560993 master-0 kubenswrapper[4652]: I0216 17:39:40.556090 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" podStartSLOduration=8.364486952 podStartE2EDuration="20.556072628s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:26.271152448 +0000 UTC m=+923.659320974" lastFinishedPulling="2026-02-16 17:39:38.462738134 +0000 UTC m=+935.850906650" observedRunningTime="2026-02-16 17:39:40.532692191 +0000 UTC m=+937.920860707" watchObservedRunningTime="2026-02-16 17:39:40.556072628 +0000 UTC m=+937.944241144" Feb 16 17:39:40.592348 master-0 kubenswrapper[4652]: I0216 17:39:40.591411 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" podStartSLOduration=8.321256733 podStartE2EDuration="20.591391005s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:26.26187995 +0000 UTC m=+923.650048466" lastFinishedPulling="2026-02-16 17:39:38.532014222 +0000 UTC m=+935.920182738" observedRunningTime="2026-02-16 17:39:40.563657321 +0000 UTC m=+937.951825837" watchObservedRunningTime="2026-02-16 17:39:40.591391005 +0000 UTC m=+937.979559521" Feb 16 17:39:40.617396 master-0 kubenswrapper[4652]: I0216 17:39:40.615163 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" podStartSLOduration=5.8109995229999996 podStartE2EDuration="20.615142581s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:22.736426997 +0000 UTC m=+920.124595513" lastFinishedPulling="2026-02-16 17:39:37.540570015 +0000 UTC m=+934.928738571" observedRunningTime="2026-02-16 17:39:40.586919935 +0000 UTC m=+937.975088471" watchObservedRunningTime="2026-02-16 17:39:40.615142581 +0000 UTC m=+938.003311097" Feb 16 17:39:43.482551 master-0 kubenswrapper[4652]: I0216 17:39:43.482502 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" event={"ID":"a44a1b70-8302-4276-a164-ab83b4a46945","Type":"ContainerStarted","Data":"6fb65682b93024f80fa17cc2d3226554031a8ac0788124199e69c8c792a79b36"} Feb 16 17:39:43.483344 master-0 kubenswrapper[4652]: I0216 17:39:43.483323 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:39:43.483802 master-0 kubenswrapper[4652]: I0216 17:39:43.483783 4652 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" event={"ID":"68e37ed4-9304-4842-a4b5-9bc380c92262","Type":"ContainerStarted","Data":"96ce57105fee1165f7561cc46709c821ca2d5c5bfd92b14d366e35f84a372527"} Feb 16 17:39:43.484896 master-0 kubenswrapper[4652]: I0216 17:39:43.484846 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:43.519668 master-0 kubenswrapper[4652]: I0216 17:39:43.519576 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" podStartSLOduration=19.771186258 podStartE2EDuration="23.519549696s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:39.182478698 +0000 UTC m=+936.570647214" lastFinishedPulling="2026-02-16 17:39:42.930842146 +0000 UTC m=+940.319010652" observedRunningTime="2026-02-16 17:39:43.510038161 +0000 UTC m=+940.898206697" watchObservedRunningTime="2026-02-16 17:39:43.519549696 +0000 UTC m=+940.907718212" Feb 16 17:39:43.542074 master-0 kubenswrapper[4652]: I0216 17:39:43.541970 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" podStartSLOduration=19.648819528 podStartE2EDuration="23.541946127s" podCreationTimestamp="2026-02-16 17:39:20 +0000 UTC" firstStartedPulling="2026-02-16 17:39:39.0568342 +0000 UTC m=+936.445002716" lastFinishedPulling="2026-02-16 17:39:42.949960799 +0000 UTC m=+940.338129315" observedRunningTime="2026-02-16 17:39:43.530388897 +0000 UTC m=+940.918557423" watchObservedRunningTime="2026-02-16 17:39:43.541946127 +0000 UTC m=+940.930114653" Feb 16 17:39:50.758044 master-0 kubenswrapper[4652]: I0216 17:39:50.757995 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-8ppjx" Feb 16 17:39:50.838599 master-0 kubenswrapper[4652]: I0216 17:39:50.836604 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-jmqqq" Feb 16 17:39:50.960331 master-0 kubenswrapper[4652]: I0216 17:39:50.960262 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-pddtr" Feb 16 17:39:50.989319 master-0 kubenswrapper[4652]: I0216 17:39:50.987263 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-sv8qj" Feb 16 17:39:51.043897 master-0 kubenswrapper[4652]: I0216 17:39:51.043765 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xv27l" Feb 16 17:39:51.136423 master-0 kubenswrapper[4652]: I0216 17:39:51.136352 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-n4s9t" Feb 16 17:39:51.198060 master-0 kubenswrapper[4652]: I0216 17:39:51.198012 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-sggd9" Feb 16 17:39:51.345714 master-0 kubenswrapper[4652]: I0216 17:39:51.345589 4652 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-rcsk9" Feb 16 17:39:51.414507 master-0 kubenswrapper[4652]: I0216 17:39:51.414467 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-lqjrq" Feb 16 17:39:51.427915 master-0 kubenswrapper[4652]: I0216 17:39:51.427859 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-4sgzm" Feb 16 17:39:51.522013 master-0 kubenswrapper[4652]: I0216 17:39:51.521940 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-swv4k" Feb 16 17:39:51.563327 master-0 kubenswrapper[4652]: I0216 17:39:51.560882 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-gcmjj" Feb 16 17:39:51.587986 master-0 kubenswrapper[4652]: I0216 17:39:51.587939 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-xv2qs" Feb 16 17:39:51.629145 master-0 kubenswrapper[4652]: I0216 17:39:51.629009 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-tmx4j" Feb 16 17:39:51.677786 master-0 kubenswrapper[4652]: I0216 17:39:51.677740 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-pkhcj" Feb 16 17:39:51.716649 master-0 kubenswrapper[4652]: I0216 17:39:51.716433 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-bhcg6" Feb 16 17:39:51.864641 master-0 kubenswrapper[4652]: I0216 17:39:51.864584 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-7f45b4ff68-wsws8" Feb 16 17:39:52.002645 master-0 kubenswrapper[4652]: I0216 17:39:52.002288 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-7c6b4" Feb 16 17:39:52.026650 master-0 kubenswrapper[4652]: I0216 17:39:52.026593 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-tmbxc" Feb 16 17:39:53.634462 master-0 kubenswrapper[4652]: I0216 17:39:53.634374 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:53.635198 master-0 kubenswrapper[4652]: I0216 17:39:53.634606 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:53.638219 
master-0 kubenswrapper[4652]: I0216 17:39:53.638180 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-metrics-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:53.638641 master-0 kubenswrapper[4652]: I0216 17:39:53.638603 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a0b757aa-3fd1-48e0-9405-0008e2df1011-webhook-certs\") pod \"openstack-operator-controller-manager-74d597bfd6-mlz96\" (UID: \"a0b757aa-3fd1-48e0-9405-0008e2df1011\") " pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:53.845052 master-0 kubenswrapper[4652]: I0216 17:39:53.844967 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:54.325640 master-0 kubenswrapper[4652]: W0216 17:39:54.325579 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0b757aa_3fd1_48e0_9405_0008e2df1011.slice/crio-c36ce3e59b282eb961b46831ece75983e134d971edc07a5a87d28b62b4656f79 WatchSource:0}: Error finding container c36ce3e59b282eb961b46831ece75983e134d971edc07a5a87d28b62b4656f79: Status 404 returned error can't find the container with id c36ce3e59b282eb961b46831ece75983e134d971edc07a5a87d28b62b4656f79 Feb 16 17:39:54.339327 master-0 kubenswrapper[4652]: I0216 17:39:54.338275 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96"] Feb 16 17:39:54.591638 master-0 kubenswrapper[4652]: I0216 17:39:54.591500 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" event={"ID":"a0b757aa-3fd1-48e0-9405-0008e2df1011","Type":"ContainerStarted","Data":"f97eb549fd0bca8eeabdd66f60c76bfa310792f3fc8550967e511adcce57fcc1"} Feb 16 17:39:54.591638 master-0 kubenswrapper[4652]: I0216 17:39:54.591561 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" event={"ID":"a0b757aa-3fd1-48e0-9405-0008e2df1011","Type":"ContainerStarted","Data":"c36ce3e59b282eb961b46831ece75983e134d971edc07a5a87d28b62b4656f79"} Feb 16 17:39:54.591883 master-0 kubenswrapper[4652]: I0216 17:39:54.591672 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:39:54.632325 master-0 kubenswrapper[4652]: I0216 17:39:54.632219 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" podStartSLOduration=33.632197771 podStartE2EDuration="33.632197771s" podCreationTimestamp="2026-02-16 17:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:39:54.622196402 +0000 UTC m=+952.010364918" watchObservedRunningTime="2026-02-16 17:39:54.632197771 +0000 UTC m=+952.020366287" Feb 16 17:39:56.949480 master-0 kubenswrapper[4652]: I0216 17:39:56.949385 4652 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-5f879c76b6-f4x7q" Feb 16 17:39:57.562704 master-0 kubenswrapper[4652]: I0216 17:39:57.562645 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5f8cd6b89bv7czm" Feb 16 17:40:03.852475 master-0 kubenswrapper[4652]: I0216 17:40:03.852405 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-74d597bfd6-mlz96" Feb 16 17:40:04.691941 master-0 kubenswrapper[4652]: I0216 17:40:04.691882 4652 scope.go:117] "RemoveContainer" containerID="edd4c5d0e652b5757bdccf907fa4067f8e355e19438b3b15ad493b63e9b63bb2" Feb 16 17:40:39.681271 master-0 kubenswrapper[4652]: I0216 17:40:39.672796 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-m6b8n"] Feb 16 17:40:39.681271 master-0 kubenswrapper[4652]: I0216 17:40:39.676240 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:39.681271 master-0 kubenswrapper[4652]: I0216 17:40:39.680892 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 16 17:40:39.681271 master-0 kubenswrapper[4652]: I0216 17:40:39.681114 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 16 17:40:39.681271 master-0 kubenswrapper[4652]: I0216 17:40:39.681221 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 16 17:40:39.684322 master-0 kubenswrapper[4652]: I0216 17:40:39.682877 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-m6b8n"] Feb 16 17:40:39.772629 master-0 kubenswrapper[4652]: I0216 17:40:39.772575 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d78499c-vxnqn"] Feb 16 17:40:39.775179 master-0 kubenswrapper[4652]: I0216 17:40:39.775125 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:39.786665 master-0 kubenswrapper[4652]: I0216 17:40:39.786625 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 16 17:40:39.812414 master-0 kubenswrapper[4652]: I0216 17:40:39.806430 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-vxnqn"] Feb 16 17:40:39.873579 master-0 kubenswrapper[4652]: I0216 17:40:39.873518 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7ns\" (UniqueName: \"kubernetes.io/projected/483fd99f-b1d8-4755-8635-78d8508f079f-kube-api-access-5t7ns\") pod \"dnsmasq-dns-5c7b6fb887-m6b8n\" (UID: \"483fd99f-b1d8-4755-8635-78d8508f079f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:39.873795 master-0 kubenswrapper[4652]: I0216 17:40:39.873618 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483fd99f-b1d8-4755-8635-78d8508f079f-config\") pod \"dnsmasq-dns-5c7b6fb887-m6b8n\" (UID: \"483fd99f-b1d8-4755-8635-78d8508f079f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:39.975637 master-0 kubenswrapper[4652]: I0216 17:40:39.975489 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483fd99f-b1d8-4755-8635-78d8508f079f-config\") pod \"dnsmasq-dns-5c7b6fb887-m6b8n\" (UID: \"483fd99f-b1d8-4755-8635-78d8508f079f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:39.975637 master-0 kubenswrapper[4652]: I0216 17:40:39.975583 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-dns-svc\") pod \"dnsmasq-dns-7d78499c-vxnqn\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:39.976028 master-0 kubenswrapper[4652]: I0216 17:40:39.975698 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-config\") pod \"dnsmasq-dns-7d78499c-vxnqn\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:39.976028 master-0 kubenswrapper[4652]: I0216 17:40:39.975881 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7ns\" (UniqueName: \"kubernetes.io/projected/483fd99f-b1d8-4755-8635-78d8508f079f-kube-api-access-5t7ns\") pod \"dnsmasq-dns-5c7b6fb887-m6b8n\" (UID: \"483fd99f-b1d8-4755-8635-78d8508f079f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:39.976187 master-0 kubenswrapper[4652]: I0216 17:40:39.976103 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7z6h\" (UniqueName: \"kubernetes.io/projected/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-kube-api-access-n7z6h\") pod \"dnsmasq-dns-7d78499c-vxnqn\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:39.976484 master-0 kubenswrapper[4652]: I0216 17:40:39.976440 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483fd99f-b1d8-4755-8635-78d8508f079f-config\") pod 
\"dnsmasq-dns-5c7b6fb887-m6b8n\" (UID: \"483fd99f-b1d8-4755-8635-78d8508f079f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:39.990826 master-0 kubenswrapper[4652]: I0216 17:40:39.990781 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7ns\" (UniqueName: \"kubernetes.io/projected/483fd99f-b1d8-4755-8635-78d8508f079f-kube-api-access-5t7ns\") pod \"dnsmasq-dns-5c7b6fb887-m6b8n\" (UID: \"483fd99f-b1d8-4755-8635-78d8508f079f\") " pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:40.038722 master-0 kubenswrapper[4652]: I0216 17:40:40.038657 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:40.078120 master-0 kubenswrapper[4652]: I0216 17:40:40.078056 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-config\") pod \"dnsmasq-dns-7d78499c-vxnqn\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:40.078357 master-0 kubenswrapper[4652]: I0216 17:40:40.078210 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7z6h\" (UniqueName: \"kubernetes.io/projected/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-kube-api-access-n7z6h\") pod \"dnsmasq-dns-7d78499c-vxnqn\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:40.078357 master-0 kubenswrapper[4652]: I0216 17:40:40.078308 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-dns-svc\") pod \"dnsmasq-dns-7d78499c-vxnqn\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:40.079598 master-0 kubenswrapper[4652]: I0216 17:40:40.079368 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-dns-svc\") pod \"dnsmasq-dns-7d78499c-vxnqn\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:40.083441 master-0 kubenswrapper[4652]: I0216 17:40:40.080014 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-config\") pod \"dnsmasq-dns-7d78499c-vxnqn\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:40.107902 master-0 kubenswrapper[4652]: I0216 17:40:40.107868 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7z6h\" (UniqueName: \"kubernetes.io/projected/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-kube-api-access-n7z6h\") pod \"dnsmasq-dns-7d78499c-vxnqn\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:40.127154 master-0 kubenswrapper[4652]: I0216 17:40:40.127094 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:40.550084 master-0 kubenswrapper[4652]: I0216 17:40:40.549621 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-m6b8n"] Feb 16 17:40:40.556828 master-0 kubenswrapper[4652]: I0216 17:40:40.556771 4652 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 17:40:40.638339 master-0 kubenswrapper[4652]: I0216 17:40:40.638262 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-vxnqn"] Feb 16 17:40:40.639748 master-0 kubenswrapper[4652]: W0216 17:40:40.639707 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cde9d39_1a26_44c8_8ea3_d9d4bd2ecfb8.slice/crio-d42ef72881a66c5040c2af156705587d82444f0a459aed0eebd6a2f6b9f338d2 WatchSource:0}: Error finding container d42ef72881a66c5040c2af156705587d82444f0a459aed0eebd6a2f6b9f338d2: Status 404 returned error can't find the container with id d42ef72881a66c5040c2af156705587d82444f0a459aed0eebd6a2f6b9f338d2 Feb 16 17:40:40.999429 master-0 kubenswrapper[4652]: I0216 17:40:40.999330 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-vxnqn" event={"ID":"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8","Type":"ContainerStarted","Data":"d42ef72881a66c5040c2af156705587d82444f0a459aed0eebd6a2f6b9f338d2"} Feb 16 17:40:41.001631 master-0 kubenswrapper[4652]: I0216 17:40:41.001584 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" event={"ID":"483fd99f-b1d8-4755-8635-78d8508f079f","Type":"ContainerStarted","Data":"d3f9f11b870d9226726d18e572fcb4d22eed24e8f2cd5c4ab90d9f9d53210542"} Feb 16 17:40:42.453175 master-0 kubenswrapper[4652]: I0216 17:40:42.450711 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-m6b8n"] Feb 16 17:40:42.496306 master-0 kubenswrapper[4652]: I0216 17:40:42.494399 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-9sfsg"] Feb 16 17:40:42.500232 master-0 kubenswrapper[4652]: I0216 17:40:42.497362 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.531148 master-0 kubenswrapper[4652]: I0216 17:40:42.531114 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-9sfsg"] Feb 16 17:40:42.645046 master-0 kubenswrapper[4652]: I0216 17:40:42.644997 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llbgz\" (UniqueName: \"kubernetes.io/projected/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-kube-api-access-llbgz\") pod \"dnsmasq-dns-5bcd98d69f-9sfsg\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.645304 master-0 kubenswrapper[4652]: I0216 17:40:42.645064 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-config\") pod \"dnsmasq-dns-5bcd98d69f-9sfsg\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.645576 master-0 kubenswrapper[4652]: I0216 17:40:42.645420 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-9sfsg\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.751949 master-0 kubenswrapper[4652]: I0216 17:40:42.750153 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-config\") pod \"dnsmasq-dns-5bcd98d69f-9sfsg\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.751949 master-0 kubenswrapper[4652]: I0216 17:40:42.750275 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-9sfsg\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.751949 master-0 kubenswrapper[4652]: I0216 17:40:42.750501 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llbgz\" (UniqueName: \"kubernetes.io/projected/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-kube-api-access-llbgz\") pod \"dnsmasq-dns-5bcd98d69f-9sfsg\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.764321 master-0 kubenswrapper[4652]: I0216 17:40:42.763093 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-config\") pod \"dnsmasq-dns-5bcd98d69f-9sfsg\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.781360 master-0 kubenswrapper[4652]: I0216 17:40:42.776976 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llbgz\" (UniqueName: \"kubernetes.io/projected/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-kube-api-access-llbgz\") pod \"dnsmasq-dns-5bcd98d69f-9sfsg\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.806947 master-0 kubenswrapper[4652]: I0216 17:40:42.806065 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-dns-svc\") pod \"dnsmasq-dns-5bcd98d69f-9sfsg\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.835219 master-0 kubenswrapper[4652]: I0216 17:40:42.835178 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:42.843948 master-0 kubenswrapper[4652]: I0216 17:40:42.838016 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-vxnqn"] Feb 16 17:40:42.927491 master-0 kubenswrapper[4652]: I0216 17:40:42.927431 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nxsmd"] Feb 16 17:40:42.939345 master-0 kubenswrapper[4652]: I0216 17:40:42.939300 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:42.957983 master-0 kubenswrapper[4652]: I0216 17:40:42.957921 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nxsmd"] Feb 16 17:40:42.958227 master-0 kubenswrapper[4652]: I0216 17:40:42.958119 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-config\") pod \"dnsmasq-dns-6b98d7b55c-nxsmd\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:42.958394 master-0 kubenswrapper[4652]: I0216 17:40:42.958233 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5rg\" (UniqueName: \"kubernetes.io/projected/5895aff2-c4f5-42f3-a422-9ef5ea305756-kube-api-access-mt5rg\") pod \"dnsmasq-dns-6b98d7b55c-nxsmd\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:42.958394 master-0 kubenswrapper[4652]: I0216 17:40:42.958282 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-nxsmd\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:43.062296 master-0 kubenswrapper[4652]: I0216 17:40:43.059357 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-config\") pod \"dnsmasq-dns-6b98d7b55c-nxsmd\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:43.062296 master-0 kubenswrapper[4652]: I0216 17:40:43.059428 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt5rg\" (UniqueName: \"kubernetes.io/projected/5895aff2-c4f5-42f3-a422-9ef5ea305756-kube-api-access-mt5rg\") pod \"dnsmasq-dns-6b98d7b55c-nxsmd\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:43.062296 master-0 kubenswrapper[4652]: I0216 17:40:43.059447 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-nxsmd\" 
(UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:43.062296 master-0 kubenswrapper[4652]: I0216 17:40:43.061238 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-dns-svc\") pod \"dnsmasq-dns-6b98d7b55c-nxsmd\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:43.065506 master-0 kubenswrapper[4652]: I0216 17:40:43.061703 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-config\") pod \"dnsmasq-dns-6b98d7b55c-nxsmd\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:43.077891 master-0 kubenswrapper[4652]: I0216 17:40:43.077842 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt5rg\" (UniqueName: \"kubernetes.io/projected/5895aff2-c4f5-42f3-a422-9ef5ea305756-kube-api-access-mt5rg\") pod \"dnsmasq-dns-6b98d7b55c-nxsmd\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:43.268782 master-0 kubenswrapper[4652]: I0216 17:40:43.268102 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:43.368056 master-0 kubenswrapper[4652]: I0216 17:40:43.367995 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-9sfsg"] Feb 16 17:40:43.386611 master-0 kubenswrapper[4652]: W0216 17:40:43.386544 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62508adf_ee70_4ca9_ba5f_7422b4cbacd9.slice/crio-b238ebd221939ded4c778e8c7d1c37b0f5d846bc1d5ebad2015d0e5baba2c41d WatchSource:0}: Error finding container b238ebd221939ded4c778e8c7d1c37b0f5d846bc1d5ebad2015d0e5baba2c41d: Status 404 returned error can't find the container with id b238ebd221939ded4c778e8c7d1c37b0f5d846bc1d5ebad2015d0e5baba2c41d Feb 16 17:40:43.705874 master-0 kubenswrapper[4652]: I0216 17:40:43.705828 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nxsmd"] Feb 16 17:40:43.711256 master-0 kubenswrapper[4652]: W0216 17:40:43.711202 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5895aff2_c4f5_42f3_a422_9ef5ea305756.slice/crio-b4d3be7976baa41361819405a212bc6d13f4cdea5addc96b57d7ebc1e641f361 WatchSource:0}: Error finding container b4d3be7976baa41361819405a212bc6d13f4cdea5addc96b57d7ebc1e641f361: Status 404 returned error can't find the container with id b4d3be7976baa41361819405a212bc6d13f4cdea5addc96b57d7ebc1e641f361 Feb 16 17:40:44.094364 master-0 kubenswrapper[4652]: I0216 17:40:44.090105 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" event={"ID":"5895aff2-c4f5-42f3-a422-9ef5ea305756","Type":"ContainerStarted","Data":"b4d3be7976baa41361819405a212bc6d13f4cdea5addc96b57d7ebc1e641f361"} Feb 16 17:40:44.095574 master-0 kubenswrapper[4652]: I0216 17:40:44.095504 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" 
event={"ID":"62508adf-ee70-4ca9-ba5f-7422b4cbacd9","Type":"ContainerStarted","Data":"b238ebd221939ded4c778e8c7d1c37b0f5d846bc1d5ebad2015d0e5baba2c41d"} Feb 16 17:40:46.640661 master-0 kubenswrapper[4652]: I0216 17:40:46.640241 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:40:46.642543 master-0 kubenswrapper[4652]: I0216 17:40:46.642509 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.645112 master-0 kubenswrapper[4652]: I0216 17:40:46.644396 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 17:40:46.645112 master-0 kubenswrapper[4652]: I0216 17:40:46.644801 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 17:40:46.648529 master-0 kubenswrapper[4652]: I0216 17:40:46.645306 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 17:40:46.648529 master-0 kubenswrapper[4652]: I0216 17:40:46.645129 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 17:40:46.648529 master-0 kubenswrapper[4652]: I0216 17:40:46.645548 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 17:40:46.648529 master-0 kubenswrapper[4652]: I0216 17:40:46.645429 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 17:40:46.668186 master-0 kubenswrapper[4652]: I0216 17:40:46.668141 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752035 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752132 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9lwf\" (UniqueName: \"kubernetes.io/projected/da39e35c-827b-40f3-8359-db6934118af4-kube-api-access-x9lwf\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752162 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752188 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da39e35c-827b-40f3-8359-db6934118af4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752205 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" 
(UniqueName: \"kubernetes.io/projected/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752241 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da39e35c-827b-40f3-8359-db6934118af4-config-data\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752271 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752297 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da39e35c-827b-40f3-8359-db6934118af4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752318 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba4bd580-e80d-4e89-a986-69817c8e8f85\" (UniqueName: \"kubernetes.io/csi/topolvm.io^734ed801-2dbc-419f-8eb4-5648171f4b55\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752340 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da39e35c-827b-40f3-8359-db6934118af4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.753196 master-0 kubenswrapper[4652]: I0216 17:40:46.752359 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da39e35c-827b-40f3-8359-db6934118af4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.831938 master-0 kubenswrapper[4652]: I0216 17:40:46.831853 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 16 17:40:46.833320 master-0 kubenswrapper[4652]: I0216 17:40:46.833293 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Feb 16 17:40:46.839920 master-0 kubenswrapper[4652]: I0216 17:40:46.838328 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 16 17:40:46.840728 master-0 kubenswrapper[4652]: I0216 17:40:46.840630 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 16 17:40:46.844703 master-0 kubenswrapper[4652]: I0216 17:40:46.844671 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 16 17:40:46.855460 master-0 kubenswrapper[4652]: I0216 17:40:46.853943 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da39e35c-827b-40f3-8359-db6934118af4-pod-info\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.855460 master-0 kubenswrapper[4652]: I0216 17:40:46.854033 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da39e35c-827b-40f3-8359-db6934118af4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.855460 master-0 kubenswrapper[4652]: I0216 17:40:46.854120 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.857071 master-0 kubenswrapper[4652]: I0216 17:40:46.855808 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.857071 master-0 kubenswrapper[4652]: I0216 17:40:46.855876 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9lwf\" (UniqueName: \"kubernetes.io/projected/da39e35c-827b-40f3-8359-db6934118af4-kube-api-access-x9lwf\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.858054 master-0 kubenswrapper[4652]: I0216 17:40:46.858000 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.858159 master-0 kubenswrapper[4652]: I0216 17:40:46.858097 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.862853 master-0 kubenswrapper[4652]: I0216 17:40:46.862494 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da39e35c-827b-40f3-8359-db6934118af4-pod-info\") pod \"rabbitmq-server-0\" 
(UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.863364 master-0 kubenswrapper[4652]: I0216 17:40:46.863333 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da39e35c-827b-40f3-8359-db6934118af4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.865024 master-0 kubenswrapper[4652]: I0216 17:40:46.865000 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da39e35c-827b-40f3-8359-db6934118af4-server-conf\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.865108 master-0 kubenswrapper[4652]: I0216 17:40:46.865053 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.865222 master-0 kubenswrapper[4652]: I0216 17:40:46.865203 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da39e35c-827b-40f3-8359-db6934118af4-config-data\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.865305 master-0 kubenswrapper[4652]: I0216 17:40:46.865228 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.865305 master-0 kubenswrapper[4652]: I0216 17:40:46.865290 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da39e35c-827b-40f3-8359-db6934118af4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.865381 master-0 kubenswrapper[4652]: I0216 17:40:46.865336 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba4bd580-e80d-4e89-a986-69817c8e8f85\" (UniqueName: \"kubernetes.io/csi/topolvm.io^734ed801-2dbc-419f-8eb4-5648171f4b55\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.866209 master-0 kubenswrapper[4652]: I0216 17:40:46.865823 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 17:40:46.866726 master-0 kubenswrapper[4652]: I0216 17:40:46.866686 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da39e35c-827b-40f3-8359-db6934118af4-config-data\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.866835 master-0 kubenswrapper[4652]: I0216 17:40:46.866774 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da39e35c-827b-40f3-8359-db6934118af4-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.870346 master-0 kubenswrapper[4652]: I0216 17:40:46.870108 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.871036 master-0 kubenswrapper[4652]: I0216 17:40:46.870890 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:40:46.871036 master-0 kubenswrapper[4652]: I0216 17:40:46.870925 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba4bd580-e80d-4e89-a986-69817c8e8f85\" (UniqueName: \"kubernetes.io/csi/topolvm.io^734ed801-2dbc-419f-8eb4-5648171f4b55\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/facff698cc8a26411c34c864c996acf33c9e2c3346bf8eeb42dccfca6e9a8864/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.875042 master-0 kubenswrapper[4652]: I0216 17:40:46.874989 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da39e35c-827b-40f3-8359-db6934118af4-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.876483 master-0 kubenswrapper[4652]: I0216 17:40:46.876430 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9lwf\" (UniqueName: \"kubernetes.io/projected/da39e35c-827b-40f3-8359-db6934118af4-kube-api-access-x9lwf\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.896015 master-0 kubenswrapper[4652]: I0216 17:40:46.895911 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da39e35c-827b-40f3-8359-db6934118af4-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:46.967553 master-0 kubenswrapper[4652]: I0216 17:40:46.967500 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2425g\" (UniqueName: \"kubernetes.io/projected/e6a7c073-8562-4706-b5be-41ea098db1ab-kube-api-access-2425g\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0" Feb 16 17:40:46.967829 master-0 kubenswrapper[4652]: I0216 17:40:46.967605 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a7c073-8562-4706-b5be-41ea098db1ab-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0" Feb 16 17:40:46.967899 master-0 kubenswrapper[4652]: I0216 17:40:46.967868 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6a7c073-8562-4706-b5be-41ea098db1ab-kolla-config\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0" Feb 16 17:40:46.969080 master-0 
Feb 16 17:40:46.969080 master-0 kubenswrapper[4652]: I0216 17:40:46.968893 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6a7c073-8562-4706-b5be-41ea098db1ab-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0"
Feb 16 17:40:46.969186 master-0 kubenswrapper[4652]: I0216 17:40:46.969153 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6a7c073-8562-4706-b5be-41ea098db1ab-config-data\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0"
Feb 16 17:40:47.071576 master-0 kubenswrapper[4652]: I0216 17:40:47.071514 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a7c073-8562-4706-b5be-41ea098db1ab-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0"
Feb 16 17:40:47.071808 master-0 kubenswrapper[4652]: I0216 17:40:47.071628 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6a7c073-8562-4706-b5be-41ea098db1ab-kolla-config\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0"
Feb 16 17:40:47.071808 master-0 kubenswrapper[4652]: I0216 17:40:47.071695 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/e6a7c073-8562-4706-b5be-41ea098db1ab-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0"
Feb 16 17:40:47.071808 master-0 kubenswrapper[4652]: I0216 17:40:47.071731 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6a7c073-8562-4706-b5be-41ea098db1ab-config-data\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0"
Feb 16 17:40:47.072460 master-0 kubenswrapper[4652]: I0216 17:40:47.072164 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2425g\" (UniqueName: \"kubernetes.io/projected/e6a7c073-8562-4706-b5be-41ea098db1ab-kube-api-access-2425g\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0"
Feb 16 17:40:47.072666 master-0 kubenswrapper[4652]: I0216 17:40:47.072607 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e6a7c073-8562-4706-b5be-41ea098db1ab-kolla-config\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0"
Feb 16 17:40:47.077883 master-0 kubenswrapper[4652]: I0216 17:40:47.077308 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6a7c073-8562-4706-b5be-41ea098db1ab-config-data\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0"
Feb 16 17:40:47.078987 master-0 kubenswrapper[4652]: I0216 17:40:47.078604 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName:
\"kubernetes.io/secret/e6a7c073-8562-4706-b5be-41ea098db1ab-memcached-tls-certs\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0" Feb 16 17:40:47.092359 master-0 kubenswrapper[4652]: I0216 17:40:47.092275 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2425g\" (UniqueName: \"kubernetes.io/projected/e6a7c073-8562-4706-b5be-41ea098db1ab-kube-api-access-2425g\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0" Feb 16 17:40:47.095347 master-0 kubenswrapper[4652]: I0216 17:40:47.095312 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6a7c073-8562-4706-b5be-41ea098db1ab-combined-ca-bundle\") pod \"memcached-0\" (UID: \"e6a7c073-8562-4706-b5be-41ea098db1ab\") " pod="openstack/memcached-0" Feb 16 17:40:47.156379 master-0 kubenswrapper[4652]: I0216 17:40:47.156135 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 16 17:40:47.972237 master-0 kubenswrapper[4652]: I0216 17:40:47.972105 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:40:47.974324 master-0 kubenswrapper[4652]: I0216 17:40:47.974222 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:47.985424 master-0 kubenswrapper[4652]: I0216 17:40:47.984294 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:40:47.990518 master-0 kubenswrapper[4652]: I0216 17:40:47.990081 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 17:40:47.990870 master-0 kubenswrapper[4652]: I0216 17:40:47.990818 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 17:40:47.991143 master-0 kubenswrapper[4652]: I0216 17:40:47.991111 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 17:40:47.991347 master-0 kubenswrapper[4652]: I0216 17:40:47.991317 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 17:40:47.991506 master-0 kubenswrapper[4652]: I0216 17:40:47.991486 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 17:40:47.991733 master-0 kubenswrapper[4652]: I0216 17:40:47.991706 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 17:40:48.091385 master-0 kubenswrapper[4652]: I0216 17:40:48.091337 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90814419-59bd-4110-8afa-6842e5fa7b95-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.091385 master-0 kubenswrapper[4652]: I0216 17:40:48.091386 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90814419-59bd-4110-8afa-6842e5fa7b95-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 
17:40:48.091665 master-0 kubenswrapper[4652]: I0216 17:40:48.091412 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.091665 master-0 kubenswrapper[4652]: I0216 17:40:48.091473 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.091665 master-0 kubenswrapper[4652]: I0216 17:40:48.091501 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90814419-59bd-4110-8afa-6842e5fa7b95-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.092460 master-0 kubenswrapper[4652]: I0216 17:40:48.092436 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-86c3a0d7-1610-4287-9649-62ef946bd34f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8a0a3cf7-4e08-46a9-939e-5337e72fa20e\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.092685 master-0 kubenswrapper[4652]: I0216 17:40:48.092645 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.092732 master-0 kubenswrapper[4652]: I0216 17:40:48.092701 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90814419-59bd-4110-8afa-6842e5fa7b95-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.093161 master-0 kubenswrapper[4652]: I0216 17:40:48.092941 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90814419-59bd-4110-8afa-6842e5fa7b95-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.093161 master-0 kubenswrapper[4652]: I0216 17:40:48.092992 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.093161 master-0 kubenswrapper[4652]: I0216 17:40:48.093019 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z27vs\" (UniqueName: 
\"kubernetes.io/projected/90814419-59bd-4110-8afa-6842e5fa7b95-kube-api-access-z27vs\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.145979 master-0 kubenswrapper[4652]: I0216 17:40:48.145911 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 16 17:40:48.147904 master-0 kubenswrapper[4652]: I0216 17:40:48.147870 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 16 17:40:48.156531 master-0 kubenswrapper[4652]: I0216 17:40:48.156484 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 16 17:40:48.156798 master-0 kubenswrapper[4652]: I0216 17:40:48.156770 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 16 17:40:48.156942 master-0 kubenswrapper[4652]: I0216 17:40:48.156918 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 16 17:40:48.169751 master-0 kubenswrapper[4652]: I0216 17:40:48.169696 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 17:40:48.195603 master-0 kubenswrapper[4652]: I0216 17:40:48.195523 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90814419-59bd-4110-8afa-6842e5fa7b95-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.195603 master-0 kubenswrapper[4652]: I0216 17:40:48.195596 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90814419-59bd-4110-8afa-6842e5fa7b95-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.195603 master-0 kubenswrapper[4652]: I0216 17:40:48.195621 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.195938 master-0 kubenswrapper[4652]: I0216 17:40:48.195773 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.196502 master-0 kubenswrapper[4652]: I0216 17:40:48.196464 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.197006 master-0 kubenswrapper[4652]: I0216 17:40:48.196545 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90814419-59bd-4110-8afa-6842e5fa7b95-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.197106 master-0 kubenswrapper[4652]: I0216 17:40:48.197080 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-86c3a0d7-1610-4287-9649-62ef946bd34f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8a0a3cf7-4e08-46a9-939e-5337e72fa20e\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.198676 master-0 kubenswrapper[4652]: I0216 17:40:48.197183 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.198676 master-0 kubenswrapper[4652]: I0216 17:40:48.197285 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90814419-59bd-4110-8afa-6842e5fa7b95-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.198676 master-0 kubenswrapper[4652]: I0216 17:40:48.197318 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/90814419-59bd-4110-8afa-6842e5fa7b95-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.198676 master-0 kubenswrapper[4652]: I0216 17:40:48.198026 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90814419-59bd-4110-8afa-6842e5fa7b95-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.198676 master-0 kubenswrapper[4652]: I0216 17:40:48.198066 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.198676 master-0 kubenswrapper[4652]: I0216 17:40:48.198099 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z27vs\" (UniqueName: \"kubernetes.io/projected/90814419-59bd-4110-8afa-6842e5fa7b95-kube-api-access-z27vs\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.199900 master-0 kubenswrapper[4652]: I0216 17:40:48.199140 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/90814419-59bd-4110-8afa-6842e5fa7b95-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.199900 master-0 kubenswrapper[4652]: I0216 17:40:48.199469 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.200011 master-0 kubenswrapper[4652]: I0216 17:40:48.199934 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/90814419-59bd-4110-8afa-6842e5fa7b95-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.201360 master-0 kubenswrapper[4652]: I0216 17:40:48.200655 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:40:48.201360 master-0 kubenswrapper[4652]: I0216 17:40:48.200690 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-86c3a0d7-1610-4287-9649-62ef946bd34f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8a0a3cf7-4e08-46a9-939e-5337e72fa20e\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/be7b1d43a003578f694bebdb529699049b397496842f1f2eb4158c6706be6ae2/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.202310 master-0 kubenswrapper[4652]: I0216 17:40:48.202284 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/90814419-59bd-4110-8afa-6842e5fa7b95-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.211472 master-0 kubenswrapper[4652]: I0216 17:40:48.211149 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.212392 master-0 kubenswrapper[4652]: I0216 17:40:48.212300 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/90814419-59bd-4110-8afa-6842e5fa7b95-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.218924 master-0 kubenswrapper[4652]: I0216 17:40:48.218893 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z27vs\" (UniqueName: \"kubernetes.io/projected/90814419-59bd-4110-8afa-6842e5fa7b95-kube-api-access-z27vs\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.221958 master-0 kubenswrapper[4652]: I0216 17:40:48.221898 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/90814419-59bd-4110-8afa-6842e5fa7b95-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:48.301729 master-0 kubenswrapper[4652]: I0216 17:40:48.301660 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ac815e50-ebe0-4937-b781-9396e09bc55d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.301977 master-0 
kubenswrapper[4652]: I0216 17:40:48.301731 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac815e50-ebe0-4937-b781-9396e09bc55d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.301977 master-0 kubenswrapper[4652]: I0216 17:40:48.301775 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac815e50-ebe0-4937-b781-9396e09bc55d-kolla-config\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.301977 master-0 kubenswrapper[4652]: I0216 17:40:48.301812 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac815e50-ebe0-4937-b781-9396e09bc55d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.301977 master-0 kubenswrapper[4652]: I0216 17:40:48.301853 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac815e50-ebe0-4937-b781-9396e09bc55d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.301977 master-0 kubenswrapper[4652]: I0216 17:40:48.301930 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxrqt\" (UniqueName: \"kubernetes.io/projected/ac815e50-ebe0-4937-b781-9396e09bc55d-kube-api-access-qxrqt\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.302179 master-0 kubenswrapper[4652]: I0216 17:40:48.302004 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c7024dcf-a25b-4ab7-b526-cd66d9de9733\" (UniqueName: \"kubernetes.io/csi/topolvm.io^394f29fd-e678-4f99-8970-2bdafc5e24fa\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.302327 master-0 kubenswrapper[4652]: I0216 17:40:48.302268 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ac815e50-ebe0-4937-b781-9396e09bc55d-config-data-default\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.372936 master-0 kubenswrapper[4652]: I0216 17:40:48.372867 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba4bd580-e80d-4e89-a986-69817c8e8f85\" (UniqueName: \"kubernetes.io/csi/topolvm.io^734ed801-2dbc-419f-8eb4-5648171f4b55\") pod \"rabbitmq-server-0\" (UID: \"da39e35c-827b-40f3-8359-db6934118af4\") " pod="openstack/rabbitmq-server-0" Feb 16 17:40:48.407486 master-0 kubenswrapper[4652]: I0216 17:40:48.407423 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ac815e50-ebe0-4937-b781-9396e09bc55d-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.407486 master-0 kubenswrapper[4652]: I0216 17:40:48.407480 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac815e50-ebe0-4937-b781-9396e09bc55d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.407765 master-0 kubenswrapper[4652]: I0216 17:40:48.407511 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac815e50-ebe0-4937-b781-9396e09bc55d-kolla-config\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.407765 master-0 kubenswrapper[4652]: I0216 17:40:48.407539 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac815e50-ebe0-4937-b781-9396e09bc55d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.407765 master-0 kubenswrapper[4652]: I0216 17:40:48.407562 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac815e50-ebe0-4937-b781-9396e09bc55d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.407765 master-0 kubenswrapper[4652]: I0216 17:40:48.407613 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxrqt\" (UniqueName: \"kubernetes.io/projected/ac815e50-ebe0-4937-b781-9396e09bc55d-kube-api-access-qxrqt\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.407765 master-0 kubenswrapper[4652]: I0216 17:40:48.407658 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c7024dcf-a25b-4ab7-b526-cd66d9de9733\" (UniqueName: \"kubernetes.io/csi/topolvm.io^394f29fd-e678-4f99-8970-2bdafc5e24fa\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.407765 master-0 kubenswrapper[4652]: I0216 17:40:48.407741 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ac815e50-ebe0-4937-b781-9396e09bc55d-config-data-default\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.408650 master-0 kubenswrapper[4652]: I0216 17:40:48.408616 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ac815e50-ebe0-4937-b781-9396e09bc55d-config-data-default\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.408865 master-0 kubenswrapper[4652]: I0216 17:40:48.408838 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ac815e50-ebe0-4937-b781-9396e09bc55d-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " 
pod="openstack/openstack-galera-0" Feb 16 17:40:48.409967 master-0 kubenswrapper[4652]: I0216 17:40:48.409933 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac815e50-ebe0-4937-b781-9396e09bc55d-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.410977 master-0 kubenswrapper[4652]: I0216 17:40:48.410953 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac815e50-ebe0-4937-b781-9396e09bc55d-kolla-config\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.412353 master-0 kubenswrapper[4652]: I0216 17:40:48.412295 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:40:48.412553 master-0 kubenswrapper[4652]: I0216 17:40:48.412482 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c7024dcf-a25b-4ab7-b526-cd66d9de9733\" (UniqueName: \"kubernetes.io/csi/topolvm.io^394f29fd-e678-4f99-8970-2bdafc5e24fa\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/eb414155841ae189e9ee0367a26f6aa408678e54cf9cc73a4824bd94704fc225/globalmount\"" pod="openstack/openstack-galera-0" Feb 16 17:40:48.420358 master-0 kubenswrapper[4652]: I0216 17:40:48.413891 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac815e50-ebe0-4937-b781-9396e09bc55d-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.420358 master-0 kubenswrapper[4652]: I0216 17:40:48.419451 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac815e50-ebe0-4937-b781-9396e09bc55d-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.435030 master-0 kubenswrapper[4652]: I0216 17:40:48.434993 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxrqt\" (UniqueName: \"kubernetes.io/projected/ac815e50-ebe0-4937-b781-9396e09bc55d-kube-api-access-qxrqt\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0" Feb 16 17:40:48.487700 master-0 kubenswrapper[4652]: I0216 17:40:48.487637 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 17:40:49.781930 master-0 kubenswrapper[4652]: I0216 17:40:49.781740 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-86c3a0d7-1610-4287-9649-62ef946bd34f\" (UniqueName: \"kubernetes.io/csi/topolvm.io^8a0a3cf7-4e08-46a9-939e-5337e72fa20e\") pod \"rabbitmq-cell1-server-0\" (UID: \"90814419-59bd-4110-8afa-6842e5fa7b95\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:49.793697 master-0 kubenswrapper[4652]: I0216 17:40:49.793636 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 17:40:49.795306 master-0 kubenswrapper[4652]: I0216 17:40:49.795283 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:49.802490 master-0 kubenswrapper[4652]: I0216 17:40:49.802420 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 16 17:40:49.802701 master-0 kubenswrapper[4652]: I0216 17:40:49.802577 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 16 17:40:49.802876 master-0 kubenswrapper[4652]: I0216 17:40:49.802845 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 16 17:40:49.814974 master-0 kubenswrapper[4652]: I0216 17:40:49.814166 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 17:40:49.859410 master-0 kubenswrapper[4652]: I0216 17:40:49.859330 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:40:49.961601 master-0 kubenswrapper[4652]: I0216 17:40:49.961484 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c40b200-e1ed-461b-86c5-d23dce6ceb35-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:49.961601 master-0 kubenswrapper[4652]: I0216 17:40:49.961537 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0fa6e866-565c-4f7a-a53f-8a224bf5f52c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c80166e5-674b-4b54-8fca-eff2ff961dc9\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:49.961826 master-0 kubenswrapper[4652]: I0216 17:40:49.961617 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfx2\" (UniqueName: \"kubernetes.io/projected/6c40b200-e1ed-461b-86c5-d23dce6ceb35-kube-api-access-nsfx2\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:49.961826 master-0 kubenswrapper[4652]: I0216 17:40:49.961724 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c40b200-e1ed-461b-86c5-d23dce6ceb35-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:49.961826 master-0 kubenswrapper[4652]: I0216 17:40:49.961786 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c40b200-e1ed-461b-86c5-d23dce6ceb35-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:49.961826 master-0 kubenswrapper[4652]: I0216 17:40:49.961816 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c40b200-e1ed-461b-86c5-d23dce6ceb35-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:49.962007 master-0 
kubenswrapper[4652]: I0216 17:40:49.961962 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c40b200-e1ed-461b-86c5-d23dce6ceb35-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:49.962046 master-0 kubenswrapper[4652]: I0216 17:40:49.962010 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c40b200-e1ed-461b-86c5-d23dce6ceb35-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.064632 master-0 kubenswrapper[4652]: I0216 17:40:50.064563 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0fa6e866-565c-4f7a-a53f-8a224bf5f52c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c80166e5-674b-4b54-8fca-eff2ff961dc9\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.064874 master-0 kubenswrapper[4652]: I0216 17:40:50.064847 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsfx2\" (UniqueName: \"kubernetes.io/projected/6c40b200-e1ed-461b-86c5-d23dce6ceb35-kube-api-access-nsfx2\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.064944 master-0 kubenswrapper[4652]: I0216 17:40:50.064897 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c40b200-e1ed-461b-86c5-d23dce6ceb35-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.064944 master-0 kubenswrapper[4652]: I0216 17:40:50.064931 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c40b200-e1ed-461b-86c5-d23dce6ceb35-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.065014 master-0 kubenswrapper[4652]: I0216 17:40:50.064957 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c40b200-e1ed-461b-86c5-d23dce6ceb35-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.065054 master-0 kubenswrapper[4652]: I0216 17:40:50.065042 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c40b200-e1ed-461b-86c5-d23dce6ceb35-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.065093 master-0 kubenswrapper[4652]: I0216 17:40:50.065071 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c40b200-e1ed-461b-86c5-d23dce6ceb35-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " 
pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.065198 master-0 kubenswrapper[4652]: I0216 17:40:50.065173 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c40b200-e1ed-461b-86c5-d23dce6ceb35-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.066318 master-0 kubenswrapper[4652]: I0216 17:40:50.066222 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c40b200-e1ed-461b-86c5-d23dce6ceb35-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.066712 master-0 kubenswrapper[4652]: I0216 17:40:50.066473 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:40:50.066712 master-0 kubenswrapper[4652]: I0216 17:40:50.066519 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0fa6e866-565c-4f7a-a53f-8a224bf5f52c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c80166e5-674b-4b54-8fca-eff2ff961dc9\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/d92ed5a1914d53ced71421f504167ba57d45e89ea4418b6fc6c00d24f096c484/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.075902 master-0 kubenswrapper[4652]: I0216 17:40:50.070173 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c40b200-e1ed-461b-86c5-d23dce6ceb35-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.075902 master-0 kubenswrapper[4652]: I0216 17:40:50.070703 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c40b200-e1ed-461b-86c5-d23dce6ceb35-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.075902 master-0 kubenswrapper[4652]: I0216 17:40:50.070805 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c40b200-e1ed-461b-86c5-d23dce6ceb35-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.077408 master-0 kubenswrapper[4652]: I0216 17:40:50.077376 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c40b200-e1ed-461b-86c5-d23dce6ceb35-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" Feb 16 17:40:50.079727 master-0 kubenswrapper[4652]: I0216 17:40:50.079699 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c40b200-e1ed-461b-86c5-d23dce6ceb35-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0" 
Feb 16 17:40:50.085043 master-0 kubenswrapper[4652]: I0216 17:40:50.084996 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfx2\" (UniqueName: \"kubernetes.io/projected/6c40b200-e1ed-461b-86c5-d23dce6ceb35-kube-api-access-nsfx2\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:40:50.800357 master-0 kubenswrapper[4652]: I0216 17:40:50.800313 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c7024dcf-a25b-4ab7-b526-cd66d9de9733\" (UniqueName: \"kubernetes.io/csi/topolvm.io^394f29fd-e678-4f99-8970-2bdafc5e24fa\") pod \"openstack-galera-0\" (UID: \"ac815e50-ebe0-4937-b781-9396e09bc55d\") " pod="openstack/openstack-galera-0"
Feb 16 17:40:51.470340 master-0 kubenswrapper[4652]: I0216 17:40:51.470227 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 17:40:51.834844 master-0 kubenswrapper[4652]: I0216 17:40:51.834802 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0fa6e866-565c-4f7a-a53f-8a224bf5f52c\" (UniqueName: \"kubernetes.io/csi/topolvm.io^c80166e5-674b-4b54-8fca-eff2ff961dc9\") pod \"openstack-cell1-galera-0\" (UID: \"6c40b200-e1ed-461b-86c5-d23dce6ceb35\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 17:40:52.243186 master-0 kubenswrapper[4652]: I0216 17:40:52.243128 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 17:40:52.467955 master-0 kubenswrapper[4652]: I0216 17:40:52.467898 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5qcmk"]
Feb 16 17:40:52.469202 master-0 kubenswrapper[4652]: I0216 17:40:52.469176 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5qcmk"]
Feb 16 17:40:52.469290 master-0 kubenswrapper[4652]: I0216 17:40:52.469276 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qcmk"
Feb 16 17:40:52.475459 master-0 kubenswrapper[4652]: I0216 17:40:52.472166 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 16 17:40:52.475459 master-0 kubenswrapper[4652]: I0216 17:40:52.472239 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 16 17:40:52.483439 master-0 kubenswrapper[4652]: I0216 17:40:52.480693 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-bmlhg"]
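Each "No sandbox for pod can be found. Need to start a new one" line (util.go:30) is the kubelet concluding, while syncing a pod for the first time, that it must ask the container runtime (CRI-O on this OpenShift node) to create a fresh pod sandbox before any containers can start. The runtime's view of existing sandboxes can be inspected out of band over the same CRI socket; a hedged sketch using the published CRI client stubs, assuming the k8s.io/cri-api Go module and CRI-O's default socket path:

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // CRI-O's default socket; containerd would be /run/containerd/containerd.sock.
        conn, err := grpc.Dial("unix:///run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        // List the pod sandboxes the runtime knows about; the ones the kubelet
        // "cannot find" simply do not exist here yet.
        resp, err := client.ListPodSandbox(context.Background(), &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, sb := range resp.Items {
            fmt.Println(sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
        }
    }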
Need to start a new one" pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.493553 master-0 kubenswrapper[4652]: I0216 17:40:52.493436 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bmlhg"] Feb 16 17:40:52.615441 master-0 kubenswrapper[4652]: I0216 17:40:52.615395 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c938ffa5-271f-4685-b9c2-c236001d07b4-var-run-ovn\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.615647 master-0 kubenswrapper[4652]: I0216 17:40:52.615500 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-etc-ovs\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.615647 master-0 kubenswrapper[4652]: I0216 17:40:52.615534 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c938ffa5-271f-4685-b9c2-c236001d07b4-var-run\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.615647 master-0 kubenswrapper[4652]: I0216 17:40:52.615603 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvk6j\" (UniqueName: \"kubernetes.io/projected/c938ffa5-271f-4685-b9c2-c236001d07b4-kube-api-access-xvk6j\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.615647 master-0 kubenswrapper[4652]: I0216 17:40:52.615644 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwwv\" (UniqueName: \"kubernetes.io/projected/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-kube-api-access-gxwwv\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.615788 master-0 kubenswrapper[4652]: I0216 17:40:52.615662 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c938ffa5-271f-4685-b9c2-c236001d07b4-var-log-ovn\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.615788 master-0 kubenswrapper[4652]: I0216 17:40:52.615683 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-var-lib\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.615788 master-0 kubenswrapper[4652]: I0216 17:40:52.615704 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-var-log\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.615788 master-0 kubenswrapper[4652]: I0216 17:40:52.615724 4652 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-var-run\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.615788 master-0 kubenswrapper[4652]: I0216 17:40:52.615755 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c938ffa5-271f-4685-b9c2-c236001d07b4-ovn-controller-tls-certs\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.615788 master-0 kubenswrapper[4652]: I0216 17:40:52.615774 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c938ffa5-271f-4685-b9c2-c236001d07b4-combined-ca-bundle\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.615979 master-0 kubenswrapper[4652]: I0216 17:40:52.615793 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-scripts\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.615979 master-0 kubenswrapper[4652]: I0216 17:40:52.615810 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c938ffa5-271f-4685-b9c2-c236001d07b4-scripts\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.723897 master-0 kubenswrapper[4652]: I0216 17:40:52.723814 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c938ffa5-271f-4685-b9c2-c236001d07b4-var-run-ovn\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.723897 master-0 kubenswrapper[4652]: I0216 17:40:52.723889 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-etc-ovs\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.724268 master-0 kubenswrapper[4652]: I0216 17:40:52.723928 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c938ffa5-271f-4685-b9c2-c236001d07b4-var-run\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.724438 master-0 kubenswrapper[4652]: I0216 17:40:52.724397 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvk6j\" (UniqueName: \"kubernetes.io/projected/c938ffa5-271f-4685-b9c2-c236001d07b4-kube-api-access-xvk6j\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.724778 master-0 kubenswrapper[4652]: I0216 17:40:52.724736 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-etc-ovs\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.724860 master-0 kubenswrapper[4652]: I0216 17:40:52.724625 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c938ffa5-271f-4685-b9c2-c236001d07b4-var-run\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.724860 master-0 kubenswrapper[4652]: I0216 17:40:52.724590 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c938ffa5-271f-4685-b9c2-c236001d07b4-var-run-ovn\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.725217 master-0 kubenswrapper[4652]: I0216 17:40:52.725160 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxwwv\" (UniqueName: \"kubernetes.io/projected/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-kube-api-access-gxwwv\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.725317 master-0 kubenswrapper[4652]: I0216 17:40:52.725240 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c938ffa5-271f-4685-b9c2-c236001d07b4-var-log-ovn\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.725317 master-0 kubenswrapper[4652]: I0216 17:40:52.725294 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-var-lib\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.725422 master-0 kubenswrapper[4652]: I0216 17:40:52.725350 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-var-log\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.725422 master-0 kubenswrapper[4652]: I0216 17:40:52.725388 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-var-run\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.725553 master-0 kubenswrapper[4652]: I0216 17:40:52.725531 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c938ffa5-271f-4685-b9c2-c236001d07b4-ovn-controller-tls-certs\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.725616 master-0 kubenswrapper[4652]: I0216 17:40:52.725594 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c938ffa5-271f-4685-b9c2-c236001d07b4-combined-ca-bundle\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.725711 master-0 kubenswrapper[4652]: I0216 17:40:52.725639 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-scripts\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.725919 master-0 kubenswrapper[4652]: I0216 17:40:52.725784 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-var-log\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.725919 master-0 kubenswrapper[4652]: I0216 17:40:52.725798 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c938ffa5-271f-4685-b9c2-c236001d07b4-scripts\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.725919 master-0 kubenswrapper[4652]: I0216 17:40:52.725685 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c938ffa5-271f-4685-b9c2-c236001d07b4-var-log-ovn\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.725919 master-0 kubenswrapper[4652]: I0216 17:40:52.725891 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-var-lib\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.726211 master-0 kubenswrapper[4652]: I0216 17:40:52.726187 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-var-run\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.728767 master-0 kubenswrapper[4652]: I0216 17:40:52.728728 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c938ffa5-271f-4685-b9c2-c236001d07b4-scripts\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.741821 master-0 kubenswrapper[4652]: I0216 17:40:52.737589 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-scripts\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.741821 master-0 kubenswrapper[4652]: I0216 17:40:52.741131 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c938ffa5-271f-4685-b9c2-c236001d07b4-ovn-controller-tls-certs\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " 
pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.745298 master-0 kubenswrapper[4652]: I0216 17:40:52.745050 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxwwv\" (UniqueName: \"kubernetes.io/projected/a6ea7a52-9270-45fe-b0dd-d80bac7c3a75-kube-api-access-gxwwv\") pod \"ovn-controller-ovs-bmlhg\" (UID: \"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75\") " pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:52.748263 master-0 kubenswrapper[4652]: I0216 17:40:52.748211 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvk6j\" (UniqueName: \"kubernetes.io/projected/c938ffa5-271f-4685-b9c2-c236001d07b4-kube-api-access-xvk6j\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.748908 master-0 kubenswrapper[4652]: I0216 17:40:52.748872 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c938ffa5-271f-4685-b9c2-c236001d07b4-combined-ca-bundle\") pod \"ovn-controller-5qcmk\" (UID: \"c938ffa5-271f-4685-b9c2-c236001d07b4\") " pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.804658 master-0 kubenswrapper[4652]: I0216 17:40:52.804602 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qcmk" Feb 16 17:40:52.818645 master-0 kubenswrapper[4652]: I0216 17:40:52.818598 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:40:54.348051 master-0 kubenswrapper[4652]: I0216 17:40:54.348007 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 17:40:54.352509 master-0 kubenswrapper[4652]: I0216 17:40:54.350480 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.357405 master-0 kubenswrapper[4652]: I0216 17:40:54.353442 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 17:40:54.357405 master-0 kubenswrapper[4652]: I0216 17:40:54.355768 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 17:40:54.357405 master-0 kubenswrapper[4652]: I0216 17:40:54.356884 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 17:40:54.357405 master-0 kubenswrapper[4652]: I0216 17:40:54.357113 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 17:40:54.366619 master-0 kubenswrapper[4652]: I0216 17:40:54.366579 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 17:40:54.464650 master-0 kubenswrapper[4652]: I0216 17:40:54.464591 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.464840 master-0 kubenswrapper[4652]: I0216 17:40:54.464677 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.464840 master-0 kubenswrapper[4652]: I0216 17:40:54.464771 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b561d12d-f636-4387-af82-5cefe4c15491\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a1077481-5eba-4716-86ec-f3d83adfc7d0\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.465038 master-0 kubenswrapper[4652]: I0216 17:40:54.464940 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.465038 master-0 kubenswrapper[4652]: I0216 17:40:54.465025 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.465311 master-0 kubenswrapper[4652]: I0216 17:40:54.465286 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.465398 master-0 kubenswrapper[4652]: I0216 17:40:54.465338 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.465458 master-0 kubenswrapper[4652]: I0216 17:40:54.465411 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwv86\" (UniqueName: \"kubernetes.io/projected/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-kube-api-access-rwv86\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.567519 master-0 kubenswrapper[4652]: I0216 17:40:54.567465 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b561d12d-f636-4387-af82-5cefe4c15491\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a1077481-5eba-4716-86ec-f3d83adfc7d0\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.567810 master-0 kubenswrapper[4652]: I0216 17:40:54.567562 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.567810 master-0 kubenswrapper[4652]: I0216 17:40:54.567763 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.568039 master-0 kubenswrapper[4652]: I0216 17:40:54.568021 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.568076 master-0 kubenswrapper[4652]: I0216 17:40:54.568052 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.568110 master-0 kubenswrapper[4652]: I0216 17:40:54.568098 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwv86\" (UniqueName: \"kubernetes.io/projected/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-kube-api-access-rwv86\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.568747 master-0 kubenswrapper[4652]: I0216 17:40:54.568726 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.568893 master-0 kubenswrapper[4652]: I0216 17:40:54.568787 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-ovsdb-rundir\") pod 
\"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.569532 master-0 kubenswrapper[4652]: I0216 17:40:54.569508 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.570574 master-0 kubenswrapper[4652]: I0216 17:40:54.570540 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.570574 master-0 kubenswrapper[4652]: I0216 17:40:54.570549 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-config\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.570988 master-0 kubenswrapper[4652]: I0216 17:40:54.570940 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:40:54.571069 master-0 kubenswrapper[4652]: I0216 17:40:54.571011 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b561d12d-f636-4387-af82-5cefe4c15491\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a1077481-5eba-4716-86ec-f3d83adfc7d0\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/1e7e1b9cb90bea61408cd0bddf9ef7f588192c09e074188e80ab7d3636678cd3/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.572570 master-0 kubenswrapper[4652]: I0216 17:40:54.572523 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.572667 master-0 kubenswrapper[4652]: I0216 17:40:54.572566 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.574780 master-0 kubenswrapper[4652]: I0216 17:40:54.574731 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:54.584698 master-0 kubenswrapper[4652]: I0216 17:40:54.584629 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwv86\" (UniqueName: \"kubernetes.io/projected/ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12-kube-api-access-rwv86\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:56.065377 master-0 kubenswrapper[4652]: I0216 17:40:56.065327 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b561d12d-f636-4387-af82-5cefe4c15491\" (UniqueName: \"kubernetes.io/csi/topolvm.io^a1077481-5eba-4716-86ec-f3d83adfc7d0\") pod \"ovsdbserver-nb-0\" (UID: \"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12\") " pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:56.250158 master-0 kubenswrapper[4652]: I0216 17:40:56.250029 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 17:40:57.139323 master-0 kubenswrapper[4652]: I0216 17:40:57.139242 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 17:40:57.140846 master-0 kubenswrapper[4652]: I0216 17:40:57.140810 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.142912 master-0 kubenswrapper[4652]: I0216 17:40:57.142871 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 17:40:57.143125 master-0 kubenswrapper[4652]: I0216 17:40:57.143094 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 17:40:57.143241 master-0 kubenswrapper[4652]: I0216 17:40:57.143224 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 17:40:57.156715 master-0 kubenswrapper[4652]: I0216 17:40:57.156665 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 17:40:57.232876 master-0 kubenswrapper[4652]: I0216 17:40:57.230586 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-af0b6829-784b-4f79-97ef-a1c9d87dfe2b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^26b0f30c-f3e4-44d1-97e9-49a609673ad7\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.232876 master-0 kubenswrapper[4652]: I0216 17:40:57.230632 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a4ecc9-bb63-4021-abd6-bac36eec8181-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.232876 master-0 kubenswrapper[4652]: I0216 17:40:57.230660 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15a4ecc9-bb63-4021-abd6-bac36eec8181-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.232876 master-0 kubenswrapper[4652]: I0216 17:40:57.230680 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a4ecc9-bb63-4021-abd6-bac36eec8181-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.232876 master-0 kubenswrapper[4652]: I0216 17:40:57.230719 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a4ecc9-bb63-4021-abd6-bac36eec8181-config\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 
17:40:57.232876 master-0 kubenswrapper[4652]: I0216 17:40:57.230853 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a4ecc9-bb63-4021-abd6-bac36eec8181-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.232876 master-0 kubenswrapper[4652]: I0216 17:40:57.230890 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thxmj\" (UniqueName: \"kubernetes.io/projected/15a4ecc9-bb63-4021-abd6-bac36eec8181-kube-api-access-thxmj\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.232876 master-0 kubenswrapper[4652]: I0216 17:40:57.231229 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/15a4ecc9-bb63-4021-abd6-bac36eec8181-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.236303 master-0 kubenswrapper[4652]: I0216 17:40:57.236053 4652 generic.go:334] "Generic (PLEG): container finished" podID="5895aff2-c4f5-42f3-a422-9ef5ea305756" containerID="392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad" exitCode=0 Feb 16 17:40:57.236303 master-0 kubenswrapper[4652]: I0216 17:40:57.236112 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" event={"ID":"5895aff2-c4f5-42f3-a422-9ef5ea305756","Type":"ContainerDied","Data":"392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad"} Feb 16 17:40:57.240270 master-0 kubenswrapper[4652]: I0216 17:40:57.240205 4652 generic.go:334] "Generic (PLEG): container finished" podID="1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8" containerID="247d34bdc7a39167eecd49732fae3a98f316666dedf586aa933f262c0ce848b1" exitCode=0 Feb 16 17:40:57.240377 master-0 kubenswrapper[4652]: I0216 17:40:57.240291 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-vxnqn" event={"ID":"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8","Type":"ContainerDied","Data":"247d34bdc7a39167eecd49732fae3a98f316666dedf586aa933f262c0ce848b1"} Feb 16 17:40:57.242782 master-0 kubenswrapper[4652]: I0216 17:40:57.242528 4652 generic.go:334] "Generic (PLEG): container finished" podID="483fd99f-b1d8-4755-8635-78d8508f079f" containerID="1839cd71cd19eea96a2900ce4cc01260807919f87b5e182e15d2072d8fc1af58" exitCode=0 Feb 16 17:40:57.242782 master-0 kubenswrapper[4652]: I0216 17:40:57.242578 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" event={"ID":"483fd99f-b1d8-4755-8635-78d8508f079f","Type":"ContainerDied","Data":"1839cd71cd19eea96a2900ce4cc01260807919f87b5e182e15d2072d8fc1af58"} Feb 16 17:40:57.333934 master-0 kubenswrapper[4652]: I0216 17:40:57.332963 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-af0b6829-784b-4f79-97ef-a1c9d87dfe2b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^26b0f30c-f3e4-44d1-97e9-49a609673ad7\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.333934 master-0 kubenswrapper[4652]: I0216 17:40:57.333011 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a4ecc9-bb63-4021-abd6-bac36eec8181-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.333934 master-0 kubenswrapper[4652]: I0216 17:40:57.333184 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15a4ecc9-bb63-4021-abd6-bac36eec8181-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.333934 master-0 kubenswrapper[4652]: I0216 17:40:57.333257 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a4ecc9-bb63-4021-abd6-bac36eec8181-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.333934 master-0 kubenswrapper[4652]: I0216 17:40:57.333390 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a4ecc9-bb63-4021-abd6-bac36eec8181-config\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.333934 master-0 kubenswrapper[4652]: I0216 17:40:57.333883 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a4ecc9-bb63-4021-abd6-bac36eec8181-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.333934 master-0 kubenswrapper[4652]: I0216 17:40:57.333939 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thxmj\" (UniqueName: \"kubernetes.io/projected/15a4ecc9-bb63-4021-abd6-bac36eec8181-kube-api-access-thxmj\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.334367 master-0 kubenswrapper[4652]: I0216 17:40:57.334094 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/15a4ecc9-bb63-4021-abd6-bac36eec8181-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.334417 master-0 kubenswrapper[4652]: I0216 17:40:57.334383 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15a4ecc9-bb63-4021-abd6-bac36eec8181-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.334628 master-0 kubenswrapper[4652]: I0216 17:40:57.334579 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15a4ecc9-bb63-4021-abd6-bac36eec8181-config\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.334852 master-0 kubenswrapper[4652]: I0216 17:40:57.334800 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/15a4ecc9-bb63-4021-abd6-bac36eec8181-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " 
pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.337233 master-0 kubenswrapper[4652]: I0216 17:40:57.337193 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:40:57.337385 master-0 kubenswrapper[4652]: I0216 17:40:57.337238 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-af0b6829-784b-4f79-97ef-a1c9d87dfe2b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^26b0f30c-f3e4-44d1-97e9-49a609673ad7\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/b7451980a0b63f88da92f5e71a2600a42966f0bb2a5187d85bb5dffc749960d8/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.337758 master-0 kubenswrapper[4652]: I0216 17:40:57.337718 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a4ecc9-bb63-4021-abd6-bac36eec8181-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.338267 master-0 kubenswrapper[4652]: I0216 17:40:57.338214 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a4ecc9-bb63-4021-abd6-bac36eec8181-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.338341 master-0 kubenswrapper[4652]: I0216 17:40:57.338239 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/15a4ecc9-bb63-4021-abd6-bac36eec8181-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.351765 master-0 kubenswrapper[4652]: I0216 17:40:57.351706 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thxmj\" (UniqueName: \"kubernetes.io/projected/15a4ecc9-bb63-4021-abd6-bac36eec8181-kube-api-access-thxmj\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:57.494839 master-0 kubenswrapper[4652]: I0216 17:40:57.494758 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5qcmk"] Feb 16 17:40:57.508519 master-0 kubenswrapper[4652]: I0216 17:40:57.508452 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 17:40:57.520848 master-0 kubenswrapper[4652]: I0216 17:40:57.520708 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 17:40:57.539228 master-0 kubenswrapper[4652]: W0216 17:40:57.539158 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc938ffa5_271f_4685_b9c2_c236001d07b4.slice/crio-635e4096066e7857325896e74c1d9abf89d9612e8f138551b79507ba67226579 WatchSource:0}: Error finding container 635e4096066e7857325896e74c1d9abf89d9612e8f138551b79507ba67226579: Status 404 returned error can't find the container with id 635e4096066e7857325896e74c1d9abf89d9612e8f138551b79507ba67226579 Feb 16 17:40:57.770386 master-0 kubenswrapper[4652]: I0216 17:40:57.770055 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 17:40:57.819560 
master-0 kubenswrapper[4652]: W0216 17:40:57.819502 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c40b200_e1ed_461b_86c5_d23dce6ceb35.slice/crio-9f4d6ffb1f74157a889296e310e77c1703be798b5241ef9dd2016d9c0437493f WatchSource:0}: Error finding container 9f4d6ffb1f74157a889296e310e77c1703be798b5241ef9dd2016d9c0437493f: Status 404 returned error can't find the container with id 9f4d6ffb1f74157a889296e310e77c1703be798b5241ef9dd2016d9c0437493f Feb 16 17:40:57.889085 master-0 kubenswrapper[4652]: I0216 17:40:57.889001 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 17:40:57.898551 master-0 kubenswrapper[4652]: W0216 17:40:57.898446 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab1d9aca_fd20_49ca_ae53_ff9ddd6ebd12.slice/crio-0fbfe835c57c5ee7098f53c31da53d80a08608add4a5445797f44401621d3251 WatchSource:0}: Error finding container 0fbfe835c57c5ee7098f53c31da53d80a08608add4a5445797f44401621d3251: Status 404 returned error can't find the container with id 0fbfe835c57c5ee7098f53c31da53d80a08608add4a5445797f44401621d3251 Feb 16 17:40:58.077768 master-0 kubenswrapper[4652]: W0216 17:40:58.076348 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac815e50_ebe0_4937_b781_9396e09bc55d.slice/crio-f252b0d8981a588c357ed8f9aad59616796bb8cb9fee603ceb107db048087533 WatchSource:0}: Error finding container f252b0d8981a588c357ed8f9aad59616796bb8cb9fee603ceb107db048087533: Status 404 returned error can't find the container with id f252b0d8981a588c357ed8f9aad59616796bb8cb9fee603ceb107db048087533 Feb 16 17:40:58.077768 master-0 kubenswrapper[4652]: W0216 17:40:58.077472 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda39e35c_827b_40f3_8359_db6934118af4.slice/crio-6d55cea5def90f7d01c6e1cd6be1228677e8ce1782af1d555cd0d96c4a726bc5 WatchSource:0}: Error finding container 6d55cea5def90f7d01c6e1cd6be1228677e8ce1782af1d555cd0d96c4a726bc5: Status 404 returned error can't find the container with id 6d55cea5def90f7d01c6e1cd6be1228677e8ce1782af1d555cd0d96c4a726bc5 Feb 16 17:40:58.083152 master-0 kubenswrapper[4652]: I0216 17:40:58.083057 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 17:40:58.092502 master-0 kubenswrapper[4652]: I0216 17:40:58.092408 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 17:40:58.257971 master-0 kubenswrapper[4652]: I0216 17:40:58.256525 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d78499c-vxnqn" event={"ID":"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8","Type":"ContainerDied","Data":"d42ef72881a66c5040c2af156705587d82444f0a459aed0eebd6a2f6b9f338d2"} Feb 16 17:40:58.257971 master-0 kubenswrapper[4652]: I0216 17:40:58.256571 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d42ef72881a66c5040c2af156705587d82444f0a459aed0eebd6a2f6b9f338d2" Feb 16 17:40:58.259793 master-0 kubenswrapper[4652]: I0216 17:40:58.259491 4652 generic.go:334] "Generic (PLEG): container finished" podID="62508adf-ee70-4ca9-ba5f-7422b4cbacd9" containerID="9133e297e77313656a6cd283ec8c843e197ccde9336db1cee576b1e38c396d04" exitCode=0 Feb 16 17:40:58.259793 master-0 kubenswrapper[4652]: 
I0216 17:40:58.259563 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" event={"ID":"62508adf-ee70-4ca9-ba5f-7422b4cbacd9","Type":"ContainerDied","Data":"9133e297e77313656a6cd283ec8c843e197ccde9336db1cee576b1e38c396d04"} Feb 16 17:40:58.262118 master-0 kubenswrapper[4652]: I0216 17:40:58.262068 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" event={"ID":"5895aff2-c4f5-42f3-a422-9ef5ea305756","Type":"ContainerStarted","Data":"f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed"} Feb 16 17:40:58.262217 master-0 kubenswrapper[4652]: I0216 17:40:58.262173 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:40:58.263627 master-0 kubenswrapper[4652]: I0216 17:40:58.263585 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da39e35c-827b-40f3-8359-db6934118af4","Type":"ContainerStarted","Data":"6d55cea5def90f7d01c6e1cd6be1228677e8ce1782af1d555cd0d96c4a726bc5"} Feb 16 17:40:58.265107 master-0 kubenswrapper[4652]: I0216 17:40:58.265068 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12","Type":"ContainerStarted","Data":"0fbfe835c57c5ee7098f53c31da53d80a08608add4a5445797f44401621d3251"} Feb 16 17:40:58.266277 master-0 kubenswrapper[4652]: I0216 17:40:58.266213 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90814419-59bd-4110-8afa-6842e5fa7b95","Type":"ContainerStarted","Data":"29a9e8feb34e357f6b0c922de6805077c0894823d4ff917d046a58560dcaf19b"} Feb 16 17:40:58.267181 master-0 kubenswrapper[4652]: I0216 17:40:58.267123 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6c40b200-e1ed-461b-86c5-d23dce6ceb35","Type":"ContainerStarted","Data":"9f4d6ffb1f74157a889296e310e77c1703be798b5241ef9dd2016d9c0437493f"} Feb 16 17:40:58.268301 master-0 kubenswrapper[4652]: I0216 17:40:58.268260 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qcmk" event={"ID":"c938ffa5-271f-4685-b9c2-c236001d07b4","Type":"ContainerStarted","Data":"635e4096066e7857325896e74c1d9abf89d9612e8f138551b79507ba67226579"} Feb 16 17:40:58.269871 master-0 kubenswrapper[4652]: I0216 17:40:58.269833 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" event={"ID":"483fd99f-b1d8-4755-8635-78d8508f079f","Type":"ContainerDied","Data":"d3f9f11b870d9226726d18e572fcb4d22eed24e8f2cd5c4ab90d9f9d53210542"} Feb 16 17:40:58.269871 master-0 kubenswrapper[4652]: I0216 17:40:58.269867 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f9f11b870d9226726d18e572fcb4d22eed24e8f2cd5c4ab90d9f9d53210542" Feb 16 17:40:58.272202 master-0 kubenswrapper[4652]: I0216 17:40:58.271616 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e6a7c073-8562-4706-b5be-41ea098db1ab","Type":"ContainerStarted","Data":"08c57797490233e154512447c8f35e92da6f2226bf3d121b2a7e7cc5e43ada49"} Feb 16 17:40:58.275352 master-0 kubenswrapper[4652]: I0216 17:40:58.275294 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"ac815e50-ebe0-4937-b781-9396e09bc55d","Type":"ContainerStarted","Data":"f252b0d8981a588c357ed8f9aad59616796bb8cb9fee603ceb107db048087533"} Feb 16 17:40:58.284998 master-0 kubenswrapper[4652]: I0216 17:40:58.284895 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:58.302278 master-0 kubenswrapper[4652]: I0216 17:40:58.300189 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:58.310564 master-0 kubenswrapper[4652]: I0216 17:40:58.310427 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-bmlhg"] Feb 16 17:40:58.320706 master-0 kubenswrapper[4652]: I0216 17:40:58.320611 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" podStartSLOduration=3.289764272 podStartE2EDuration="16.320591944s" podCreationTimestamp="2026-02-16 17:40:42 +0000 UTC" firstStartedPulling="2026-02-16 17:40:43.714030542 +0000 UTC m=+1001.102199058" lastFinishedPulling="2026-02-16 17:40:56.744858214 +0000 UTC m=+1014.133026730" observedRunningTime="2026-02-16 17:40:58.303142126 +0000 UTC m=+1015.691310662" watchObservedRunningTime="2026-02-16 17:40:58.320591944 +0000 UTC m=+1015.708760460" Feb 16 17:40:58.371914 master-0 kubenswrapper[4652]: I0216 17:40:58.371870 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-config\") pod \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " Feb 16 17:40:58.372136 master-0 kubenswrapper[4652]: I0216 17:40:58.372024 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7z6h\" (UniqueName: \"kubernetes.io/projected/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-kube-api-access-n7z6h\") pod \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " Feb 16 17:40:58.372136 master-0 kubenswrapper[4652]: I0216 17:40:58.372067 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483fd99f-b1d8-4755-8635-78d8508f079f-config\") pod \"483fd99f-b1d8-4755-8635-78d8508f079f\" (UID: \"483fd99f-b1d8-4755-8635-78d8508f079f\") " Feb 16 17:40:58.372136 master-0 kubenswrapper[4652]: I0216 17:40:58.372130 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t7ns\" (UniqueName: \"kubernetes.io/projected/483fd99f-b1d8-4755-8635-78d8508f079f-kube-api-access-5t7ns\") pod \"483fd99f-b1d8-4755-8635-78d8508f079f\" (UID: \"483fd99f-b1d8-4755-8635-78d8508f079f\") " Feb 16 17:40:58.372333 master-0 kubenswrapper[4652]: I0216 17:40:58.372226 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-dns-svc\") pod \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\" (UID: \"1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8\") " Feb 16 17:40:58.379494 master-0 kubenswrapper[4652]: I0216 17:40:58.379440 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483fd99f-b1d8-4755-8635-78d8508f079f-kube-api-access-5t7ns" (OuterVolumeSpecName: "kube-api-access-5t7ns") pod "483fd99f-b1d8-4755-8635-78d8508f079f" (UID: "483fd99f-b1d8-4755-8635-78d8508f079f"). 
InnerVolumeSpecName "kube-api-access-5t7ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:40:58.380607 master-0 kubenswrapper[4652]: I0216 17:40:58.380568 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-kube-api-access-n7z6h" (OuterVolumeSpecName: "kube-api-access-n7z6h") pod "1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8" (UID: "1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8"). InnerVolumeSpecName "kube-api-access-n7z6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:40:58.397345 master-0 kubenswrapper[4652]: I0216 17:40:58.397279 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-config" (OuterVolumeSpecName: "config") pod "1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8" (UID: "1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:40:58.400025 master-0 kubenswrapper[4652]: I0216 17:40:58.399976 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8" (UID: "1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:40:58.405167 master-0 kubenswrapper[4652]: I0216 17:40:58.405097 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/483fd99f-b1d8-4755-8635-78d8508f079f-config" (OuterVolumeSpecName: "config") pod "483fd99f-b1d8-4755-8635-78d8508f079f" (UID: "483fd99f-b1d8-4755-8635-78d8508f079f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:40:58.474655 master-0 kubenswrapper[4652]: I0216 17:40:58.474604 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:40:58.474655 master-0 kubenswrapper[4652]: I0216 17:40:58.474648 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7z6h\" (UniqueName: \"kubernetes.io/projected/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-kube-api-access-n7z6h\") on node \"master-0\" DevicePath \"\"" Feb 16 17:40:58.474655 master-0 kubenswrapper[4652]: I0216 17:40:58.474661 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/483fd99f-b1d8-4755-8635-78d8508f079f-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:40:58.475005 master-0 kubenswrapper[4652]: I0216 17:40:58.474671 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t7ns\" (UniqueName: \"kubernetes.io/projected/483fd99f-b1d8-4755-8635-78d8508f079f-kube-api-access-5t7ns\") on node \"master-0\" DevicePath \"\"" Feb 16 17:40:58.475005 master-0 kubenswrapper[4652]: I0216 17:40:58.474683 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:40:58.834020 master-0 kubenswrapper[4652]: I0216 17:40:58.833889 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-af0b6829-784b-4f79-97ef-a1c9d87dfe2b\" (UniqueName: \"kubernetes.io/csi/topolvm.io^26b0f30c-f3e4-44d1-97e9-49a609673ad7\") pod \"ovsdbserver-sb-0\" (UID: \"15a4ecc9-bb63-4021-abd6-bac36eec8181\") " pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:59.135526 master-0 kubenswrapper[4652]: I0216 17:40:59.135454 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 17:40:59.327349 master-0 kubenswrapper[4652]: I0216 17:40:59.326373 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bmlhg" event={"ID":"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75","Type":"ContainerStarted","Data":"18c470699205cb64ed293f6406bc3dfce044b70de6cb466efbc091e9084efc46"} Feb 16 17:40:59.341625 master-0 kubenswrapper[4652]: I0216 17:40:59.341584 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" event={"ID":"62508adf-ee70-4ca9-ba5f-7422b4cbacd9","Type":"ContainerStarted","Data":"728aae3844de89cdafdc43bb27a6c29fb0862feccc466095b675212ebfa0aded"} Feb 16 17:40:59.341940 master-0 kubenswrapper[4652]: I0216 17:40:59.341926 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d78499c-vxnqn" Feb 16 17:40:59.342425 master-0 kubenswrapper[4652]: I0216 17:40:59.342400 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6fb887-m6b8n" Feb 16 17:40:59.342926 master-0 kubenswrapper[4652]: I0216 17:40:59.342904 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:40:59.399756 master-0 kubenswrapper[4652]: I0216 17:40:59.399701 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-m6b8n"] Feb 16 17:40:59.407474 master-0 kubenswrapper[4652]: I0216 17:40:59.407368 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6fb887-m6b8n"] Feb 16 17:40:59.443941 master-0 kubenswrapper[4652]: I0216 17:40:59.443838 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" podStartSLOduration=4.01273607 podStartE2EDuration="17.443816302s" podCreationTimestamp="2026-02-16 17:40:42 +0000 UTC" firstStartedPulling="2026-02-16 17:40:43.390566301 +0000 UTC m=+1000.778734817" lastFinishedPulling="2026-02-16 17:40:56.821646533 +0000 UTC m=+1014.209815049" observedRunningTime="2026-02-16 17:40:59.441418248 +0000 UTC m=+1016.829586774" watchObservedRunningTime="2026-02-16 17:40:59.443816302 +0000 UTC m=+1016.831984818" Feb 16 17:40:59.488668 master-0 kubenswrapper[4652]: I0216 17:40:59.488200 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-vxnqn"] Feb 16 17:40:59.500782 master-0 kubenswrapper[4652]: I0216 17:40:59.500727 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d78499c-vxnqn"] Feb 16 17:41:00.764548 master-0 kubenswrapper[4652]: I0216 17:41:00.764502 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8" path="/var/lib/kubelet/pods/1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8/volumes" Feb 16 17:41:00.765171 master-0 kubenswrapper[4652]: I0216 17:41:00.765114 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483fd99f-b1d8-4755-8635-78d8508f079f" path="/var/lib/kubelet/pods/483fd99f-b1d8-4755-8635-78d8508f079f/volumes" Feb 16 17:41:03.269548 master-0 kubenswrapper[4652]: I0216 17:41:03.269494 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:41:03.355174 master-0 kubenswrapper[4652]: I0216 17:41:03.355095 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-9sfsg"] Feb 16 17:41:03.355491 master-0 kubenswrapper[4652]: I0216 17:41:03.355446 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" podUID="62508adf-ee70-4ca9-ba5f-7422b4cbacd9" containerName="dnsmasq-dns" containerID="cri-o://728aae3844de89cdafdc43bb27a6c29fb0862feccc466095b675212ebfa0aded" gracePeriod=10 Feb 16 17:41:03.379980 master-0 kubenswrapper[4652]: I0216 17:41:03.377463 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:41:03.835205 master-0 kubenswrapper[4652]: I0216 17:41:03.835157 4652 trace.go:236] Trace[501381793]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (16-Feb-2026 17:41:02.735) (total time: 1099ms): Feb 16 17:41:03.835205 master-0 kubenswrapper[4652]: Trace[501381793]: [1.09940269s] [1.09940269s] END Feb 16 17:41:04.407706 master-0 kubenswrapper[4652]: I0216 17:41:04.407617 4652 generic.go:334] "Generic (PLEG): container finished" 
podID="62508adf-ee70-4ca9-ba5f-7422b4cbacd9" containerID="728aae3844de89cdafdc43bb27a6c29fb0862feccc466095b675212ebfa0aded" exitCode=0 Feb 16 17:41:04.407706 master-0 kubenswrapper[4652]: I0216 17:41:04.407669 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" event={"ID":"62508adf-ee70-4ca9-ba5f-7422b4cbacd9","Type":"ContainerDied","Data":"728aae3844de89cdafdc43bb27a6c29fb0862feccc466095b675212ebfa0aded"} Feb 16 17:41:04.617697 master-0 kubenswrapper[4652]: I0216 17:41:04.617649 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:41:04.724965 master-0 kubenswrapper[4652]: I0216 17:41:04.724798 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 17:41:04.734480 master-0 kubenswrapper[4652]: I0216 17:41:04.734281 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-dns-svc\") pod \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " Feb 16 17:41:04.734699 master-0 kubenswrapper[4652]: I0216 17:41:04.734554 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-config\") pod \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " Feb 16 17:41:04.734699 master-0 kubenswrapper[4652]: I0216 17:41:04.734629 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llbgz\" (UniqueName: \"kubernetes.io/projected/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-kube-api-access-llbgz\") pod \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\" (UID: \"62508adf-ee70-4ca9-ba5f-7422b4cbacd9\") " Feb 16 17:41:04.804447 master-0 kubenswrapper[4652]: I0216 17:41:04.804380 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-kube-api-access-llbgz" (OuterVolumeSpecName: "kube-api-access-llbgz") pod "62508adf-ee70-4ca9-ba5f-7422b4cbacd9" (UID: "62508adf-ee70-4ca9-ba5f-7422b4cbacd9"). InnerVolumeSpecName "kube-api-access-llbgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:04.837939 master-0 kubenswrapper[4652]: I0216 17:41:04.837712 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llbgz\" (UniqueName: \"kubernetes.io/projected/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-kube-api-access-llbgz\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:05.418888 master-0 kubenswrapper[4652]: I0216 17:41:05.418827 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bmlhg" event={"ID":"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75","Type":"ContainerStarted","Data":"f834e5cf4ba3d96f251b84a0da480f335e677987625cc7f3c9c07e2d2042c6fc"} Feb 16 17:41:05.421775 master-0 kubenswrapper[4652]: I0216 17:41:05.421740 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" Feb 16 17:41:05.421775 master-0 kubenswrapper[4652]: I0216 17:41:05.421749 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bcd98d69f-9sfsg" event={"ID":"62508adf-ee70-4ca9-ba5f-7422b4cbacd9","Type":"ContainerDied","Data":"b238ebd221939ded4c778e8c7d1c37b0f5d846bc1d5ebad2015d0e5baba2c41d"} Feb 16 17:41:05.421922 master-0 kubenswrapper[4652]: I0216 17:41:05.421816 4652 scope.go:117] "RemoveContainer" containerID="728aae3844de89cdafdc43bb27a6c29fb0862feccc466095b675212ebfa0aded" Feb 16 17:41:05.423339 master-0 kubenswrapper[4652]: I0216 17:41:05.423312 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"e6a7c073-8562-4706-b5be-41ea098db1ab","Type":"ContainerStarted","Data":"c12110585848a044747f919b36d648eff3f424a30203eb5ce45e5fc7301ca9cb"} Feb 16 17:41:05.423482 master-0 kubenswrapper[4652]: I0216 17:41:05.423460 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 17:41:05.425542 master-0 kubenswrapper[4652]: I0216 17:41:05.425474 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ac815e50-ebe0-4937-b781-9396e09bc55d","Type":"ContainerStarted","Data":"d4999ee1d51af8b13f1e4bb0c7750a79209aef8a012fe90d4b78938d54a0ace9"} Feb 16 17:41:05.426959 master-0 kubenswrapper[4652]: I0216 17:41:05.426923 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12","Type":"ContainerStarted","Data":"54c7a42e5fb19d6124446dc80edb7c955b1e409181052a154d4a6f34bb6b8836"} Feb 16 17:41:05.428899 master-0 kubenswrapper[4652]: I0216 17:41:05.428861 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6c40b200-e1ed-461b-86c5-d23dce6ceb35","Type":"ContainerStarted","Data":"1eae8c33361935d453419b14c5dcb4ecbd05a38b76a2774c2d1ce94d4ab8893e"} Feb 16 17:41:05.430784 master-0 kubenswrapper[4652]: I0216 17:41:05.430733 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qcmk" event={"ID":"c938ffa5-271f-4685-b9c2-c236001d07b4","Type":"ContainerStarted","Data":"dc1f32dc4bec58abd1c50b8cec7187d1e770decc72c408d7376391395cb35bf3"} Feb 16 17:41:05.430902 master-0 kubenswrapper[4652]: I0216 17:41:05.430851 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-5qcmk" Feb 16 17:41:05.432007 master-0 kubenswrapper[4652]: I0216 17:41:05.431977 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"15a4ecc9-bb63-4021-abd6-bac36eec8181","Type":"ContainerStarted","Data":"d6a790dff21e27b52d04a6f8ff3a5d5e2e927686dedae9b2f08c8bea97bb2fbf"} Feb 16 17:41:05.443175 master-0 kubenswrapper[4652]: I0216 17:41:05.443128 4652 scope.go:117] "RemoveContainer" containerID="9133e297e77313656a6cd283ec8c843e197ccde9336db1cee576b1e38c396d04" Feb 16 17:41:05.468919 master-0 kubenswrapper[4652]: I0216 17:41:05.468818 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.951051874000001 podStartE2EDuration="19.468801328s" podCreationTimestamp="2026-02-16 17:40:46 +0000 UTC" firstStartedPulling="2026-02-16 17:40:57.545131937 +0000 UTC m=+1014.933300453" lastFinishedPulling="2026-02-16 17:41:04.062881391 +0000 UTC m=+1021.451049907" observedRunningTime="2026-02-16 17:41:05.460033923 +0000 UTC 
m=+1022.848202439" watchObservedRunningTime="2026-02-16 17:41:05.468801328 +0000 UTC m=+1022.856969844"
Feb 16 17:41:05.529274 master-0 kubenswrapper[4652]: I0216 17:41:05.527897 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-5qcmk" podStartSLOduration=6.990915491 podStartE2EDuration="13.52787551s" podCreationTimestamp="2026-02-16 17:40:52 +0000 UTC" firstStartedPulling="2026-02-16 17:40:57.544053038 +0000 UTC m=+1014.932221554" lastFinishedPulling="2026-02-16 17:41:04.081013057 +0000 UTC m=+1021.469181573" observedRunningTime="2026-02-16 17:41:05.525942229 +0000 UTC m=+1022.914110745" watchObservedRunningTime="2026-02-16 17:41:05.52787551 +0000 UTC m=+1022.916044026"
Feb 16 17:41:05.731883 master-0 kubenswrapper[4652]: I0216 17:41:05.731757 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62508adf-ee70-4ca9-ba5f-7422b4cbacd9" (UID: "62508adf-ee70-4ca9-ba5f-7422b4cbacd9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:05.740737 master-0 kubenswrapper[4652]: I0216 17:41:05.740682 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-config" (OuterVolumeSpecName: "config") pod "62508adf-ee70-4ca9-ba5f-7422b4cbacd9" (UID: "62508adf-ee70-4ca9-ba5f-7422b4cbacd9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:05.757158 master-0 kubenswrapper[4652]: I0216 17:41:05.757111 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:05.757158 master-0 kubenswrapper[4652]: I0216 17:41:05.757154 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62508adf-ee70-4ca9-ba5f-7422b4cbacd9-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:06.065292 master-0 kubenswrapper[4652]: I0216 17:41:06.060412 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-9sfsg"]
Feb 16 17:41:06.078326 master-0 kubenswrapper[4652]: I0216 17:41:06.069092 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bcd98d69f-9sfsg"]
Feb 16 17:41:06.443687 master-0 kubenswrapper[4652]: I0216 17:41:06.443642 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"15a4ecc9-bb63-4021-abd6-bac36eec8181","Type":"ContainerStarted","Data":"ec8fefb526b5cd8f4c0ed5d87743404c32e3aa7b279e93bc1ff91ae2202d10b4"}
Feb 16 17:41:06.445416 master-0 kubenswrapper[4652]: I0216 17:41:06.445385 4652 generic.go:334] "Generic (PLEG): container finished" podID="a6ea7a52-9270-45fe-b0dd-d80bac7c3a75" containerID="f834e5cf4ba3d96f251b84a0da480f335e677987625cc7f3c9c07e2d2042c6fc" exitCode=0
Feb 16 17:41:06.445477 master-0 kubenswrapper[4652]: I0216 17:41:06.445439 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bmlhg" event={"ID":"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75","Type":"ContainerDied","Data":"f834e5cf4ba3d96f251b84a0da480f335e677987625cc7f3c9c07e2d2042c6fc"}
Feb 16 17:41:06.448614 master-0 kubenswrapper[4652]: I0216 17:41:06.448544 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da39e35c-827b-40f3-8359-db6934118af4","Type":"ContainerStarted","Data":"2fcff6a59f4d5df284aaafe56df6e773582ee2a99a7c2f208bfa34f4ec111fb6"}
Feb 16 17:41:06.450091 master-0 kubenswrapper[4652]: I0216 17:41:06.450059 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90814419-59bd-4110-8afa-6842e5fa7b95","Type":"ContainerStarted","Data":"cae25c00637e89ee4db2640f6f994b062e7a8bf7bd5b5cf75f08008fe3e765d6"}
Feb 16 17:41:06.765387 master-0 kubenswrapper[4652]: I0216 17:41:06.765296 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62508adf-ee70-4ca9-ba5f-7422b4cbacd9" path="/var/lib/kubelet/pods/62508adf-ee70-4ca9-ba5f-7422b4cbacd9/volumes"
Feb 16 17:41:07.459357 master-0 kubenswrapper[4652]: I0216 17:41:07.459305 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ab1d9aca-fd20-49ca-ae53-ff9ddd6ebd12","Type":"ContainerStarted","Data":"0c428b9f3862a4ea98d106e2dcbf1ae32893bcc7e57c27a34d0d366993cf225a"}
Feb 16 17:41:07.462158 master-0 kubenswrapper[4652]: I0216 17:41:07.462118 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"15a4ecc9-bb63-4021-abd6-bac36eec8181","Type":"ContainerStarted","Data":"fb774f23d064dd2ad8822e36e0565c2d9c3ac28327522927827380cfa9f82568"}
Feb 16 17:41:07.465702 master-0 kubenswrapper[4652]: I0216 17:41:07.465661 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bmlhg" event={"ID":"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75","Type":"ContainerStarted","Data":"d22d542e6d61683c27c77ae2585836c14931dc5e5aaf96ccf4b0dfc49721e0eb"}
Feb 16 17:41:07.465702 master-0 kubenswrapper[4652]: I0216 17:41:07.465698 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bmlhg"
Feb 16 17:41:07.465850 master-0 kubenswrapper[4652]: I0216 17:41:07.465709 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-bmlhg" event={"ID":"a6ea7a52-9270-45fe-b0dd-d80bac7c3a75","Type":"ContainerStarted","Data":"81c3b24647d7862eb383de054440667047ff36967f35b4aa889325160f6accf9"}
Feb 16 17:41:07.466148 master-0 kubenswrapper[4652]: I0216 17:41:07.466121 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-bmlhg"
Feb 16 17:41:07.485234 master-0 kubenswrapper[4652]: I0216 17:41:07.485152 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.618807708 podStartE2EDuration="15.485133727s" podCreationTimestamp="2026-02-16 17:40:52 +0000 UTC" firstStartedPulling="2026-02-16 17:40:57.903299988 +0000 UTC m=+1015.291468504" lastFinishedPulling="2026-02-16 17:41:06.769626007 +0000 UTC m=+1024.157794523" observedRunningTime="2026-02-16 17:41:07.483888894 +0000 UTC m=+1024.872057420" watchObservedRunningTime="2026-02-16 17:41:07.485133727 +0000 UTC m=+1024.873302243"
Feb 16 17:41:07.514452 master-0 kubenswrapper[4652]: I0216 17:41:07.514302 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-bmlhg" podStartSLOduration=9.80929349 podStartE2EDuration="15.514278418s" podCreationTimestamp="2026-02-16 17:40:52 +0000 UTC" firstStartedPulling="2026-02-16 17:40:58.37608777 +0000 UTC m=+1015.764256286" lastFinishedPulling="2026-02-16 17:41:04.081072698 +0000 UTC m=+1021.469241214" observedRunningTime="2026-02-16 17:41:07.507782924 +0000 UTC m=+1024.895951460" watchObservedRunningTime="2026-02-16 17:41:07.514278418 +0000 UTC m=+1024.902446934"
Feb 16 17:41:07.538302 master-0 kubenswrapper[4652]: I0216 17:41:07.538199 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.570512674 podStartE2EDuration="12.538182209s" podCreationTimestamp="2026-02-16 17:40:55 +0000 UTC" firstStartedPulling="2026-02-16 17:41:04.808776355 +0000 UTC m=+1022.196944881" lastFinishedPulling="2026-02-16 17:41:06.77644591 +0000 UTC m=+1024.164614416" observedRunningTime="2026-02-16 17:41:07.530355889 +0000 UTC m=+1024.918524425" watchObservedRunningTime="2026-02-16 17:41:07.538182209 +0000 UTC m=+1024.926350725"
Feb 16 17:41:08.136135 master-0 kubenswrapper[4652]: I0216 17:41:08.136076 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Feb 16 17:41:08.250878 master-0 kubenswrapper[4652]: I0216 17:41:08.250820 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Feb 16 17:41:08.288577 master-0 kubenswrapper[4652]: I0216 17:41:08.288521 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Feb 16 17:41:08.474049 master-0 kubenswrapper[4652]: I0216 17:41:08.473997 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 16 17:41:09.136263 master-0 kubenswrapper[4652]: I0216 17:41:09.136165 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Feb 16 17:41:10.493906 master-0 kubenswrapper[4652]: I0216 17:41:10.493833 4652 generic.go:334] "Generic (PLEG): container finished" podID="ac815e50-ebe0-4937-b781-9396e09bc55d" containerID="d4999ee1d51af8b13f1e4bb0c7750a79209aef8a012fe90d4b78938d54a0ace9" exitCode=0
Feb 16 17:41:10.494581 master-0 kubenswrapper[4652]: I0216 17:41:10.493907 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ac815e50-ebe0-4937-b781-9396e09bc55d","Type":"ContainerDied","Data":"d4999ee1d51af8b13f1e4bb0c7750a79209aef8a012fe90d4b78938d54a0ace9"}
Feb 16 17:41:10.496992 master-0 kubenswrapper[4652]: I0216 17:41:10.496941 4652 generic.go:334] "Generic (PLEG): container finished" podID="6c40b200-e1ed-461b-86c5-d23dce6ceb35" containerID="1eae8c33361935d453419b14c5dcb4ecbd05a38b76a2774c2d1ce94d4ab8893e" exitCode=0
Feb 16 17:41:10.497069 master-0 kubenswrapper[4652]: I0216 17:41:10.496987 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6c40b200-e1ed-461b-86c5-d23dce6ceb35","Type":"ContainerDied","Data":"1eae8c33361935d453419b14c5dcb4ecbd05a38b76a2774c2d1ce94d4ab8893e"}
Feb 16 17:41:10.551135 master-0 kubenswrapper[4652]: I0216 17:41:10.551053 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 16 17:41:10.904688 master-0 kubenswrapper[4652]: I0216 17:41:10.904588 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"]
Feb 16 17:41:10.905665 master-0 kubenswrapper[4652]: E0216 17:41:10.905633 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8" containerName="init"
Feb 16 17:41:10.905750 master-0 kubenswrapper[4652]: I0216 17:41:10.905672 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8" containerName="init"
Feb 16 17:41:10.905750 master-0 kubenswrapper[4652]: E0216 17:41:10.905693 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483fd99f-b1d8-4755-8635-78d8508f079f" containerName="init"
Feb 16 17:41:10.905750 master-0 kubenswrapper[4652]: I0216 17:41:10.905701 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="483fd99f-b1d8-4755-8635-78d8508f079f" containerName="init"
Feb 16 17:41:10.905750 master-0 kubenswrapper[4652]: E0216 17:41:10.905739 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62508adf-ee70-4ca9-ba5f-7422b4cbacd9" containerName="dnsmasq-dns"
Feb 16 17:41:10.905750 master-0 kubenswrapper[4652]: I0216 17:41:10.905748 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="62508adf-ee70-4ca9-ba5f-7422b4cbacd9" containerName="dnsmasq-dns"
Feb 16 17:41:10.905976 master-0 kubenswrapper[4652]: E0216 17:41:10.905765 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62508adf-ee70-4ca9-ba5f-7422b4cbacd9" containerName="init"
Feb 16 17:41:10.905976 master-0 kubenswrapper[4652]: I0216 17:41:10.905774 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="62508adf-ee70-4ca9-ba5f-7422b4cbacd9" containerName="init"
Feb 16 17:41:10.906467 master-0 kubenswrapper[4652]: I0216 17:41:10.906440 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="483fd99f-b1d8-4755-8635-78d8508f079f" containerName="init"
Feb 16 17:41:10.906554 master-0 kubenswrapper[4652]: I0216 17:41:10.906483 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cde9d39-1a26-44c8-8ea3-d9d4bd2ecfb8" containerName="init"
Feb 16 17:41:10.906554 master-0 kubenswrapper[4652]: I0216 17:41:10.906509 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="62508adf-ee70-4ca9-ba5f-7422b4cbacd9" containerName="dnsmasq-dns"
Feb 16 17:41:10.907876 master-0 kubenswrapper[4652]: I0216 17:41:10.907844 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:10.910463 master-0 kubenswrapper[4652]: I0216 17:41:10.910417 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Feb 16 17:41:10.924005 master-0 kubenswrapper[4652]: I0216 17:41:10.923948 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"]
Feb 16 17:41:10.953165 master-0 kubenswrapper[4652]: I0216 17:41:10.952253 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-wcq82"]
Feb 16 17:41:10.954447 master-0 kubenswrapper[4652]: I0216 17:41:10.953931 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:10.958788 master-0 kubenswrapper[4652]: I0216 17:41:10.958656 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Feb 16 17:41:10.965724 master-0 kubenswrapper[4652]: I0216 17:41:10.965681 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wcq82"]
Feb 16 17:41:11.005893 master-0 kubenswrapper[4652]: I0216 17:41:11.005471 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmfq7\" (UniqueName: \"kubernetes.io/projected/80a25250-e608-4b4a-80db-83054184408d-kube-api-access-tmfq7\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.005893 master-0 kubenswrapper[4652]: I0216 17:41:11.005524 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-config\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.005893 master-0 kubenswrapper[4652]: I0216 17:41:11.005584 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.005893 master-0 kubenswrapper[4652]: I0216 17:41:11.005619 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.113658 master-0 kubenswrapper[4652]: I0216 17:41:11.113144 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.113944 master-0 kubenswrapper[4652]: I0216 17:41:11.113702 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99665ac-d438-4a45-950a-fd2446c020cf-combined-ca-bundle\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.114040 master-0 kubenswrapper[4652]: I0216 17:41:11.114011 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c99665ac-d438-4a45-950a-fd2446c020cf-ovn-rundir\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.114155 master-0 kubenswrapper[4652]: I0216 17:41:11.114114 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99665ac-d438-4a45-950a-fd2446c020cf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.114227 master-0 kubenswrapper[4652]: I0216 17:41:11.114208 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whf4l\" (UniqueName: \"kubernetes.io/projected/c99665ac-d438-4a45-950a-fd2446c020cf-kube-api-access-whf4l\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.114384 master-0 kubenswrapper[4652]: I0216 17:41:11.114128 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-ovsdbserver-nb\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.114722 master-0 kubenswrapper[4652]: I0216 17:41:11.114690 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmfq7\" (UniqueName: \"kubernetes.io/projected/80a25250-e608-4b4a-80db-83054184408d-kube-api-access-tmfq7\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.114818 master-0 kubenswrapper[4652]: I0216 17:41:11.114738 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-config\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.114818 master-0 kubenswrapper[4652]: I0216 17:41:11.114794 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c99665ac-d438-4a45-950a-fd2446c020cf-config\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.114949 master-0 kubenswrapper[4652]: I0216 17:41:11.114842 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c99665ac-d438-4a45-950a-fd2446c020cf-ovs-rundir\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.115002 master-0 kubenswrapper[4652]: I0216 17:41:11.114950 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.117895 master-0 kubenswrapper[4652]: I0216 17:41:11.117858 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-config\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.118240 master-0 kubenswrapper[4652]: I0216 17:41:11.118158 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-dns-svc\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.134093 master-0 kubenswrapper[4652]: I0216 17:41:11.134042 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmfq7\" (UniqueName: \"kubernetes.io/projected/80a25250-e608-4b4a-80db-83054184408d-kube-api-access-tmfq7\") pod \"dnsmasq-dns-7c8cfc46bf-dgb7m\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") " pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.179704 master-0 kubenswrapper[4652]: I0216 17:41:11.179647 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Feb 16 17:41:11.219157 master-0 kubenswrapper[4652]: I0216 17:41:11.219098 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99665ac-d438-4a45-950a-fd2446c020cf-combined-ca-bundle\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.223764 master-0 kubenswrapper[4652]: I0216 17:41:11.219610 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c99665ac-d438-4a45-950a-fd2446c020cf-ovn-rundir\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.223764 master-0 kubenswrapper[4652]: I0216 17:41:11.219702 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99665ac-d438-4a45-950a-fd2446c020cf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.223764 master-0 kubenswrapper[4652]: I0216 17:41:11.219738 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whf4l\" (UniqueName: \"kubernetes.io/projected/c99665ac-d438-4a45-950a-fd2446c020cf-kube-api-access-whf4l\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.223764 master-0 kubenswrapper[4652]: I0216 17:41:11.219879 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c99665ac-d438-4a45-950a-fd2446c020cf-config\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.223764 master-0 kubenswrapper[4652]: I0216 17:41:11.219946 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c99665ac-d438-4a45-950a-fd2446c020cf-ovs-rundir\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.223764 master-0 kubenswrapper[4652]: I0216 17:41:11.222382 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c99665ac-d438-4a45-950a-fd2446c020cf-ovs-rundir\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.223764 master-0 kubenswrapper[4652]: I0216 17:41:11.222459 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c99665ac-d438-4a45-950a-fd2446c020cf-ovn-rundir\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.223764 master-0 kubenswrapper[4652]: I0216 17:41:11.222389 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99665ac-d438-4a45-950a-fd2446c020cf-combined-ca-bundle\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.223764 master-0 kubenswrapper[4652]: I0216 17:41:11.222877 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c99665ac-d438-4a45-950a-fd2446c020cf-config\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.225651 master-0 kubenswrapper[4652]: I0216 17:41:11.225059 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c99665ac-d438-4a45-950a-fd2446c020cf-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.248292 master-0 kubenswrapper[4652]: I0216 17:41:11.245397 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whf4l\" (UniqueName: \"kubernetes.io/projected/c99665ac-d438-4a45-950a-fd2446c020cf-kube-api-access-whf4l\") pod \"ovn-controller-metrics-wcq82\" (UID: \"c99665ac-d438-4a45-950a-fd2446c020cf\") " pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.253091 master-0 kubenswrapper[4652]: I0216 17:41:11.253033 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"]
Feb 16 17:41:11.255501 master-0 kubenswrapper[4652]: I0216 17:41:11.254983 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:11.275406 master-0 kubenswrapper[4652]: I0216 17:41:11.275081 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-wcq82"
Feb 16 17:41:11.292291 master-0 kubenswrapper[4652]: I0216 17:41:11.292194 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-jwcwv"]
Feb 16 17:41:11.295307 master-0 kubenswrapper[4652]: I0216 17:41:11.295283 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.301176 master-0 kubenswrapper[4652]: I0216 17:41:11.301128 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Feb 16 17:41:11.310374 master-0 kubenswrapper[4652]: I0216 17:41:11.310322 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-jwcwv"]
Feb 16 17:41:11.434605 master-0 kubenswrapper[4652]: I0216 17:41:11.434505 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.435188 master-0 kubenswrapper[4652]: I0216 17:41:11.435165 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.435406 master-0 kubenswrapper[4652]: I0216 17:41:11.435386 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.435539 master-0 kubenswrapper[4652]: I0216 17:41:11.435523 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-config\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.435700 master-0 kubenswrapper[4652]: I0216 17:41:11.435684 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cnrz\" (UniqueName: \"kubernetes.io/projected/6aadc63b-7612-4341-8c2f-c4502c631e34-kube-api-access-4cnrz\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.511588 master-0 kubenswrapper[4652]: I0216 17:41:11.510931 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ac815e50-ebe0-4937-b781-9396e09bc55d","Type":"ContainerStarted","Data":"b1ea2c62ddc118e3c65b3b9b5ebf9ab8fae4e0b2a33f917f4e855c3667f772b1"}
Feb 16 17:41:11.532390 master-0 kubenswrapper[4652]: I0216 17:41:11.531923 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"6c40b200-e1ed-461b-86c5-d23dce6ceb35","Type":"ContainerStarted","Data":"f873bcf3359982e7ca861efa0b5756e66c0c963bed271b8c7995390b4ea47f5f"}
Feb 16 17:41:11.541900 master-0 kubenswrapper[4652]: I0216 17:41:11.538030 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.541900 master-0 kubenswrapper[4652]: I0216 17:41:11.538503 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.541900 master-0 kubenswrapper[4652]: I0216 17:41:11.538598 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-config\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.541900 master-0 kubenswrapper[4652]: I0216 17:41:11.538732 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cnrz\" (UniqueName: \"kubernetes.io/projected/6aadc63b-7612-4341-8c2f-c4502c631e34-kube-api-access-4cnrz\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.541900 master-0 kubenswrapper[4652]: I0216 17:41:11.538930 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.541900 master-0 kubenswrapper[4652]: I0216 17:41:11.540795 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-nb\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.541900 master-0 kubenswrapper[4652]: I0216 17:41:11.541204 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-dns-svc\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.541900 master-0 kubenswrapper[4652]: I0216 17:41:11.541525 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-config\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.541900 master-0 kubenswrapper[4652]: I0216 17:41:11.541563 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-sb\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.578081 master-0 kubenswrapper[4652]: I0216 17:41:11.556810 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cnrz\" (UniqueName: \"kubernetes.io/projected/6aadc63b-7612-4341-8c2f-c4502c631e34-kube-api-access-4cnrz\") pod \"dnsmasq-dns-7b9694dd79-jwcwv\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.578081 master-0 kubenswrapper[4652]: I0216 17:41:11.560884 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=21.555762128 podStartE2EDuration="27.56086532s" podCreationTimestamp="2026-02-16 17:40:44 +0000 UTC" firstStartedPulling="2026-02-16 17:40:58.078530735 +0000 UTC m=+1015.466699251" lastFinishedPulling="2026-02-16 17:41:04.083633927 +0000 UTC m=+1021.471802443" observedRunningTime="2026-02-16 17:41:11.537295979 +0000 UTC m=+1028.925464515" watchObservedRunningTime="2026-02-16 17:41:11.56086532 +0000 UTC m=+1028.949033836"
Feb 16 17:41:11.578081 master-0 kubenswrapper[4652]: I0216 17:41:11.575906 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.288685919 podStartE2EDuration="26.575883263s" podCreationTimestamp="2026-02-16 17:40:45 +0000 UTC" firstStartedPulling="2026-02-16 17:40:57.823637742 +0000 UTC m=+1015.211806258" lastFinishedPulling="2026-02-16 17:41:04.110835086 +0000 UTC m=+1021.499003602" observedRunningTime="2026-02-16 17:41:11.568045883 +0000 UTC m=+1028.956214399" watchObservedRunningTime="2026-02-16 17:41:11.575883263 +0000 UTC m=+1028.964051769"
Feb 16 17:41:11.581358 master-0 kubenswrapper[4652]: I0216 17:41:11.580372 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Feb 16 17:41:11.632354 master-0 kubenswrapper[4652]: I0216 17:41:11.632209 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:11.827535 master-0 kubenswrapper[4652]: I0216 17:41:11.826512 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 17:41:11.836082 master-0 kubenswrapper[4652]: I0216 17:41:11.828741 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 17:41:11.836082 master-0 kubenswrapper[4652]: I0216 17:41:11.831064 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 16 17:41:11.836082 master-0 kubenswrapper[4652]: I0216 17:41:11.831359 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 16 17:41:11.836082 master-0 kubenswrapper[4652]: I0216 17:41:11.831570 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 16 17:41:11.844348 master-0 kubenswrapper[4652]: I0216 17:41:11.844006 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 17:41:11.943925 master-0 kubenswrapper[4652]: I0216 17:41:11.943868 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-wcq82"]
Feb 16 17:41:11.944966 master-0 kubenswrapper[4652]: W0216 17:41:11.944754 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80a25250_e608_4b4a_80db_83054184408d.slice/crio-f9f9923a1fc79f016abae68e0c2dff54f87e5efebba5faa9418b58d5d4b0bb77 WatchSource:0}: Error finding container f9f9923a1fc79f016abae68e0c2dff54f87e5efebba5faa9418b58d5d4b0bb77: Status 404 returned error can't find the container with id f9f9923a1fc79f016abae68e0c2dff54f87e5efebba5faa9418b58d5d4b0bb77
Feb 16 17:41:11.970658 master-0 kubenswrapper[4652]: I0216 17:41:11.970607 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b6cc\" (UniqueName: \"kubernetes.io/projected/1cd36752-14be-4e58-8129-348694a45fd8-kube-api-access-5b6cc\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:11.971311 master-0 kubenswrapper[4652]: I0216 17:41:11.970679 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1cd36752-14be-4e58-8129-348694a45fd8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:11.971311 master-0 kubenswrapper[4652]: I0216 17:41:11.970715 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd36752-14be-4e58-8129-348694a45fd8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:11.971311 master-0 kubenswrapper[4652]: I0216 17:41:11.970743 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd36752-14be-4e58-8129-348694a45fd8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:11.971311 master-0 kubenswrapper[4652]: I0216 17:41:11.970802 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd36752-14be-4e58-8129-348694a45fd8-config\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:11.971311 master-0 kubenswrapper[4652]: I0216 17:41:11.970819 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1cd36752-14be-4e58-8129-348694a45fd8-scripts\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:11.971311 master-0 kubenswrapper[4652]: I0216 17:41:11.970958 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd36752-14be-4e58-8129-348694a45fd8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:11.975962 master-0 kubenswrapper[4652]: I0216 17:41:11.975933 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"]
Feb 16 17:41:12.073924 master-0 kubenswrapper[4652]: I0216 17:41:12.073012 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b6cc\" (UniqueName: \"kubernetes.io/projected/1cd36752-14be-4e58-8129-348694a45fd8-kube-api-access-5b6cc\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.073924 master-0 kubenswrapper[4652]: I0216 17:41:12.073189 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1cd36752-14be-4e58-8129-348694a45fd8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.073924 master-0 kubenswrapper[4652]: I0216 17:41:12.073232 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd36752-14be-4e58-8129-348694a45fd8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.073924 master-0 kubenswrapper[4652]: I0216 17:41:12.073281 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd36752-14be-4e58-8129-348694a45fd8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.073924 master-0 kubenswrapper[4652]: I0216 17:41:12.073399 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd36752-14be-4e58-8129-348694a45fd8-config\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.073924 master-0 kubenswrapper[4652]: I0216 17:41:12.073475 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1cd36752-14be-4e58-8129-348694a45fd8-scripts\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.073924 master-0 kubenswrapper[4652]: I0216 17:41:12.073529 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd36752-14be-4e58-8129-348694a45fd8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.076430 master-0 kubenswrapper[4652]: I0216 17:41:12.076090 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1cd36752-14be-4e58-8129-348694a45fd8-scripts\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.078209 master-0 kubenswrapper[4652]: I0216 17:41:12.078133 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1cd36752-14be-4e58-8129-348694a45fd8-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.080955 master-0 kubenswrapper[4652]: I0216 17:41:12.080909 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd36752-14be-4e58-8129-348694a45fd8-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.087909 master-0 kubenswrapper[4652]: I0216 17:41:12.087338 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd36752-14be-4e58-8129-348694a45fd8-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.090451 master-0 kubenswrapper[4652]: I0216 17:41:12.090392 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd36752-14be-4e58-8129-348694a45fd8-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.101349 master-0 kubenswrapper[4652]: I0216 17:41:12.101001 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b6cc\" (UniqueName: \"kubernetes.io/projected/1cd36752-14be-4e58-8129-348694a45fd8-kube-api-access-5b6cc\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.102503 master-0 kubenswrapper[4652]: I0216 17:41:12.102474 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd36752-14be-4e58-8129-348694a45fd8-config\") pod \"ovn-northd-0\" (UID: \"1cd36752-14be-4e58-8129-348694a45fd8\") " pod="openstack/ovn-northd-0"
Feb 16 17:41:12.157032 master-0 kubenswrapper[4652]: I0216 17:41:12.156963 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 16 17:41:12.165468 master-0 kubenswrapper[4652]: I0216 17:41:12.165368 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 17:41:12.228753 master-0 kubenswrapper[4652]: I0216 17:41:12.228050 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-jwcwv"]
Feb 16 17:41:12.244078 master-0 kubenswrapper[4652]: I0216 17:41:12.243979 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 16 17:41:12.244078 master-0 kubenswrapper[4652]: I0216 17:41:12.244063 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 16 17:41:12.542454 master-0 kubenswrapper[4652]: I0216 17:41:12.542381 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wcq82" event={"ID":"c99665ac-d438-4a45-950a-fd2446c020cf","Type":"ContainerStarted","Data":"c44654cc31c5c458b5ada0646f946f499d6cd7e672a7cd975d059e6d8ece5f89"}
Feb 16 17:41:12.542454 master-0 kubenswrapper[4652]: I0216 17:41:12.542430 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-wcq82" event={"ID":"c99665ac-d438-4a45-950a-fd2446c020cf","Type":"ContainerStarted","Data":"c69bdc0a424612d085d4acfc63972e53d16aaddfc5bebdebd24890156dd63fc8"}
Feb 16 17:41:12.544488 master-0 kubenswrapper[4652]: I0216 17:41:12.544449 4652 generic.go:334] "Generic (PLEG): container finished" podID="80a25250-e608-4b4a-80db-83054184408d" containerID="5b666e023c5674f2e52826e7fd774eab2ef30d0d77be53d6bc6a4e869b924e3c" exitCode=0
Feb 16 17:41:12.544704 master-0 kubenswrapper[4652]: I0216 17:41:12.544509 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m" event={"ID":"80a25250-e608-4b4a-80db-83054184408d","Type":"ContainerDied","Data":"5b666e023c5674f2e52826e7fd774eab2ef30d0d77be53d6bc6a4e869b924e3c"}
Feb 16 17:41:12.544704 master-0 kubenswrapper[4652]: I0216 17:41:12.544531 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m" event={"ID":"80a25250-e608-4b4a-80db-83054184408d","Type":"ContainerStarted","Data":"f9f9923a1fc79f016abae68e0c2dff54f87e5efebba5faa9418b58d5d4b0bb77"}
Feb 16 17:41:12.546665 master-0 kubenswrapper[4652]: I0216 17:41:12.546619 4652 generic.go:334] "Generic (PLEG): container finished" podID="6aadc63b-7612-4341-8c2f-c4502c631e34" containerID="859e632c8e7547966ad50166600268f8e28ae5e0b816561579e74fdacfffb67a" exitCode=0
Feb 16 17:41:12.547663 master-0 kubenswrapper[4652]: I0216 17:41:12.547631 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" event={"ID":"6aadc63b-7612-4341-8c2f-c4502c631e34","Type":"ContainerDied","Data":"859e632c8e7547966ad50166600268f8e28ae5e0b816561579e74fdacfffb67a"}
Feb 16 17:41:12.547663 master-0 kubenswrapper[4652]: I0216 17:41:12.547660 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" event={"ID":"6aadc63b-7612-4341-8c2f-c4502c631e34","Type":"ContainerStarted","Data":"ea1c58c7f2b43c455081f3abef2e965edee3e00f963e7266da51bc76aa676a26"}
Feb 16 17:41:12.670021 master-0 kubenswrapper[4652]: I0216 17:41:12.669945 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-wcq82" podStartSLOduration=2.66992323 podStartE2EDuration="2.66992323s" podCreationTimestamp="2026-02-16 17:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:41:12.65463039 +0000 UTC m=+1030.042798906" watchObservedRunningTime="2026-02-16 17:41:12.66992323 +0000 UTC m=+1030.058091736"
Feb 16 17:41:12.687616 master-0 kubenswrapper[4652]: I0216 17:41:12.686180 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 17:41:12.709814 master-0 kubenswrapper[4652]: W0216 17:41:12.709758 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1cd36752_14be_4e58_8129_348694a45fd8.slice/crio-2229819c823a0adcb05f985be90ffd8e073540f01c5673f73c404767ddf7b0e0 WatchSource:0}: Error finding container 2229819c823a0adcb05f985be90ffd8e073540f01c5673f73c404767ddf7b0e0: Status 404 returned error can't find the container with id 2229819c823a0adcb05f985be90ffd8e073540f01c5673f73c404767ddf7b0e0
Feb 16 17:41:13.026882 master-0 kubenswrapper[4652]: I0216 17:41:13.026840 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:13.109298 master-0 kubenswrapper[4652]: I0216 17:41:13.104869 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-ovsdbserver-nb\") pod \"80a25250-e608-4b4a-80db-83054184408d\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") "
Feb 16 17:41:13.109298 master-0 kubenswrapper[4652]: I0216 17:41:13.105112 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmfq7\" (UniqueName: \"kubernetes.io/projected/80a25250-e608-4b4a-80db-83054184408d-kube-api-access-tmfq7\") pod \"80a25250-e608-4b4a-80db-83054184408d\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") "
Feb 16 17:41:13.109298 master-0 kubenswrapper[4652]: I0216 17:41:13.105155 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-dns-svc\") pod \"80a25250-e608-4b4a-80db-83054184408d\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") "
Feb 16 17:41:13.109298 master-0 kubenswrapper[4652]: I0216 17:41:13.105282 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-config\") pod \"80a25250-e608-4b4a-80db-83054184408d\" (UID: \"80a25250-e608-4b4a-80db-83054184408d\") "
Feb 16 17:41:13.137196 master-0 kubenswrapper[4652]: I0216 17:41:13.129540 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a25250-e608-4b4a-80db-83054184408d-kube-api-access-tmfq7" (OuterVolumeSpecName: "kube-api-access-tmfq7") pod "80a25250-e608-4b4a-80db-83054184408d" (UID: "80a25250-e608-4b4a-80db-83054184408d"). InnerVolumeSpecName "kube-api-access-tmfq7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:41:13.175292 master-0 kubenswrapper[4652]: I0216 17:41:13.175195 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80a25250-e608-4b4a-80db-83054184408d" (UID: "80a25250-e608-4b4a-80db-83054184408d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:13.205013 master-0 kubenswrapper[4652]: I0216 17:41:13.193416 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80a25250-e608-4b4a-80db-83054184408d" (UID: "80a25250-e608-4b4a-80db-83054184408d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:13.207388 master-0 kubenswrapper[4652]: I0216 17:41:13.207339 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:13.207480 master-0 kubenswrapper[4652]: I0216 17:41:13.207389 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmfq7\" (UniqueName: \"kubernetes.io/projected/80a25250-e608-4b4a-80db-83054184408d-kube-api-access-tmfq7\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:13.207480 master-0 kubenswrapper[4652]: I0216 17:41:13.207404 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:13.216533 master-0 kubenswrapper[4652]: I0216 17:41:13.216481 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-config" (OuterVolumeSpecName: "config") pod "80a25250-e608-4b4a-80db-83054184408d" (UID: "80a25250-e608-4b4a-80db-83054184408d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:13.308894 master-0 kubenswrapper[4652]: I0216 17:41:13.308750 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a25250-e608-4b4a-80db-83054184408d-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:13.560754 master-0 kubenswrapper[4652]: I0216 17:41:13.560638 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1cd36752-14be-4e58-8129-348694a45fd8","Type":"ContainerStarted","Data":"2229819c823a0adcb05f985be90ffd8e073540f01c5673f73c404767ddf7b0e0"}
Feb 16 17:41:13.562481 master-0 kubenswrapper[4652]: I0216 17:41:13.562359 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m" event={"ID":"80a25250-e608-4b4a-80db-83054184408d","Type":"ContainerDied","Data":"f9f9923a1fc79f016abae68e0c2dff54f87e5efebba5faa9418b58d5d4b0bb77"}
Feb 16 17:41:13.562481 master-0 kubenswrapper[4652]: I0216 17:41:13.562394 4652 scope.go:117] "RemoveContainer" containerID="5b666e023c5674f2e52826e7fd774eab2ef30d0d77be53d6bc6a4e869b924e3c"
Feb 16 17:41:13.562481 master-0 kubenswrapper[4652]: I0216 17:41:13.562411 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"
Feb 16 17:41:13.566047 master-0 kubenswrapper[4652]: I0216 17:41:13.566007 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" event={"ID":"6aadc63b-7612-4341-8c2f-c4502c631e34","Type":"ContainerStarted","Data":"3c0da124f29faa4f8b99f6469af271cedabe7792cc09ff306049edd9d5d99a34"}
Feb 16 17:41:13.566451 master-0 kubenswrapper[4652]: I0216 17:41:13.566299 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:13.609385 master-0 kubenswrapper[4652]: I0216 17:41:13.606932 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" podStartSLOduration=2.606911156 podStartE2EDuration="2.606911156s" podCreationTimestamp="2026-02-16 17:41:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:41:13.587750762 +0000 UTC m=+1030.975919308" watchObservedRunningTime="2026-02-16 17:41:13.606911156 +0000 UTC m=+1030.995079692"
Feb 16 17:41:13.652510 master-0 kubenswrapper[4652]: I0216 17:41:13.652418 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"]
Feb 16 17:41:13.663173 master-0 kubenswrapper[4652]: I0216 17:41:13.663112 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c8cfc46bf-dgb7m"]
Feb 16 17:41:14.577506 master-0 kubenswrapper[4652]: I0216 17:41:14.577453 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1cd36752-14be-4e58-8129-348694a45fd8","Type":"ContainerStarted","Data":"2042532dfd2ec28cfd9bd70ecbdd8993460b116c5f87ddbf5cc7269c6b30d60f"}
Feb 16 17:41:14.577506 master-0 kubenswrapper[4652]: I0216 17:41:14.577514 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1cd36752-14be-4e58-8129-348694a45fd8","Type":"ContainerStarted","Data":"e177de29651742e4a795952c8f0e9901c8c8f04f0c20d8469981e26bbc6c27a4"}
Feb 16 17:41:14.596401 master-0 kubenswrapper[4652]: I0216 17:41:14.596327 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.503279398 podStartE2EDuration="3.596307048s" podCreationTimestamp="2026-02-16 17:41:11 +0000 UTC" firstStartedPulling="2026-02-16 17:41:12.714840413 +0000 UTC m=+1030.103008929" lastFinishedPulling="2026-02-16 17:41:13.807868063 +0000 UTC m=+1031.196036579" observedRunningTime="2026-02-16 17:41:14.593992176 +0000 UTC m=+1031.982160692" watchObservedRunningTime="2026-02-16 17:41:14.596307048 +0000 UTC m=+1031.984475554"
Feb 16 17:41:14.757686 master-0 kubenswrapper[4652]: I0216 17:41:14.757612 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a25250-e608-4b4a-80db-83054184408d" path="/var/lib/kubelet/pods/80a25250-e608-4b4a-80db-83054184408d/volumes"
Feb 16 17:41:15.586193 master-0 kubenswrapper[4652]: I0216 17:41:15.586133 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 16 17:41:18.350379 master-0 kubenswrapper[4652]: I0216 17:41:18.350324 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Feb 16 17:41:18.467711 master-0 kubenswrapper[4652]: I0216 17:41:18.467662 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Feb 16 17:41:20.738702 master-0 kubenswrapper[4652]: I0216 17:41:20.738632 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-jwcwv"]
Feb 16 17:41:20.739696 master-0 kubenswrapper[4652]: I0216 17:41:20.738869 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" podUID="6aadc63b-7612-4341-8c2f-c4502c631e34" containerName="dnsmasq-dns" containerID="cri-o://3c0da124f29faa4f8b99f6469af271cedabe7792cc09ff306049edd9d5d99a34" gracePeriod=10
Feb 16 17:41:20.743370 master-0 kubenswrapper[4652]: I0216 17:41:20.740971 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv"
Feb 16 17:41:21.047130 master-0 kubenswrapper[4652]: I0216 17:41:21.047017 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-4rvpk"]
Feb 16 17:41:21.047492 master-0 kubenswrapper[4652]: E0216 17:41:21.047466 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a25250-e608-4b4a-80db-83054184408d" containerName="init"
Feb 16 17:41:21.047492 master-0 kubenswrapper[4652]: I0216 17:41:21.047483 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a25250-e608-4b4a-80db-83054184408d" containerName="init"
Feb 16 17:41:21.047699 master-0 kubenswrapper[4652]: I0216 17:41:21.047662 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a25250-e608-4b4a-80db-83054184408d" containerName="init"
Feb 16 17:41:21.048738 master-0 kubenswrapper[4652]: I0216 17:41:21.048705 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.095422 master-0 kubenswrapper[4652]: I0216 17:41:21.095339 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.095422 master-0 kubenswrapper[4652]: I0216 17:41:21.095421 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftjzq\" (UniqueName: \"kubernetes.io/projected/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-kube-api-access-ftjzq\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.095724 master-0 kubenswrapper[4652]: I0216 17:41:21.095457 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-dns-svc\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.095807 master-0 kubenswrapper[4652]: I0216 17:41:21.095735 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.095896 master-0 kubenswrapper[4652]: I0216 17:41:21.095866 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-config\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.137107 master-0 kubenswrapper[4652]: I0216 17:41:21.137058 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-4rvpk"]
Feb 16 17:41:21.197306 master-0 kubenswrapper[4652]: I0216 17:41:21.197203 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.197577 master-0 kubenswrapper[4652]: I0216 17:41:21.197342 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftjzq\" (UniqueName: \"kubernetes.io/projected/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-kube-api-access-ftjzq\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.198750 master-0 kubenswrapper[4652]: I0216 17:41:21.198014 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-dns-svc\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.199986 master-0 kubenswrapper[4652]: I0216 17:41:21.198870 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.199986 master-0 kubenswrapper[4652]: I0216 17:41:21.199127 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-config\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.200595 master-0 kubenswrapper[4652]: I0216 17:41:21.200323 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-dns-svc\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.201481 master-0 kubenswrapper[4652]: I0216 17:41:21.201146 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-sb\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.201748 master-0 kubenswrapper[4652]: I0216 17:41:21.201550 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-nb\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.203759 master-0 kubenswrapper[4652]: I0216 17:41:21.202326 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-config\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.399321 master-0 kubenswrapper[4652]: I0216 17:41:21.397399 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftjzq\" (UniqueName: \"kubernetes.io/projected/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-kube-api-access-ftjzq\") pod \"dnsmasq-dns-6fd49994df-4rvpk\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.471417 master-0 kubenswrapper[4652]: I0216 17:41:21.471343 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 16 17:41:21.471417 master-0 kubenswrapper[4652]: I0216 17:41:21.471427 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 16 17:41:21.557281 master-0 kubenswrapper[4652]: I0216 17:41:21.557012 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 16 17:41:21.668412 master-0 kubenswrapper[4652]: I0216 17:41:21.667438 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk"
Feb 16 17:41:21.679783 master-0 kubenswrapper[4652]: I0216 17:41:21.679690 4652 generic.go:334] "Generic (PLEG): container finished" podID="6aadc63b-7612-4341-8c2f-c4502c631e34" containerID="3c0da124f29faa4f8b99f6469af271cedabe7792cc09ff306049edd9d5d99a34" exitCode=0
Feb 16 17:41:21.681689 master-0 kubenswrapper[4652]: I0216 17:41:21.681625 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" event={"ID":"6aadc63b-7612-4341-8c2f-c4502c631e34","Type":"ContainerDied","Data":"3c0da124f29faa4f8b99f6469af271cedabe7792cc09ff306049edd9d5d99a34"}
Feb 16 17:41:21.761538 master-0 kubenswrapper[4652]: I0216 17:41:21.761451 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 16 17:41:21.894528 master-0 kubenswrapper[4652]: I0216 17:41:21.887737 4652 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" Feb 16 17:41:22.221688 master-0 kubenswrapper[4652]: I0216 17:41:22.220512 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-sb\") pod \"6aadc63b-7612-4341-8c2f-c4502c631e34\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " Feb 16 17:41:22.221688 master-0 kubenswrapper[4652]: I0216 17:41:22.220649 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-nb\") pod \"6aadc63b-7612-4341-8c2f-c4502c631e34\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " Feb 16 17:41:22.221688 master-0 kubenswrapper[4652]: I0216 17:41:22.220796 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-dns-svc\") pod \"6aadc63b-7612-4341-8c2f-c4502c631e34\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " Feb 16 17:41:22.221688 master-0 kubenswrapper[4652]: I0216 17:41:22.220824 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-config\") pod \"6aadc63b-7612-4341-8c2f-c4502c631e34\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " Feb 16 17:41:22.221688 master-0 kubenswrapper[4652]: I0216 17:41:22.220944 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cnrz\" (UniqueName: \"kubernetes.io/projected/6aadc63b-7612-4341-8c2f-c4502c631e34-kube-api-access-4cnrz\") pod \"6aadc63b-7612-4341-8c2f-c4502c631e34\" (UID: \"6aadc63b-7612-4341-8c2f-c4502c631e34\") " Feb 16 17:41:22.228061 master-0 kubenswrapper[4652]: I0216 17:41:22.227828 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aadc63b-7612-4341-8c2f-c4502c631e34-kube-api-access-4cnrz" (OuterVolumeSpecName: "kube-api-access-4cnrz") pod "6aadc63b-7612-4341-8c2f-c4502c631e34" (UID: "6aadc63b-7612-4341-8c2f-c4502c631e34"). InnerVolumeSpecName "kube-api-access-4cnrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:22.231839 master-0 kubenswrapper[4652]: I0216 17:41:22.231802 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-4rvpk"] Feb 16 17:41:22.281554 master-0 kubenswrapper[4652]: I0216 17:41:22.281483 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-config" (OuterVolumeSpecName: "config") pod "6aadc63b-7612-4341-8c2f-c4502c631e34" (UID: "6aadc63b-7612-4341-8c2f-c4502c631e34"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:22.282458 master-0 kubenswrapper[4652]: I0216 17:41:22.282379 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6aadc63b-7612-4341-8c2f-c4502c631e34" (UID: "6aadc63b-7612-4341-8c2f-c4502c631e34"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:22.283230 master-0 kubenswrapper[4652]: I0216 17:41:22.283188 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6aadc63b-7612-4341-8c2f-c4502c631e34" (UID: "6aadc63b-7612-4341-8c2f-c4502c631e34"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:22.287591 master-0 kubenswrapper[4652]: I0216 17:41:22.287522 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6aadc63b-7612-4341-8c2f-c4502c631e34" (UID: "6aadc63b-7612-4341-8c2f-c4502c631e34"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:22.331786 master-0 kubenswrapper[4652]: I0216 17:41:22.323518 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:22.331786 master-0 kubenswrapper[4652]: I0216 17:41:22.323563 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:22.331786 master-0 kubenswrapper[4652]: I0216 17:41:22.323581 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cnrz\" (UniqueName: \"kubernetes.io/projected/6aadc63b-7612-4341-8c2f-c4502c631e34-kube-api-access-4cnrz\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:22.331786 master-0 kubenswrapper[4652]: I0216 17:41:22.323594 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:22.331786 master-0 kubenswrapper[4652]: I0216 17:41:22.323606 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6aadc63b-7612-4341-8c2f-c4502c631e34-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:22.552284 master-0 kubenswrapper[4652]: I0216 17:41:22.552080 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 17:41:22.554939 master-0 kubenswrapper[4652]: E0216 17:41:22.552714 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aadc63b-7612-4341-8c2f-c4502c631e34" containerName="init" Feb 16 17:41:22.554939 master-0 kubenswrapper[4652]: I0216 17:41:22.552741 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aadc63b-7612-4341-8c2f-c4502c631e34" containerName="init" Feb 16 17:41:22.554939 master-0 kubenswrapper[4652]: E0216 17:41:22.552761 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6aadc63b-7612-4341-8c2f-c4502c631e34" containerName="dnsmasq-dns" Feb 16 17:41:22.554939 master-0 kubenswrapper[4652]: I0216 17:41:22.552773 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aadc63b-7612-4341-8c2f-c4502c631e34" containerName="dnsmasq-dns" Feb 16 17:41:22.554939 master-0 kubenswrapper[4652]: I0216 17:41:22.553155 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aadc63b-7612-4341-8c2f-c4502c631e34" 
containerName="dnsmasq-dns" Feb 16 17:41:22.568640 master-0 kubenswrapper[4652]: I0216 17:41:22.567062 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 17:41:22.569585 master-0 kubenswrapper[4652]: I0216 17:41:22.569543 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 17:41:22.569850 master-0 kubenswrapper[4652]: I0216 17:41:22.569820 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 17:41:22.572162 master-0 kubenswrapper[4652]: I0216 17:41:22.572107 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 17:41:22.691122 master-0 kubenswrapper[4652]: I0216 17:41:22.691056 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" event={"ID":"6aadc63b-7612-4341-8c2f-c4502c631e34","Type":"ContainerDied","Data":"ea1c58c7f2b43c455081f3abef2e965edee3e00f963e7266da51bc76aa676a26"} Feb 16 17:41:22.691122 master-0 kubenswrapper[4652]: I0216 17:41:22.691131 4652 scope.go:117] "RemoveContainer" containerID="3c0da124f29faa4f8b99f6469af271cedabe7792cc09ff306049edd9d5d99a34" Feb 16 17:41:22.691417 master-0 kubenswrapper[4652]: I0216 17:41:22.691297 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" Feb 16 17:41:22.700847 master-0 kubenswrapper[4652]: I0216 17:41:22.700796 4652 generic.go:334] "Generic (PLEG): container finished" podID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" containerID="7ef9550e515c13adf316cefdb9949d5d301b65f144d2755c976c47339b1ecd6e" exitCode=0 Feb 16 17:41:22.701944 master-0 kubenswrapper[4652]: I0216 17:41:22.701909 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" event={"ID":"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742","Type":"ContainerDied","Data":"7ef9550e515c13adf316cefdb9949d5d301b65f144d2755c976c47339b1ecd6e"} Feb 16 17:41:22.702019 master-0 kubenswrapper[4652]: I0216 17:41:22.701951 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" event={"ID":"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742","Type":"ContainerStarted","Data":"62d84f57e0b1d72961534ff38b9c8d57398efc9ce89f46151b97a83cc51aaa34"} Feb 16 17:41:22.716298 master-0 kubenswrapper[4652]: I0216 17:41:22.716217 4652 scope.go:117] "RemoveContainer" containerID="859e632c8e7547966ad50166600268f8e28ae5e0b816561579e74fdacfffb67a" Feb 16 17:41:22.867836 master-0 kubenswrapper[4652]: I0216 17:41:22.867761 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 17:41:23.363880 master-0 kubenswrapper[4652]: I0216 17:41:23.363831 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-lock\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.364146 master-0 kubenswrapper[4652]: I0216 17:41:23.364112 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzfd6\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-kube-api-access-pzfd6\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.364216 master-0 
kubenswrapper[4652]: I0216 17:41:23.364201 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.364296 master-0 kubenswrapper[4652]: I0216 17:41:23.364274 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.364475 master-0 kubenswrapper[4652]: I0216 17:41:23.364453 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-cache\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.364686 master-0 kubenswrapper[4652]: I0216 17:41:23.364664 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7ced871c-1534-44aa-87eb-e2aa6f2f2b29\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0d780d2-c3de-4c2e-b542-15a74e61ac8e\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.466747 master-0 kubenswrapper[4652]: I0216 17:41:23.466600 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-lock\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.466747 master-0 kubenswrapper[4652]: I0216 17:41:23.466678 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzfd6\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-kube-api-access-pzfd6\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.466747 master-0 kubenswrapper[4652]: I0216 17:41:23.466697 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.466747 master-0 kubenswrapper[4652]: I0216 17:41:23.466717 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.466747 master-0 kubenswrapper[4652]: I0216 17:41:23.466748 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-cache\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.467530 master-0 kubenswrapper[4652]: E0216 17:41:23.467146 4652 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap 
"swift-ring-files" not found Feb 16 17:41:23.467530 master-0 kubenswrapper[4652]: I0216 17:41:23.467189 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-cache\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.467530 master-0 kubenswrapper[4652]: E0216 17:41:23.467201 4652 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:41:23.467530 master-0 kubenswrapper[4652]: E0216 17:41:23.467316 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift podName:9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2 nodeName:}" failed. No retries permitted until 2026-02-16 17:41:23.967285301 +0000 UTC m=+1041.355453867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift") pod "swift-storage-0" (UID: "9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2") : configmap "swift-ring-files" not found Feb 16 17:41:23.467530 master-0 kubenswrapper[4652]: I0216 17:41:23.467515 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-lock\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.470502 master-0 kubenswrapper[4652]: I0216 17:41:23.470465 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.582930 master-0 kubenswrapper[4652]: I0216 17:41:23.582828 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-jwcwv"] Feb 16 17:41:23.590581 master-0 kubenswrapper[4652]: I0216 17:41:23.590540 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzfd6\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-kube-api-access-pzfd6\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.636359 master-0 kubenswrapper[4652]: I0216 17:41:23.636278 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b9694dd79-jwcwv"] Feb 16 17:41:23.671880 master-0 kubenswrapper[4652]: I0216 17:41:23.671798 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7ced871c-1534-44aa-87eb-e2aa6f2f2b29\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0d780d2-c3de-4c2e-b542-15a74e61ac8e\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.673794 master-0 kubenswrapper[4652]: I0216 17:41:23.673760 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:41:23.673873 master-0 kubenswrapper[4652]: I0216 17:41:23.673791 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7ced871c-1534-44aa-87eb-e2aa6f2f2b29\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0d780d2-c3de-4c2e-b542-15a74e61ac8e\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/283f75658e7d6c2d86f01e00ce8b3d706b376990367c07c7442bba33440ef86c/globalmount\"" pod="openstack/swift-storage-0" Feb 16 17:41:23.711048 master-0 kubenswrapper[4652]: I0216 17:41:23.710994 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" event={"ID":"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742","Type":"ContainerStarted","Data":"d2405f65f5a9e898a93938632be47f9bfa5859d60eb354f495245df0535a938b"} Feb 16 17:41:23.711326 master-0 kubenswrapper[4652]: I0216 17:41:23.711136 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" Feb 16 17:41:23.985285 master-0 kubenswrapper[4652]: I0216 17:41:23.980820 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:23.985285 master-0 kubenswrapper[4652]: E0216 17:41:23.981124 4652 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:41:23.985285 master-0 kubenswrapper[4652]: E0216 17:41:23.981139 4652 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:41:23.985285 master-0 kubenswrapper[4652]: E0216 17:41:23.981179 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift podName:9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2 nodeName:}" failed. No retries permitted until 2026-02-16 17:41:24.981163946 +0000 UTC m=+1042.369332462 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift") pod "swift-storage-0" (UID: "9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2") : configmap "swift-ring-files" not found Feb 16 17:41:24.021349 master-0 kubenswrapper[4652]: I0216 17:41:24.019068 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-nxjgq"] Feb 16 17:41:24.021349 master-0 kubenswrapper[4652]: I0216 17:41:24.020421 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.025318 master-0 kubenswrapper[4652]: I0216 17:41:24.025268 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 17:41:24.028686 master-0 kubenswrapper[4652]: I0216 17:41:24.028635 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 17:41:24.028895 master-0 kubenswrapper[4652]: I0216 17:41:24.028869 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 17:41:24.042343 master-0 kubenswrapper[4652]: I0216 17:41:24.042266 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nxjgq"] Feb 16 17:41:24.050566 master-0 kubenswrapper[4652]: I0216 17:41:24.050102 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" podStartSLOduration=4.050016122 podStartE2EDuration="4.050016122s" podCreationTimestamp="2026-02-16 17:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:41:24.019955566 +0000 UTC m=+1041.408124082" watchObservedRunningTime="2026-02-16 17:41:24.050016122 +0000 UTC m=+1041.438184648" Feb 16 17:41:24.083241 master-0 kubenswrapper[4652]: I0216 17:41:24.083181 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-scripts\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.083510 master-0 kubenswrapper[4652]: I0216 17:41:24.083289 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-combined-ca-bundle\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.083510 master-0 kubenswrapper[4652]: I0216 17:41:24.083361 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-ring-data-devices\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.083510 master-0 kubenswrapper[4652]: I0216 17:41:24.083392 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4nk\" (UniqueName: \"kubernetes.io/projected/2a714374-e581-4c12-9e7f-5060fd746f10-kube-api-access-cv4nk\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.083510 master-0 kubenswrapper[4652]: I0216 17:41:24.083452 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-swiftconf\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.083510 master-0 kubenswrapper[4652]: I0216 17:41:24.083513 4652 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-dispersionconf\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.083787 master-0 kubenswrapper[4652]: I0216 17:41:24.083587 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a714374-e581-4c12-9e7f-5060fd746f10-etc-swift\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.185653 master-0 kubenswrapper[4652]: I0216 17:41:24.185596 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-swiftconf\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.185856 master-0 kubenswrapper[4652]: I0216 17:41:24.185714 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-dispersionconf\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.185936 master-0 kubenswrapper[4652]: I0216 17:41:24.185853 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a714374-e581-4c12-9e7f-5060fd746f10-etc-swift\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.185936 master-0 kubenswrapper[4652]: I0216 17:41:24.185889 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-scripts\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.186211 master-0 kubenswrapper[4652]: I0216 17:41:24.186173 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-combined-ca-bundle\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.186316 master-0 kubenswrapper[4652]: I0216 17:41:24.186270 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-ring-data-devices\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.187919 master-0 kubenswrapper[4652]: I0216 17:41:24.186972 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-scripts\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.187919 master-0 kubenswrapper[4652]: I0216 17:41:24.187076 4652 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cv4nk\" (UniqueName: \"kubernetes.io/projected/2a714374-e581-4c12-9e7f-5060fd746f10-kube-api-access-cv4nk\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.188050 master-0 kubenswrapper[4652]: I0216 17:41:24.187921 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a714374-e581-4c12-9e7f-5060fd746f10-etc-swift\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.188188 master-0 kubenswrapper[4652]: I0216 17:41:24.188166 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-ring-data-devices\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.190996 master-0 kubenswrapper[4652]: I0216 17:41:24.190954 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-combined-ca-bundle\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.191882 master-0 kubenswrapper[4652]: I0216 17:41:24.191858 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-swiftconf\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.192026 master-0 kubenswrapper[4652]: I0216 17:41:24.191979 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-dispersionconf\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.209718 master-0 kubenswrapper[4652]: I0216 17:41:24.209627 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4wpdm"] Feb 16 17:41:24.215584 master-0 kubenswrapper[4652]: I0216 17:41:24.215523 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:24.217780 master-0 kubenswrapper[4652]: I0216 17:41:24.217744 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4nk\" (UniqueName: \"kubernetes.io/projected/2a714374-e581-4c12-9e7f-5060fd746f10-kube-api-access-cv4nk\") pod \"swift-ring-rebalance-nxjgq\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.218958 master-0 kubenswrapper[4652]: I0216 17:41:24.218935 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 17:41:24.245643 master-0 kubenswrapper[4652]: I0216 17:41:24.245129 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4wpdm"] Feb 16 17:41:24.289625 master-0 kubenswrapper[4652]: I0216 17:41:24.289548 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0952820-4f4e-4a7e-b728-ffab772beea7-operator-scripts\") pod \"root-account-create-update-4wpdm\" (UID: \"e0952820-4f4e-4a7e-b728-ffab772beea7\") " pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:24.290574 master-0 kubenswrapper[4652]: I0216 17:41:24.290243 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mdkx\" (UniqueName: \"kubernetes.io/projected/e0952820-4f4e-4a7e-b728-ffab772beea7-kube-api-access-6mdkx\") pod \"root-account-create-update-4wpdm\" (UID: \"e0952820-4f4e-4a7e-b728-ffab772beea7\") " pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:24.357330 master-0 kubenswrapper[4652]: I0216 17:41:24.357226 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:24.392546 master-0 kubenswrapper[4652]: I0216 17:41:24.392463 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0952820-4f4e-4a7e-b728-ffab772beea7-operator-scripts\") pod \"root-account-create-update-4wpdm\" (UID: \"e0952820-4f4e-4a7e-b728-ffab772beea7\") " pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:24.392806 master-0 kubenswrapper[4652]: I0216 17:41:24.392629 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mdkx\" (UniqueName: \"kubernetes.io/projected/e0952820-4f4e-4a7e-b728-ffab772beea7-kube-api-access-6mdkx\") pod \"root-account-create-update-4wpdm\" (UID: \"e0952820-4f4e-4a7e-b728-ffab772beea7\") " pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:24.393807 master-0 kubenswrapper[4652]: I0216 17:41:24.393772 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0952820-4f4e-4a7e-b728-ffab772beea7-operator-scripts\") pod \"root-account-create-update-4wpdm\" (UID: \"e0952820-4f4e-4a7e-b728-ffab772beea7\") " pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:24.441876 master-0 kubenswrapper[4652]: I0216 17:41:24.441839 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mdkx\" (UniqueName: \"kubernetes.io/projected/e0952820-4f4e-4a7e-b728-ffab772beea7-kube-api-access-6mdkx\") pod \"root-account-create-update-4wpdm\" (UID: \"e0952820-4f4e-4a7e-b728-ffab772beea7\") " pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:24.591853 master-0 kubenswrapper[4652]: I0216 17:41:24.591800 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:24.759880 master-0 kubenswrapper[4652]: I0216 17:41:24.759842 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6aadc63b-7612-4341-8c2f-c4502c631e34" path="/var/lib/kubelet/pods/6aadc63b-7612-4341-8c2f-c4502c631e34/volumes" Feb 16 17:41:25.013337 master-0 kubenswrapper[4652]: I0216 17:41:25.013266 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:25.013942 master-0 kubenswrapper[4652]: E0216 17:41:25.013520 4652 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:41:25.013942 master-0 kubenswrapper[4652]: E0216 17:41:25.013569 4652 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:41:25.013942 master-0 kubenswrapper[4652]: E0216 17:41:25.013651 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift podName:9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2 nodeName:}" failed. No retries permitted until 2026-02-16 17:41:27.013612812 +0000 UTC m=+1044.401781328 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift") pod "swift-storage-0" (UID: "9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2") : configmap "swift-ring-files" not found Feb 16 17:41:25.061400 master-0 kubenswrapper[4652]: I0216 17:41:25.044682 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-nxjgq"] Feb 16 17:41:25.065991 master-0 kubenswrapper[4652]: W0216 17:41:25.065951 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a714374_e581_4c12_9e7f_5060fd746f10.slice/crio-67de9c9191e44a41e5e3a0d9e97b7f74c08a496f15d237d751acb614451d5a57 WatchSource:0}: Error finding container 67de9c9191e44a41e5e3a0d9e97b7f74c08a496f15d237d751acb614451d5a57: Status 404 returned error can't find the container with id 67de9c9191e44a41e5e3a0d9e97b7f74c08a496f15d237d751acb614451d5a57 Feb 16 17:41:25.220853 master-0 kubenswrapper[4652]: I0216 17:41:25.220819 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4wpdm"] Feb 16 17:41:25.277118 master-0 kubenswrapper[4652]: I0216 17:41:25.277075 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7ced871c-1534-44aa-87eb-e2aa6f2f2b29\" (UniqueName: \"kubernetes.io/csi/topolvm.io^b0d780d2-c3de-4c2e-b542-15a74e61ac8e\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:25.755039 master-0 kubenswrapper[4652]: I0216 17:41:25.753458 4652 generic.go:334] "Generic (PLEG): container finished" podID="e0952820-4f4e-4a7e-b728-ffab772beea7" containerID="9b1ab2cc2be412b1cbab860d6f4a20ed0a04d71c2e4100dc2a0fec9ba8add898" exitCode=0 Feb 16 17:41:25.755039 master-0 kubenswrapper[4652]: I0216 17:41:25.753534 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4wpdm" event={"ID":"e0952820-4f4e-4a7e-b728-ffab772beea7","Type":"ContainerDied","Data":"9b1ab2cc2be412b1cbab860d6f4a20ed0a04d71c2e4100dc2a0fec9ba8add898"} Feb 16 17:41:25.755039 master-0 kubenswrapper[4652]: I0216 17:41:25.753566 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4wpdm" event={"ID":"e0952820-4f4e-4a7e-b728-ffab772beea7","Type":"ContainerStarted","Data":"68dc587b3064e89520dd9b5d488017f1f99be953f84311349e829efa0af364ac"} Feb 16 17:41:25.755039 master-0 kubenswrapper[4652]: I0216 17:41:25.755029 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nxjgq" event={"ID":"2a714374-e581-4c12-9e7f-5060fd746f10","Type":"ContainerStarted","Data":"67de9c9191e44a41e5e3a0d9e97b7f74c08a496f15d237d751acb614451d5a57"} Feb 16 17:41:26.634197 master-0 kubenswrapper[4652]: I0216 17:41:26.634129 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b9694dd79-jwcwv" podUID="6aadc63b-7612-4341-8c2f-c4502c631e34" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.148:5353: i/o timeout" Feb 16 17:41:27.071220 master-0 kubenswrapper[4652]: I0216 17:41:27.070945 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:27.071502 master-0 kubenswrapper[4652]: E0216 17:41:27.071355 
4652 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:41:27.071502 master-0 kubenswrapper[4652]: E0216 17:41:27.071434 4652 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:41:27.071584 master-0 kubenswrapper[4652]: E0216 17:41:27.071524 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift podName:9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2 nodeName:}" failed. No retries permitted until 2026-02-16 17:41:31.071497025 +0000 UTC m=+1048.459665531 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift") pod "swift-storage-0" (UID: "9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2") : configmap "swift-ring-files" not found Feb 16 17:41:28.124352 master-0 kubenswrapper[4652]: I0216 17:41:28.123035 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6tchx"] Feb 16 17:41:28.124849 master-0 kubenswrapper[4652]: I0216 17:41:28.124551 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:28.136950 master-0 kubenswrapper[4652]: I0216 17:41:28.136895 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6tchx"] Feb 16 17:41:28.194674 master-0 kubenswrapper[4652]: I0216 17:41:28.194570 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535576c7-2541-496c-be4e-75714ddcb6de-operator-scripts\") pod \"keystone-db-create-6tchx\" (UID: \"535576c7-2541-496c-be4e-75714ddcb6de\") " pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:28.194936 master-0 kubenswrapper[4652]: I0216 17:41:28.194733 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2vgx\" (UniqueName: \"kubernetes.io/projected/535576c7-2541-496c-be4e-75714ddcb6de-kube-api-access-w2vgx\") pod \"keystone-db-create-6tchx\" (UID: \"535576c7-2541-496c-be4e-75714ddcb6de\") " pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:28.231292 master-0 kubenswrapper[4652]: I0216 17:41:28.231236 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-0c92-account-create-update-vkjtr"] Feb 16 17:41:28.233455 master-0 kubenswrapper[4652]: I0216 17:41:28.233429 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:28.236258 master-0 kubenswrapper[4652]: I0216 17:41:28.236213 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 17:41:28.242555 master-0 kubenswrapper[4652]: I0216 17:41:28.242319 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0c92-account-create-update-vkjtr"] Feb 16 17:41:28.296375 master-0 kubenswrapper[4652]: I0216 17:41:28.296318 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e389adde-527c-4092-adb1-8a9f5bab0a35-operator-scripts\") pod \"keystone-0c92-account-create-update-vkjtr\" (UID: \"e389adde-527c-4092-adb1-8a9f5bab0a35\") " pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:28.296644 master-0 kubenswrapper[4652]: I0216 17:41:28.296582 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535576c7-2541-496c-be4e-75714ddcb6de-operator-scripts\") pod \"keystone-db-create-6tchx\" (UID: \"535576c7-2541-496c-be4e-75714ddcb6de\") " pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:28.296871 master-0 kubenswrapper[4652]: I0216 17:41:28.296823 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmctc\" (UniqueName: \"kubernetes.io/projected/e389adde-527c-4092-adb1-8a9f5bab0a35-kube-api-access-qmctc\") pod \"keystone-0c92-account-create-update-vkjtr\" (UID: \"e389adde-527c-4092-adb1-8a9f5bab0a35\") " pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:28.296971 master-0 kubenswrapper[4652]: I0216 17:41:28.296954 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2vgx\" (UniqueName: \"kubernetes.io/projected/535576c7-2541-496c-be4e-75714ddcb6de-kube-api-access-w2vgx\") pod \"keystone-db-create-6tchx\" (UID: \"535576c7-2541-496c-be4e-75714ddcb6de\") " pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:28.297345 master-0 kubenswrapper[4652]: I0216 17:41:28.297317 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535576c7-2541-496c-be4e-75714ddcb6de-operator-scripts\") pod \"keystone-db-create-6tchx\" (UID: \"535576c7-2541-496c-be4e-75714ddcb6de\") " pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:28.315515 master-0 kubenswrapper[4652]: I0216 17:41:28.315469 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2vgx\" (UniqueName: \"kubernetes.io/projected/535576c7-2541-496c-be4e-75714ddcb6de-kube-api-access-w2vgx\") pod \"keystone-db-create-6tchx\" (UID: \"535576c7-2541-496c-be4e-75714ddcb6de\") " pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:28.379780 master-0 kubenswrapper[4652]: I0216 17:41:28.379645 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-npbng"] Feb 16 17:41:28.380809 master-0 kubenswrapper[4652]: I0216 17:41:28.380777 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-npbng" Feb 16 17:41:28.398336 master-0 kubenswrapper[4652]: I0216 17:41:28.398265 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e389adde-527c-4092-adb1-8a9f5bab0a35-operator-scripts\") pod \"keystone-0c92-account-create-update-vkjtr\" (UID: \"e389adde-527c-4092-adb1-8a9f5bab0a35\") " pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:28.398433 master-0 kubenswrapper[4652]: I0216 17:41:28.398392 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmctc\" (UniqueName: \"kubernetes.io/projected/e389adde-527c-4092-adb1-8a9f5bab0a35-kube-api-access-qmctc\") pod \"keystone-0c92-account-create-update-vkjtr\" (UID: \"e389adde-527c-4092-adb1-8a9f5bab0a35\") " pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:28.398908 master-0 kubenswrapper[4652]: I0216 17:41:28.398871 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e389adde-527c-4092-adb1-8a9f5bab0a35-operator-scripts\") pod \"keystone-0c92-account-create-update-vkjtr\" (UID: \"e389adde-527c-4092-adb1-8a9f5bab0a35\") " pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:28.444554 master-0 kubenswrapper[4652]: I0216 17:41:28.444498 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-npbng"] Feb 16 17:41:28.458988 master-0 kubenswrapper[4652]: I0216 17:41:28.458914 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmctc\" (UniqueName: \"kubernetes.io/projected/e389adde-527c-4092-adb1-8a9f5bab0a35-kube-api-access-qmctc\") pod \"keystone-0c92-account-create-update-vkjtr\" (UID: \"e389adde-527c-4092-adb1-8a9f5bab0a35\") " pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:28.487452 master-0 kubenswrapper[4652]: I0216 17:41:28.487386 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:28.500900 master-0 kubenswrapper[4652]: I0216 17:41:28.500852 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:28.501790 master-0 kubenswrapper[4652]: I0216 17:41:28.501730 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dpkd\" (UniqueName: \"kubernetes.io/projected/2e0fa216-316f-4f38-9522-e08e6741d57e-kube-api-access-8dpkd\") pod \"placement-db-create-npbng\" (UID: \"2e0fa216-316f-4f38-9522-e08e6741d57e\") " pod="openstack/placement-db-create-npbng" Feb 16 17:41:28.501864 master-0 kubenswrapper[4652]: I0216 17:41:28.501846 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0fa216-316f-4f38-9522-e08e6741d57e-operator-scripts\") pod \"placement-db-create-npbng\" (UID: \"2e0fa216-316f-4f38-9522-e08e6741d57e\") " pod="openstack/placement-db-create-npbng" Feb 16 17:41:28.540286 master-0 kubenswrapper[4652]: I0216 17:41:28.539133 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f9e4-account-create-update-xch88"] Feb 16 17:41:28.540286 master-0 kubenswrapper[4652]: E0216 17:41:28.539685 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0952820-4f4e-4a7e-b728-ffab772beea7" containerName="mariadb-account-create-update" Feb 16 17:41:28.540286 master-0 kubenswrapper[4652]: I0216 17:41:28.539718 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0952820-4f4e-4a7e-b728-ffab772beea7" containerName="mariadb-account-create-update" Feb 16 17:41:28.540286 master-0 kubenswrapper[4652]: I0216 17:41:28.539906 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0952820-4f4e-4a7e-b728-ffab772beea7" containerName="mariadb-account-create-update" Feb 16 17:41:28.540629 master-0 kubenswrapper[4652]: I0216 17:41:28.540608 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:28.545629 master-0 kubenswrapper[4652]: I0216 17:41:28.545584 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 17:41:28.566811 master-0 kubenswrapper[4652]: I0216 17:41:28.559403 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:28.566811 master-0 kubenswrapper[4652]: I0216 17:41:28.563107 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f9e4-account-create-update-xch88"] Feb 16 17:41:28.603541 master-0 kubenswrapper[4652]: I0216 17:41:28.603488 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0952820-4f4e-4a7e-b728-ffab772beea7-operator-scripts\") pod \"e0952820-4f4e-4a7e-b728-ffab772beea7\" (UID: \"e0952820-4f4e-4a7e-b728-ffab772beea7\") " Feb 16 17:41:28.603631 master-0 kubenswrapper[4652]: I0216 17:41:28.603594 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mdkx\" (UniqueName: \"kubernetes.io/projected/e0952820-4f4e-4a7e-b728-ffab772beea7-kube-api-access-6mdkx\") pod \"e0952820-4f4e-4a7e-b728-ffab772beea7\" (UID: \"e0952820-4f4e-4a7e-b728-ffab772beea7\") " Feb 16 17:41:28.604185 master-0 kubenswrapper[4652]: I0216 17:41:28.604159 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0952820-4f4e-4a7e-b728-ffab772beea7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0952820-4f4e-4a7e-b728-ffab772beea7" (UID: "e0952820-4f4e-4a7e-b728-ffab772beea7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:28.604396 master-0 kubenswrapper[4652]: I0216 17:41:28.604170 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-operator-scripts\") pod \"placement-f9e4-account-create-update-xch88\" (UID: \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\") " pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:28.604688 master-0 kubenswrapper[4652]: I0216 17:41:28.604666 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dpkd\" (UniqueName: \"kubernetes.io/projected/2e0fa216-316f-4f38-9522-e08e6741d57e-kube-api-access-8dpkd\") pod \"placement-db-create-npbng\" (UID: \"2e0fa216-316f-4f38-9522-e08e6741d57e\") " pod="openstack/placement-db-create-npbng" Feb 16 17:41:28.604822 master-0 kubenswrapper[4652]: I0216 17:41:28.604806 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0fa216-316f-4f38-9522-e08e6741d57e-operator-scripts\") pod \"placement-db-create-npbng\" (UID: \"2e0fa216-316f-4f38-9522-e08e6741d57e\") " pod="openstack/placement-db-create-npbng" Feb 16 17:41:28.604978 master-0 kubenswrapper[4652]: I0216 17:41:28.604959 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4vqw\" (UniqueName: \"kubernetes.io/projected/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-kube-api-access-k4vqw\") pod \"placement-f9e4-account-create-update-xch88\" (UID: \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\") " pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:28.605137 master-0 kubenswrapper[4652]: I0216 17:41:28.605121 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0952820-4f4e-4a7e-b728-ffab772beea7-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:28.606521 master-0 kubenswrapper[4652]: I0216 
17:41:28.606502 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0fa216-316f-4f38-9522-e08e6741d57e-operator-scripts\") pod \"placement-db-create-npbng\" (UID: \"2e0fa216-316f-4f38-9522-e08e6741d57e\") " pod="openstack/placement-db-create-npbng" Feb 16 17:41:28.606874 master-0 kubenswrapper[4652]: I0216 17:41:28.606820 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0952820-4f4e-4a7e-b728-ffab772beea7-kube-api-access-6mdkx" (OuterVolumeSpecName: "kube-api-access-6mdkx") pod "e0952820-4f4e-4a7e-b728-ffab772beea7" (UID: "e0952820-4f4e-4a7e-b728-ffab772beea7"). InnerVolumeSpecName "kube-api-access-6mdkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:28.623376 master-0 kubenswrapper[4652]: I0216 17:41:28.623332 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dpkd\" (UniqueName: \"kubernetes.io/projected/2e0fa216-316f-4f38-9522-e08e6741d57e-kube-api-access-8dpkd\") pod \"placement-db-create-npbng\" (UID: \"2e0fa216-316f-4f38-9522-e08e6741d57e\") " pod="openstack/placement-db-create-npbng" Feb 16 17:41:28.708336 master-0 kubenswrapper[4652]: I0216 17:41:28.707513 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4vqw\" (UniqueName: \"kubernetes.io/projected/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-kube-api-access-k4vqw\") pod \"placement-f9e4-account-create-update-xch88\" (UID: \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\") " pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:28.708336 master-0 kubenswrapper[4652]: I0216 17:41:28.707611 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-operator-scripts\") pod \"placement-f9e4-account-create-update-xch88\" (UID: \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\") " pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:28.708336 master-0 kubenswrapper[4652]: I0216 17:41:28.707732 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mdkx\" (UniqueName: \"kubernetes.io/projected/e0952820-4f4e-4a7e-b728-ffab772beea7-kube-api-access-6mdkx\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:28.708636 master-0 kubenswrapper[4652]: I0216 17:41:28.708481 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-operator-scripts\") pod \"placement-f9e4-account-create-update-xch88\" (UID: \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\") " pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:28.738497 master-0 kubenswrapper[4652]: I0216 17:41:28.738448 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4vqw\" (UniqueName: \"kubernetes.io/projected/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-kube-api-access-k4vqw\") pod \"placement-f9e4-account-create-update-xch88\" (UID: \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\") " pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:28.783760 master-0 kubenswrapper[4652]: I0216 17:41:28.783463 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4wpdm" 
event={"ID":"e0952820-4f4e-4a7e-b728-ffab772beea7","Type":"ContainerDied","Data":"68dc587b3064e89520dd9b5d488017f1f99be953f84311349e829efa0af364ac"} Feb 16 17:41:28.783760 master-0 kubenswrapper[4652]: I0216 17:41:28.783757 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68dc587b3064e89520dd9b5d488017f1f99be953f84311349e829efa0af364ac" Feb 16 17:41:28.783965 master-0 kubenswrapper[4652]: I0216 17:41:28.783870 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4wpdm" Feb 16 17:41:28.812999 master-0 kubenswrapper[4652]: I0216 17:41:28.812910 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-npbng" Feb 16 17:41:28.868274 master-0 kubenswrapper[4652]: I0216 17:41:28.868027 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:29.027723 master-0 kubenswrapper[4652]: I0216 17:41:29.027661 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6tchx"] Feb 16 17:41:29.140288 master-0 kubenswrapper[4652]: I0216 17:41:29.140192 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-0c92-account-create-update-vkjtr"] Feb 16 17:41:29.153030 master-0 kubenswrapper[4652]: W0216 17:41:29.152966 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode389adde_527c_4092_adb1_8a9f5bab0a35.slice/crio-3b2798a7d3f89be846246142217030e97ff9842352f6b69b509455f6842ae409 WatchSource:0}: Error finding container 3b2798a7d3f89be846246142217030e97ff9842352f6b69b509455f6842ae409: Status 404 returned error can't find the container with id 3b2798a7d3f89be846246142217030e97ff9842352f6b69b509455f6842ae409 Feb 16 17:41:29.323443 master-0 kubenswrapper[4652]: I0216 17:41:29.323379 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-npbng"] Feb 16 17:41:29.598726 master-0 kubenswrapper[4652]: I0216 17:41:29.598691 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f9e4-account-create-update-xch88"] Feb 16 17:41:29.798266 master-0 kubenswrapper[4652]: I0216 17:41:29.798167 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nxjgq" event={"ID":"2a714374-e581-4c12-9e7f-5060fd746f10","Type":"ContainerStarted","Data":"4efa87a749035657dbb6080eb6d3906035357ab1a0c2d3ece107018fbc93b7cc"} Feb 16 17:41:29.802760 master-0 kubenswrapper[4652]: I0216 17:41:29.802697 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-npbng" event={"ID":"2e0fa216-316f-4f38-9522-e08e6741d57e","Type":"ContainerStarted","Data":"d8b2da7cf9a42f65f6dd3885917972be14314db250e4a9f48c5b9c6cd2313771"} Feb 16 17:41:29.803008 master-0 kubenswrapper[4652]: I0216 17:41:29.802987 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-npbng" event={"ID":"2e0fa216-316f-4f38-9522-e08e6741d57e","Type":"ContainerStarted","Data":"797ebb3787ba14d5dead917436216b3d0b946ac1bcc795168b6f42777675ee27"} Feb 16 17:41:29.813294 master-0 kubenswrapper[4652]: I0216 17:41:29.812468 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f9e4-account-create-update-xch88" 
event={"ID":"2ccccd6a-a0cd-48cf-b8f9-234e97c490be","Type":"ContainerStarted","Data":"216ca207d1e9a50a745699ec1c9ee0efa036147238eaa833c09d5aaa3a190296"} Feb 16 17:41:29.816224 master-0 kubenswrapper[4652]: I0216 17:41:29.816186 4652 generic.go:334] "Generic (PLEG): container finished" podID="e389adde-527c-4092-adb1-8a9f5bab0a35" containerID="a26f6715eede58d2f8eec8b61ca4ee9de53451a05d637a6fd89ff07b4410eac9" exitCode=0 Feb 16 17:41:29.816538 master-0 kubenswrapper[4652]: I0216 17:41:29.816381 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c92-account-create-update-vkjtr" event={"ID":"e389adde-527c-4092-adb1-8a9f5bab0a35","Type":"ContainerDied","Data":"a26f6715eede58d2f8eec8b61ca4ee9de53451a05d637a6fd89ff07b4410eac9"} Feb 16 17:41:29.816693 master-0 kubenswrapper[4652]: I0216 17:41:29.816670 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c92-account-create-update-vkjtr" event={"ID":"e389adde-527c-4092-adb1-8a9f5bab0a35","Type":"ContainerStarted","Data":"3b2798a7d3f89be846246142217030e97ff9842352f6b69b509455f6842ae409"} Feb 16 17:41:29.818142 master-0 kubenswrapper[4652]: I0216 17:41:29.818119 4652 generic.go:334] "Generic (PLEG): container finished" podID="535576c7-2541-496c-be4e-75714ddcb6de" containerID="ccb2f7dbf5adf707b23a829ae07aae2ff8b742016d1bd0843b76248a6a0c6e93" exitCode=0 Feb 16 17:41:29.818399 master-0 kubenswrapper[4652]: I0216 17:41:29.818276 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6tchx" event={"ID":"535576c7-2541-496c-be4e-75714ddcb6de","Type":"ContainerDied","Data":"ccb2f7dbf5adf707b23a829ae07aae2ff8b742016d1bd0843b76248a6a0c6e93"} Feb 16 17:41:29.818527 master-0 kubenswrapper[4652]: I0216 17:41:29.818503 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6tchx" event={"ID":"535576c7-2541-496c-be4e-75714ddcb6de","Type":"ContainerStarted","Data":"3467fc35f9ef09328e5042e7c9b999fbbf55616ba9a3ca4e8718d6e8d88be79c"} Feb 16 17:41:29.827923 master-0 kubenswrapper[4652]: I0216 17:41:29.827861 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-nxjgq" podStartSLOduration=3.3107881949999998 podStartE2EDuration="6.827839312s" podCreationTimestamp="2026-02-16 17:41:23 +0000 UTC" firstStartedPulling="2026-02-16 17:41:25.068687229 +0000 UTC m=+1042.456855745" lastFinishedPulling="2026-02-16 17:41:28.585738346 +0000 UTC m=+1045.973906862" observedRunningTime="2026-02-16 17:41:29.821350528 +0000 UTC m=+1047.209519044" watchObservedRunningTime="2026-02-16 17:41:29.827839312 +0000 UTC m=+1047.216007828" Feb 16 17:41:29.911931 master-0 kubenswrapper[4652]: I0216 17:41:29.911862 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f9e4-account-create-update-xch88" podStartSLOduration=1.911842714 podStartE2EDuration="1.911842714s" podCreationTimestamp="2026-02-16 17:41:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:41:29.893991695 +0000 UTC m=+1047.282160211" watchObservedRunningTime="2026-02-16 17:41:29.911842714 +0000 UTC m=+1047.300011230" Feb 16 17:41:29.919151 master-0 kubenswrapper[4652]: I0216 17:41:29.919071 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-npbng" podStartSLOduration=1.9190515970000002 podStartE2EDuration="1.919051597s" podCreationTimestamp="2026-02-16 17:41:28 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:41:29.911256248 +0000 UTC m=+1047.299424764" watchObservedRunningTime="2026-02-16 17:41:29.919051597 +0000 UTC m=+1047.307220113" Feb 16 17:41:30.809558 master-0 kubenswrapper[4652]: I0216 17:41:30.808822 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4wpdm"] Feb 16 17:41:30.817859 master-0 kubenswrapper[4652]: I0216 17:41:30.817805 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4wpdm"] Feb 16 17:41:30.829840 master-0 kubenswrapper[4652]: I0216 17:41:30.829786 4652 generic.go:334] "Generic (PLEG): container finished" podID="2e0fa216-316f-4f38-9522-e08e6741d57e" containerID="d8b2da7cf9a42f65f6dd3885917972be14314db250e4a9f48c5b9c6cd2313771" exitCode=0 Feb 16 17:41:30.830101 master-0 kubenswrapper[4652]: I0216 17:41:30.829864 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-npbng" event={"ID":"2e0fa216-316f-4f38-9522-e08e6741d57e","Type":"ContainerDied","Data":"d8b2da7cf9a42f65f6dd3885917972be14314db250e4a9f48c5b9c6cd2313771"} Feb 16 17:41:30.833210 master-0 kubenswrapper[4652]: I0216 17:41:30.831486 4652 generic.go:334] "Generic (PLEG): container finished" podID="2ccccd6a-a0cd-48cf-b8f9-234e97c490be" containerID="40077a0d99c67a4ca04132c2eb6b2a3b207b8ec9ea6e60e94db4e36517923258" exitCode=0 Feb 16 17:41:30.833210 master-0 kubenswrapper[4652]: I0216 17:41:30.831844 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f9e4-account-create-update-xch88" event={"ID":"2ccccd6a-a0cd-48cf-b8f9-234e97c490be","Type":"ContainerDied","Data":"40077a0d99c67a4ca04132c2eb6b2a3b207b8ec9ea6e60e94db4e36517923258"} Feb 16 17:41:31.172375 master-0 kubenswrapper[4652]: I0216 17:41:31.171789 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:31.172375 master-0 kubenswrapper[4652]: E0216 17:41:31.172056 4652 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 17:41:31.172375 master-0 kubenswrapper[4652]: E0216 17:41:31.172100 4652 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 17:41:31.172375 master-0 kubenswrapper[4652]: E0216 17:41:31.172170 4652 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift podName:9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2 nodeName:}" failed. No retries permitted until 2026-02-16 17:41:39.172148837 +0000 UTC m=+1056.560317353 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift") pod "swift-storage-0" (UID: "9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2") : configmap "swift-ring-files" not found Feb 16 17:41:31.440653 master-0 kubenswrapper[4652]: I0216 17:41:31.440587 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:31.454388 master-0 kubenswrapper[4652]: I0216 17:41:31.454320 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:31.584885 master-0 kubenswrapper[4652]: I0216 17:41:31.584812 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535576c7-2541-496c-be4e-75714ddcb6de-operator-scripts\") pod \"535576c7-2541-496c-be4e-75714ddcb6de\" (UID: \"535576c7-2541-496c-be4e-75714ddcb6de\") " Feb 16 17:41:31.585133 master-0 kubenswrapper[4652]: I0216 17:41:31.584933 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2vgx\" (UniqueName: \"kubernetes.io/projected/535576c7-2541-496c-be4e-75714ddcb6de-kube-api-access-w2vgx\") pod \"535576c7-2541-496c-be4e-75714ddcb6de\" (UID: \"535576c7-2541-496c-be4e-75714ddcb6de\") " Feb 16 17:41:31.585133 master-0 kubenswrapper[4652]: I0216 17:41:31.585012 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e389adde-527c-4092-adb1-8a9f5bab0a35-operator-scripts\") pod \"e389adde-527c-4092-adb1-8a9f5bab0a35\" (UID: \"e389adde-527c-4092-adb1-8a9f5bab0a35\") " Feb 16 17:41:31.585133 master-0 kubenswrapper[4652]: I0216 17:41:31.585064 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmctc\" (UniqueName: \"kubernetes.io/projected/e389adde-527c-4092-adb1-8a9f5bab0a35-kube-api-access-qmctc\") pod \"e389adde-527c-4092-adb1-8a9f5bab0a35\" (UID: \"e389adde-527c-4092-adb1-8a9f5bab0a35\") " Feb 16 17:41:31.585555 master-0 kubenswrapper[4652]: I0216 17:41:31.585520 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e389adde-527c-4092-adb1-8a9f5bab0a35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e389adde-527c-4092-adb1-8a9f5bab0a35" (UID: "e389adde-527c-4092-adb1-8a9f5bab0a35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:31.585555 master-0 kubenswrapper[4652]: I0216 17:41:31.585528 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/535576c7-2541-496c-be4e-75714ddcb6de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "535576c7-2541-496c-be4e-75714ddcb6de" (UID: "535576c7-2541-496c-be4e-75714ddcb6de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:31.588697 master-0 kubenswrapper[4652]: I0216 17:41:31.588664 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/535576c7-2541-496c-be4e-75714ddcb6de-kube-api-access-w2vgx" (OuterVolumeSpecName: "kube-api-access-w2vgx") pod "535576c7-2541-496c-be4e-75714ddcb6de" (UID: "535576c7-2541-496c-be4e-75714ddcb6de"). InnerVolumeSpecName "kube-api-access-w2vgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:31.593474 master-0 kubenswrapper[4652]: I0216 17:41:31.593431 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e389adde-527c-4092-adb1-8a9f5bab0a35-kube-api-access-qmctc" (OuterVolumeSpecName: "kube-api-access-qmctc") pod "e389adde-527c-4092-adb1-8a9f5bab0a35" (UID: "e389adde-527c-4092-adb1-8a9f5bab0a35"). 
InnerVolumeSpecName "kube-api-access-qmctc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:31.669008 master-0 kubenswrapper[4652]: I0216 17:41:31.668715 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" Feb 16 17:41:31.691952 master-0 kubenswrapper[4652]: I0216 17:41:31.689436 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e389adde-527c-4092-adb1-8a9f5bab0a35-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:31.691952 master-0 kubenswrapper[4652]: I0216 17:41:31.689492 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmctc\" (UniqueName: \"kubernetes.io/projected/e389adde-527c-4092-adb1-8a9f5bab0a35-kube-api-access-qmctc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:31.691952 master-0 kubenswrapper[4652]: I0216 17:41:31.689530 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/535576c7-2541-496c-be4e-75714ddcb6de-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:31.691952 master-0 kubenswrapper[4652]: I0216 17:41:31.689541 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2vgx\" (UniqueName: \"kubernetes.io/projected/535576c7-2541-496c-be4e-75714ddcb6de-kube-api-access-w2vgx\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:31.744525 master-0 kubenswrapper[4652]: I0216 17:41:31.744457 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nxsmd"] Feb 16 17:41:31.746277 master-0 kubenswrapper[4652]: I0216 17:41:31.746207 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" podUID="5895aff2-c4f5-42f3-a422-9ef5ea305756" containerName="dnsmasq-dns" containerID="cri-o://f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed" gracePeriod=10 Feb 16 17:41:31.843604 master-0 kubenswrapper[4652]: I0216 17:41:31.842288 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-0c92-account-create-update-vkjtr" event={"ID":"e389adde-527c-4092-adb1-8a9f5bab0a35","Type":"ContainerDied","Data":"3b2798a7d3f89be846246142217030e97ff9842352f6b69b509455f6842ae409"} Feb 16 17:41:31.843604 master-0 kubenswrapper[4652]: I0216 17:41:31.842339 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b2798a7d3f89be846246142217030e97ff9842352f6b69b509455f6842ae409" Feb 16 17:41:31.843604 master-0 kubenswrapper[4652]: I0216 17:41:31.842407 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-0c92-account-create-update-vkjtr" Feb 16 17:41:31.848274 master-0 kubenswrapper[4652]: I0216 17:41:31.846549 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6tchx" event={"ID":"535576c7-2541-496c-be4e-75714ddcb6de","Type":"ContainerDied","Data":"3467fc35f9ef09328e5042e7c9b999fbbf55616ba9a3ca4e8718d6e8d88be79c"} Feb 16 17:41:31.848274 master-0 kubenswrapper[4652]: I0216 17:41:31.846602 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3467fc35f9ef09328e5042e7c9b999fbbf55616ba9a3ca4e8718d6e8d88be79c" Feb 16 17:41:31.848274 master-0 kubenswrapper[4652]: I0216 17:41:31.846653 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6tchx" Feb 16 17:41:32.242456 master-0 kubenswrapper[4652]: I0216 17:41:32.235528 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 16 17:41:32.393178 master-0 kubenswrapper[4652]: I0216 17:41:32.392637 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-p5x2l"] Feb 16 17:41:32.393537 master-0 kubenswrapper[4652]: E0216 17:41:32.393501 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e389adde-527c-4092-adb1-8a9f5bab0a35" containerName="mariadb-account-create-update" Feb 16 17:41:32.393537 master-0 kubenswrapper[4652]: I0216 17:41:32.393529 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="e389adde-527c-4092-adb1-8a9f5bab0a35" containerName="mariadb-account-create-update" Feb 16 17:41:32.393622 master-0 kubenswrapper[4652]: E0216 17:41:32.393600 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="535576c7-2541-496c-be4e-75714ddcb6de" containerName="mariadb-database-create" Feb 16 17:41:32.393622 master-0 kubenswrapper[4652]: I0216 17:41:32.393612 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="535576c7-2541-496c-be4e-75714ddcb6de" containerName="mariadb-database-create" Feb 16 17:41:32.393947 master-0 kubenswrapper[4652]: I0216 17:41:32.393859 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="535576c7-2541-496c-be4e-75714ddcb6de" containerName="mariadb-database-create" Feb 16 17:41:32.394006 master-0 kubenswrapper[4652]: I0216 17:41:32.393962 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="e389adde-527c-4092-adb1-8a9f5bab0a35" containerName="mariadb-account-create-update" Feb 16 17:41:32.394813 master-0 kubenswrapper[4652]: I0216 17:41:32.394777 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:32.410581 master-0 kubenswrapper[4652]: I0216 17:41:32.410538 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-npbng" Feb 16 17:41:32.429285 master-0 kubenswrapper[4652]: I0216 17:41:32.425801 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p5x2l"] Feb 16 17:41:32.512689 master-0 kubenswrapper[4652]: I0216 17:41:32.512635 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0fa216-316f-4f38-9522-e08e6741d57e-operator-scripts\") pod \"2e0fa216-316f-4f38-9522-e08e6741d57e\" (UID: \"2e0fa216-316f-4f38-9522-e08e6741d57e\") " Feb 16 17:41:32.512899 master-0 kubenswrapper[4652]: I0216 17:41:32.512730 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dpkd\" (UniqueName: \"kubernetes.io/projected/2e0fa216-316f-4f38-9522-e08e6741d57e-kube-api-access-8dpkd\") pod \"2e0fa216-316f-4f38-9522-e08e6741d57e\" (UID: \"2e0fa216-316f-4f38-9522-e08e6741d57e\") " Feb 16 17:41:32.513452 master-0 kubenswrapper[4652]: I0216 17:41:32.513417 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953baf72-0755-451c-a083-0088cb99c43a-operator-scripts\") pod \"glance-db-create-p5x2l\" (UID: \"953baf72-0755-451c-a083-0088cb99c43a\") " pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:32.513534 master-0 kubenswrapper[4652]: I0216 17:41:32.513458 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vzn\" (UniqueName: \"kubernetes.io/projected/953baf72-0755-451c-a083-0088cb99c43a-kube-api-access-z8vzn\") pod \"glance-db-create-p5x2l\" (UID: \"953baf72-0755-451c-a083-0088cb99c43a\") " pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:32.514031 master-0 kubenswrapper[4652]: I0216 17:41:32.513997 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0fa216-316f-4f38-9522-e08e6741d57e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2e0fa216-316f-4f38-9522-e08e6741d57e" (UID: "2e0fa216-316f-4f38-9522-e08e6741d57e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:32.536566 master-0 kubenswrapper[4652]: I0216 17:41:32.536521 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0fa216-316f-4f38-9522-e08e6741d57e-kube-api-access-8dpkd" (OuterVolumeSpecName: "kube-api-access-8dpkd") pod "2e0fa216-316f-4f38-9522-e08e6741d57e" (UID: "2e0fa216-316f-4f38-9522-e08e6741d57e"). InnerVolumeSpecName "kube-api-access-8dpkd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:32.552631 master-0 kubenswrapper[4652]: I0216 17:41:32.552570 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-78fa-account-create-update-prrd4"] Feb 16 17:41:32.553138 master-0 kubenswrapper[4652]: E0216 17:41:32.553107 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e0fa216-316f-4f38-9522-e08e6741d57e" containerName="mariadb-database-create" Feb 16 17:41:32.553138 master-0 kubenswrapper[4652]: I0216 17:41:32.553130 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0fa216-316f-4f38-9522-e08e6741d57e" containerName="mariadb-database-create" Feb 16 17:41:32.553433 master-0 kubenswrapper[4652]: I0216 17:41:32.553416 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e0fa216-316f-4f38-9522-e08e6741d57e" containerName="mariadb-database-create" Feb 16 17:41:32.554270 master-0 kubenswrapper[4652]: I0216 17:41:32.554213 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:32.573330 master-0 kubenswrapper[4652]: I0216 17:41:32.568102 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 17:41:32.592108 master-0 kubenswrapper[4652]: I0216 17:41:32.592033 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-78fa-account-create-update-prrd4"] Feb 16 17:41:32.639030 master-0 kubenswrapper[4652]: I0216 17:41:32.618954 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st7pb\" (UniqueName: \"kubernetes.io/projected/268bd521-18a0-44a2-94af-c8b0d5fc62de-kube-api-access-st7pb\") pod \"glance-78fa-account-create-update-prrd4\" (UID: \"268bd521-18a0-44a2-94af-c8b0d5fc62de\") " pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:32.639030 master-0 kubenswrapper[4652]: I0216 17:41:32.619140 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953baf72-0755-451c-a083-0088cb99c43a-operator-scripts\") pod \"glance-db-create-p5x2l\" (UID: \"953baf72-0755-451c-a083-0088cb99c43a\") " pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:32.639030 master-0 kubenswrapper[4652]: I0216 17:41:32.619170 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8vzn\" (UniqueName: \"kubernetes.io/projected/953baf72-0755-451c-a083-0088cb99c43a-kube-api-access-z8vzn\") pod \"glance-db-create-p5x2l\" (UID: \"953baf72-0755-451c-a083-0088cb99c43a\") " pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:32.639030 master-0 kubenswrapper[4652]: I0216 17:41:32.619343 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268bd521-18a0-44a2-94af-c8b0d5fc62de-operator-scripts\") pod \"glance-78fa-account-create-update-prrd4\" (UID: \"268bd521-18a0-44a2-94af-c8b0d5fc62de\") " pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:32.639030 master-0 kubenswrapper[4652]: I0216 17:41:32.619500 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2e0fa216-316f-4f38-9522-e08e6741d57e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:32.639030 master-0 kubenswrapper[4652]: I0216 17:41:32.619515 4652 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dpkd\" (UniqueName: \"kubernetes.io/projected/2e0fa216-316f-4f38-9522-e08e6741d57e-kube-api-access-8dpkd\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:32.639030 master-0 kubenswrapper[4652]: I0216 17:41:32.620268 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953baf72-0755-451c-a083-0088cb99c43a-operator-scripts\") pod \"glance-db-create-p5x2l\" (UID: \"953baf72-0755-451c-a083-0088cb99c43a\") " pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:32.644087 master-0 kubenswrapper[4652]: I0216 17:41:32.644043 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8vzn\" (UniqueName: \"kubernetes.io/projected/953baf72-0755-451c-a083-0088cb99c43a-kube-api-access-z8vzn\") pod \"glance-db-create-p5x2l\" (UID: \"953baf72-0755-451c-a083-0088cb99c43a\") " pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:32.656534 master-0 kubenswrapper[4652]: I0216 17:41:32.656499 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:32.676353 master-0 kubenswrapper[4652]: I0216 17:41:32.676267 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:41:32.720598 master-0 kubenswrapper[4652]: I0216 17:41:32.720546 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-operator-scripts\") pod \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\" (UID: \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\") " Feb 16 17:41:32.720830 master-0 kubenswrapper[4652]: I0216 17:41:32.720664 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4vqw\" (UniqueName: \"kubernetes.io/projected/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-kube-api-access-k4vqw\") pod \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\" (UID: \"2ccccd6a-a0cd-48cf-b8f9-234e97c490be\") " Feb 16 17:41:32.720830 master-0 kubenswrapper[4652]: I0216 17:41:32.720816 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-dns-svc\") pod \"5895aff2-c4f5-42f3-a422-9ef5ea305756\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " Feb 16 17:41:32.720924 master-0 kubenswrapper[4652]: I0216 17:41:32.720886 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-config\") pod \"5895aff2-c4f5-42f3-a422-9ef5ea305756\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " Feb 16 17:41:32.720961 master-0 kubenswrapper[4652]: I0216 17:41:32.720929 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt5rg\" (UniqueName: \"kubernetes.io/projected/5895aff2-c4f5-42f3-a422-9ef5ea305756-kube-api-access-mt5rg\") pod \"5895aff2-c4f5-42f3-a422-9ef5ea305756\" (UID: \"5895aff2-c4f5-42f3-a422-9ef5ea305756\") " Feb 16 17:41:32.721558 master-0 kubenswrapper[4652]: I0216 17:41:32.721366 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268bd521-18a0-44a2-94af-c8b0d5fc62de-operator-scripts\") pod 
\"glance-78fa-account-create-update-prrd4\" (UID: \"268bd521-18a0-44a2-94af-c8b0d5fc62de\") " pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:32.721558 master-0 kubenswrapper[4652]: I0216 17:41:32.721531 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st7pb\" (UniqueName: \"kubernetes.io/projected/268bd521-18a0-44a2-94af-c8b0d5fc62de-kube-api-access-st7pb\") pod \"glance-78fa-account-create-update-prrd4\" (UID: \"268bd521-18a0-44a2-94af-c8b0d5fc62de\") " pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:32.722433 master-0 kubenswrapper[4652]: I0216 17:41:32.722408 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ccccd6a-a0cd-48cf-b8f9-234e97c490be" (UID: "2ccccd6a-a0cd-48cf-b8f9-234e97c490be"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:32.729230 master-0 kubenswrapper[4652]: I0216 17:41:32.729071 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268bd521-18a0-44a2-94af-c8b0d5fc62de-operator-scripts\") pod \"glance-78fa-account-create-update-prrd4\" (UID: \"268bd521-18a0-44a2-94af-c8b0d5fc62de\") " pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:32.730342 master-0 kubenswrapper[4652]: I0216 17:41:32.729554 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-kube-api-access-k4vqw" (OuterVolumeSpecName: "kube-api-access-k4vqw") pod "2ccccd6a-a0cd-48cf-b8f9-234e97c490be" (UID: "2ccccd6a-a0cd-48cf-b8f9-234e97c490be"). InnerVolumeSpecName "kube-api-access-k4vqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:32.730342 master-0 kubenswrapper[4652]: I0216 17:41:32.729594 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:32.739621 master-0 kubenswrapper[4652]: I0216 17:41:32.738986 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5895aff2-c4f5-42f3-a422-9ef5ea305756-kube-api-access-mt5rg" (OuterVolumeSpecName: "kube-api-access-mt5rg") pod "5895aff2-c4f5-42f3-a422-9ef5ea305756" (UID: "5895aff2-c4f5-42f3-a422-9ef5ea305756"). InnerVolumeSpecName "kube-api-access-mt5rg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:32.748103 master-0 kubenswrapper[4652]: I0216 17:41:32.746122 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st7pb\" (UniqueName: \"kubernetes.io/projected/268bd521-18a0-44a2-94af-c8b0d5fc62de-kube-api-access-st7pb\") pod \"glance-78fa-account-create-update-prrd4\" (UID: \"268bd521-18a0-44a2-94af-c8b0d5fc62de\") " pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:32.762669 master-0 kubenswrapper[4652]: I0216 17:41:32.762593 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0952820-4f4e-4a7e-b728-ffab772beea7" path="/var/lib/kubelet/pods/e0952820-4f4e-4a7e-b728-ffab772beea7/volumes" Feb 16 17:41:32.808276 master-0 kubenswrapper[4652]: I0216 17:41:32.808180 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5895aff2-c4f5-42f3-a422-9ef5ea305756" (UID: "5895aff2-c4f5-42f3-a422-9ef5ea305756"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:32.838387 master-0 kubenswrapper[4652]: I0216 17:41:32.825921 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:32.838387 master-0 kubenswrapper[4652]: I0216 17:41:32.825974 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4vqw\" (UniqueName: \"kubernetes.io/projected/2ccccd6a-a0cd-48cf-b8f9-234e97c490be-kube-api-access-k4vqw\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:32.838387 master-0 kubenswrapper[4652]: I0216 17:41:32.825990 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:32.838387 master-0 kubenswrapper[4652]: I0216 17:41:32.826000 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt5rg\" (UniqueName: \"kubernetes.io/projected/5895aff2-c4f5-42f3-a422-9ef5ea305756-kube-api-access-mt5rg\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:32.875030 master-0 kubenswrapper[4652]: I0216 17:41:32.874989 4652 generic.go:334] "Generic (PLEG): container finished" podID="5895aff2-c4f5-42f3-a422-9ef5ea305756" containerID="f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed" exitCode=0 Feb 16 17:41:32.876090 master-0 kubenswrapper[4652]: I0216 17:41:32.875061 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" event={"ID":"5895aff2-c4f5-42f3-a422-9ef5ea305756","Type":"ContainerDied","Data":"f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed"} Feb 16 17:41:32.876090 master-0 kubenswrapper[4652]: I0216 17:41:32.875097 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" event={"ID":"5895aff2-c4f5-42f3-a422-9ef5ea305756","Type":"ContainerDied","Data":"b4d3be7976baa41361819405a212bc6d13f4cdea5addc96b57d7ebc1e641f361"} Feb 16 17:41:32.876090 master-0 kubenswrapper[4652]: I0216 17:41:32.875092 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b98d7b55c-nxsmd" Feb 16 17:41:32.876090 master-0 kubenswrapper[4652]: I0216 17:41:32.875134 4652 scope.go:117] "RemoveContainer" containerID="f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed" Feb 16 17:41:32.880460 master-0 kubenswrapper[4652]: I0216 17:41:32.877810 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-npbng" event={"ID":"2e0fa216-316f-4f38-9522-e08e6741d57e","Type":"ContainerDied","Data":"797ebb3787ba14d5dead917436216b3d0b946ac1bcc795168b6f42777675ee27"} Feb 16 17:41:32.880460 master-0 kubenswrapper[4652]: I0216 17:41:32.877839 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797ebb3787ba14d5dead917436216b3d0b946ac1bcc795168b6f42777675ee27" Feb 16 17:41:32.880460 master-0 kubenswrapper[4652]: I0216 17:41:32.877858 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-npbng" Feb 16 17:41:32.880700 master-0 kubenswrapper[4652]: I0216 17:41:32.880654 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-config" (OuterVolumeSpecName: "config") pod "5895aff2-c4f5-42f3-a422-9ef5ea305756" (UID: "5895aff2-c4f5-42f3-a422-9ef5ea305756"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:32.881719 master-0 kubenswrapper[4652]: I0216 17:41:32.881567 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f9e4-account-create-update-xch88" event={"ID":"2ccccd6a-a0cd-48cf-b8f9-234e97c490be","Type":"ContainerDied","Data":"216ca207d1e9a50a745699ec1c9ee0efa036147238eaa833c09d5aaa3a190296"} Feb 16 17:41:32.881719 master-0 kubenswrapper[4652]: I0216 17:41:32.881599 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="216ca207d1e9a50a745699ec1c9ee0efa036147238eaa833c09d5aaa3a190296" Feb 16 17:41:32.881719 master-0 kubenswrapper[4652]: I0216 17:41:32.881673 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f9e4-account-create-update-xch88" Feb 16 17:41:32.902489 master-0 kubenswrapper[4652]: I0216 17:41:32.902443 4652 scope.go:117] "RemoveContainer" containerID="392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad" Feb 16 17:41:32.928800 master-0 kubenswrapper[4652]: I0216 17:41:32.928033 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5895aff2-c4f5-42f3-a422-9ef5ea305756-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:32.935478 master-0 kubenswrapper[4652]: I0216 17:41:32.935421 4652 scope.go:117] "RemoveContainer" containerID="f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed" Feb 16 17:41:32.945572 master-0 kubenswrapper[4652]: E0216 17:41:32.945507 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed\": container with ID starting with f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed not found: ID does not exist" containerID="f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed" Feb 16 17:41:32.945774 master-0 kubenswrapper[4652]: I0216 17:41:32.945570 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed"} err="failed to get container status \"f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed\": rpc error: code = NotFound desc = could not find container \"f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed\": container with ID starting with f56c573e8768b4052feae4cdd7b52f23d0230d2aba245698450cbd0b7b25a0ed not found: ID does not exist" Feb 16 17:41:32.945774 master-0 kubenswrapper[4652]: I0216 17:41:32.945601 4652 scope.go:117] "RemoveContainer" containerID="392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad" Feb 16 17:41:32.945986 master-0 kubenswrapper[4652]: E0216 17:41:32.945942 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad\": container with ID starting with 392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad not found: ID does not exist" containerID="392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad" Feb 16 17:41:32.945986 master-0 kubenswrapper[4652]: I0216 17:41:32.945978 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad"} err="failed to get container status \"392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad\": rpc error: code = NotFound desc = could not find container \"392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad\": container with ID starting with 392d1772e11d7eec338b5b67a6dbe97a791fd9fa547e44027768af0e416c51ad not found: ID does not exist" Feb 16 17:41:32.974057 master-0 kubenswrapper[4652]: I0216 17:41:32.974016 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:33.243127 master-0 kubenswrapper[4652]: I0216 17:41:33.242896 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nxsmd"] Feb 16 17:41:33.263575 master-0 kubenswrapper[4652]: I0216 17:41:33.263525 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b98d7b55c-nxsmd"] Feb 16 17:41:33.287715 master-0 kubenswrapper[4652]: I0216 17:41:33.287621 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-p5x2l"] Feb 16 17:41:33.432552 master-0 kubenswrapper[4652]: I0216 17:41:33.432479 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-78fa-account-create-update-prrd4"] Feb 16 17:41:33.444769 master-0 kubenswrapper[4652]: W0216 17:41:33.444724 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod268bd521_18a0_44a2_94af_c8b0d5fc62de.slice/crio-e62a82d3dcc8c89f0feec50669bc506232efb39b33bba2fd9356b0e7ffdac1b6 WatchSource:0}: Error finding container e62a82d3dcc8c89f0feec50669bc506232efb39b33bba2fd9356b0e7ffdac1b6: Status 404 returned error can't find the container with id e62a82d3dcc8c89f0feec50669bc506232efb39b33bba2fd9356b0e7ffdac1b6 Feb 16 17:41:33.906132 master-0 kubenswrapper[4652]: I0216 17:41:33.906013 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-78fa-account-create-update-prrd4" event={"ID":"268bd521-18a0-44a2-94af-c8b0d5fc62de","Type":"ContainerStarted","Data":"52ee62a4d9d8c1c4bbfa090ef576438b85370449ee8ea1632a4ffc10ca9f4340"} Feb 16 17:41:33.906701 master-0 kubenswrapper[4652]: I0216 17:41:33.906157 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-78fa-account-create-update-prrd4" event={"ID":"268bd521-18a0-44a2-94af-c8b0d5fc62de","Type":"ContainerStarted","Data":"e62a82d3dcc8c89f0feec50669bc506232efb39b33bba2fd9356b0e7ffdac1b6"} Feb 16 17:41:33.910125 master-0 kubenswrapper[4652]: I0216 17:41:33.910075 4652 generic.go:334] "Generic (PLEG): container finished" podID="953baf72-0755-451c-a083-0088cb99c43a" containerID="22cf9db35594d5cb2c5ec28d9923c7b2e2fe31d7c012f8c0c3bb3893ffd8444f" exitCode=0 Feb 16 17:41:33.910295 master-0 kubenswrapper[4652]: I0216 17:41:33.910166 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p5x2l" event={"ID":"953baf72-0755-451c-a083-0088cb99c43a","Type":"ContainerDied","Data":"22cf9db35594d5cb2c5ec28d9923c7b2e2fe31d7c012f8c0c3bb3893ffd8444f"} Feb 16 17:41:33.910295 master-0 kubenswrapper[4652]: I0216 17:41:33.910193 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p5x2l" event={"ID":"953baf72-0755-451c-a083-0088cb99c43a","Type":"ContainerStarted","Data":"0773bf4be7db3428d1c43ed5bb2c4a70a62afab93bd571bfa41a4b04d51178b0"} Feb 16 17:41:33.929028 master-0 kubenswrapper[4652]: I0216 17:41:33.928633 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-78fa-account-create-update-prrd4" podStartSLOduration=1.928613557 podStartE2EDuration="1.928613557s" podCreationTimestamp="2026-02-16 17:41:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:41:33.925270917 +0000 UTC m=+1051.313439433" watchObservedRunningTime="2026-02-16 17:41:33.928613557 +0000 UTC m=+1051.316782083" Feb 16 17:41:34.258334 master-0 
kubenswrapper[4652]: I0216 17:41:34.258270 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jm8b2"] Feb 16 17:41:34.259022 master-0 kubenswrapper[4652]: E0216 17:41:34.258988 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5895aff2-c4f5-42f3-a422-9ef5ea305756" containerName="dnsmasq-dns" Feb 16 17:41:34.259121 master-0 kubenswrapper[4652]: I0216 17:41:34.259090 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="5895aff2-c4f5-42f3-a422-9ef5ea305756" containerName="dnsmasq-dns" Feb 16 17:41:34.259223 master-0 kubenswrapper[4652]: E0216 17:41:34.259212 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5895aff2-c4f5-42f3-a422-9ef5ea305756" containerName="init" Feb 16 17:41:34.259314 master-0 kubenswrapper[4652]: I0216 17:41:34.259303 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="5895aff2-c4f5-42f3-a422-9ef5ea305756" containerName="init" Feb 16 17:41:34.259461 master-0 kubenswrapper[4652]: E0216 17:41:34.259450 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ccccd6a-a0cd-48cf-b8f9-234e97c490be" containerName="mariadb-account-create-update" Feb 16 17:41:34.259532 master-0 kubenswrapper[4652]: I0216 17:41:34.259521 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ccccd6a-a0cd-48cf-b8f9-234e97c490be" containerName="mariadb-account-create-update" Feb 16 17:41:34.259780 master-0 kubenswrapper[4652]: I0216 17:41:34.259769 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ccccd6a-a0cd-48cf-b8f9-234e97c490be" containerName="mariadb-account-create-update" Feb 16 17:41:34.259860 master-0 kubenswrapper[4652]: I0216 17:41:34.259850 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="5895aff2-c4f5-42f3-a422-9ef5ea305756" containerName="dnsmasq-dns" Feb 16 17:41:34.260708 master-0 kubenswrapper[4652]: I0216 17:41:34.260689 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:34.263883 master-0 kubenswrapper[4652]: I0216 17:41:34.263843 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 17:41:34.274444 master-0 kubenswrapper[4652]: I0216 17:41:34.274402 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jm8b2"] Feb 16 17:41:34.362993 master-0 kubenswrapper[4652]: I0216 17:41:34.362931 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21472637-29f7-4764-ba99-d0b7d6ccdaa4-operator-scripts\") pod \"root-account-create-update-jm8b2\" (UID: \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\") " pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:34.363197 master-0 kubenswrapper[4652]: I0216 17:41:34.363077 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jz84\" (UniqueName: \"kubernetes.io/projected/21472637-29f7-4764-ba99-d0b7d6ccdaa4-kube-api-access-8jz84\") pod \"root-account-create-update-jm8b2\" (UID: \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\") " pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:34.464916 master-0 kubenswrapper[4652]: I0216 17:41:34.464830 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21472637-29f7-4764-ba99-d0b7d6ccdaa4-operator-scripts\") pod \"root-account-create-update-jm8b2\" (UID: \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\") " pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:34.465200 master-0 kubenswrapper[4652]: I0216 17:41:34.464985 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jz84\" (UniqueName: \"kubernetes.io/projected/21472637-29f7-4764-ba99-d0b7d6ccdaa4-kube-api-access-8jz84\") pod \"root-account-create-update-jm8b2\" (UID: \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\") " pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:34.466206 master-0 kubenswrapper[4652]: I0216 17:41:34.466139 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21472637-29f7-4764-ba99-d0b7d6ccdaa4-operator-scripts\") pod \"root-account-create-update-jm8b2\" (UID: \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\") " pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:34.480994 master-0 kubenswrapper[4652]: I0216 17:41:34.480941 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jz84\" (UniqueName: \"kubernetes.io/projected/21472637-29f7-4764-ba99-d0b7d6ccdaa4-kube-api-access-8jz84\") pod \"root-account-create-update-jm8b2\" (UID: \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\") " pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:34.592501 master-0 kubenswrapper[4652]: I0216 17:41:34.591870 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:34.785098 master-0 kubenswrapper[4652]: I0216 17:41:34.785018 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5895aff2-c4f5-42f3-a422-9ef5ea305756" path="/var/lib/kubelet/pods/5895aff2-c4f5-42f3-a422-9ef5ea305756/volumes" Feb 16 17:41:34.920821 master-0 kubenswrapper[4652]: I0216 17:41:34.920779 4652 generic.go:334] "Generic (PLEG): container finished" podID="268bd521-18a0-44a2-94af-c8b0d5fc62de" containerID="52ee62a4d9d8c1c4bbfa090ef576438b85370449ee8ea1632a4ffc10ca9f4340" exitCode=0 Feb 16 17:41:34.922238 master-0 kubenswrapper[4652]: I0216 17:41:34.921000 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-78fa-account-create-update-prrd4" event={"ID":"268bd521-18a0-44a2-94af-c8b0d5fc62de","Type":"ContainerDied","Data":"52ee62a4d9d8c1c4bbfa090ef576438b85370449ee8ea1632a4ffc10ca9f4340"} Feb 16 17:41:35.190104 master-0 kubenswrapper[4652]: W0216 17:41:35.189209 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21472637_29f7_4764_ba99_d0b7d6ccdaa4.slice/crio-62f1f33d5937a15b74ccbc45a1a6307315088f7c6b83298b36163b1125db1a68 WatchSource:0}: Error finding container 62f1f33d5937a15b74ccbc45a1a6307315088f7c6b83298b36163b1125db1a68: Status 404 returned error can't find the container with id 62f1f33d5937a15b74ccbc45a1a6307315088f7c6b83298b36163b1125db1a68 Feb 16 17:41:35.203860 master-0 kubenswrapper[4652]: I0216 17:41:35.203790 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jm8b2"] Feb 16 17:41:35.473210 master-0 kubenswrapper[4652]: I0216 17:41:35.473171 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:35.590609 master-0 kubenswrapper[4652]: I0216 17:41:35.590547 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8vzn\" (UniqueName: \"kubernetes.io/projected/953baf72-0755-451c-a083-0088cb99c43a-kube-api-access-z8vzn\") pod \"953baf72-0755-451c-a083-0088cb99c43a\" (UID: \"953baf72-0755-451c-a083-0088cb99c43a\") " Feb 16 17:41:35.590878 master-0 kubenswrapper[4652]: I0216 17:41:35.590852 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953baf72-0755-451c-a083-0088cb99c43a-operator-scripts\") pod \"953baf72-0755-451c-a083-0088cb99c43a\" (UID: \"953baf72-0755-451c-a083-0088cb99c43a\") " Feb 16 17:41:35.591345 master-0 kubenswrapper[4652]: I0216 17:41:35.591302 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953baf72-0755-451c-a083-0088cb99c43a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "953baf72-0755-451c-a083-0088cb99c43a" (UID: "953baf72-0755-451c-a083-0088cb99c43a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:35.591604 master-0 kubenswrapper[4652]: I0216 17:41:35.591575 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/953baf72-0755-451c-a083-0088cb99c43a-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:35.614909 master-0 kubenswrapper[4652]: I0216 17:41:35.614854 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953baf72-0755-451c-a083-0088cb99c43a-kube-api-access-z8vzn" (OuterVolumeSpecName: "kube-api-access-z8vzn") pod "953baf72-0755-451c-a083-0088cb99c43a" (UID: "953baf72-0755-451c-a083-0088cb99c43a"). InnerVolumeSpecName "kube-api-access-z8vzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:35.693185 master-0 kubenswrapper[4652]: I0216 17:41:35.693095 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8vzn\" (UniqueName: \"kubernetes.io/projected/953baf72-0755-451c-a083-0088cb99c43a-kube-api-access-z8vzn\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:35.930512 master-0 kubenswrapper[4652]: I0216 17:41:35.930468 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-p5x2l" Feb 16 17:41:35.931009 master-0 kubenswrapper[4652]: I0216 17:41:35.930468 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-p5x2l" event={"ID":"953baf72-0755-451c-a083-0088cb99c43a","Type":"ContainerDied","Data":"0773bf4be7db3428d1c43ed5bb2c4a70a62afab93bd571bfa41a4b04d51178b0"} Feb 16 17:41:35.931009 master-0 kubenswrapper[4652]: I0216 17:41:35.930598 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0773bf4be7db3428d1c43ed5bb2c4a70a62afab93bd571bfa41a4b04d51178b0" Feb 16 17:41:35.932410 master-0 kubenswrapper[4652]: I0216 17:41:35.932376 4652 generic.go:334] "Generic (PLEG): container finished" podID="21472637-29f7-4764-ba99-d0b7d6ccdaa4" containerID="8c4bd38b59d17d0c89cd7c5883431356086cba2c97e1fb9be66cc23ec4b985ef" exitCode=0 Feb 16 17:41:35.932410 master-0 kubenswrapper[4652]: I0216 17:41:35.932399 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jm8b2" event={"ID":"21472637-29f7-4764-ba99-d0b7d6ccdaa4","Type":"ContainerDied","Data":"8c4bd38b59d17d0c89cd7c5883431356086cba2c97e1fb9be66cc23ec4b985ef"} Feb 16 17:41:35.932519 master-0 kubenswrapper[4652]: I0216 17:41:35.932426 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jm8b2" event={"ID":"21472637-29f7-4764-ba99-d0b7d6ccdaa4","Type":"ContainerStarted","Data":"62f1f33d5937a15b74ccbc45a1a6307315088f7c6b83298b36163b1125db1a68"} Feb 16 17:41:35.934315 master-0 kubenswrapper[4652]: I0216 17:41:35.934267 4652 generic.go:334] "Generic (PLEG): container finished" podID="2a714374-e581-4c12-9e7f-5060fd746f10" containerID="4efa87a749035657dbb6080eb6d3906035357ab1a0c2d3ece107018fbc93b7cc" exitCode=0 Feb 16 17:41:35.934315 master-0 kubenswrapper[4652]: I0216 17:41:35.934298 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nxjgq" event={"ID":"2a714374-e581-4c12-9e7f-5060fd746f10","Type":"ContainerDied","Data":"4efa87a749035657dbb6080eb6d3906035357ab1a0c2d3ece107018fbc93b7cc"} Feb 16 17:41:36.335641 master-0 kubenswrapper[4652]: I0216 17:41:36.335567 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:36.408753 master-0 kubenswrapper[4652]: I0216 17:41:36.407738 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st7pb\" (UniqueName: \"kubernetes.io/projected/268bd521-18a0-44a2-94af-c8b0d5fc62de-kube-api-access-st7pb\") pod \"268bd521-18a0-44a2-94af-c8b0d5fc62de\" (UID: \"268bd521-18a0-44a2-94af-c8b0d5fc62de\") " Feb 16 17:41:36.408753 master-0 kubenswrapper[4652]: I0216 17:41:36.408028 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268bd521-18a0-44a2-94af-c8b0d5fc62de-operator-scripts\") pod \"268bd521-18a0-44a2-94af-c8b0d5fc62de\" (UID: \"268bd521-18a0-44a2-94af-c8b0d5fc62de\") " Feb 16 17:41:36.408753 master-0 kubenswrapper[4652]: I0216 17:41:36.408620 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/268bd521-18a0-44a2-94af-c8b0d5fc62de-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "268bd521-18a0-44a2-94af-c8b0d5fc62de" (UID: "268bd521-18a0-44a2-94af-c8b0d5fc62de"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:36.411473 master-0 kubenswrapper[4652]: I0216 17:41:36.411425 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268bd521-18a0-44a2-94af-c8b0d5fc62de-kube-api-access-st7pb" (OuterVolumeSpecName: "kube-api-access-st7pb") pod "268bd521-18a0-44a2-94af-c8b0d5fc62de" (UID: "268bd521-18a0-44a2-94af-c8b0d5fc62de"). InnerVolumeSpecName "kube-api-access-st7pb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:36.512183 master-0 kubenswrapper[4652]: I0216 17:41:36.512034 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/268bd521-18a0-44a2-94af-c8b0d5fc62de-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:36.512183 master-0 kubenswrapper[4652]: I0216 17:41:36.512090 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st7pb\" (UniqueName: \"kubernetes.io/projected/268bd521-18a0-44a2-94af-c8b0d5fc62de-kube-api-access-st7pb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:36.946354 master-0 kubenswrapper[4652]: I0216 17:41:36.946234 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-78fa-account-create-update-prrd4" event={"ID":"268bd521-18a0-44a2-94af-c8b0d5fc62de","Type":"ContainerDied","Data":"e62a82d3dcc8c89f0feec50669bc506232efb39b33bba2fd9356b0e7ffdac1b6"} Feb 16 17:41:36.946354 master-0 kubenswrapper[4652]: I0216 17:41:36.946311 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-78fa-account-create-update-prrd4" Feb 16 17:41:36.946992 master-0 kubenswrapper[4652]: I0216 17:41:36.946370 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e62a82d3dcc8c89f0feec50669bc506232efb39b33bba2fd9356b0e7ffdac1b6" Feb 16 17:41:37.336641 master-0 kubenswrapper[4652]: I0216 17:41:37.336604 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:37.451794 master-0 kubenswrapper[4652]: I0216 17:41:37.451479 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-combined-ca-bundle\") pod \"2a714374-e581-4c12-9e7f-5060fd746f10\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " Feb 16 17:41:37.451794 master-0 kubenswrapper[4652]: I0216 17:41:37.451559 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-scripts\") pod \"2a714374-e581-4c12-9e7f-5060fd746f10\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " Feb 16 17:41:37.451794 master-0 kubenswrapper[4652]: I0216 17:41:37.451644 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a714374-e581-4c12-9e7f-5060fd746f10-etc-swift\") pod \"2a714374-e581-4c12-9e7f-5060fd746f10\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " Feb 16 17:41:37.451794 master-0 kubenswrapper[4652]: I0216 17:41:37.451664 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-ring-data-devices\") pod \"2a714374-e581-4c12-9e7f-5060fd746f10\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " Feb 16 17:41:37.451794 master-0 kubenswrapper[4652]: I0216 17:41:37.451715 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-dispersionconf\") pod \"2a714374-e581-4c12-9e7f-5060fd746f10\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " Feb 16 17:41:37.451794 master-0 kubenswrapper[4652]: I0216 17:41:37.451735 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-swiftconf\") pod \"2a714374-e581-4c12-9e7f-5060fd746f10\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " Feb 16 17:41:37.451794 master-0 kubenswrapper[4652]: I0216 17:41:37.451786 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv4nk\" (UniqueName: \"kubernetes.io/projected/2a714374-e581-4c12-9e7f-5060fd746f10-kube-api-access-cv4nk\") pod \"2a714374-e581-4c12-9e7f-5060fd746f10\" (UID: \"2a714374-e581-4c12-9e7f-5060fd746f10\") " Feb 16 17:41:37.452655 master-0 kubenswrapper[4652]: I0216 17:41:37.452376 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2a714374-e581-4c12-9e7f-5060fd746f10" (UID: "2a714374-e581-4c12-9e7f-5060fd746f10"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:37.453331 master-0 kubenswrapper[4652]: I0216 17:41:37.453200 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a714374-e581-4c12-9e7f-5060fd746f10-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2a714374-e581-4c12-9e7f-5060fd746f10" (UID: "2a714374-e581-4c12-9e7f-5060fd746f10"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:41:37.456868 master-0 kubenswrapper[4652]: I0216 17:41:37.455643 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a714374-e581-4c12-9e7f-5060fd746f10-kube-api-access-cv4nk" (OuterVolumeSpecName: "kube-api-access-cv4nk") pod "2a714374-e581-4c12-9e7f-5060fd746f10" (UID: "2a714374-e581-4c12-9e7f-5060fd746f10"). InnerVolumeSpecName "kube-api-access-cv4nk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:37.458941 master-0 kubenswrapper[4652]: I0216 17:41:37.458869 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2a714374-e581-4c12-9e7f-5060fd746f10" (UID: "2a714374-e581-4c12-9e7f-5060fd746f10"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:41:37.478316 master-0 kubenswrapper[4652]: I0216 17:41:37.478077 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-scripts" (OuterVolumeSpecName: "scripts") pod "2a714374-e581-4c12-9e7f-5060fd746f10" (UID: "2a714374-e581-4c12-9e7f-5060fd746f10"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:37.480825 master-0 kubenswrapper[4652]: I0216 17:41:37.480773 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2a714374-e581-4c12-9e7f-5060fd746f10" (UID: "2a714374-e581-4c12-9e7f-5060fd746f10"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:41:37.481910 master-0 kubenswrapper[4652]: I0216 17:41:37.481825 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a714374-e581-4c12-9e7f-5060fd746f10" (UID: "2a714374-e581-4c12-9e7f-5060fd746f10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:41:37.556539 master-0 kubenswrapper[4652]: I0216 17:41:37.556476 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:37.558523 master-0 kubenswrapper[4652]: I0216 17:41:37.558477 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:37.558523 master-0 kubenswrapper[4652]: I0216 17:41:37.558520 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:37.558645 master-0 kubenswrapper[4652]: I0216 17:41:37.558537 4652 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2a714374-e581-4c12-9e7f-5060fd746f10-etc-swift\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:37.558645 master-0 kubenswrapper[4652]: I0216 17:41:37.558552 4652 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2a714374-e581-4c12-9e7f-5060fd746f10-ring-data-devices\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:37.558645 master-0 kubenswrapper[4652]: I0216 17:41:37.558565 4652 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-dispersionconf\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:37.558645 master-0 kubenswrapper[4652]: I0216 17:41:37.558578 4652 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2a714374-e581-4c12-9e7f-5060fd746f10-swiftconf\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:37.558645 master-0 kubenswrapper[4652]: I0216 17:41:37.558590 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv4nk\" (UniqueName: \"kubernetes.io/projected/2a714374-e581-4c12-9e7f-5060fd746f10-kube-api-access-cv4nk\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:37.651633 master-0 kubenswrapper[4652]: I0216 17:41:37.651570 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-fd8th"] Feb 16 17:41:37.652186 master-0 kubenswrapper[4652]: E0216 17:41:37.652157 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="953baf72-0755-451c-a083-0088cb99c43a" containerName="mariadb-database-create" Feb 16 17:41:37.652186 master-0 kubenswrapper[4652]: I0216 17:41:37.652182 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="953baf72-0755-451c-a083-0088cb99c43a" containerName="mariadb-database-create" Feb 16 17:41:37.652315 master-0 kubenswrapper[4652]: E0216 17:41:37.652205 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a714374-e581-4c12-9e7f-5060fd746f10" containerName="swift-ring-rebalance" Feb 16 17:41:37.652315 master-0 kubenswrapper[4652]: I0216 17:41:37.652213 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a714374-e581-4c12-9e7f-5060fd746f10" containerName="swift-ring-rebalance" Feb 16 17:41:37.652315 master-0 kubenswrapper[4652]: E0216 17:41:37.652265 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21472637-29f7-4764-ba99-d0b7d6ccdaa4" containerName="mariadb-account-create-update" Feb 16 17:41:37.652315 master-0 kubenswrapper[4652]: I0216 17:41:37.652273 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="21472637-29f7-4764-ba99-d0b7d6ccdaa4" containerName="mariadb-account-create-update" Feb 16 
17:41:37.652315 master-0 kubenswrapper[4652]: E0216 17:41:37.652289 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268bd521-18a0-44a2-94af-c8b0d5fc62de" containerName="mariadb-account-create-update" Feb 16 17:41:37.652315 master-0 kubenswrapper[4652]: I0216 17:41:37.652297 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="268bd521-18a0-44a2-94af-c8b0d5fc62de" containerName="mariadb-account-create-update" Feb 16 17:41:37.652719 master-0 kubenswrapper[4652]: I0216 17:41:37.652698 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="21472637-29f7-4764-ba99-d0b7d6ccdaa4" containerName="mariadb-account-create-update" Feb 16 17:41:37.652773 master-0 kubenswrapper[4652]: I0216 17:41:37.652744 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="268bd521-18a0-44a2-94af-c8b0d5fc62de" containerName="mariadb-account-create-update" Feb 16 17:41:37.652815 master-0 kubenswrapper[4652]: I0216 17:41:37.652793 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a714374-e581-4c12-9e7f-5060fd746f10" containerName="swift-ring-rebalance" Feb 16 17:41:37.652815 master-0 kubenswrapper[4652]: I0216 17:41:37.652806 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="953baf72-0755-451c-a083-0088cb99c43a" containerName="mariadb-database-create" Feb 16 17:41:37.654050 master-0 kubenswrapper[4652]: I0216 17:41:37.654028 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.657043 master-0 kubenswrapper[4652]: I0216 17:41:37.657004 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-50e08-config-data" Feb 16 17:41:37.661006 master-0 kubenswrapper[4652]: I0216 17:41:37.659187 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jz84\" (UniqueName: \"kubernetes.io/projected/21472637-29f7-4764-ba99-d0b7d6ccdaa4-kube-api-access-8jz84\") pod \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\" (UID: \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\") " Feb 16 17:41:37.661006 master-0 kubenswrapper[4652]: I0216 17:41:37.659233 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21472637-29f7-4764-ba99-d0b7d6ccdaa4-operator-scripts\") pod \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\" (UID: \"21472637-29f7-4764-ba99-d0b7d6ccdaa4\") " Feb 16 17:41:37.661006 master-0 kubenswrapper[4652]: I0216 17:41:37.660287 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21472637-29f7-4764-ba99-d0b7d6ccdaa4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21472637-29f7-4764-ba99-d0b7d6ccdaa4" (UID: "21472637-29f7-4764-ba99-d0b7d6ccdaa4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:37.664903 master-0 kubenswrapper[4652]: I0216 17:41:37.664031 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21472637-29f7-4764-ba99-d0b7d6ccdaa4-kube-api-access-8jz84" (OuterVolumeSpecName: "kube-api-access-8jz84") pod "21472637-29f7-4764-ba99-d0b7d6ccdaa4" (UID: "21472637-29f7-4764-ba99-d0b7d6ccdaa4"). InnerVolumeSpecName "kube-api-access-8jz84". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:37.670017 master-0 kubenswrapper[4652]: I0216 17:41:37.669959 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fd8th"] Feb 16 17:41:37.741895 master-0 kubenswrapper[4652]: E0216 17:41:37.741748 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda39e35c_827b_40f3_8359_db6934118af4.slice/crio-2fcff6a59f4d5df284aaafe56df6e773582ee2a99a7c2f208bfa34f4ec111fb6.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:41:37.742112 master-0 kubenswrapper[4652]: E0216 17:41:37.741980 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda39e35c_827b_40f3_8359_db6934118af4.slice/crio-2fcff6a59f4d5df284aaafe56df6e773582ee2a99a7c2f208bfa34f4ec111fb6.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:41:37.761631 master-0 kubenswrapper[4652]: I0216 17:41:37.761563 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-combined-ca-bundle\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.761879 master-0 kubenswrapper[4652]: I0216 17:41:37.761762 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-config-data\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.761974 master-0 kubenswrapper[4652]: I0216 17:41:37.761928 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngnpb\" (UniqueName: \"kubernetes.io/projected/f977e20f-2501-45b2-b8d1-2dc333899a52-kube-api-access-ngnpb\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.762141 master-0 kubenswrapper[4652]: I0216 17:41:37.762114 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-db-sync-config-data\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.762303 master-0 kubenswrapper[4652]: I0216 17:41:37.762283 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jz84\" (UniqueName: \"kubernetes.io/projected/21472637-29f7-4764-ba99-d0b7d6ccdaa4-kube-api-access-8jz84\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:37.762303 master-0 kubenswrapper[4652]: I0216 17:41:37.762302 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21472637-29f7-4764-ba99-d0b7d6ccdaa4-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:37.851022 master-0 kubenswrapper[4652]: I0216 17:41:37.850896 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-5qcmk" podUID="c938ffa5-271f-4685-b9c2-c236001d07b4" containerName="ovn-controller" 
probeResult="failure" output=< Feb 16 17:41:37.851022 master-0 kubenswrapper[4652]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 16 17:41:37.851022 master-0 kubenswrapper[4652]: > Feb 16 17:41:37.864697 master-0 kubenswrapper[4652]: I0216 17:41:37.863681 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:41:37.865881 master-0 kubenswrapper[4652]: I0216 17:41:37.865001 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-config-data\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.865881 master-0 kubenswrapper[4652]: I0216 17:41:37.865225 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngnpb\" (UniqueName: \"kubernetes.io/projected/f977e20f-2501-45b2-b8d1-2dc333899a52-kube-api-access-ngnpb\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.865881 master-0 kubenswrapper[4652]: I0216 17:41:37.865841 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-db-sync-config-data\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.866032 master-0 kubenswrapper[4652]: I0216 17:41:37.865904 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-combined-ca-bundle\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.869566 master-0 kubenswrapper[4652]: I0216 17:41:37.869281 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-combined-ca-bundle\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.869796 master-0 kubenswrapper[4652]: I0216 17:41:37.869767 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-db-sync-config-data\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.870215 master-0 kubenswrapper[4652]: I0216 17:41:37.870130 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-config-data\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.876044 master-0 kubenswrapper[4652]: I0216 17:41:37.876005 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-bmlhg" Feb 16 17:41:37.881493 master-0 kubenswrapper[4652]: I0216 17:41:37.881460 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngnpb\" (UniqueName: 
\"kubernetes.io/projected/f977e20f-2501-45b2-b8d1-2dc333899a52-kube-api-access-ngnpb\") pod \"glance-db-sync-fd8th\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") " pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:37.957044 master-0 kubenswrapper[4652]: I0216 17:41:37.956985 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-nxjgq" event={"ID":"2a714374-e581-4c12-9e7f-5060fd746f10","Type":"ContainerDied","Data":"67de9c9191e44a41e5e3a0d9e97b7f74c08a496f15d237d751acb614451d5a57"} Feb 16 17:41:37.957044 master-0 kubenswrapper[4652]: I0216 17:41:37.957042 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67de9c9191e44a41e5e3a0d9e97b7f74c08a496f15d237d751acb614451d5a57" Feb 16 17:41:37.957044 master-0 kubenswrapper[4652]: I0216 17:41:37.957001 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-nxjgq" Feb 16 17:41:37.959419 master-0 kubenswrapper[4652]: I0216 17:41:37.959365 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jm8b2" event={"ID":"21472637-29f7-4764-ba99-d0b7d6ccdaa4","Type":"ContainerDied","Data":"62f1f33d5937a15b74ccbc45a1a6307315088f7c6b83298b36163b1125db1a68"} Feb 16 17:41:37.959419 master-0 kubenswrapper[4652]: I0216 17:41:37.959399 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62f1f33d5937a15b74ccbc45a1a6307315088f7c6b83298b36163b1125db1a68" Feb 16 17:41:37.959778 master-0 kubenswrapper[4652]: I0216 17:41:37.959724 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jm8b2" Feb 16 17:41:37.963417 master-0 kubenswrapper[4652]: I0216 17:41:37.963389 4652 generic.go:334] "Generic (PLEG): container finished" podID="da39e35c-827b-40f3-8359-db6934118af4" containerID="2fcff6a59f4d5df284aaafe56df6e773582ee2a99a7c2f208bfa34f4ec111fb6" exitCode=0 Feb 16 17:41:37.963547 master-0 kubenswrapper[4652]: I0216 17:41:37.963488 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da39e35c-827b-40f3-8359-db6934118af4","Type":"ContainerDied","Data":"2fcff6a59f4d5df284aaafe56df6e773582ee2a99a7c2f208bfa34f4ec111fb6"} Feb 16 17:41:38.011499 master-0 kubenswrapper[4652]: I0216 17:41:38.010589 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fd8th" Feb 16 17:41:38.111734 master-0 kubenswrapper[4652]: I0216 17:41:38.109658 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-5qcmk-config-tj7kp"] Feb 16 17:41:38.111914 master-0 kubenswrapper[4652]: I0216 17:41:38.111727 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.122315 master-0 kubenswrapper[4652]: I0216 17:41:38.118700 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 16 17:41:38.151045 master-0 kubenswrapper[4652]: I0216 17:41:38.148793 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5qcmk-config-tj7kp"] Feb 16 17:41:38.200496 master-0 kubenswrapper[4652]: I0216 17:41:38.197375 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqktr\" (UniqueName: \"kubernetes.io/projected/6d9f45af-6d96-4de6-8abd-3cf6f5857473-kube-api-access-hqktr\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.200496 master-0 kubenswrapper[4652]: I0216 17:41:38.197517 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run-ovn\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.200496 master-0 kubenswrapper[4652]: I0216 17:41:38.197538 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-scripts\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.200496 master-0 kubenswrapper[4652]: I0216 17:41:38.197685 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-additional-scripts\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.200496 master-0 kubenswrapper[4652]: I0216 17:41:38.197880 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-log-ovn\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.200496 master-0 kubenswrapper[4652]: I0216 17:41:38.198054 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.300314 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run-ovn\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.300482 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run-ovn\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.300493 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-scripts\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.300649 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-additional-scripts\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.300863 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-log-ovn\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.300988 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-log-ovn\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.301043 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.301135 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.301410 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqktr\" (UniqueName: \"kubernetes.io/projected/6d9f45af-6d96-4de6-8abd-3cf6f5857473-kube-api-access-hqktr\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.302394 master-0 kubenswrapper[4652]: I0216 17:41:38.301604 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-additional-scripts\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.303312 master-0 kubenswrapper[4652]: I0216 
17:41:38.303291 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-scripts\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.319509 master-0 kubenswrapper[4652]: I0216 17:41:38.318934 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqktr\" (UniqueName: \"kubernetes.io/projected/6d9f45af-6d96-4de6-8abd-3cf6f5857473-kube-api-access-hqktr\") pod \"ovn-controller-5qcmk-config-tj7kp\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.505813 master-0 kubenswrapper[4652]: I0216 17:41:38.505766 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:38.659234 master-0 kubenswrapper[4652]: I0216 17:41:38.659172 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fd8th"] Feb 16 17:41:38.978604 master-0 kubenswrapper[4652]: I0216 17:41:38.978554 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-5qcmk-config-tj7kp"] Feb 16 17:41:38.983868 master-0 kubenswrapper[4652]: I0216 17:41:38.981978 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da39e35c-827b-40f3-8359-db6934118af4","Type":"ContainerStarted","Data":"65bb4d28c3f34a1a1e19d0502236a1ed67ae17ce632538f3aa0d25d18bbc9e82"} Feb 16 17:41:38.983868 master-0 kubenswrapper[4652]: I0216 17:41:38.982184 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 17:41:38.983868 master-0 kubenswrapper[4652]: I0216 17:41:38.983644 4652 generic.go:334] "Generic (PLEG): container finished" podID="90814419-59bd-4110-8afa-6842e5fa7b95" containerID="cae25c00637e89ee4db2640f6f994b062e7a8bf7bd5b5cf75f08008fe3e765d6" exitCode=0 Feb 16 17:41:38.983868 master-0 kubenswrapper[4652]: I0216 17:41:38.983690 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90814419-59bd-4110-8afa-6842e5fa7b95","Type":"ContainerDied","Data":"cae25c00637e89ee4db2640f6f994b062e7a8bf7bd5b5cf75f08008fe3e765d6"} Feb 16 17:41:38.987335 master-0 kubenswrapper[4652]: I0216 17:41:38.985197 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fd8th" event={"ID":"f977e20f-2501-45b2-b8d1-2dc333899a52","Type":"ContainerStarted","Data":"4280ccb2474f723821fbba28a278fa4861e9961e862126e90c717e8c96d5b770"} Feb 16 17:41:39.014518 master-0 kubenswrapper[4652]: I0216 17:41:39.014159 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.006832487 podStartE2EDuration="57.014137948s" podCreationTimestamp="2026-02-16 17:40:42 +0000 UTC" firstStartedPulling="2026-02-16 17:40:58.07944999 +0000 UTC m=+1015.467618506" lastFinishedPulling="2026-02-16 17:41:04.086755451 +0000 UTC m=+1021.474923967" observedRunningTime="2026-02-16 17:41:39.008843256 +0000 UTC m=+1056.397011812" watchObservedRunningTime="2026-02-16 17:41:39.014137948 +0000 UTC m=+1056.402306464" Feb 16 17:41:39.229731 master-0 kubenswrapper[4652]: I0216 17:41:39.229663 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:39.238681 master-0 kubenswrapper[4652]: I0216 17:41:39.238617 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2-etc-swift\") pod \"swift-storage-0\" (UID: \"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2\") " pod="openstack/swift-storage-0" Feb 16 17:41:39.390943 master-0 kubenswrapper[4652]: I0216 17:41:39.390897 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 17:41:39.961798 master-0 kubenswrapper[4652]: I0216 17:41:39.958677 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 17:41:40.002534 master-0 kubenswrapper[4652]: I0216 17:41:40.001339 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"10f1db9b23e0d8179ecff6b97198795937e21cd5463bf5f04e2e4aa8a6798fc9"} Feb 16 17:41:40.007888 master-0 kubenswrapper[4652]: I0216 17:41:40.007809 4652 generic.go:334] "Generic (PLEG): container finished" podID="6d9f45af-6d96-4de6-8abd-3cf6f5857473" containerID="56e38ccc40c2cb0eb7ce018278579a1136e729c03eba950e9876c0cac1576931" exitCode=0 Feb 16 17:41:40.008299 master-0 kubenswrapper[4652]: I0216 17:41:40.007964 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qcmk-config-tj7kp" event={"ID":"6d9f45af-6d96-4de6-8abd-3cf6f5857473","Type":"ContainerDied","Data":"56e38ccc40c2cb0eb7ce018278579a1136e729c03eba950e9876c0cac1576931"} Feb 16 17:41:40.008299 master-0 kubenswrapper[4652]: I0216 17:41:40.008001 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qcmk-config-tj7kp" event={"ID":"6d9f45af-6d96-4de6-8abd-3cf6f5857473","Type":"ContainerStarted","Data":"2621aa7d10affed00c249e90e159c144611401ca79dfab3433054a911764f48b"} Feb 16 17:41:40.013819 master-0 kubenswrapper[4652]: I0216 17:41:40.013678 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"90814419-59bd-4110-8afa-6842e5fa7b95","Type":"ContainerStarted","Data":"3a386d2f97ce5024d2d2cc3a744e2996aaf4fcf3e8c2ba9f280a928b2dca05a6"} Feb 16 17:41:40.014124 master-0 kubenswrapper[4652]: I0216 17:41:40.013950 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 17:41:40.082382 master-0 kubenswrapper[4652]: I0216 17:41:40.082148 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.561137976 podStartE2EDuration="58.082127127s" podCreationTimestamp="2026-02-16 17:40:42 +0000 UTC" firstStartedPulling="2026-02-16 17:40:57.56467338 +0000 UTC m=+1014.952841896" lastFinishedPulling="2026-02-16 17:41:04.085662531 +0000 UTC m=+1021.473831047" observedRunningTime="2026-02-16 17:41:40.068464471 +0000 UTC m=+1057.456632987" watchObservedRunningTime="2026-02-16 17:41:40.082127127 +0000 UTC m=+1057.470295643" Feb 16 17:41:40.841351 master-0 kubenswrapper[4652]: I0216 17:41:40.841283 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jm8b2"] Feb 16 17:41:40.864539 master-0 kubenswrapper[4652]: I0216 17:41:40.864217 4652 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/root-account-create-update-jm8b2"] Feb 16 17:41:41.382395 master-0 kubenswrapper[4652]: I0216 17:41:41.382362 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:41.483107 master-0 kubenswrapper[4652]: I0216 17:41:41.483065 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run\") pod \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " Feb 16 17:41:41.483200 master-0 kubenswrapper[4652]: I0216 17:41:41.483125 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqktr\" (UniqueName: \"kubernetes.io/projected/6d9f45af-6d96-4de6-8abd-3cf6f5857473-kube-api-access-hqktr\") pod \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " Feb 16 17:41:41.483200 master-0 kubenswrapper[4652]: I0216 17:41:41.483147 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-log-ovn\") pod \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " Feb 16 17:41:41.483200 master-0 kubenswrapper[4652]: I0216 17:41:41.483193 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-additional-scripts\") pod \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " Feb 16 17:41:41.483314 master-0 kubenswrapper[4652]: I0216 17:41:41.483216 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run" (OuterVolumeSpecName: "var-run") pod "6d9f45af-6d96-4de6-8abd-3cf6f5857473" (UID: "6d9f45af-6d96-4de6-8abd-3cf6f5857473"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:41:41.483314 master-0 kubenswrapper[4652]: I0216 17:41:41.483305 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6d9f45af-6d96-4de6-8abd-3cf6f5857473" (UID: "6d9f45af-6d96-4de6-8abd-3cf6f5857473"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:41:41.483600 master-0 kubenswrapper[4652]: I0216 17:41:41.483510 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run-ovn\") pod \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " Feb 16 17:41:41.483775 master-0 kubenswrapper[4652]: I0216 17:41:41.483748 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-scripts\") pod \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\" (UID: \"6d9f45af-6d96-4de6-8abd-3cf6f5857473\") " Feb 16 17:41:41.484085 master-0 kubenswrapper[4652]: I0216 17:41:41.483958 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6d9f45af-6d96-4de6-8abd-3cf6f5857473" (UID: "6d9f45af-6d96-4de6-8abd-3cf6f5857473"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:41.484085 master-0 kubenswrapper[4652]: I0216 17:41:41.484031 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6d9f45af-6d96-4de6-8abd-3cf6f5857473" (UID: "6d9f45af-6d96-4de6-8abd-3cf6f5857473"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:41:41.484730 master-0 kubenswrapper[4652]: I0216 17:41:41.484649 4652 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:41.484730 master-0 kubenswrapper[4652]: I0216 17:41:41.484676 4652 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-log-ovn\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:41.484730 master-0 kubenswrapper[4652]: I0216 17:41:41.484691 4652 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-additional-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:41.484730 master-0 kubenswrapper[4652]: I0216 17:41:41.484704 4652 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6d9f45af-6d96-4de6-8abd-3cf6f5857473-var-run-ovn\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:41.484996 master-0 kubenswrapper[4652]: I0216 17:41:41.484931 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-scripts" (OuterVolumeSpecName: "scripts") pod "6d9f45af-6d96-4de6-8abd-3cf6f5857473" (UID: "6d9f45af-6d96-4de6-8abd-3cf6f5857473"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:41:41.492170 master-0 kubenswrapper[4652]: I0216 17:41:41.491662 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d9f45af-6d96-4de6-8abd-3cf6f5857473-kube-api-access-hqktr" (OuterVolumeSpecName: "kube-api-access-hqktr") pod "6d9f45af-6d96-4de6-8abd-3cf6f5857473" (UID: "6d9f45af-6d96-4de6-8abd-3cf6f5857473"). InnerVolumeSpecName "kube-api-access-hqktr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:41:41.586091 master-0 kubenswrapper[4652]: I0216 17:41:41.586052 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqktr\" (UniqueName: \"kubernetes.io/projected/6d9f45af-6d96-4de6-8abd-3cf6f5857473-kube-api-access-hqktr\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:41.586091 master-0 kubenswrapper[4652]: I0216 17:41:41.586084 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d9f45af-6d96-4de6-8abd-3cf6f5857473-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:41:42.054874 master-0 kubenswrapper[4652]: I0216 17:41:42.054814 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"ab0efa992500d8ebcfd36c1e5c467116f4fb3807038371824a2d12cd907837a9"} Feb 16 17:41:42.054874 master-0 kubenswrapper[4652]: I0216 17:41:42.054882 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"57cfcb55ae093bac7e407a8d8890b86cf9de109bb78358cabfb84f796197debb"} Feb 16 17:41:42.055164 master-0 kubenswrapper[4652]: I0216 17:41:42.054896 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"b22d9fefa51179c3f86ab69a6da18b84cffbb82fefcdff4302738ea40af95e66"} Feb 16 17:41:42.055164 master-0 kubenswrapper[4652]: I0216 17:41:42.054907 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"c502ce981dcce9a34056480004284a8ee4e8411cec8a7990d28db1c975a59a45"} Feb 16 17:41:42.057044 master-0 kubenswrapper[4652]: I0216 17:41:42.057012 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-5qcmk-config-tj7kp" event={"ID":"6d9f45af-6d96-4de6-8abd-3cf6f5857473","Type":"ContainerDied","Data":"2621aa7d10affed00c249e90e159c144611401ca79dfab3433054a911764f48b"} Feb 16 17:41:42.057137 master-0 kubenswrapper[4652]: I0216 17:41:42.057045 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2621aa7d10affed00c249e90e159c144611401ca79dfab3433054a911764f48b" Feb 16 17:41:42.057137 master-0 kubenswrapper[4652]: I0216 17:41:42.057102 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-5qcmk-config-tj7kp" Feb 16 17:41:42.535410 master-0 kubenswrapper[4652]: I0216 17:41:42.535240 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-5qcmk-config-tj7kp"] Feb 16 17:41:42.548432 master-0 kubenswrapper[4652]: I0216 17:41:42.548111 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-5qcmk-config-tj7kp"] Feb 16 17:41:42.763358 master-0 kubenswrapper[4652]: I0216 17:41:42.763285 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21472637-29f7-4764-ba99-d0b7d6ccdaa4" path="/var/lib/kubelet/pods/21472637-29f7-4764-ba99-d0b7d6ccdaa4/volumes" Feb 16 17:41:42.764451 master-0 kubenswrapper[4652]: I0216 17:41:42.764426 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d9f45af-6d96-4de6-8abd-3cf6f5857473" path="/var/lib/kubelet/pods/6d9f45af-6d96-4de6-8abd-3cf6f5857473/volumes" Feb 16 17:41:42.861849 master-0 kubenswrapper[4652]: I0216 17:41:42.855864 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-5qcmk" Feb 16 17:41:44.091866 master-0 kubenswrapper[4652]: I0216 17:41:44.091815 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"774ab7d2b1320bc1b790de3e7cc31367e35b368d019002e0cc91d6972bb963cb"} Feb 16 17:41:44.091866 master-0 kubenswrapper[4652]: I0216 17:41:44.091859 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"a388bd2c8910d9cca20e4903547db81c9ce897fafdfe803a93ca438877127f5e"} Feb 16 17:41:44.091866 master-0 kubenswrapper[4652]: I0216 17:41:44.091872 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"73aee45573e708f5a06a91bf29425fde44288bef066f50587a9732b9b70abbb4"} Feb 16 17:41:44.091866 master-0 kubenswrapper[4652]: I0216 17:41:44.091880 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"10dfe2f4e6b7ce0c1f2255178242969f33b4f19bcf02e9faa7f3de8a2108e29d"} Feb 16 17:41:45.845476 master-0 kubenswrapper[4652]: I0216 17:41:45.845414 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tvnfc"] Feb 16 17:41:45.846128 master-0 kubenswrapper[4652]: E0216 17:41:45.846093 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d9f45af-6d96-4de6-8abd-3cf6f5857473" containerName="ovn-config" Feb 16 17:41:45.846128 master-0 kubenswrapper[4652]: I0216 17:41:45.846117 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d9f45af-6d96-4de6-8abd-3cf6f5857473" containerName="ovn-config" Feb 16 17:41:45.847017 master-0 kubenswrapper[4652]: I0216 17:41:45.846984 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d9f45af-6d96-4de6-8abd-3cf6f5857473" containerName="ovn-config" Feb 16 17:41:45.847954 master-0 kubenswrapper[4652]: I0216 17:41:45.847921 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:45.850166 master-0 kubenswrapper[4652]: I0216 17:41:45.850114 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Feb 16 17:41:45.859293 master-0 kubenswrapper[4652]: I0216 17:41:45.859221 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tvnfc"]
Feb 16 17:41:46.009273 master-0 kubenswrapper[4652]: I0216 17:41:46.009204 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n45f\" (UniqueName: \"kubernetes.io/projected/21f35c66-58aa-4320-9ef0-80dfa90c72af-kube-api-access-9n45f\") pod \"root-account-create-update-tvnfc\" (UID: \"21f35c66-58aa-4320-9ef0-80dfa90c72af\") " pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:46.009505 master-0 kubenswrapper[4652]: I0216 17:41:46.009385 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f35c66-58aa-4320-9ef0-80dfa90c72af-operator-scripts\") pod \"root-account-create-update-tvnfc\" (UID: \"21f35c66-58aa-4320-9ef0-80dfa90c72af\") " pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:46.111120 master-0 kubenswrapper[4652]: I0216 17:41:46.110992 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n45f\" (UniqueName: \"kubernetes.io/projected/21f35c66-58aa-4320-9ef0-80dfa90c72af-kube-api-access-9n45f\") pod \"root-account-create-update-tvnfc\" (UID: \"21f35c66-58aa-4320-9ef0-80dfa90c72af\") " pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:46.111120 master-0 kubenswrapper[4652]: I0216 17:41:46.111111 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f35c66-58aa-4320-9ef0-80dfa90c72af-operator-scripts\") pod \"root-account-create-update-tvnfc\" (UID: \"21f35c66-58aa-4320-9ef0-80dfa90c72af\") " pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:46.111835 master-0 kubenswrapper[4652]: I0216 17:41:46.111800 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f35c66-58aa-4320-9ef0-80dfa90c72af-operator-scripts\") pod \"root-account-create-update-tvnfc\" (UID: \"21f35c66-58aa-4320-9ef0-80dfa90c72af\") " pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:46.131785 master-0 kubenswrapper[4652]: I0216 17:41:46.131725 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n45f\" (UniqueName: \"kubernetes.io/projected/21f35c66-58aa-4320-9ef0-80dfa90c72af-kube-api-access-9n45f\") pod \"root-account-create-update-tvnfc\" (UID: \"21f35c66-58aa-4320-9ef0-80dfa90c72af\") " pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:46.195531 master-0 kubenswrapper[4652]: I0216 17:41:46.195467 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:48.491536 master-0 kubenswrapper[4652]: I0216 17:41:48.491469 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 17:41:48.809587 master-0 kubenswrapper[4652]: I0216 17:41:48.809334 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-lkt9c"]
Feb 16 17:41:48.813048 master-0 kubenswrapper[4652]: I0216 17:41:48.812760 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:48.833274 master-0 kubenswrapper[4652]: I0216 17:41:48.832854 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lkt9c"]
Feb 16 17:41:48.891369 master-0 kubenswrapper[4652]: I0216 17:41:48.891317 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d565-account-create-update-s2grp"]
Feb 16 17:41:48.892815 master-0 kubenswrapper[4652]: I0216 17:41:48.892790 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:48.902704 master-0 kubenswrapper[4652]: I0216 17:41:48.902661 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 16 17:41:48.904680 master-0 kubenswrapper[4652]: I0216 17:41:48.904634 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-operator-scripts\") pod \"cinder-db-create-lkt9c\" (UID: \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\") " pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:48.904762 master-0 kubenswrapper[4652]: I0216 17:41:48.904690 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxm9x\" (UniqueName: \"kubernetes.io/projected/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-kube-api-access-dxm9x\") pod \"cinder-db-create-lkt9c\" (UID: \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\") " pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:48.911599 master-0 kubenswrapper[4652]: I0216 17:41:48.911534 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d565-account-create-update-s2grp"]
Feb 16 17:41:49.006958 master-0 kubenswrapper[4652]: I0216 17:41:49.006832 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvfd\" (UniqueName: \"kubernetes.io/projected/e24217ad-6ba4-4280-8a72-de8b7543fef0-kube-api-access-dbvfd\") pod \"cinder-d565-account-create-update-s2grp\" (UID: \"e24217ad-6ba4-4280-8a72-de8b7543fef0\") " pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:49.007209 master-0 kubenswrapper[4652]: I0216 17:41:49.007025 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24217ad-6ba4-4280-8a72-de8b7543fef0-operator-scripts\") pod \"cinder-d565-account-create-update-s2grp\" (UID: \"e24217ad-6ba4-4280-8a72-de8b7543fef0\") " pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:49.007209 master-0 kubenswrapper[4652]: I0216 17:41:49.007197 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-operator-scripts\") pod \"cinder-db-create-lkt9c\" (UID: \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\") " pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:49.007333 master-0 kubenswrapper[4652]: I0216 17:41:49.007274 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxm9x\" (UniqueName: \"kubernetes.io/projected/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-kube-api-access-dxm9x\") pod \"cinder-db-create-lkt9c\" (UID: \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\") " pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:49.008194 master-0 kubenswrapper[4652]: I0216 17:41:49.008157 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-operator-scripts\") pod \"cinder-db-create-lkt9c\" (UID: \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\") " pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:49.023946 master-0 kubenswrapper[4652]: I0216 17:41:49.023838 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxm9x\" (UniqueName: \"kubernetes.io/projected/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-kube-api-access-dxm9x\") pod \"cinder-db-create-lkt9c\" (UID: \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\") " pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:49.086991 master-0 kubenswrapper[4652]: I0216 17:41:49.086934 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-7cwql"]
Feb 16 17:41:49.088559 master-0 kubenswrapper[4652]: I0216 17:41:49.088526 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:49.101603 master-0 kubenswrapper[4652]: I0216 17:41:49.101474 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7cwql"]
Feb 16 17:41:49.108588 master-0 kubenswrapper[4652]: I0216 17:41:49.108539 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24217ad-6ba4-4280-8a72-de8b7543fef0-operator-scripts\") pod \"cinder-d565-account-create-update-s2grp\" (UID: \"e24217ad-6ba4-4280-8a72-de8b7543fef0\") " pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:49.108789 master-0 kubenswrapper[4652]: I0216 17:41:49.108696 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbvfd\" (UniqueName: \"kubernetes.io/projected/e24217ad-6ba4-4280-8a72-de8b7543fef0-kube-api-access-dbvfd\") pod \"cinder-d565-account-create-update-s2grp\" (UID: \"e24217ad-6ba4-4280-8a72-de8b7543fef0\") " pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:49.109640 master-0 kubenswrapper[4652]: I0216 17:41:49.109490 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24217ad-6ba4-4280-8a72-de8b7543fef0-operator-scripts\") pod \"cinder-d565-account-create-update-s2grp\" (UID: \"e24217ad-6ba4-4280-8a72-de8b7543fef0\") " pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:49.127615 master-0 kubenswrapper[4652]: I0216 17:41:49.127138 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbvfd\" (UniqueName: \"kubernetes.io/projected/e24217ad-6ba4-4280-8a72-de8b7543fef0-kube-api-access-dbvfd\") pod \"cinder-d565-account-create-update-s2grp\" (UID: \"e24217ad-6ba4-4280-8a72-de8b7543fef0\") " pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:49.156572 master-0 kubenswrapper[4652]: I0216 17:41:49.156444 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:49.169738 master-0 kubenswrapper[4652]: I0216 17:41:49.169609 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bb42-account-create-update-cf2b2"]
Feb 16 17:41:49.171134 master-0 kubenswrapper[4652]: I0216 17:41:49.171064 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:49.175233 master-0 kubenswrapper[4652]: I0216 17:41:49.173066 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 16 17:41:49.188576 master-0 kubenswrapper[4652]: I0216 17:41:49.188470 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb42-account-create-update-cf2b2"]
Feb 16 17:41:49.210914 master-0 kubenswrapper[4652]: I0216 17:41:49.210861 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95923bef-659a-4898-9f22-fde581751f95-operator-scripts\") pod \"neutron-db-create-7cwql\" (UID: \"95923bef-659a-4898-9f22-fde581751f95\") " pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:49.211178 master-0 kubenswrapper[4652]: I0216 17:41:49.210973 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rz2l\" (UniqueName: \"kubernetes.io/projected/95923bef-659a-4898-9f22-fde581751f95-kube-api-access-7rz2l\") pod \"neutron-db-create-7cwql\" (UID: \"95923bef-659a-4898-9f22-fde581751f95\") " pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:49.250474 master-0 kubenswrapper[4652]: I0216 17:41:49.250424 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xgxgv"]
Feb 16 17:41:49.251745 master-0 kubenswrapper[4652]: I0216 17:41:49.251723 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.253608 master-0 kubenswrapper[4652]: I0216 17:41:49.253591 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 17:41:49.253725 master-0 kubenswrapper[4652]: I0216 17:41:49.253599 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 17:41:49.255095 master-0 kubenswrapper[4652]: I0216 17:41:49.255064 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 17:41:49.260612 master-0 kubenswrapper[4652]: I0216 17:41:49.260554 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xgxgv"]
Feb 16 17:41:49.270152 master-0 kubenswrapper[4652]: I0216 17:41:49.270097 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:49.314370 master-0 kubenswrapper[4652]: I0216 17:41:49.313304 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2hr9\" (UniqueName: \"kubernetes.io/projected/940575f8-c708-470d-9674-9363119cc8e2-kube-api-access-g2hr9\") pod \"neutron-bb42-account-create-update-cf2b2\" (UID: \"940575f8-c708-470d-9674-9363119cc8e2\") " pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:49.314370 master-0 kubenswrapper[4652]: I0216 17:41:49.313484 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95923bef-659a-4898-9f22-fde581751f95-operator-scripts\") pod \"neutron-db-create-7cwql\" (UID: \"95923bef-659a-4898-9f22-fde581751f95\") " pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:49.314370 master-0 kubenswrapper[4652]: I0216 17:41:49.313584 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940575f8-c708-470d-9674-9363119cc8e2-operator-scripts\") pod \"neutron-bb42-account-create-update-cf2b2\" (UID: \"940575f8-c708-470d-9674-9363119cc8e2\") " pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:49.314370 master-0 kubenswrapper[4652]: I0216 17:41:49.313612 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rz2l\" (UniqueName: \"kubernetes.io/projected/95923bef-659a-4898-9f22-fde581751f95-kube-api-access-7rz2l\") pod \"neutron-db-create-7cwql\" (UID: \"95923bef-659a-4898-9f22-fde581751f95\") " pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:49.316124 master-0 kubenswrapper[4652]: I0216 17:41:49.315702 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95923bef-659a-4898-9f22-fde581751f95-operator-scripts\") pod \"neutron-db-create-7cwql\" (UID: \"95923bef-659a-4898-9f22-fde581751f95\") " pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:49.341752 master-0 kubenswrapper[4652]: I0216 17:41:49.341686 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rz2l\" (UniqueName: \"kubernetes.io/projected/95923bef-659a-4898-9f22-fde581751f95-kube-api-access-7rz2l\") pod \"neutron-db-create-7cwql\" (UID: \"95923bef-659a-4898-9f22-fde581751f95\") " pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:49.415326 master-0 kubenswrapper[4652]: I0216 17:41:49.415267 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940575f8-c708-470d-9674-9363119cc8e2-operator-scripts\") pod \"neutron-bb42-account-create-update-cf2b2\" (UID: \"940575f8-c708-470d-9674-9363119cc8e2\") " pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:49.415563 master-0 kubenswrapper[4652]: I0216 17:41:49.415347 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-config-data\") pod \"keystone-db-sync-xgxgv\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") " pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.415563 master-0 kubenswrapper[4652]: I0216 17:41:49.415379 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rk2l\" (UniqueName: \"kubernetes.io/projected/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-kube-api-access-5rk2l\") pod \"keystone-db-sync-xgxgv\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") " pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.415563 master-0 kubenswrapper[4652]: I0216 17:41:49.415418 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2hr9\" (UniqueName: \"kubernetes.io/projected/940575f8-c708-470d-9674-9363119cc8e2-kube-api-access-g2hr9\") pod \"neutron-bb42-account-create-update-cf2b2\" (UID: \"940575f8-c708-470d-9674-9363119cc8e2\") " pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:49.415563 master-0 kubenswrapper[4652]: I0216 17:41:49.415457 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-combined-ca-bundle\") pod \"keystone-db-sync-xgxgv\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") " pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.418178 master-0 kubenswrapper[4652]: I0216 17:41:49.418135 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940575f8-c708-470d-9674-9363119cc8e2-operator-scripts\") pod \"neutron-bb42-account-create-update-cf2b2\" (UID: \"940575f8-c708-470d-9674-9363119cc8e2\") " pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:49.432128 master-0 kubenswrapper[4652]: I0216 17:41:49.432078 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2hr9\" (UniqueName: \"kubernetes.io/projected/940575f8-c708-470d-9674-9363119cc8e2-kube-api-access-g2hr9\") pod \"neutron-bb42-account-create-update-cf2b2\" (UID: \"940575f8-c708-470d-9674-9363119cc8e2\") " pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:49.471301 master-0 kubenswrapper[4652]: I0216 17:41:49.471113 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:49.490846 master-0 kubenswrapper[4652]: I0216 17:41:49.490259 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:49.524494 master-0 kubenswrapper[4652]: I0216 17:41:49.523736 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-config-data\") pod \"keystone-db-sync-xgxgv\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") " pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.524494 master-0 kubenswrapper[4652]: I0216 17:41:49.523800 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rk2l\" (UniqueName: \"kubernetes.io/projected/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-kube-api-access-5rk2l\") pod \"keystone-db-sync-xgxgv\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") " pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.524494 master-0 kubenswrapper[4652]: I0216 17:41:49.523865 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-combined-ca-bundle\") pod \"keystone-db-sync-xgxgv\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") " pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.527527 master-0 kubenswrapper[4652]: I0216 17:41:49.527489 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-config-data\") pod \"keystone-db-sync-xgxgv\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") " pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.527635 master-0 kubenswrapper[4652]: I0216 17:41:49.527494 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-combined-ca-bundle\") pod \"keystone-db-sync-xgxgv\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") " pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.544518 master-0 kubenswrapper[4652]: I0216 17:41:49.544390 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rk2l\" (UniqueName: \"kubernetes.io/projected/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-kube-api-access-5rk2l\") pod \"keystone-db-sync-xgxgv\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") " pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.587573 master-0 kubenswrapper[4652]: I0216 17:41:49.587511 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:41:49.870117 master-0 kubenswrapper[4652]: I0216 17:41:49.869966 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 17:41:52.205548 master-0 kubenswrapper[4652]: I0216 17:41:52.205501 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"17d7ee2a1c4b440f8b0786864f2b0e6fecc33350896991317623abe74728ebf9"}
Feb 16 17:41:52.254191 master-0 kubenswrapper[4652]: I0216 17:41:52.254142 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d565-account-create-update-s2grp"]
Feb 16 17:41:52.267347 master-0 kubenswrapper[4652]: W0216 17:41:52.267291 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode24217ad_6ba4_4280_8a72_de8b7543fef0.slice/crio-5a0184eb453aa09f909e98c7aba5bf7898ee5d6202e7f69f31f94e8a1e07129a WatchSource:0}: Error finding container 5a0184eb453aa09f909e98c7aba5bf7898ee5d6202e7f69f31f94e8a1e07129a: Status 404 returned error can't find the container with id 5a0184eb453aa09f909e98c7aba5bf7898ee5d6202e7f69f31f94e8a1e07129a
Feb 16 17:41:52.489855 master-0 kubenswrapper[4652]: I0216 17:41:52.489828 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-lkt9c"]
Feb 16 17:41:52.494923 master-0 kubenswrapper[4652]: W0216 17:41:52.494882 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod940575f8_c708_470d_9674_9363119cc8e2.slice/crio-ded06d27d02945b932f93768a110c829ff0d59bcf9805cfc5e81d184cce71cb6 WatchSource:0}: Error finding container ded06d27d02945b932f93768a110c829ff0d59bcf9805cfc5e81d184cce71cb6: Status 404 returned error can't find the container with id ded06d27d02945b932f93768a110c829ff0d59bcf9805cfc5e81d184cce71cb6
Feb 16 17:41:52.502271 master-0 kubenswrapper[4652]: I0216 17:41:52.502209 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bb42-account-create-update-cf2b2"]
Feb 16 17:41:52.527938 master-0 kubenswrapper[4652]: I0216 17:41:52.527875 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tvnfc"]
Feb 16 17:41:52.704801 master-0 kubenswrapper[4652]: I0216 17:41:52.704762 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-7cwql"]
Feb 16 17:41:52.715275 master-0 kubenswrapper[4652]: I0216 17:41:52.715181 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xgxgv"]
Feb 16 17:41:53.224970 master-0 kubenswrapper[4652]: I0216 17:41:53.224930 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fd8th" event={"ID":"f977e20f-2501-45b2-b8d1-2dc333899a52","Type":"ContainerStarted","Data":"f23fd87bd6bfb449dfd9cdc3a276a3b29513ce67988a3f3db93ed7e2a571aaf0"}
Feb 16 17:41:53.230557 master-0 kubenswrapper[4652]: I0216 17:41:53.230519 4652 generic.go:334] "Generic (PLEG): container finished" podID="cef8dd61-0c05-4c03-8d95-c5cc00267a2a" containerID="d5abfec442eb135ff0ec6d82048ab5c29b3c64d3baf753d58fb30f22037510c8" exitCode=0
Feb 16 17:41:53.230690 master-0 kubenswrapper[4652]: I0216 17:41:53.230578 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lkt9c" event={"ID":"cef8dd61-0c05-4c03-8d95-c5cc00267a2a","Type":"ContainerDied","Data":"d5abfec442eb135ff0ec6d82048ab5c29b3c64d3baf753d58fb30f22037510c8"}
Feb 16 17:41:53.230690 master-0 kubenswrapper[4652]: I0216 17:41:53.230599 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lkt9c" event={"ID":"cef8dd61-0c05-4c03-8d95-c5cc00267a2a","Type":"ContainerStarted","Data":"309bf6860b2c69d53a6811aec8664bbd0f33dbf472220d6c8f2dc5b5198a2db8"}
Feb 16 17:41:53.232277 master-0 kubenswrapper[4652]: I0216 17:41:53.232225 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xgxgv" event={"ID":"3f51de53-2fd8-4d8b-95f4-8f4d4504333c","Type":"ContainerStarted","Data":"42135b6dcb26eb0a86b7be1e4a97ea0fb569799c34db904e9e960f4706b74526"}
Feb 16 17:41:53.234023 master-0 kubenswrapper[4652]: I0216 17:41:53.234007 4652 generic.go:334] "Generic (PLEG): container finished" podID="21f35c66-58aa-4320-9ef0-80dfa90c72af" containerID="22dc1e621340728afa4c9358b2ee5db9131c1d37e8f8fe6999c83186e6a1c644" exitCode=0
Feb 16 17:41:53.234142 master-0 kubenswrapper[4652]: I0216 17:41:53.234126 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvnfc" event={"ID":"21f35c66-58aa-4320-9ef0-80dfa90c72af","Type":"ContainerDied","Data":"22dc1e621340728afa4c9358b2ee5db9131c1d37e8f8fe6999c83186e6a1c644"}
Feb 16 17:41:53.234229 master-0 kubenswrapper[4652]: I0216 17:41:53.234216 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvnfc" event={"ID":"21f35c66-58aa-4320-9ef0-80dfa90c72af","Type":"ContainerStarted","Data":"59d2ad9822cb11454064ba6f63737558b0fa6b1a69b061d30cebf9d3fa9af76a"}
Feb 16 17:41:53.237019 master-0 kubenswrapper[4652]: I0216 17:41:53.236976 4652 generic.go:334] "Generic (PLEG): container finished" podID="e24217ad-6ba4-4280-8a72-de8b7543fef0" containerID="9f6b3448d15bdec2fb609b4a3a9b75649fc464a26f0d68d9fd7604ff52cbf160" exitCode=0
Feb 16 17:41:53.237105 master-0 kubenswrapper[4652]: I0216 17:41:53.237043 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d565-account-create-update-s2grp" event={"ID":"e24217ad-6ba4-4280-8a72-de8b7543fef0","Type":"ContainerDied","Data":"9f6b3448d15bdec2fb609b4a3a9b75649fc464a26f0d68d9fd7604ff52cbf160"}
Feb 16 17:41:53.237105 master-0 kubenswrapper[4652]: I0216 17:41:53.237072 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d565-account-create-update-s2grp" event={"ID":"e24217ad-6ba4-4280-8a72-de8b7543fef0","Type":"ContainerStarted","Data":"5a0184eb453aa09f909e98c7aba5bf7898ee5d6202e7f69f31f94e8a1e07129a"}
Feb 16 17:41:53.243538 master-0 kubenswrapper[4652]: I0216 17:41:53.243460 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-fd8th" podStartSLOduration=3.105419311 podStartE2EDuration="16.243440207s" podCreationTimestamp="2026-02-16 17:41:37 +0000 UTC" firstStartedPulling="2026-02-16 17:41:38.672806809 +0000 UTC m=+1056.060975325" lastFinishedPulling="2026-02-16 17:41:51.810827705 +0000 UTC m=+1069.198996221" observedRunningTime="2026-02-16 17:41:53.241604538 +0000 UTC m=+1070.629773074" watchObservedRunningTime="2026-02-16 17:41:53.243440207 +0000 UTC m=+1070.631608743"
Feb 16 17:41:53.255062 master-0 kubenswrapper[4652]: I0216 17:41:53.255006 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"555d8fdbcdb161ecb266ed744eabe8716fc7cad6fcef9f37e96ba1c51323f6d8"}
Feb 16 17:41:53.255216 master-0 kubenswrapper[4652]: I0216 17:41:53.255078 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"64e7c1d77b0cd56487b85107b3ed9ca469736e34108dfda3e00994adfd3ce92e"}
Feb 16 17:41:53.255216 master-0 kubenswrapper[4652]: I0216 17:41:53.255101 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"b40243391892d4051c339fb6740d4d95737c335b6da0540bec3bb72773ba06a1"}
Feb 16 17:41:53.255216 master-0 kubenswrapper[4652]: I0216 17:41:53.255119 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"652405ac19775f3222474ed0bdddbd49d2a45da427ed0889d5fc6ebee68ae8fb"}
Feb 16 17:41:53.256840 master-0 kubenswrapper[4652]: I0216 17:41:53.256793 4652 generic.go:334] "Generic (PLEG): container finished" podID="940575f8-c708-470d-9674-9363119cc8e2" containerID="5f08947e894cc41aea4212be0354c1e08940bfd9e3e0f7c04175380fcaaed9f9" exitCode=0
Feb 16 17:41:53.256915 master-0 kubenswrapper[4652]: I0216 17:41:53.256872 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb42-account-create-update-cf2b2" event={"ID":"940575f8-c708-470d-9674-9363119cc8e2","Type":"ContainerDied","Data":"5f08947e894cc41aea4212be0354c1e08940bfd9e3e0f7c04175380fcaaed9f9"}
Feb 16 17:41:53.256915 master-0 kubenswrapper[4652]: I0216 17:41:53.256903 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb42-account-create-update-cf2b2" event={"ID":"940575f8-c708-470d-9674-9363119cc8e2","Type":"ContainerStarted","Data":"ded06d27d02945b932f93768a110c829ff0d59bcf9805cfc5e81d184cce71cb6"}
Feb 16 17:41:53.258414 master-0 kubenswrapper[4652]: I0216 17:41:53.258314 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7cwql" event={"ID":"95923bef-659a-4898-9f22-fde581751f95","Type":"ContainerStarted","Data":"383febbeb5471695843d225f9bf3bdf02dd637c459910174d1d7b92ad33c0022"}
Feb 16 17:41:53.258414 master-0 kubenswrapper[4652]: I0216 17:41:53.258342 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7cwql" event={"ID":"95923bef-659a-4898-9f22-fde581751f95","Type":"ContainerStarted","Data":"b023c6872272370253481dd539ed4a50b8568632903ef04405604ba3a643cecb"}
Feb 16 17:41:53.328339 master-0 kubenswrapper[4652]: I0216 17:41:53.328267 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-7cwql" podStartSLOduration=4.32823564 podStartE2EDuration="4.32823564s" podCreationTimestamp="2026-02-16 17:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:41:53.326170044 +0000 UTC m=+1070.714338560" watchObservedRunningTime="2026-02-16 17:41:53.32823564 +0000 UTC m=+1070.716404156"
Feb 16 17:41:54.278426 master-0 kubenswrapper[4652]: I0216 17:41:54.278155 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"80feb03c9bd8fff77206df8b8d82003666069a2a38fc3a8a305cd6402cd7fd47"}
Feb 16 17:41:54.279059 master-0 kubenswrapper[4652]: I0216 17:41:54.278438 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"9762bbd2-d9ad-4af7-b6d3-79f4a853b2b2","Type":"ContainerStarted","Data":"4ad3af3955a3e534ccb31077dea969adde0e0424c8372b8aac5b3da3bb2c3838"}
Feb 16 17:41:54.282503 master-0 kubenswrapper[4652]: I0216 17:41:54.281100 4652 generic.go:334] "Generic (PLEG): container finished" podID="95923bef-659a-4898-9f22-fde581751f95" containerID="383febbeb5471695843d225f9bf3bdf02dd637c459910174d1d7b92ad33c0022" exitCode=0
Feb 16 17:41:54.282503 master-0 kubenswrapper[4652]: I0216 17:41:54.281561 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7cwql" event={"ID":"95923bef-659a-4898-9f22-fde581751f95","Type":"ContainerDied","Data":"383febbeb5471695843d225f9bf3bdf02dd637c459910174d1d7b92ad33c0022"}
Feb 16 17:41:54.322701 master-0 kubenswrapper[4652]: I0216 17:41:54.322617 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=22.480812136 podStartE2EDuration="34.322600825s" podCreationTimestamp="2026-02-16 17:41:20 +0000 UTC" firstStartedPulling="2026-02-16 17:41:39.961132484 +0000 UTC m=+1057.349300990" lastFinishedPulling="2026-02-16 17:41:51.802921163 +0000 UTC m=+1069.191089679" observedRunningTime="2026-02-16 17:41:54.318932897 +0000 UTC m=+1071.707101433" watchObservedRunningTime="2026-02-16 17:41:54.322600825 +0000 UTC m=+1071.710769341"
Feb 16 17:41:54.648810 master-0 kubenswrapper[4652]: I0216 17:41:54.648621 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75cf8458ff-jkkqn"]
Feb 16 17:41:54.652557 master-0 kubenswrapper[4652]: I0216 17:41:54.652516 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.655472 master-0 kubenswrapper[4652]: I0216 17:41:54.655421 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Feb 16 17:41:54.702449 master-0 kubenswrapper[4652]: I0216 17:41:54.702378 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75cf8458ff-jkkqn"]
Feb 16 17:41:54.737824 master-0 kubenswrapper[4652]: I0216 17:41:54.737761 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-swift-storage-0\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.738045 master-0 kubenswrapper[4652]: I0216 17:41:54.737842 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-config\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.738045 master-0 kubenswrapper[4652]: I0216 17:41:54.737904 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-sb\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.738145 master-0 kubenswrapper[4652]: I0216 17:41:54.738091 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhgf\" (UniqueName: \"kubernetes.io/projected/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-kube-api-access-drhgf\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.738197 master-0 kubenswrapper[4652]: I0216 17:41:54.738156 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-svc\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.738197 master-0 kubenswrapper[4652]: I0216 17:41:54.738183 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-nb\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.840160 master-0 kubenswrapper[4652]: I0216 17:41:54.840081 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-swift-storage-0\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.840437 master-0 kubenswrapper[4652]: I0216 17:41:54.840180 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-config\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.840437 master-0 kubenswrapper[4652]: I0216 17:41:54.840300 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-sb\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.840437 master-0 kubenswrapper[4652]: I0216 17:41:54.840395 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drhgf\" (UniqueName: \"kubernetes.io/projected/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-kube-api-access-drhgf\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.840657 master-0 kubenswrapper[4652]: I0216 17:41:54.840441 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-svc\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.840657 master-0 kubenswrapper[4652]: I0216 17:41:54.840462 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-nb\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.841906 master-0 kubenswrapper[4652]: I0216 17:41:54.841880 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-swift-storage-0\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.845608 master-0 kubenswrapper[4652]: I0216 17:41:54.845485 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-config\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.847594 master-0 kubenswrapper[4652]: I0216 17:41:54.847542 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-sb\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.850411 master-0 kubenswrapper[4652]: I0216 17:41:54.849318 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-svc\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.850411 master-0 kubenswrapper[4652]: I0216 17:41:54.850358 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-nb\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.909630 master-0 kubenswrapper[4652]: I0216 17:41:54.909449 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhgf\" (UniqueName: \"kubernetes.io/projected/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-kube-api-access-drhgf\") pod \"dnsmasq-dns-75cf8458ff-jkkqn\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:54.997427 master-0 kubenswrapper[4652]: I0216 17:41:54.996371 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:41:57.297184 master-0 kubenswrapper[4652]: I0216 17:41:57.297133 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:57.324413 master-0 kubenswrapper[4652]: I0216 17:41:57.324353 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tvnfc" event={"ID":"21f35c66-58aa-4320-9ef0-80dfa90c72af","Type":"ContainerDied","Data":"59d2ad9822cb11454064ba6f63737558b0fa6b1a69b061d30cebf9d3fa9af76a"}
Feb 16 17:41:57.324413 master-0 kubenswrapper[4652]: I0216 17:41:57.324401 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d2ad9822cb11454064ba6f63737558b0fa6b1a69b061d30cebf9d3fa9af76a"
Feb 16 17:41:57.324656 master-0 kubenswrapper[4652]: I0216 17:41:57.324462 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tvnfc"
Feb 16 17:41:57.328086 master-0 kubenswrapper[4652]: I0216 17:41:57.328025 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d565-account-create-update-s2grp" event={"ID":"e24217ad-6ba4-4280-8a72-de8b7543fef0","Type":"ContainerDied","Data":"5a0184eb453aa09f909e98c7aba5bf7898ee5d6202e7f69f31f94e8a1e07129a"}
Feb 16 17:41:57.328086 master-0 kubenswrapper[4652]: I0216 17:41:57.328055 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a0184eb453aa09f909e98c7aba5bf7898ee5d6202e7f69f31f94e8a1e07129a"
Feb 16 17:41:57.329408 master-0 kubenswrapper[4652]: I0216 17:41:57.329379 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:57.330988 master-0 kubenswrapper[4652]: I0216 17:41:57.330949 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bb42-account-create-update-cf2b2" event={"ID":"940575f8-c708-470d-9674-9363119cc8e2","Type":"ContainerDied","Data":"ded06d27d02945b932f93768a110c829ff0d59bcf9805cfc5e81d184cce71cb6"}
Feb 16 17:41:57.330988 master-0 kubenswrapper[4652]: I0216 17:41:57.330974 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ded06d27d02945b932f93768a110c829ff0d59bcf9805cfc5e81d184cce71cb6"
Feb 16 17:41:57.332965 master-0 kubenswrapper[4652]: I0216 17:41:57.332901 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-7cwql" event={"ID":"95923bef-659a-4898-9f22-fde581751f95","Type":"ContainerDied","Data":"b023c6872272370253481dd539ed4a50b8568632903ef04405604ba3a643cecb"}
Feb 16 17:41:57.332965 master-0 kubenswrapper[4652]: I0216 17:41:57.332925 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b023c6872272370253481dd539ed4a50b8568632903ef04405604ba3a643cecb"
Feb 16 17:41:57.337223 master-0 kubenswrapper[4652]: I0216 17:41:57.337030 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-lkt9c" event={"ID":"cef8dd61-0c05-4c03-8d95-c5cc00267a2a","Type":"ContainerDied","Data":"309bf6860b2c69d53a6811aec8664bbd0f33dbf472220d6c8f2dc5b5198a2db8"}
Feb 16 17:41:57.337223 master-0 kubenswrapper[4652]: I0216 17:41:57.337129 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="309bf6860b2c69d53a6811aec8664bbd0f33dbf472220d6c8f2dc5b5198a2db8"
Feb 16 17:41:57.337223 master-0 kubenswrapper[4652]: I0216 17:41:57.337188 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-lkt9c"
Feb 16 17:41:57.337785 master-0 kubenswrapper[4652]: I0216 17:41:57.337189 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:57.356317 master-0 kubenswrapper[4652]: I0216 17:41:57.356240 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:57.384668 master-0 kubenswrapper[4652]: I0216 17:41:57.384629 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:57.392419 master-0 kubenswrapper[4652]: I0216 17:41:57.392386 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f35c66-58aa-4320-9ef0-80dfa90c72af-operator-scripts\") pod \"21f35c66-58aa-4320-9ef0-80dfa90c72af\" (UID: \"21f35c66-58aa-4320-9ef0-80dfa90c72af\") "
Feb 16 17:41:57.392716 master-0 kubenswrapper[4652]: I0216 17:41:57.392694 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n45f\" (UniqueName: \"kubernetes.io/projected/21f35c66-58aa-4320-9ef0-80dfa90c72af-kube-api-access-9n45f\") pod \"21f35c66-58aa-4320-9ef0-80dfa90c72af\" (UID: \"21f35c66-58aa-4320-9ef0-80dfa90c72af\") "
Feb 16 17:41:57.392932 master-0 kubenswrapper[4652]: I0216 17:41:57.392889 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21f35c66-58aa-4320-9ef0-80dfa90c72af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21f35c66-58aa-4320-9ef0-80dfa90c72af" (UID: "21f35c66-58aa-4320-9ef0-80dfa90c72af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:57.393172 master-0 kubenswrapper[4652]: I0216 17:41:57.393149 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbvfd\" (UniqueName: \"kubernetes.io/projected/e24217ad-6ba4-4280-8a72-de8b7543fef0-kube-api-access-dbvfd\") pod \"e24217ad-6ba4-4280-8a72-de8b7543fef0\" (UID: \"e24217ad-6ba4-4280-8a72-de8b7543fef0\") "
Feb 16 17:41:57.393383 master-0 kubenswrapper[4652]: I0216 17:41:57.393342 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-operator-scripts\") pod \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\" (UID: \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\") "
Feb 16 17:41:57.393650 master-0 kubenswrapper[4652]: I0216 17:41:57.393633 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940575f8-c708-470d-9674-9363119cc8e2-operator-scripts\") pod \"940575f8-c708-470d-9674-9363119cc8e2\" (UID: \"940575f8-c708-470d-9674-9363119cc8e2\") "
Feb 16 17:41:57.394140 master-0 kubenswrapper[4652]: I0216 17:41:57.394121 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2hr9\" (UniqueName: \"kubernetes.io/projected/940575f8-c708-470d-9674-9363119cc8e2-kube-api-access-g2hr9\") pod \"940575f8-c708-470d-9674-9363119cc8e2\" (UID: \"940575f8-c708-470d-9674-9363119cc8e2\") "
Feb 16 17:41:57.394323 master-0 kubenswrapper[4652]: I0216 17:41:57.394306 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxm9x\" (UniqueName: \"kubernetes.io/projected/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-kube-api-access-dxm9x\") pod \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\" (UID: \"cef8dd61-0c05-4c03-8d95-c5cc00267a2a\") "
Feb 16 17:41:57.394466 master-0 kubenswrapper[4652]: I0216 17:41:57.394451 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24217ad-6ba4-4280-8a72-de8b7543fef0-operator-scripts\") pod \"e24217ad-6ba4-4280-8a72-de8b7543fef0\" (UID: \"e24217ad-6ba4-4280-8a72-de8b7543fef0\") "
Feb 16 17:41:57.395093 master-0 kubenswrapper[4652]: I0216 17:41:57.395068 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cef8dd61-0c05-4c03-8d95-c5cc00267a2a" (UID: "cef8dd61-0c05-4c03-8d95-c5cc00267a2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:57.395361 master-0 kubenswrapper[4652]: I0216 17:41:57.395344 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:57.395436 master-0 kubenswrapper[4652]: I0216 17:41:57.395426 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21f35c66-58aa-4320-9ef0-80dfa90c72af-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:57.395895 master-0 kubenswrapper[4652]: I0216 17:41:57.395878 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940575f8-c708-470d-9674-9363119cc8e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "940575f8-c708-470d-9674-9363119cc8e2" (UID: "940575f8-c708-470d-9674-9363119cc8e2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:57.402223 master-0 kubenswrapper[4652]: I0216 17:41:57.399298 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e24217ad-6ba4-4280-8a72-de8b7543fef0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e24217ad-6ba4-4280-8a72-de8b7543fef0" (UID: "e24217ad-6ba4-4280-8a72-de8b7543fef0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:57.402497 master-0 kubenswrapper[4652]: I0216 17:41:57.402401 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/940575f8-c708-470d-9674-9363119cc8e2-kube-api-access-g2hr9" (OuterVolumeSpecName: "kube-api-access-g2hr9") pod "940575f8-c708-470d-9674-9363119cc8e2" (UID: "940575f8-c708-470d-9674-9363119cc8e2"). InnerVolumeSpecName "kube-api-access-g2hr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:41:57.402710 master-0 kubenswrapper[4652]: I0216 17:41:57.402656 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21f35c66-58aa-4320-9ef0-80dfa90c72af-kube-api-access-9n45f" (OuterVolumeSpecName: "kube-api-access-9n45f") pod "21f35c66-58aa-4320-9ef0-80dfa90c72af" (UID: "21f35c66-58aa-4320-9ef0-80dfa90c72af"). InnerVolumeSpecName "kube-api-access-9n45f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:41:57.406228 master-0 kubenswrapper[4652]: I0216 17:41:57.405385 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-kube-api-access-dxm9x" (OuterVolumeSpecName: "kube-api-access-dxm9x") pod "cef8dd61-0c05-4c03-8d95-c5cc00267a2a" (UID: "cef8dd61-0c05-4c03-8d95-c5cc00267a2a"). InnerVolumeSpecName "kube-api-access-dxm9x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:41:57.412079 master-0 kubenswrapper[4652]: I0216 17:41:57.412027 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24217ad-6ba4-4280-8a72-de8b7543fef0-kube-api-access-dbvfd" (OuterVolumeSpecName: "kube-api-access-dbvfd") pod "e24217ad-6ba4-4280-8a72-de8b7543fef0" (UID: "e24217ad-6ba4-4280-8a72-de8b7543fef0"). InnerVolumeSpecName "kube-api-access-dbvfd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:41:57.496878 master-0 kubenswrapper[4652]: I0216 17:41:57.496803 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95923bef-659a-4898-9f22-fde581751f95-operator-scripts\") pod \"95923bef-659a-4898-9f22-fde581751f95\" (UID: \"95923bef-659a-4898-9f22-fde581751f95\") "
Feb 16 17:41:57.497089 master-0 kubenswrapper[4652]: I0216 17:41:57.497032 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rz2l\" (UniqueName: \"kubernetes.io/projected/95923bef-659a-4898-9f22-fde581751f95-kube-api-access-7rz2l\") pod \"95923bef-659a-4898-9f22-fde581751f95\" (UID: \"95923bef-659a-4898-9f22-fde581751f95\") "
Feb 16 17:41:57.497411 master-0 kubenswrapper[4652]: I0216 17:41:57.497362 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95923bef-659a-4898-9f22-fde581751f95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95923bef-659a-4898-9f22-fde581751f95" (UID: "95923bef-659a-4898-9f22-fde581751f95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:41:57.497783 master-0 kubenswrapper[4652]: I0216 17:41:57.497751 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95923bef-659a-4898-9f22-fde581751f95-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:57.497783 master-0 kubenswrapper[4652]: I0216 17:41:57.497778 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbvfd\" (UniqueName: \"kubernetes.io/projected/e24217ad-6ba4-4280-8a72-de8b7543fef0-kube-api-access-dbvfd\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:57.497860 master-0 kubenswrapper[4652]: I0216 17:41:57.497792 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/940575f8-c708-470d-9674-9363119cc8e2-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:57.497860 master-0 kubenswrapper[4652]: I0216 17:41:57.497803 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2hr9\" (UniqueName: \"kubernetes.io/projected/940575f8-c708-470d-9674-9363119cc8e2-kube-api-access-g2hr9\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:57.497860 master-0 kubenswrapper[4652]: I0216 17:41:57.497812 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxm9x\" (UniqueName: \"kubernetes.io/projected/cef8dd61-0c05-4c03-8d95-c5cc00267a2a-kube-api-access-dxm9x\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:57.497860 master-0 kubenswrapper[4652]: I0216 17:41:57.497821 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e24217ad-6ba4-4280-8a72-de8b7543fef0-operator-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:57.497860 master-0 kubenswrapper[4652]: I0216 17:41:57.497830 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n45f\" (UniqueName: \"kubernetes.io/projected/21f35c66-58aa-4320-9ef0-80dfa90c72af-kube-api-access-9n45f\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:57.500482 master-0 kubenswrapper[4652]: I0216 17:41:57.500439 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95923bef-659a-4898-9f22-fde581751f95-kube-api-access-7rz2l" (OuterVolumeSpecName: "kube-api-access-7rz2l") pod "95923bef-659a-4898-9f22-fde581751f95" (UID: "95923bef-659a-4898-9f22-fde581751f95"). InnerVolumeSpecName "kube-api-access-7rz2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:41:57.546514 master-0 kubenswrapper[4652]: I0216 17:41:57.546451 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75cf8458ff-jkkqn"]
Feb 16 17:41:57.600049 master-0 kubenswrapper[4652]: I0216 17:41:57.599993 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rz2l\" (UniqueName: \"kubernetes.io/projected/95923bef-659a-4898-9f22-fde581751f95-kube-api-access-7rz2l\") on node \"master-0\" DevicePath \"\""
Feb 16 17:41:58.364537 master-0 kubenswrapper[4652]: I0216 17:41:58.364445 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xgxgv" event={"ID":"3f51de53-2fd8-4d8b-95f4-8f4d4504333c","Type":"ContainerStarted","Data":"332c56a94cb37366abc854c5d9674bff116df312edc0c8e589b4de616160edce"}
Feb 16 17:41:58.365950 master-0 kubenswrapper[4652]: I0216 17:41:58.365881 4652 generic.go:334] "Generic (PLEG): container finished" podID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" containerID="9a6252446f18a121cc6ec568e1849f04ca470b6228629b5039affc618d4c5ff9" exitCode=0
Feb 16 17:41:58.366182 master-0 kubenswrapper[4652]: I0216 17:41:58.366126 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-7cwql"
Feb 16 17:41:58.366460 master-0 kubenswrapper[4652]: I0216 17:41:58.366397 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" event={"ID":"930cccfa-080d-4b88-b4c5-c61bbebdd2ad","Type":"ContainerDied","Data":"9a6252446f18a121cc6ec568e1849f04ca470b6228629b5039affc618d4c5ff9"}
Feb 16 17:41:58.366460 master-0 kubenswrapper[4652]: I0216 17:41:58.366456 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" event={"ID":"930cccfa-080d-4b88-b4c5-c61bbebdd2ad","Type":"ContainerStarted","Data":"0749d065e9f7658db2fcf5efcc1887cf15ec655f8a71c0b150c3b3170a4a6201"}
Feb 16 17:41:58.366676 master-0 kubenswrapper[4652]: I0216 17:41:58.366541 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d565-account-create-update-s2grp"
Feb 16 17:41:58.367467 master-0 kubenswrapper[4652]: I0216 17:41:58.367398 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bb42-account-create-update-cf2b2"
Feb 16 17:41:58.555679 master-0 kubenswrapper[4652]: I0216 17:41:58.555585 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xgxgv" podStartSLOduration=5.122181002 podStartE2EDuration="9.555561363s" podCreationTimestamp="2026-02-16 17:41:49 +0000 UTC" firstStartedPulling="2026-02-16 17:41:52.745492389 +0000 UTC m=+1070.133660915" lastFinishedPulling="2026-02-16 17:41:57.17887276 +0000 UTC m=+1074.567041276" observedRunningTime="2026-02-16 17:41:58.485650119 +0000 UTC m=+1075.873818665" watchObservedRunningTime="2026-02-16 17:41:58.555561363 +0000 UTC m=+1075.943729879"
Feb 16 17:41:59.380027 master-0 kubenswrapper[4652]: I0216 17:41:59.379786 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" event={"ID":"930cccfa-080d-4b88-b4c5-c61bbebdd2ad","Type":"ContainerStarted","Data":"0ef8461810b0b6c9fe3f8441a5e267cd29a10e466fbff4bf141f519a332cab08"}
Feb 16 17:41:59.414836 master-0 kubenswrapper[4652]: I0216 17:41:59.414741 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" podStartSLOduration=5.414718763 podStartE2EDuration="5.414718763s" podCreationTimestamp="2026-02-16 17:41:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:41:59.405915277 +0000 UTC m=+1076.794083803" watchObservedRunningTime="2026-02-16 17:41:59.414718763 +0000 UTC m=+1076.802887279"
Feb 16 17:41:59.997163 master-0 kubenswrapper[4652]: I0216 17:41:59.997083 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn"
Feb 16 17:42:02.407563 master-0 kubenswrapper[4652]: I0216 17:42:02.407366 4652 generic.go:334] "Generic (PLEG): container finished" podID="f977e20f-2501-45b2-b8d1-2dc333899a52" containerID="f23fd87bd6bfb449dfd9cdc3a276a3b29513ce67988a3f3db93ed7e2a571aaf0" exitCode=0
Feb 16 17:42:02.407563 master-0 kubenswrapper[4652]: I0216 17:42:02.407467 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fd8th" event={"ID":"f977e20f-2501-45b2-b8d1-2dc333899a52","Type":"ContainerDied","Data":"f23fd87bd6bfb449dfd9cdc3a276a3b29513ce67988a3f3db93ed7e2a571aaf0"}
Feb 16 17:42:02.410396 master-0 kubenswrapper[4652]: I0216 17:42:02.410355 4652 generic.go:334] "Generic (PLEG): container finished" podID="3f51de53-2fd8-4d8b-95f4-8f4d4504333c" containerID="332c56a94cb37366abc854c5d9674bff116df312edc0c8e589b4de616160edce" exitCode=0
Feb 16 17:42:02.410396 master-0 kubenswrapper[4652]: I0216 17:42:02.410397 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xgxgv" event={"ID":"3f51de53-2fd8-4d8b-95f4-8f4d4504333c","Type":"ContainerDied","Data":"332c56a94cb37366abc854c5d9674bff116df312edc0c8e589b4de616160edce"}
Feb 16 17:42:03.865339 master-0 kubenswrapper[4652]: I0216 17:42:03.865280 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xgxgv"
Feb 16 17:42:03.954396 master-0 kubenswrapper[4652]: I0216 17:42:03.954361 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-config-data\") pod \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") "
Feb 16 17:42:03.954681 master-0 kubenswrapper[4652]: I0216 17:42:03.954661 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rk2l\" (UniqueName: \"kubernetes.io/projected/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-kube-api-access-5rk2l\") pod \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") "
Feb 16 17:42:03.954939 master-0 kubenswrapper[4652]: I0216 17:42:03.954925 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-combined-ca-bundle\") pod \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\" (UID: \"3f51de53-2fd8-4d8b-95f4-8f4d4504333c\") "
Feb 16 17:42:03.957342 master-0 kubenswrapper[4652]: I0216 17:42:03.957296 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-kube-api-access-5rk2l" (OuterVolumeSpecName: "kube-api-access-5rk2l") pod "3f51de53-2fd8-4d8b-95f4-8f4d4504333c" (UID: "3f51de53-2fd8-4d8b-95f4-8f4d4504333c"). InnerVolumeSpecName "kube-api-access-5rk2l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:42:03.979693 master-0 kubenswrapper[4652]: I0216 17:42:03.979643 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f51de53-2fd8-4d8b-95f4-8f4d4504333c" (UID: "3f51de53-2fd8-4d8b-95f4-8f4d4504333c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:42:03.999803 master-0 kubenswrapper[4652]: I0216 17:42:03.999712 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-config-data" (OuterVolumeSpecName: "config-data") pod "3f51de53-2fd8-4d8b-95f4-8f4d4504333c" (UID: "3f51de53-2fd8-4d8b-95f4-8f4d4504333c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:42:04.057036 master-0 kubenswrapper[4652]: I0216 17:42:04.056988 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:04.057036 master-0 kubenswrapper[4652]: I0216 17:42:04.057031 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rk2l\" (UniqueName: \"kubernetes.io/projected/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-kube-api-access-5rk2l\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:04.057270 master-0 kubenswrapper[4652]: I0216 17:42:04.057045 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f51de53-2fd8-4d8b-95f4-8f4d4504333c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:04.071740 master-0 kubenswrapper[4652]: I0216 17:42:04.071704 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fd8th"
Feb 16 17:42:04.158222 master-0 kubenswrapper[4652]: I0216 17:42:04.158181 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-db-sync-config-data\") pod \"f977e20f-2501-45b2-b8d1-2dc333899a52\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") "
Feb 16 17:42:04.158527 master-0 kubenswrapper[4652]: I0216 17:42:04.158509 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-combined-ca-bundle\") pod \"f977e20f-2501-45b2-b8d1-2dc333899a52\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") "
Feb 16 17:42:04.158653 master-0 kubenswrapper[4652]: I0216 17:42:04.158641 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngnpb\" (UniqueName: \"kubernetes.io/projected/f977e20f-2501-45b2-b8d1-2dc333899a52-kube-api-access-ngnpb\") pod \"f977e20f-2501-45b2-b8d1-2dc333899a52\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") "
Feb 16 17:42:04.158836 master-0 kubenswrapper[4652]: I0216 17:42:04.158823 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-config-data\") pod \"f977e20f-2501-45b2-b8d1-2dc333899a52\" (UID: \"f977e20f-2501-45b2-b8d1-2dc333899a52\") "
Feb 16 17:42:04.161466 master-0 kubenswrapper[4652]: I0216 17:42:04.161406 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f977e20f-2501-45b2-b8d1-2dc333899a52" (UID: "f977e20f-2501-45b2-b8d1-2dc333899a52"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:42:04.161786 master-0 kubenswrapper[4652]: I0216 17:42:04.161755 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f977e20f-2501-45b2-b8d1-2dc333899a52-kube-api-access-ngnpb" (OuterVolumeSpecName: "kube-api-access-ngnpb") pod "f977e20f-2501-45b2-b8d1-2dc333899a52" (UID: "f977e20f-2501-45b2-b8d1-2dc333899a52"). InnerVolumeSpecName "kube-api-access-ngnpb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:42:04.188485 master-0 kubenswrapper[4652]: I0216 17:42:04.188405 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f977e20f-2501-45b2-b8d1-2dc333899a52" (UID: "f977e20f-2501-45b2-b8d1-2dc333899a52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:42:04.205424 master-0 kubenswrapper[4652]: I0216 17:42:04.205340 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-config-data" (OuterVolumeSpecName: "config-data") pod "f977e20f-2501-45b2-b8d1-2dc333899a52" (UID: "f977e20f-2501-45b2-b8d1-2dc333899a52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:42:04.261319 master-0 kubenswrapper[4652]: I0216 17:42:04.261184 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:04.261319 master-0 kubenswrapper[4652]: I0216 17:42:04.261228 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngnpb\" (UniqueName: \"kubernetes.io/projected/f977e20f-2501-45b2-b8d1-2dc333899a52-kube-api-access-ngnpb\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:04.261319 master-0 kubenswrapper[4652]: I0216 17:42:04.261239 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:04.261319 master-0 kubenswrapper[4652]: I0216 17:42:04.261262 4652 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f977e20f-2501-45b2-b8d1-2dc333899a52-db-sync-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:04.431416 master-0 kubenswrapper[4652]: I0216 17:42:04.431335 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fd8th" event={"ID":"f977e20f-2501-45b2-b8d1-2dc333899a52","Type":"ContainerDied","Data":"4280ccb2474f723821fbba28a278fa4861e9961e862126e90c717e8c96d5b770"}
Feb 16 17:42:04.431416 master-0 kubenswrapper[4652]: I0216 17:42:04.431415 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4280ccb2474f723821fbba28a278fa4861e9961e862126e90c717e8c96d5b770"
Feb 16 17:42:04.431670 master-0 kubenswrapper[4652]: I0216 17:42:04.431446 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fd8th"
Feb 16 17:42:04.433073 master-0 kubenswrapper[4652]: I0216 17:42:04.433023 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xgxgv" event={"ID":"3f51de53-2fd8-4d8b-95f4-8f4d4504333c","Type":"ContainerDied","Data":"42135b6dcb26eb0a86b7be1e4a97ea0fb569799c34db904e9e960f4706b74526"}
Feb 16 17:42:04.433073 master-0 kubenswrapper[4652]: I0216 17:42:04.433064 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42135b6dcb26eb0a86b7be1e4a97ea0fb569799c34db904e9e960f4706b74526"
Feb 16 17:42:04.433209 master-0 kubenswrapper[4652]: I0216 17:42:04.433102 4652 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/keystone-db-sync-xgxgv" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.833799 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tgkq5"] Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: E0216 17:42:04.834605 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21f35c66-58aa-4320-9ef0-80dfa90c72af" containerName="mariadb-account-create-update" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.834627 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="21f35c66-58aa-4320-9ef0-80dfa90c72af" containerName="mariadb-account-create-update" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: E0216 17:42:04.834641 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef8dd61-0c05-4c03-8d95-c5cc00267a2a" containerName="mariadb-database-create" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.834648 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef8dd61-0c05-4c03-8d95-c5cc00267a2a" containerName="mariadb-database-create" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: E0216 17:42:04.834663 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f977e20f-2501-45b2-b8d1-2dc333899a52" containerName="glance-db-sync" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.834673 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="f977e20f-2501-45b2-b8d1-2dc333899a52" containerName="glance-db-sync" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: E0216 17:42:04.834683 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95923bef-659a-4898-9f22-fde581751f95" containerName="mariadb-database-create" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.834696 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="95923bef-659a-4898-9f22-fde581751f95" containerName="mariadb-database-create" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: E0216 17:42:04.834720 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="940575f8-c708-470d-9674-9363119cc8e2" containerName="mariadb-account-create-update" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.834727 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="940575f8-c708-470d-9674-9363119cc8e2" containerName="mariadb-account-create-update" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: E0216 17:42:04.834746 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24217ad-6ba4-4280-8a72-de8b7543fef0" containerName="mariadb-account-create-update" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.834752 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24217ad-6ba4-4280-8a72-de8b7543fef0" containerName="mariadb-account-create-update" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: E0216 17:42:04.834766 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f51de53-2fd8-4d8b-95f4-8f4d4504333c" containerName="keystone-db-sync" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.834773 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f51de53-2fd8-4d8b-95f4-8f4d4504333c" containerName="keystone-db-sync" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.835072 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="f977e20f-2501-45b2-b8d1-2dc333899a52" containerName="glance-db-sync" Feb 16 17:42:04.835769 master-0 
kubenswrapper[4652]: I0216 17:42:04.835096 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef8dd61-0c05-4c03-8d95-c5cc00267a2a" containerName="mariadb-database-create" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.835111 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="940575f8-c708-470d-9674-9363119cc8e2" containerName="mariadb-account-create-update" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.835132 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f51de53-2fd8-4d8b-95f4-8f4d4504333c" containerName="keystone-db-sync" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.835149 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="21f35c66-58aa-4320-9ef0-80dfa90c72af" containerName="mariadb-account-create-update" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.835173 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24217ad-6ba4-4280-8a72-de8b7543fef0" containerName="mariadb-account-create-update" Feb 16 17:42:04.835769 master-0 kubenswrapper[4652]: I0216 17:42:04.835189 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="95923bef-659a-4898-9f22-fde581751f95" containerName="mariadb-database-create" Feb 16 17:42:04.860547 master-0 kubenswrapper[4652]: I0216 17:42:04.859781 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:04.868652 master-0 kubenswrapper[4652]: I0216 17:42:04.868602 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 17:42:04.869577 master-0 kubenswrapper[4652]: I0216 17:42:04.869543 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 17:42:04.877446 master-0 kubenswrapper[4652]: I0216 17:42:04.877343 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 17:42:04.881288 master-0 kubenswrapper[4652]: I0216 17:42:04.878144 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 17:42:04.886125 master-0 kubenswrapper[4652]: I0216 17:42:04.886072 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75cf8458ff-jkkqn"] Feb 16 17:42:04.886437 master-0 kubenswrapper[4652]: I0216 17:42:04.886403 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" podUID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" containerName="dnsmasq-dns" containerID="cri-o://0ef8461810b0b6c9fe3f8441a5e267cd29a10e466fbff4bf141f519a332cab08" gracePeriod=10 Feb 16 17:42:04.887479 master-0 kubenswrapper[4652]: I0216 17:42:04.887456 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" Feb 16 17:42:04.894170 master-0 kubenswrapper[4652]: I0216 17:42:04.894081 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq4d6\" (UniqueName: \"kubernetes.io/projected/09b63a1e-c6ec-4046-bc54-585d4031c6ed-kube-api-access-dq4d6\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:04.894355 master-0 kubenswrapper[4652]: I0216 17:42:04.894325 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" 
(UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-credential-keys\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:04.894532 master-0 kubenswrapper[4652]: I0216 17:42:04.894501 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-combined-ca-bundle\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:04.899104 master-0 kubenswrapper[4652]: I0216 17:42:04.894783 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-config-data\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:04.899104 master-0 kubenswrapper[4652]: I0216 17:42:04.895944 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-fernet-keys\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:04.899104 master-0 kubenswrapper[4652]: I0216 17:42:04.896648 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-scripts\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:04.908483 master-0 kubenswrapper[4652]: I0216 17:42:04.908360 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tgkq5"] Feb 16 17:42:04.935313 master-0 kubenswrapper[4652]: I0216 17:42:04.934507 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647b99b9f-kjks6"] Feb 16 17:42:04.938296 master-0 kubenswrapper[4652]: I0216 17:42:04.936291 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:04.942277 master-0 kubenswrapper[4652]: I0216 17:42:04.940451 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647b99b9f-kjks6"] Feb 16 17:42:04.956382 master-0 kubenswrapper[4652]: I0216 17:42:04.955645 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-create-x89lf"] Feb 16 17:42:04.959690 master-0 kubenswrapper[4652]: I0216 17:42:04.959361 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007301 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-nb\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007420 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-credential-keys\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007503 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/607e6833-eab2-4429-ac81-a161c3525702-operator-scripts\") pod \"ironic-db-create-x89lf\" (UID: \"607e6833-eab2-4429-ac81-a161c3525702\") " pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007540 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-combined-ca-bundle\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007588 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5drg5\" (UniqueName: \"kubernetes.io/projected/607e6833-eab2-4429-ac81-a161c3525702-kube-api-access-5drg5\") pod \"ironic-db-create-x89lf\" (UID: \"607e6833-eab2-4429-ac81-a161c3525702\") " pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007649 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-svc\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007710 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-config-data\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007756 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-fernet-keys\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007754 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" podUID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" containerName="dnsmasq-dns" probeResult="failure" 
output="dial tcp 10.128.0.169:5353: connect: connection refused" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007837 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-config\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007896 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-sb\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.007960 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-swift-storage-0\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.008052 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-scripts\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.008117 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq4d6\" (UniqueName: \"kubernetes.io/projected/09b63a1e-c6ec-4046-bc54-585d4031c6ed-kube-api-access-dq4d6\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.008299 master-0 kubenswrapper[4652]: I0216 17:42:05.008158 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqf7z\" (UniqueName: \"kubernetes.io/projected/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-kube-api-access-hqf7z\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.010015 master-0 kubenswrapper[4652]: I0216 17:42:05.009943 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-x89lf"] Feb 16 17:42:05.023328 master-0 kubenswrapper[4652]: I0216 17:42:05.017695 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-combined-ca-bundle\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.046396 master-0 kubenswrapper[4652]: I0216 17:42:05.029967 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-scripts\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.046396 master-0 kubenswrapper[4652]: I0216 17:42:05.032144 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-config-data\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.046396 master-0 kubenswrapper[4652]: I0216 17:42:05.035941 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-fernet-keys\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.073146 master-0 kubenswrapper[4652]: I0216 17:42:05.060988 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-credential-keys\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.102921 master-0 kubenswrapper[4652]: I0216 17:42:05.094508 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq4d6\" (UniqueName: \"kubernetes.io/projected/09b63a1e-c6ec-4046-bc54-585d4031c6ed-kube-api-access-dq4d6\") pod \"keystone-bootstrap-tgkq5\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.138219 master-0 kubenswrapper[4652]: I0216 17:42:05.138182 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-config\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.138473 master-0 kubenswrapper[4652]: I0216 17:42:05.138454 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-sb\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.138632 master-0 kubenswrapper[4652]: I0216 17:42:05.138619 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-swift-storage-0\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.138838 master-0 kubenswrapper[4652]: I0216 17:42:05.138819 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqf7z\" (UniqueName: \"kubernetes.io/projected/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-kube-api-access-hqf7z\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.138963 master-0 kubenswrapper[4652]: I0216 17:42:05.138947 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-nb\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.139196 master-0 kubenswrapper[4652]: I0216 17:42:05.139176 4652 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/607e6833-eab2-4429-ac81-a161c3525702-operator-scripts\") pod \"ironic-db-create-x89lf\" (UID: \"607e6833-eab2-4429-ac81-a161c3525702\") " pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:05.139413 master-0 kubenswrapper[4652]: I0216 17:42:05.139386 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5drg5\" (UniqueName: \"kubernetes.io/projected/607e6833-eab2-4429-ac81-a161c3525702-kube-api-access-5drg5\") pod \"ironic-db-create-x89lf\" (UID: \"607e6833-eab2-4429-ac81-a161c3525702\") " pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:05.139587 master-0 kubenswrapper[4652]: I0216 17:42:05.139566 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-svc\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.140744 master-0 kubenswrapper[4652]: I0216 17:42:05.140726 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-svc\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.141374 master-0 kubenswrapper[4652]: I0216 17:42:05.141359 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-config\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.142048 master-0 kubenswrapper[4652]: I0216 17:42:05.142032 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-sb\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.142235 master-0 kubenswrapper[4652]: I0216 17:42:05.142065 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-c255-account-create-update-ttmxj"] Feb 16 17:42:05.165954 master-0 kubenswrapper[4652]: I0216 17:42:05.143188 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-nb\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.166269 master-0 kubenswrapper[4652]: I0216 17:42:05.144037 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-swift-storage-0\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.166489 master-0 kubenswrapper[4652]: I0216 17:42:05.144292 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/607e6833-eab2-4429-ac81-a161c3525702-operator-scripts\") pod \"ironic-db-create-x89lf\" (UID: 
\"607e6833-eab2-4429-ac81-a161c3525702\") " pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:05.170739 master-0 kubenswrapper[4652]: I0216 17:42:05.167660 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:05.170739 master-0 kubenswrapper[4652]: I0216 17:42:05.170057 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-db-secret" Feb 16 17:42:05.183069 master-0 kubenswrapper[4652]: I0216 17:42:05.183009 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5drg5\" (UniqueName: \"kubernetes.io/projected/607e6833-eab2-4429-ac81-a161c3525702-kube-api-access-5drg5\") pod \"ironic-db-create-x89lf\" (UID: \"607e6833-eab2-4429-ac81-a161c3525702\") " pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:05.216654 master-0 kubenswrapper[4652]: I0216 17:42:05.210718 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqf7z\" (UniqueName: \"kubernetes.io/projected/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-kube-api-access-hqf7z\") pod \"dnsmasq-dns-647b99b9f-kjks6\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") " pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.218376 master-0 kubenswrapper[4652]: I0216 17:42:05.218332 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c255-account-create-update-ttmxj"] Feb 16 17:42:05.260410 master-0 kubenswrapper[4652]: I0216 17:42:05.253575 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzzbs\" (UniqueName: \"kubernetes.io/projected/d7068e65-9057-4efb-a478-53734617a8fe-kube-api-access-pzzbs\") pod \"ironic-c255-account-create-update-ttmxj\" (UID: \"d7068e65-9057-4efb-a478-53734617a8fe\") " pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:05.260410 master-0 kubenswrapper[4652]: I0216 17:42:05.253658 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7068e65-9057-4efb-a478-53734617a8fe-operator-scripts\") pod \"ironic-c255-account-create-update-ttmxj\" (UID: \"d7068e65-9057-4efb-a478-53734617a8fe\") " pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:05.261389 master-0 kubenswrapper[4652]: I0216 17:42:05.261155 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:05.293444 master-0 kubenswrapper[4652]: I0216 17:42:05.293400 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:05.360711 master-0 kubenswrapper[4652]: I0216 17:42:05.302360 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c34a6-db-sync-5mcjg"] Feb 16 17:42:05.360711 master-0 kubenswrapper[4652]: I0216 17:42:05.306476 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.360711 master-0 kubenswrapper[4652]: I0216 17:42:05.313927 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-config-data" Feb 16 17:42:05.360711 master-0 kubenswrapper[4652]: I0216 17:42:05.314151 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-scripts" Feb 16 17:42:05.363457 master-0 kubenswrapper[4652]: I0216 17:42:05.363259 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-config-data\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.363457 master-0 kubenswrapper[4652]: I0216 17:42:05.363423 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-scripts\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.364346 master-0 kubenswrapper[4652]: I0216 17:42:05.364303 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-db-sync-config-data\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.364570 master-0 kubenswrapper[4652]: I0216 17:42:05.364542 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-combined-ca-bundle\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.364752 master-0 kubenswrapper[4652]: I0216 17:42:05.364736 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzzbs\" (UniqueName: \"kubernetes.io/projected/d7068e65-9057-4efb-a478-53734617a8fe-kube-api-access-pzzbs\") pod \"ironic-c255-account-create-update-ttmxj\" (UID: \"d7068e65-9057-4efb-a478-53734617a8fe\") " pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:05.364860 master-0 kubenswrapper[4652]: I0216 17:42:05.364844 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7068e65-9057-4efb-a478-53734617a8fe-operator-scripts\") pod \"ironic-c255-account-create-update-ttmxj\" (UID: \"d7068e65-9057-4efb-a478-53734617a8fe\") " pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:05.365176 master-0 kubenswrapper[4652]: I0216 17:42:05.365158 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9405c7d-2ad3-46cf-b8e4-4c91feead991-etc-machine-id\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.365314 master-0 kubenswrapper[4652]: I0216 17:42:05.365297 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-fcdn9\" (UniqueName: \"kubernetes.io/projected/c9405c7d-2ad3-46cf-b8e4-4c91feead991-kube-api-access-fcdn9\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.366308 master-0 kubenswrapper[4652]: I0216 17:42:05.366289 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7068e65-9057-4efb-a478-53734617a8fe-operator-scripts\") pod \"ironic-c255-account-create-update-ttmxj\" (UID: \"d7068e65-9057-4efb-a478-53734617a8fe\") " pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:05.381265 master-0 kubenswrapper[4652]: I0216 17:42:05.381211 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-db-sync-5mcjg"] Feb 16 17:42:05.418520 master-0 kubenswrapper[4652]: I0216 17:42:05.413196 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzzbs\" (UniqueName: \"kubernetes.io/projected/d7068e65-9057-4efb-a478-53734617a8fe-kube-api-access-pzzbs\") pod \"ironic-c255-account-create-update-ttmxj\" (UID: \"d7068e65-9057-4efb-a478-53734617a8fe\") " pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:05.421910 master-0 kubenswrapper[4652]: I0216 17:42:05.421842 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-74cn5"] Feb 16 17:42:05.424564 master-0 kubenswrapper[4652]: I0216 17:42:05.423995 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.429174 master-0 kubenswrapper[4652]: I0216 17:42:05.428968 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 17:42:05.433342 master-0 kubenswrapper[4652]: I0216 17:42:05.432958 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 17:42:05.441974 master-0 kubenswrapper[4652]: I0216 17:42:05.440276 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-74cn5"] Feb 16 17:42:05.471674 master-0 kubenswrapper[4652]: I0216 17:42:05.470381 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-config-data\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.471674 master-0 kubenswrapper[4652]: I0216 17:42:05.470440 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-scripts\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.471674 master-0 kubenswrapper[4652]: I0216 17:42:05.470459 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-db-sync-config-data\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.471674 master-0 kubenswrapper[4652]: I0216 17:42:05.470484 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-combined-ca-bundle\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.471674 master-0 kubenswrapper[4652]: I0216 17:42:05.470558 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9405c7d-2ad3-46cf-b8e4-4c91feead991-etc-machine-id\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.471674 master-0 kubenswrapper[4652]: I0216 17:42:05.470578 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcdn9\" (UniqueName: \"kubernetes.io/projected/c9405c7d-2ad3-46cf-b8e4-4c91feead991-kube-api-access-fcdn9\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.473177 master-0 kubenswrapper[4652]: I0216 17:42:05.472234 4652 generic.go:334] "Generic (PLEG): container finished" podID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" containerID="0ef8461810b0b6c9fe3f8441a5e267cd29a10e466fbff4bf141f519a332cab08" exitCode=0 Feb 16 17:42:05.473177 master-0 kubenswrapper[4652]: I0216 17:42:05.472308 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" event={"ID":"930cccfa-080d-4b88-b4c5-c61bbebdd2ad","Type":"ContainerDied","Data":"0ef8461810b0b6c9fe3f8441a5e267cd29a10e466fbff4bf141f519a332cab08"} Feb 16 17:42:05.473177 master-0 kubenswrapper[4652]: I0216 17:42:05.472786 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9405c7d-2ad3-46cf-b8e4-4c91feead991-etc-machine-id\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.476819 master-0 kubenswrapper[4652]: I0216 17:42:05.476636 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-scripts\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.477291 master-0 kubenswrapper[4652]: I0216 17:42:05.477206 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-combined-ca-bundle\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.478161 master-0 kubenswrapper[4652]: I0216 17:42:05.478098 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-db-sync-config-data\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.480628 master-0 kubenswrapper[4652]: I0216 17:42:05.480590 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-config-data\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 
17:42:05.547312 master-0 kubenswrapper[4652]: I0216 17:42:05.502360 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcdn9\" (UniqueName: \"kubernetes.io/projected/c9405c7d-2ad3-46cf-b8e4-4c91feead991-kube-api-access-fcdn9\") pod \"cinder-c34a6-db-sync-5mcjg\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.547312 master-0 kubenswrapper[4652]: I0216 17:42:05.502473 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647b99b9f-kjks6"] Feb 16 17:42:05.547312 master-0 kubenswrapper[4652]: I0216 17:42:05.511682 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-mw67q"] Feb 16 17:42:05.547312 master-0 kubenswrapper[4652]: I0216 17:42:05.511892 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647b99b9f-kjks6" Feb 16 17:42:05.547312 master-0 kubenswrapper[4652]: I0216 17:42:05.513771 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.547312 master-0 kubenswrapper[4652]: I0216 17:42:05.517516 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 17:42:05.547312 master-0 kubenswrapper[4652]: I0216 17:42:05.518340 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 17:42:05.547312 master-0 kubenswrapper[4652]: I0216 17:42:05.532495 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-997495b47-lhjkc"] Feb 16 17:42:05.547312 master-0 kubenswrapper[4652]: I0216 17:42:05.534592 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.583332 master-0 kubenswrapper[4652]: I0216 17:42:05.575391 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mw67q"] Feb 16 17:42:05.583332 master-0 kubenswrapper[4652]: I0216 17:42:05.576155 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzhnz\" (UniqueName: \"kubernetes.io/projected/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-kube-api-access-vzhnz\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.583332 master-0 kubenswrapper[4652]: I0216 17:42:05.576361 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-config-data\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.583332 master-0 kubenswrapper[4652]: I0216 17:42:05.576427 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-scripts\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.583332 master-0 kubenswrapper[4652]: I0216 17:42:05.576470 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-combined-ca-bundle\") pod 
\"neutron-db-sync-74cn5\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") " pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.583332 master-0 kubenswrapper[4652]: I0216 17:42:05.576526 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqnj\" (UniqueName: \"kubernetes.io/projected/a9aa2fd7-c127-4b90-973d-67f8be387ef6-kube-api-access-lhqnj\") pod \"neutron-db-sync-74cn5\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") " pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.583332 master-0 kubenswrapper[4652]: I0216 17:42:05.576664 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-combined-ca-bundle\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.583332 master-0 kubenswrapper[4652]: I0216 17:42:05.576699 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-config\") pod \"neutron-db-sync-74cn5\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") " pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.583332 master-0 kubenswrapper[4652]: I0216 17:42:05.582207 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-logs\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.637842 master-0 kubenswrapper[4652]: I0216 17:42:05.635837 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-997495b47-lhjkc"] Feb 16 17:42:05.640954 master-0 kubenswrapper[4652]: I0216 17:42:05.640894 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:05.668236 master-0 kubenswrapper[4652]: I0216 17:42:05.667682 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" Feb 16 17:42:05.672383 master-0 kubenswrapper[4652]: I0216 17:42:05.672316 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.691781 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-svc\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.691845 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-sb\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.691901 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-logs\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.691995 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-swift-storage-0\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.692022 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-nb\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.692081 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzhnz\" (UniqueName: \"kubernetes.io/projected/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-kube-api-access-vzhnz\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.692133 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-config\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.692161 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-config-data\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.692193 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-scripts\") pod 
\"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.692224 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-combined-ca-bundle\") pod \"neutron-db-sync-74cn5\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") " pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.692834 master-0 kubenswrapper[4652]: I0216 17:42:05.692302 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhqnj\" (UniqueName: \"kubernetes.io/projected/a9aa2fd7-c127-4b90-973d-67f8be387ef6-kube-api-access-lhqnj\") pod \"neutron-db-sync-74cn5\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") " pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.709437 master-0 kubenswrapper[4652]: I0216 17:42:05.700150 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-combined-ca-bundle\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.709437 master-0 kubenswrapper[4652]: I0216 17:42:05.700207 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-config\") pod \"neutron-db-sync-74cn5\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") " pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.709437 master-0 kubenswrapper[4652]: I0216 17:42:05.700300 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txcdj\" (UniqueName: \"kubernetes.io/projected/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-kube-api-access-txcdj\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.709437 master-0 kubenswrapper[4652]: I0216 17:42:05.700936 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-logs\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.710066 master-0 kubenswrapper[4652]: I0216 17:42:05.710020 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-combined-ca-bundle\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.710386 master-0 kubenswrapper[4652]: I0216 17:42:05.710120 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-config\") pod \"neutron-db-sync-74cn5\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") " pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.710907 master-0 kubenswrapper[4652]: I0216 17:42:05.710866 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-config-data\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " 
pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.716972 master-0 kubenswrapper[4652]: I0216 17:42:05.716809 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-scripts\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.728532 master-0 kubenswrapper[4652]: I0216 17:42:05.728464 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzhnz\" (UniqueName: \"kubernetes.io/projected/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-kube-api-access-vzhnz\") pod \"placement-db-sync-mw67q\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.730321 master-0 kubenswrapper[4652]: I0216 17:42:05.730242 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-combined-ca-bundle\") pod \"neutron-db-sync-74cn5\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") " pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.731947 master-0 kubenswrapper[4652]: I0216 17:42:05.731797 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhqnj\" (UniqueName: \"kubernetes.io/projected/a9aa2fd7-c127-4b90-973d-67f8be387ef6-kube-api-access-lhqnj\") pod \"neutron-db-sync-74cn5\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") " pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.772496 master-0 kubenswrapper[4652]: I0216 17:42:05.771388 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:05.805330 master-0 kubenswrapper[4652]: I0216 17:42:05.803360 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-sb\") pod \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " Feb 16 17:42:05.807340 master-0 kubenswrapper[4652]: I0216 17:42:05.805825 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drhgf\" (UniqueName: \"kubernetes.io/projected/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-kube-api-access-drhgf\") pod \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " Feb 16 17:42:05.807340 master-0 kubenswrapper[4652]: I0216 17:42:05.805885 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-config\") pod \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " Feb 16 17:42:05.809362 master-0 kubenswrapper[4652]: I0216 17:42:05.807593 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-swift-storage-0\") pod \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " Feb 16 17:42:05.810165 master-0 kubenswrapper[4652]: I0216 17:42:05.810059 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-svc\") pod \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\" (UID: 
\"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " Feb 16 17:42:05.810410 master-0 kubenswrapper[4652]: I0216 17:42:05.810301 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-nb\") pod \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\" (UID: \"930cccfa-080d-4b88-b4c5-c61bbebdd2ad\") " Feb 16 17:42:05.813235 master-0 kubenswrapper[4652]: I0216 17:42:05.811867 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txcdj\" (UniqueName: \"kubernetes.io/projected/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-kube-api-access-txcdj\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.813235 master-0 kubenswrapper[4652]: I0216 17:42:05.812037 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-svc\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.813235 master-0 kubenswrapper[4652]: I0216 17:42:05.812058 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-sb\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.813235 master-0 kubenswrapper[4652]: I0216 17:42:05.812165 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-swift-storage-0\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.813235 master-0 kubenswrapper[4652]: I0216 17:42:05.812189 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-nb\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.813235 master-0 kubenswrapper[4652]: I0216 17:42:05.812279 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-config\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.813235 master-0 kubenswrapper[4652]: I0216 17:42:05.813236 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-config\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.829747 master-0 kubenswrapper[4652]: I0216 17:42:05.821655 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-sb\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " 
pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.829747 master-0 kubenswrapper[4652]: I0216 17:42:05.823523 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-kube-api-access-drhgf" (OuterVolumeSpecName: "kube-api-access-drhgf") pod "930cccfa-080d-4b88-b4c5-c61bbebdd2ad" (UID: "930cccfa-080d-4b88-b4c5-c61bbebdd2ad"). InnerVolumeSpecName "kube-api-access-drhgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:05.829747 master-0 kubenswrapper[4652]: I0216 17:42:05.824097 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-svc\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.829747 master-0 kubenswrapper[4652]: I0216 17:42:05.828601 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-swift-storage-0\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.829747 master-0 kubenswrapper[4652]: I0216 17:42:05.829264 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-nb\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.860144 master-0 kubenswrapper[4652]: I0216 17:42:05.860073 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:05.885047 master-0 kubenswrapper[4652]: I0216 17:42:05.884993 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "930cccfa-080d-4b88-b4c5-c61bbebdd2ad" (UID: "930cccfa-080d-4b88-b4c5-c61bbebdd2ad"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:05.911494 master-0 kubenswrapper[4652]: I0216 17:42:05.911342 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txcdj\" (UniqueName: \"kubernetes.io/projected/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-kube-api-access-txcdj\") pod \"dnsmasq-dns-997495b47-lhjkc\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:05.915099 master-0 kubenswrapper[4652]: I0216 17:42:05.914751 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:05.915099 master-0 kubenswrapper[4652]: I0216 17:42:05.914789 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drhgf\" (UniqueName: \"kubernetes.io/projected/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-kube-api-access-drhgf\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:05.940162 master-0 kubenswrapper[4652]: I0216 17:42:05.940080 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "930cccfa-080d-4b88-b4c5-c61bbebdd2ad" (UID: "930cccfa-080d-4b88-b4c5-c61bbebdd2ad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:05.941005 master-0 kubenswrapper[4652]: I0216 17:42:05.940917 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "930cccfa-080d-4b88-b4c5-c61bbebdd2ad" (UID: "930cccfa-080d-4b88-b4c5-c61bbebdd2ad"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:05.943576 master-0 kubenswrapper[4652]: I0216 17:42:05.943533 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "930cccfa-080d-4b88-b4c5-c61bbebdd2ad" (UID: "930cccfa-080d-4b88-b4c5-c61bbebdd2ad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:05.989386 master-0 kubenswrapper[4652]: I0216 17:42:05.989120 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-config" (OuterVolumeSpecName: "config") pod "930cccfa-080d-4b88-b4c5-c61bbebdd2ad" (UID: "930cccfa-080d-4b88-b4c5-c61bbebdd2ad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:05.992336 master-0 kubenswrapper[4652]: I0216 17:42:05.992296 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tgkq5"] Feb 16 17:42:06.015729 master-0 kubenswrapper[4652]: W0216 17:42:06.012848 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09b63a1e_c6ec_4046_bc54_585d4031c6ed.slice/crio-f207b04e5bb480162b91a6e2f3fe5fd340d2254c05a72756926fda6576a2d42b WatchSource:0}: Error finding container f207b04e5bb480162b91a6e2f3fe5fd340d2254c05a72756926fda6576a2d42b: Status 404 returned error can't find the container with id f207b04e5bb480162b91a6e2f3fe5fd340d2254c05a72756926fda6576a2d42b Feb 16 17:42:06.022807 master-0 kubenswrapper[4652]: I0216 17:42:06.020837 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:06.022807 master-0 kubenswrapper[4652]: I0216 17:42:06.020876 4652 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:06.022807 master-0 kubenswrapper[4652]: I0216 17:42:06.020887 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:06.022807 master-0 kubenswrapper[4652]: I0216 17:42:06.021666 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/930cccfa-080d-4b88-b4c5-c61bbebdd2ad-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:06.196378 master-0 kubenswrapper[4652]: I0216 17:42:06.183178 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-create-x89lf"] Feb 16 17:42:06.196378 master-0 kubenswrapper[4652]: I0216 17:42:06.191380 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:06.425877 master-0 kubenswrapper[4652]: I0216 17:42:06.425703 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647b99b9f-kjks6"] Feb 16 17:42:06.494364 master-0 kubenswrapper[4652]: I0216 17:42:06.491087 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tgkq5" event={"ID":"09b63a1e-c6ec-4046-bc54-585d4031c6ed","Type":"ContainerStarted","Data":"9e97e6351dfa9e9574d588b51453a711c4449b16480af391fab93cf8e4c34d6b"} Feb 16 17:42:06.494364 master-0 kubenswrapper[4652]: I0216 17:42:06.491213 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tgkq5" event={"ID":"09b63a1e-c6ec-4046-bc54-585d4031c6ed","Type":"ContainerStarted","Data":"f207b04e5bb480162b91a6e2f3fe5fd340d2254c05a72756926fda6576a2d42b"} Feb 16 17:42:06.494364 master-0 kubenswrapper[4652]: I0216 17:42:06.491225 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647b99b9f-kjks6" event={"ID":"bbc867aa-ee90-41ae-aeb5-831ecc0208ae","Type":"ContainerStarted","Data":"3a8ef53d4775cc78032b8a5df1a48f1e01b37311608b51d8c652b94d6c0c765b"} Feb 16 17:42:06.494364 master-0 kubenswrapper[4652]: I0216 17:42:06.492290 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-x89lf" event={"ID":"607e6833-eab2-4429-ac81-a161c3525702","Type":"ContainerStarted","Data":"be3388cd39951c430106b2b985e2986b094036f0fe7cca7238c8c0d119e755b9"} Feb 16 17:42:06.494364 master-0 kubenswrapper[4652]: I0216 17:42:06.494219 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" event={"ID":"930cccfa-080d-4b88-b4c5-c61bbebdd2ad","Type":"ContainerDied","Data":"0749d065e9f7658db2fcf5efcc1887cf15ec655f8a71c0b150c3b3170a4a6201"} Feb 16 17:42:06.494364 master-0 kubenswrapper[4652]: I0216 17:42:06.494265 4652 scope.go:117] "RemoveContainer" containerID="0ef8461810b0b6c9fe3f8441a5e267cd29a10e466fbff4bf141f519a332cab08" Feb 16 17:42:06.494781 master-0 kubenswrapper[4652]: I0216 17:42:06.494540 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75cf8458ff-jkkqn" Feb 16 17:42:06.575360 master-0 kubenswrapper[4652]: I0216 17:42:06.575240 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tgkq5" podStartSLOduration=2.575220437 podStartE2EDuration="2.575220437s" podCreationTimestamp="2026-02-16 17:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:06.553547176 +0000 UTC m=+1083.941715692" watchObservedRunningTime="2026-02-16 17:42:06.575220437 +0000 UTC m=+1083.963388953" Feb 16 17:42:06.610732 master-0 kubenswrapper[4652]: I0216 17:42:06.601384 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75cf8458ff-jkkqn"] Feb 16 17:42:06.614529 master-0 kubenswrapper[4652]: I0216 17:42:06.614069 4652 scope.go:117] "RemoveContainer" containerID="9a6252446f18a121cc6ec568e1849f04ca470b6228629b5039affc618d4c5ff9" Feb 16 17:42:06.642140 master-0 kubenswrapper[4652]: I0216 17:42:06.642087 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75cf8458ff-jkkqn"] Feb 16 17:42:06.655776 master-0 kubenswrapper[4652]: W0216 17:42:06.655710 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7068e65_9057_4efb_a478_53734617a8fe.slice/crio-2b0d90eb0e2aa5f2802a0f063eb66e53e94020a90e25334e179006066d6ee684 WatchSource:0}: Error finding container 2b0d90eb0e2aa5f2802a0f063eb66e53e94020a90e25334e179006066d6ee684: Status 404 returned error can't find the container with id 2b0d90eb0e2aa5f2802a0f063eb66e53e94020a90e25334e179006066d6ee684 Feb 16 17:42:06.658905 master-0 kubenswrapper[4652]: I0216 17:42:06.658769 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-c255-account-create-update-ttmxj"] Feb 16 17:42:06.764053 master-0 kubenswrapper[4652]: I0216 17:42:06.763989 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" path="/var/lib/kubelet/pods/930cccfa-080d-4b88-b4c5-c61bbebdd2ad/volumes" Feb 16 17:42:06.983043 master-0 kubenswrapper[4652]: I0216 17:42:06.982987 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:06.985117 master-0 kubenswrapper[4652]: E0216 17:42:06.985073 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" containerName="init" Feb 16 17:42:06.985117 master-0 kubenswrapper[4652]: I0216 17:42:06.985114 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" containerName="init" Feb 16 17:42:06.985354 master-0 kubenswrapper[4652]: E0216 17:42:06.985148 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" containerName="dnsmasq-dns" Feb 16 17:42:06.985354 master-0 kubenswrapper[4652]: I0216 17:42:06.985334 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" containerName="dnsmasq-dns" Feb 16 17:42:06.985654 master-0 kubenswrapper[4652]: I0216 17:42:06.985626 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="930cccfa-080d-4b88-b4c5-c61bbebdd2ad" containerName="dnsmasq-dns" Feb 16 17:42:06.987232 master-0 kubenswrapper[4652]: I0216 17:42:06.987208 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:06.991325 master-0 kubenswrapper[4652]: I0216 17:42:06.991207 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-50e08-default-external-config-data" Feb 16 17:42:06.991825 master-0 kubenswrapper[4652]: I0216 17:42:06.991798 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 17:42:07.014236 master-0 kubenswrapper[4652]: I0216 17:42:07.011093 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-db-sync-5mcjg"] Feb 16 17:42:07.061644 master-0 kubenswrapper[4652]: I0216 17:42:07.042226 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:07.061644 master-0 kubenswrapper[4652]: W0216 17:42:07.060989 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9aa2fd7_c127_4b90_973d_67f8be387ef6.slice/crio-c7e6efe73f042dc37728ac35efecebdd666e61d51e0de99b9b36ea9967837a01 WatchSource:0}: Error finding container c7e6efe73f042dc37728ac35efecebdd666e61d51e0de99b9b36ea9967837a01: Status 404 returned error can't find the container with id c7e6efe73f042dc37728ac35efecebdd666e61d51e0de99b9b36ea9967837a01 Feb 16 17:42:07.073082 master-0 kubenswrapper[4652]: I0216 17:42:07.069148 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-74cn5"] Feb 16 17:42:07.092805 master-0 kubenswrapper[4652]: I0216 17:42:07.086026 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.092805 master-0 kubenswrapper[4652]: I0216 17:42:07.086296 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68mt5\" (UniqueName: \"kubernetes.io/projected/444fa8d3-d5ca-4493-91eb-cab64b088d2b-kube-api-access-68mt5\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.092805 master-0 kubenswrapper[4652]: I0216 17:42:07.086342 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.092805 master-0 kubenswrapper[4652]: I0216 17:42:07.086365 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.092805 master-0 kubenswrapper[4652]: I0216 17:42:07.086411 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-httpd-run\") pod 
\"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.092805 master-0 kubenswrapper[4652]: I0216 17:42:07.086441 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.092805 master-0 kubenswrapper[4652]: I0216 17:42:07.086671 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.092805 master-0 kubenswrapper[4652]: I0216 17:42:07.092312 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-mw67q"] Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.190161 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.190311 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68mt5\" (UniqueName: \"kubernetes.io/projected/444fa8d3-d5ca-4493-91eb-cab64b088d2b-kube-api-access-68mt5\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.190342 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.190356 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.190385 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.190406 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-logs\") pod \"glance-50e08-default-external-api-0\" (UID: 
\"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.190441 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.194755 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.195745 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.195777 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/3d8bc9531c98396b6e6ea0108c18f808bdb9e170b0cc5e329df6a02a3996a78b/globalmount\"" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.197065 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.197303 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.199277 master-0 kubenswrapper[4652]: I0216 17:42:07.198995 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-997495b47-lhjkc"] Feb 16 17:42:07.200320 master-0 kubenswrapper[4652]: I0216 17:42:07.199870 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.206534 master-0 kubenswrapper[4652]: I0216 17:42:07.205693 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.222570 master-0 kubenswrapper[4652]: I0216 17:42:07.215504 4652 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:07.231516 master-0 kubenswrapper[4652]: E0216 17:42:07.226256 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[glance kube-api-access-68mt5], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-50e08-default-external-api-0" podUID="444fa8d3-d5ca-4493-91eb-cab64b088d2b" Feb 16 17:42:07.231516 master-0 kubenswrapper[4652]: I0216 17:42:07.227765 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68mt5\" (UniqueName: \"kubernetes.io/projected/444fa8d3-d5ca-4493-91eb-cab64b088d2b-kube-api-access-68mt5\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.332273 master-0 kubenswrapper[4652]: I0216 17:42:07.330714 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:42:07.342275 master-0 kubenswrapper[4652]: I0216 17:42:07.338598 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.342275 master-0 kubenswrapper[4652]: I0216 17:42:07.341449 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-50e08-default-internal-config-data" Feb 16 17:42:07.454279 master-0 kubenswrapper[4652]: I0216 17:42:07.449998 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:42:07.502275 master-0 kubenswrapper[4652]: I0216 17:42:07.499961 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-config-data\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.502275 master-0 kubenswrapper[4652]: I0216 17:42:07.500087 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.502275 master-0 kubenswrapper[4652]: I0216 17:42:07.500122 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-combined-ca-bundle\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.502275 master-0 kubenswrapper[4652]: I0216 17:42:07.500146 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-scripts\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.502275 master-0 kubenswrapper[4652]: I0216 17:42:07.500226 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mcqr7\" (UniqueName: \"kubernetes.io/projected/206bcb88-0042-48ac-a9cc-8a121b9fdb42-kube-api-access-mcqr7\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.502275 master-0 kubenswrapper[4652]: I0216 17:42:07.500421 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-httpd-run\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.502275 master-0 kubenswrapper[4652]: I0216 17:42:07.500458 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-logs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.517341 master-0 kubenswrapper[4652]: I0216 17:42:07.514633 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-db-sync-5mcjg" event={"ID":"c9405c7d-2ad3-46cf-b8e4-4c91feead991","Type":"ContainerStarted","Data":"55e563ae69f2964b622cf9c2b9690241e8e0d549a18b4750c5938bc832c1cf89"} Feb 16 17:42:07.517341 master-0 kubenswrapper[4652]: I0216 17:42:07.516410 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-74cn5" event={"ID":"a9aa2fd7-c127-4b90-973d-67f8be387ef6","Type":"ContainerStarted","Data":"7e9cf091a5f27ffb6fa78bcb63dc31fe17bd7187ecf9743206c305750a710cf4"} Feb 16 17:42:07.517341 master-0 kubenswrapper[4652]: I0216 17:42:07.516443 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-74cn5" event={"ID":"a9aa2fd7-c127-4b90-973d-67f8be387ef6","Type":"ContainerStarted","Data":"c7e6efe73f042dc37728ac35efecebdd666e61d51e0de99b9b36ea9967837a01"} Feb 16 17:42:07.526478 master-0 kubenswrapper[4652]: I0216 17:42:07.521101 4652 generic.go:334] "Generic (PLEG): container finished" podID="bbc867aa-ee90-41ae-aeb5-831ecc0208ae" containerID="3abb08bec15844e58cf00a4e30a895afb425ca53f6118e59a2f3d4f051a6318c" exitCode=0 Feb 16 17:42:07.526478 master-0 kubenswrapper[4652]: I0216 17:42:07.521205 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647b99b9f-kjks6" event={"ID":"bbc867aa-ee90-41ae-aeb5-831ecc0208ae","Type":"ContainerDied","Data":"3abb08bec15844e58cf00a4e30a895afb425ca53f6118e59a2f3d4f051a6318c"} Feb 16 17:42:07.526478 master-0 kubenswrapper[4652]: I0216 17:42:07.523750 4652 generic.go:334] "Generic (PLEG): container finished" podID="d7068e65-9057-4efb-a478-53734617a8fe" containerID="7861e37166c61949429d4e6033714e675c5f7aaea61e8b91765d0c35d244cfb6" exitCode=0 Feb 16 17:42:07.526478 master-0 kubenswrapper[4652]: I0216 17:42:07.523832 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c255-account-create-update-ttmxj" event={"ID":"d7068e65-9057-4efb-a478-53734617a8fe","Type":"ContainerDied","Data":"7861e37166c61949429d4e6033714e675c5f7aaea61e8b91765d0c35d244cfb6"} Feb 16 17:42:07.526478 master-0 kubenswrapper[4652]: I0216 17:42:07.523867 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c255-account-create-update-ttmxj" 
event={"ID":"d7068e65-9057-4efb-a478-53734617a8fe","Type":"ContainerStarted","Data":"2b0d90eb0e2aa5f2802a0f063eb66e53e94020a90e25334e179006066d6ee684"} Feb 16 17:42:07.531274 master-0 kubenswrapper[4652]: I0216 17:42:07.527947 4652 generic.go:334] "Generic (PLEG): container finished" podID="607e6833-eab2-4429-ac81-a161c3525702" containerID="bb340dd4be40ed1f61a3df17b5ea8cdc7837337166aa9d8be591accb8b17c863" exitCode=0 Feb 16 17:42:07.531274 master-0 kubenswrapper[4652]: I0216 17:42:07.528420 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-x89lf" event={"ID":"607e6833-eab2-4429-ac81-a161c3525702","Type":"ContainerDied","Data":"bb340dd4be40ed1f61a3df17b5ea8cdc7837337166aa9d8be591accb8b17c863"} Feb 16 17:42:07.531274 master-0 kubenswrapper[4652]: I0216 17:42:07.530752 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-997495b47-lhjkc" event={"ID":"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3","Type":"ContainerStarted","Data":"0ba81f351ddfe99e94a150828c80da775ce1855c6556a90b04de0ced042b4ad8"} Feb 16 17:42:07.532605 master-0 kubenswrapper[4652]: I0216 17:42:07.532493 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mw67q" event={"ID":"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a","Type":"ContainerStarted","Data":"f4370716fb3d46892bdc3c7c1420386ea47961d00778c527e39c754de6d57ece"} Feb 16 17:42:07.532605 master-0 kubenswrapper[4652]: I0216 17:42:07.532575 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.548371 master-0 kubenswrapper[4652]: I0216 17:42:07.547804 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-74cn5" podStartSLOduration=2.547785787 podStartE2EDuration="2.547785787s" podCreationTimestamp="2026-02-16 17:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:07.538315893 +0000 UTC m=+1084.926484409" watchObservedRunningTime="2026-02-16 17:42:07.547785787 +0000 UTC m=+1084.935954303" Feb 16 17:42:07.618176 master-0 kubenswrapper[4652]: I0216 17:42:07.618102 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.618583 master-0 kubenswrapper[4652]: I0216 17:42:07.618557 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-combined-ca-bundle\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.618644 master-0 kubenswrapper[4652]: I0216 17:42:07.618609 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-scripts\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.618735 master-0 kubenswrapper[4652]: I0216 17:42:07.618706 4652 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-mcqr7\" (UniqueName: \"kubernetes.io/projected/206bcb88-0042-48ac-a9cc-8a121b9fdb42-kube-api-access-mcqr7\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.619156 master-0 kubenswrapper[4652]: I0216 17:42:07.619123 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-httpd-run\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.619235 master-0 kubenswrapper[4652]: I0216 17:42:07.619202 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-logs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.619576 master-0 kubenswrapper[4652]: I0216 17:42:07.619547 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-config-data\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.621494 master-0 kubenswrapper[4652]: I0216 17:42:07.621446 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:07.622119 master-0 kubenswrapper[4652]: I0216 17:42:07.622069 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-httpd-run\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.622527 master-0 kubenswrapper[4652]: I0216 17:42:07.622403 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-logs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:07.624601 master-0 kubenswrapper[4652]: I0216 17:42:07.624563 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:42:07.624680 master-0 kubenswrapper[4652]: I0216 17:42:07.624604 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6145192c64db548fddf9bb3cc8141db5764e5395e391d0e15bf39805d4ff5e26/globalmount\"" pod="openstack/glance-50e08-default-internal-api-0"
Feb 16 17:42:07.625133 master-0 kubenswrapper[4652]: I0216 17:42:07.624922 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-scripts\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0"
Feb 16 17:42:07.654388 master-0 kubenswrapper[4652]: I0216 17:42:07.651797 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-combined-ca-bundle\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0"
Feb 16 17:42:07.654388 master-0 kubenswrapper[4652]: I0216 17:42:07.653742 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-config-data\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0"
Feb 16 17:42:07.655596 master-0 kubenswrapper[4652]: I0216 17:42:07.654834 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcqr7\" (UniqueName: \"kubernetes.io/projected/206bcb88-0042-48ac-a9cc-8a121b9fdb42-kube-api-access-mcqr7\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0"
Feb 16 17:42:07.721451 master-0 kubenswrapper[4652]: I0216 17:42:07.721393 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68mt5\" (UniqueName: \"kubernetes.io/projected/444fa8d3-d5ca-4493-91eb-cab64b088d2b-kube-api-access-68mt5\") pod \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") "
Feb 16 17:42:07.721672 master-0 kubenswrapper[4652]: I0216 17:42:07.721484 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-scripts\") pod \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") "
Feb 16 17:42:07.721672 master-0 kubenswrapper[4652]: I0216 17:42:07.721531 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-combined-ca-bundle\") pod \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") "
Feb 16 17:42:07.721672 master-0 kubenswrapper[4652]: I0216 17:42:07.721580 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-logs\") pod \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") "
Feb 16 17:42:07.721672 master-0 kubenswrapper[4652]: I0216 17:42:07.721601 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-httpd-run\") pod \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") "
Feb 16 17:42:07.721672 master-0 kubenswrapper[4652]: I0216 17:42:07.721649 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-config-data\") pod \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") "
Feb 16 17:42:07.725028 master-0 kubenswrapper[4652]: I0216 17:42:07.724992 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-config-data" (OuterVolumeSpecName: "config-data") pod "444fa8d3-d5ca-4493-91eb-cab64b088d2b" (UID: "444fa8d3-d5ca-4493-91eb-cab64b088d2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:42:07.725267 master-0 kubenswrapper[4652]: I0216 17:42:07.725224 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-logs" (OuterVolumeSpecName: "logs") pod "444fa8d3-d5ca-4493-91eb-cab64b088d2b" (UID: "444fa8d3-d5ca-4493-91eb-cab64b088d2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:42:07.725442 master-0 kubenswrapper[4652]: I0216 17:42:07.725413 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "444fa8d3-d5ca-4493-91eb-cab64b088d2b" (UID: "444fa8d3-d5ca-4493-91eb-cab64b088d2b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:42:07.725566 master-0 kubenswrapper[4652]: I0216 17:42:07.725519 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "444fa8d3-d5ca-4493-91eb-cab64b088d2b" (UID: "444fa8d3-d5ca-4493-91eb-cab64b088d2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:42:07.729029 master-0 kubenswrapper[4652]: I0216 17:42:07.729000 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444fa8d3-d5ca-4493-91eb-cab64b088d2b-kube-api-access-68mt5" (OuterVolumeSpecName: "kube-api-access-68mt5") pod "444fa8d3-d5ca-4493-91eb-cab64b088d2b" (UID: "444fa8d3-d5ca-4493-91eb-cab64b088d2b"). InnerVolumeSpecName "kube-api-access-68mt5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:42:07.729237 master-0 kubenswrapper[4652]: I0216 17:42:07.729160 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-scripts" (OuterVolumeSpecName: "scripts") pod "444fa8d3-d5ca-4493-91eb-cab64b088d2b" (UID: "444fa8d3-d5ca-4493-91eb-cab64b088d2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:42:07.826434 master-0 kubenswrapper[4652]: I0216 17:42:07.824162 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:07.826434 master-0 kubenswrapper[4652]: I0216 17:42:07.824197 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:07.826434 master-0 kubenswrapper[4652]: I0216 17:42:07.824206 4652 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/444fa8d3-d5ca-4493-91eb-cab64b088d2b-httpd-run\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:07.826434 master-0 kubenswrapper[4652]: I0216 17:42:07.824214 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:07.826434 master-0 kubenswrapper[4652]: I0216 17:42:07.824224 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68mt5\" (UniqueName: \"kubernetes.io/projected/444fa8d3-d5ca-4493-91eb-cab64b088d2b-kube-api-access-68mt5\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:07.826434 master-0 kubenswrapper[4652]: I0216 17:42:07.824233 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444fa8d3-d5ca-4493-91eb-cab64b088d2b-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:08.022443 master-0 kubenswrapper[4652]: I0216 17:42:08.022282 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647b99b9f-kjks6"
Feb 16 17:42:08.130185 master-0 kubenswrapper[4652]: I0216 17:42:08.129992 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-config\") pod \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") "
Feb 16 17:42:08.130185 master-0 kubenswrapper[4652]: I0216 17:42:08.130071 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-sb\") pod \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") "
Feb 16 17:42:08.130485 master-0 kubenswrapper[4652]: I0216 17:42:08.130213 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-svc\") pod \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") "
Feb 16 17:42:08.130485 master-0 kubenswrapper[4652]: I0216 17:42:08.130408 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-nb\") pod \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") "
Feb 16 17:42:08.130485 master-0 kubenswrapper[4652]: I0216 17:42:08.130429 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-swift-storage-0\") pod \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") "
Feb 16 17:42:08.130619 master-0 kubenswrapper[4652]: I0216 17:42:08.130585 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqf7z\" (UniqueName: \"kubernetes.io/projected/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-kube-api-access-hqf7z\") pod \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\" (UID: \"bbc867aa-ee90-41ae-aeb5-831ecc0208ae\") "
Feb 16 17:42:08.134336 master-0 kubenswrapper[4652]: I0216 17:42:08.134218 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-kube-api-access-hqf7z" (OuterVolumeSpecName: "kube-api-access-hqf7z") pod "bbc867aa-ee90-41ae-aeb5-831ecc0208ae" (UID: "bbc867aa-ee90-41ae-aeb5-831ecc0208ae"). InnerVolumeSpecName "kube-api-access-hqf7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:42:08.158640 master-0 kubenswrapper[4652]: I0216 17:42:08.158567 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bbc867aa-ee90-41ae-aeb5-831ecc0208ae" (UID: "bbc867aa-ee90-41ae-aeb5-831ecc0208ae"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:42:08.166794 master-0 kubenswrapper[4652]: I0216 17:42:08.166411 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bbc867aa-ee90-41ae-aeb5-831ecc0208ae" (UID: "bbc867aa-ee90-41ae-aeb5-831ecc0208ae"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:42:08.168527 master-0 kubenswrapper[4652]: I0216 17:42:08.166825 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-config" (OuterVolumeSpecName: "config") pod "bbc867aa-ee90-41ae-aeb5-831ecc0208ae" (UID: "bbc867aa-ee90-41ae-aeb5-831ecc0208ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:42:08.169738 master-0 kubenswrapper[4652]: I0216 17:42:08.169709 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bbc867aa-ee90-41ae-aeb5-831ecc0208ae" (UID: "bbc867aa-ee90-41ae-aeb5-831ecc0208ae"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:42:08.194417 master-0 kubenswrapper[4652]: I0216 17:42:08.194367 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bbc867aa-ee90-41ae-aeb5-831ecc0208ae" (UID: "bbc867aa-ee90-41ae-aeb5-831ecc0208ae"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:42:08.233314 master-0 kubenswrapper[4652]: I0216 17:42:08.233238 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:08.233314 master-0 kubenswrapper[4652]: I0216 17:42:08.233302 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:08.233314 master-0 kubenswrapper[4652]: I0216 17:42:08.233317 4652 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:08.233664 master-0 kubenswrapper[4652]: I0216 17:42:08.233328 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqf7z\" (UniqueName: \"kubernetes.io/projected/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-kube-api-access-hqf7z\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:08.233664 master-0 kubenswrapper[4652]: I0216 17:42:08.233346 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:08.233664 master-0 kubenswrapper[4652]: I0216 17:42:08.233357 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bbc867aa-ee90-41ae-aeb5-831ecc0208ae-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 16 17:42:08.553267 master-0 kubenswrapper[4652]: I0216 17:42:08.553057 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647b99b9f-kjks6" event={"ID":"bbc867aa-ee90-41ae-aeb5-831ecc0208ae","Type":"ContainerDied","Data":"3a8ef53d4775cc78032b8a5df1a48f1e01b37311608b51d8c652b94d6c0c765b"}
Feb 16 17:42:08.553267 master-0 kubenswrapper[4652]: I0216 17:42:08.553138 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647b99b9f-kjks6"
Feb 16 17:42:08.553267 master-0 kubenswrapper[4652]: I0216 17:42:08.553197 4652 scope.go:117] "RemoveContainer" containerID="3abb08bec15844e58cf00a4e30a895afb425ca53f6118e59a2f3d4f051a6318c"
Feb 16 17:42:08.565077 master-0 kubenswrapper[4652]: I0216 17:42:08.558697 4652 generic.go:334] "Generic (PLEG): container finished" podID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" containerID="5b77eae8f52143140c1240f9b433b52826f7d532b0156b521b127a534abda182" exitCode=0
Feb 16 17:42:08.565077 master-0 kubenswrapper[4652]: I0216 17:42:08.558828 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-997495b47-lhjkc" event={"ID":"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3","Type":"ContainerDied","Data":"5b77eae8f52143140c1240f9b433b52826f7d532b0156b521b127a534abda182"}
Feb 16 17:42:08.565077 master-0 kubenswrapper[4652]: I0216 17:42:08.558876 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.694269 master-0 kubenswrapper[4652]: I0216 17:42:08.692770 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.716413 master-0 kubenswrapper[4652]: I0216 17:42:08.707592 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-external-api-0"]
Feb 16 17:42:08.716413 master-0 kubenswrapper[4652]: I0216 17:42:08.713855 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-50e08-default-external-api-0"]
Feb 16 17:42:08.731276 master-0 kubenswrapper[4652]: I0216 17:42:08.728117 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-50e08-default-external-api-0"]
Feb 16 17:42:08.731276 master-0 kubenswrapper[4652]: E0216 17:42:08.729205 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbc867aa-ee90-41ae-aeb5-831ecc0208ae" containerName="init"
Feb 16 17:42:08.731276 master-0 kubenswrapper[4652]: I0216 17:42:08.729219 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbc867aa-ee90-41ae-aeb5-831ecc0208ae" containerName="init"
Feb 16 17:42:08.735278 master-0 kubenswrapper[4652]: I0216 17:42:08.732676 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbc867aa-ee90-41ae-aeb5-831ecc0208ae" containerName="init"
Feb 16 17:42:08.741310 master-0 kubenswrapper[4652]: I0216 17:42:08.735695 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.741310 master-0 kubenswrapper[4652]: I0216 17:42:08.739153 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-50e08-default-external-config-data"
Feb 16 17:42:08.861947 master-0 kubenswrapper[4652]: I0216 17:42:08.850527 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\" (UID: \"444fa8d3-d5ca-4493-91eb-cab64b088d2b\") "
Feb 16 17:42:08.861947 master-0 kubenswrapper[4652]: I0216 17:42:08.852587 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.861947 master-0 kubenswrapper[4652]: I0216 17:42:08.853126 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvh9g\" (UniqueName: \"kubernetes.io/projected/2e22863f-9673-436b-a912-4253af989909-kube-api-access-qvh9g\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.861947 master-0 kubenswrapper[4652]: I0216 17:42:08.853288 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.861947 master-0 kubenswrapper[4652]: I0216 17:42:08.853625 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.861947 master-0 kubenswrapper[4652]: I0216 17:42:08.853716 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.861947 master-0 kubenswrapper[4652]: I0216 17:42:08.853869 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.870276 master-0 kubenswrapper[4652]: I0216 17:42:08.866743 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647b99b9f-kjks6"]
Feb 16 17:42:08.870276 master-0 kubenswrapper[4652]: I0216 17:42:08.866802 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647b99b9f-kjks6"]
Feb 16 17:42:08.870276 master-0 kubenswrapper[4652]: I0216 17:42:08.866822 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-external-api-0"]
Feb 16 17:42:08.968275 master-0 kubenswrapper[4652]: I0216 17:42:08.960544 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvh9g\" (UniqueName: \"kubernetes.io/projected/2e22863f-9673-436b-a912-4253af989909-kube-api-access-qvh9g\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.968275 master-0 kubenswrapper[4652]: I0216 17:42:08.960643 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.968275 master-0 kubenswrapper[4652]: I0216 17:42:08.960735 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.968275 master-0 kubenswrapper[4652]: I0216 17:42:08.960767 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.968275 master-0 kubenswrapper[4652]: I0216 17:42:08.960833 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.968275 master-0 kubenswrapper[4652]: I0216 17:42:08.960926 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.968275 master-0 kubenswrapper[4652]: I0216 17:42:08.961851 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.982283 master-0 kubenswrapper[4652]: I0216 17:42:08.982209 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0"
Feb 16 17:42:08.993276 master-0 kubenswrapper[4652]: I0216 17:42:08.988042 4652 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:08.993276 master-0 kubenswrapper[4652]: I0216 17:42:08.989707 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:09.002279 master-0 kubenswrapper[4652]: I0216 17:42:08.998992 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvh9g\" (UniqueName: \"kubernetes.io/projected/2e22863f-9673-436b-a912-4253af989909-kube-api-access-qvh9g\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:09.071408 master-0 kubenswrapper[4652]: I0216 17:42:09.067777 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:09.243275 master-0 kubenswrapper[4652]: I0216 17:42:09.242024 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:09.323279 master-0 kubenswrapper[4652]: I0216 17:42:09.313843 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:09.404334 master-0 kubenswrapper[4652]: I0216 17:42:09.403985 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7068e65-9057-4efb-a478-53734617a8fe-operator-scripts\") pod \"d7068e65-9057-4efb-a478-53734617a8fe\" (UID: \"d7068e65-9057-4efb-a478-53734617a8fe\") " Feb 16 17:42:09.404498 master-0 kubenswrapper[4652]: I0216 17:42:09.404433 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/607e6833-eab2-4429-ac81-a161c3525702-operator-scripts\") pod \"607e6833-eab2-4429-ac81-a161c3525702\" (UID: \"607e6833-eab2-4429-ac81-a161c3525702\") " Feb 16 17:42:09.406806 master-0 kubenswrapper[4652]: I0216 17:42:09.404762 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5drg5\" (UniqueName: \"kubernetes.io/projected/607e6833-eab2-4429-ac81-a161c3525702-kube-api-access-5drg5\") pod \"607e6833-eab2-4429-ac81-a161c3525702\" (UID: \"607e6833-eab2-4429-ac81-a161c3525702\") " Feb 16 17:42:09.406806 master-0 kubenswrapper[4652]: I0216 17:42:09.404808 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzzbs\" (UniqueName: \"kubernetes.io/projected/d7068e65-9057-4efb-a478-53734617a8fe-kube-api-access-pzzbs\") pod \"d7068e65-9057-4efb-a478-53734617a8fe\" (UID: \"d7068e65-9057-4efb-a478-53734617a8fe\") " Feb 16 17:42:09.406806 master-0 kubenswrapper[4652]: I0216 17:42:09.405439 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/607e6833-eab2-4429-ac81-a161c3525702-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "607e6833-eab2-4429-ac81-a161c3525702" (UID: "607e6833-eab2-4429-ac81-a161c3525702"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:09.406806 master-0 kubenswrapper[4652]: I0216 17:42:09.406028 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7068e65-9057-4efb-a478-53734617a8fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7068e65-9057-4efb-a478-53734617a8fe" (UID: "d7068e65-9057-4efb-a478-53734617a8fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:09.407371 master-0 kubenswrapper[4652]: I0216 17:42:09.407335 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/607e6833-eab2-4429-ac81-a161c3525702-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:09.407371 master-0 kubenswrapper[4652]: I0216 17:42:09.407364 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7068e65-9057-4efb-a478-53734617a8fe-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:09.409114 master-0 kubenswrapper[4652]: I0216 17:42:09.408717 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607e6833-eab2-4429-ac81-a161c3525702-kube-api-access-5drg5" (OuterVolumeSpecName: "kube-api-access-5drg5") pod "607e6833-eab2-4429-ac81-a161c3525702" (UID: "607e6833-eab2-4429-ac81-a161c3525702"). InnerVolumeSpecName "kube-api-access-5drg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:09.417700 master-0 kubenswrapper[4652]: I0216 17:42:09.417180 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7068e65-9057-4efb-a478-53734617a8fe-kube-api-access-pzzbs" (OuterVolumeSpecName: "kube-api-access-pzzbs") pod "d7068e65-9057-4efb-a478-53734617a8fe" (UID: "d7068e65-9057-4efb-a478-53734617a8fe"). InnerVolumeSpecName "kube-api-access-pzzbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:09.512275 master-0 kubenswrapper[4652]: I0216 17:42:09.512126 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzzbs\" (UniqueName: \"kubernetes.io/projected/d7068e65-9057-4efb-a478-53734617a8fe-kube-api-access-pzzbs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:09.512275 master-0 kubenswrapper[4652]: I0216 17:42:09.512160 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5drg5\" (UniqueName: \"kubernetes.io/projected/607e6833-eab2-4429-ac81-a161c3525702-kube-api-access-5drg5\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:09.588813 master-0 kubenswrapper[4652]: I0216 17:42:09.588748 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-create-x89lf" event={"ID":"607e6833-eab2-4429-ac81-a161c3525702","Type":"ContainerDied","Data":"be3388cd39951c430106b2b985e2986b094036f0fe7cca7238c8c0d119e755b9"} Feb 16 17:42:09.588813 master-0 kubenswrapper[4652]: I0216 17:42:09.588803 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be3388cd39951c430106b2b985e2986b094036f0fe7cca7238c8c0d119e755b9" Feb 16 17:42:09.589086 master-0 kubenswrapper[4652]: I0216 17:42:09.588894 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-create-x89lf" Feb 16 17:42:09.591775 master-0 kubenswrapper[4652]: I0216 17:42:09.591733 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-997495b47-lhjkc" event={"ID":"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3","Type":"ContainerStarted","Data":"386676b15cb32a929b68b61ebacc8a6208451a2c271e0704bda2fd3ee92dcaa5"} Feb 16 17:42:09.591912 master-0 kubenswrapper[4652]: I0216 17:42:09.591888 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:09.597592 master-0 kubenswrapper[4652]: I0216 17:42:09.597510 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-c255-account-create-update-ttmxj" event={"ID":"d7068e65-9057-4efb-a478-53734617a8fe","Type":"ContainerDied","Data":"2b0d90eb0e2aa5f2802a0f063eb66e53e94020a90e25334e179006066d6ee684"} Feb 16 17:42:09.597592 master-0 kubenswrapper[4652]: I0216 17:42:09.597550 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0d90eb0e2aa5f2802a0f063eb66e53e94020a90e25334e179006066d6ee684" Feb 16 17:42:09.597830 master-0 kubenswrapper[4652]: I0216 17:42:09.597616 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-c255-account-create-update-ttmxj" Feb 16 17:42:09.625332 master-0 kubenswrapper[4652]: I0216 17:42:09.625200 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-997495b47-lhjkc" podStartSLOduration=4.625175934 podStartE2EDuration="4.625175934s" podCreationTimestamp="2026-02-16 17:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:09.619854872 +0000 UTC m=+1087.008023388" watchObservedRunningTime="2026-02-16 17:42:09.625175934 +0000 UTC m=+1087.013344470" Feb 16 17:42:10.419047 master-0 kubenswrapper[4652]: I0216 17:42:10.418998 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:10.464273 master-0 kubenswrapper[4652]: I0216 17:42:10.463005 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:10.573355 master-0 kubenswrapper[4652]: I0216 17:42:10.573204 4652 trace.go:236] Trace[50864878]: "Calculate volume metrics of mysql-db for pod openstack/openstack-galera-0" (16-Feb-2026 17:42:08.354) (total time: 2218ms): Feb 16 17:42:10.573355 master-0 kubenswrapper[4652]: Trace[50864878]: [2.218633272s] [2.218633272s] END Feb 16 17:42:10.595583 master-0 kubenswrapper[4652]: I0216 17:42:10.595534 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5" (OuterVolumeSpecName: "glance") pod "444fa8d3-d5ca-4493-91eb-cab64b088d2b" (UID: "444fa8d3-d5ca-4493-91eb-cab64b088d2b"). InnerVolumeSpecName "pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:42:10.615139 master-0 kubenswrapper[4652]: I0216 17:42:10.615044 4652 generic.go:334] "Generic (PLEG): container finished" podID="09b63a1e-c6ec-4046-bc54-585d4031c6ed" containerID="9e97e6351dfa9e9574d588b51453a711c4449b16480af391fab93cf8e4c34d6b" exitCode=0 Feb 16 17:42:10.615933 master-0 kubenswrapper[4652]: I0216 17:42:10.615904 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tgkq5" event={"ID":"09b63a1e-c6ec-4046-bc54-585d4031c6ed","Type":"ContainerDied","Data":"9e97e6351dfa9e9574d588b51453a711c4449b16480af391fab93cf8e4c34d6b"} Feb 16 17:42:10.657049 master-0 kubenswrapper[4652]: I0216 17:42:10.656992 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:10.758197 master-0 kubenswrapper[4652]: I0216 17:42:10.758132 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444fa8d3-d5ca-4493-91eb-cab64b088d2b" path="/var/lib/kubelet/pods/444fa8d3-d5ca-4493-91eb-cab64b088d2b/volumes" Feb 16 17:42:10.758827 master-0 kubenswrapper[4652]: I0216 17:42:10.758794 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbc867aa-ee90-41ae-aeb5-831ecc0208ae" path="/var/lib/kubelet/pods/bbc867aa-ee90-41ae-aeb5-831ecc0208ae/volumes" Feb 16 17:42:11.976039 master-0 kubenswrapper[4652]: I0216 17:42:11.975982 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:12.074315 master-0 kubenswrapper[4652]: I0216 17:42:12.073403 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:12.197353 master-0 kubenswrapper[4652]: I0216 17:42:12.195347 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-scripts\") pod \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " Feb 16 17:42:12.197353 master-0 kubenswrapper[4652]: I0216 17:42:12.195427 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-config-data\") pod \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " Feb 16 17:42:12.197353 master-0 kubenswrapper[4652]: I0216 17:42:12.195476 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-fernet-keys\") pod \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " Feb 16 17:42:12.197353 master-0 kubenswrapper[4652]: I0216 17:42:12.196145 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-combined-ca-bundle\") pod \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " Feb 16 17:42:12.197353 master-0 kubenswrapper[4652]: I0216 17:42:12.196530 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq4d6\" (UniqueName: \"kubernetes.io/projected/09b63a1e-c6ec-4046-bc54-585d4031c6ed-kube-api-access-dq4d6\") pod \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " Feb 16 17:42:12.197353 master-0 kubenswrapper[4652]: I0216 17:42:12.196586 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-credential-keys\") pod \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\" (UID: \"09b63a1e-c6ec-4046-bc54-585d4031c6ed\") " Feb 16 17:42:12.199118 master-0 kubenswrapper[4652]: I0216 17:42:12.199057 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "09b63a1e-c6ec-4046-bc54-585d4031c6ed" (UID: "09b63a1e-c6ec-4046-bc54-585d4031c6ed"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:12.199432 master-0 kubenswrapper[4652]: I0216 17:42:12.199376 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-scripts" (OuterVolumeSpecName: "scripts") pod "09b63a1e-c6ec-4046-bc54-585d4031c6ed" (UID: "09b63a1e-c6ec-4046-bc54-585d4031c6ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:12.201062 master-0 kubenswrapper[4652]: I0216 17:42:12.201020 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b63a1e-c6ec-4046-bc54-585d4031c6ed-kube-api-access-dq4d6" (OuterVolumeSpecName: "kube-api-access-dq4d6") pod "09b63a1e-c6ec-4046-bc54-585d4031c6ed" (UID: "09b63a1e-c6ec-4046-bc54-585d4031c6ed"). InnerVolumeSpecName "kube-api-access-dq4d6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:12.202371 master-0 kubenswrapper[4652]: I0216 17:42:12.202319 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "09b63a1e-c6ec-4046-bc54-585d4031c6ed" (UID: "09b63a1e-c6ec-4046-bc54-585d4031c6ed"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:12.224959 master-0 kubenswrapper[4652]: I0216 17:42:12.224878 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-config-data" (OuterVolumeSpecName: "config-data") pod "09b63a1e-c6ec-4046-bc54-585d4031c6ed" (UID: "09b63a1e-c6ec-4046-bc54-585d4031c6ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:12.234347 master-0 kubenswrapper[4652]: I0216 17:42:12.234300 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09b63a1e-c6ec-4046-bc54-585d4031c6ed" (UID: "09b63a1e-c6ec-4046-bc54-585d4031c6ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:12.278267 master-0 kubenswrapper[4652]: I0216 17:42:12.277780 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:12.279791 master-0 kubenswrapper[4652]: I0216 17:42:12.279726 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:42:12.283577 master-0 kubenswrapper[4652]: W0216 17:42:12.283536 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod206bcb88_0042_48ac_a9cc_8a121b9fdb42.slice/crio-f5a7912e89b7a7563d3f2d2e0937a5a96b4006ec817677dbcd47d40bf02641b8 WatchSource:0}: Error finding container f5a7912e89b7a7563d3f2d2e0937a5a96b4006ec817677dbcd47d40bf02641b8: Status 404 returned error can't find the container with id f5a7912e89b7a7563d3f2d2e0937a5a96b4006ec817677dbcd47d40bf02641b8 Feb 16 17:42:12.299363 master-0 kubenswrapper[4652]: I0216 17:42:12.299222 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq4d6\" (UniqueName: \"kubernetes.io/projected/09b63a1e-c6ec-4046-bc54-585d4031c6ed-kube-api-access-dq4d6\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:12.299363 master-0 kubenswrapper[4652]: I0216 17:42:12.299277 4652 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:12.299363 master-0 kubenswrapper[4652]: I0216 17:42:12.299290 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:12.299363 master-0 kubenswrapper[4652]: I0216 17:42:12.299302 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:12.299363 master-0 kubenswrapper[4652]: I0216 17:42:12.299313 4652 
reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:12.299363 master-0 kubenswrapper[4652]: I0216 17:42:12.299324 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b63a1e-c6ec-4046-bc54-585d4031c6ed-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:12.651941 master-0 kubenswrapper[4652]: I0216 17:42:12.650761 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mw67q" event={"ID":"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a","Type":"ContainerStarted","Data":"66c99613ddcc757ca3e590f3a5ebfd9250c3e880cf12ba1c7adc8a58b754987a"} Feb 16 17:42:12.657699 master-0 kubenswrapper[4652]: I0216 17:42:12.657637 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tgkq5" event={"ID":"09b63a1e-c6ec-4046-bc54-585d4031c6ed","Type":"ContainerDied","Data":"f207b04e5bb480162b91a6e2f3fe5fd340d2254c05a72756926fda6576a2d42b"} Feb 16 17:42:12.657699 master-0 kubenswrapper[4652]: I0216 17:42:12.657686 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tgkq5" Feb 16 17:42:12.657943 master-0 kubenswrapper[4652]: I0216 17:42:12.657718 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f207b04e5bb480162b91a6e2f3fe5fd340d2254c05a72756926fda6576a2d42b" Feb 16 17:42:12.674461 master-0 kubenswrapper[4652]: I0216 17:42:12.669273 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"206bcb88-0042-48ac-a9cc-8a121b9fdb42","Type":"ContainerStarted","Data":"f5a7912e89b7a7563d3f2d2e0937a5a96b4006ec817677dbcd47d40bf02641b8"} Feb 16 17:42:12.707649 master-0 kubenswrapper[4652]: I0216 17:42:12.706608 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-mw67q" podStartSLOduration=3.090388403 podStartE2EDuration="7.706582984s" podCreationTimestamp="2026-02-16 17:42:05 +0000 UTC" firstStartedPulling="2026-02-16 17:42:07.07522433 +0000 UTC m=+1084.463392846" lastFinishedPulling="2026-02-16 17:42:11.691418911 +0000 UTC m=+1089.079587427" observedRunningTime="2026-02-16 17:42:12.674661238 +0000 UTC m=+1090.062829754" watchObservedRunningTime="2026-02-16 17:42:12.706582984 +0000 UTC m=+1090.094751520" Feb 16 17:42:12.831133 master-0 kubenswrapper[4652]: I0216 17:42:12.831072 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tgkq5"] Feb 16 17:42:12.840498 master-0 kubenswrapper[4652]: I0216 17:42:12.839610 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tgkq5"] Feb 16 17:42:12.877760 master-0 kubenswrapper[4652]: I0216 17:42:12.877724 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:12.940101 master-0 kubenswrapper[4652]: I0216 17:42:12.939955 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-9w7qn"] Feb 16 17:42:12.940580 master-0 kubenswrapper[4652]: E0216 17:42:12.940539 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7068e65-9057-4efb-a478-53734617a8fe" containerName="mariadb-account-create-update" Feb 16 17:42:12.940580 master-0 kubenswrapper[4652]: I0216 17:42:12.940569 4652 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="d7068e65-9057-4efb-a478-53734617a8fe" containerName="mariadb-account-create-update" Feb 16 17:42:12.940692 master-0 kubenswrapper[4652]: E0216 17:42:12.940611 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09b63a1e-c6ec-4046-bc54-585d4031c6ed" containerName="keystone-bootstrap" Feb 16 17:42:12.940692 master-0 kubenswrapper[4652]: I0216 17:42:12.940621 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="09b63a1e-c6ec-4046-bc54-585d4031c6ed" containerName="keystone-bootstrap" Feb 16 17:42:12.940692 master-0 kubenswrapper[4652]: E0216 17:42:12.940644 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607e6833-eab2-4429-ac81-a161c3525702" containerName="mariadb-database-create" Feb 16 17:42:12.940692 master-0 kubenswrapper[4652]: I0216 17:42:12.940653 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="607e6833-eab2-4429-ac81-a161c3525702" containerName="mariadb-database-create" Feb 16 17:42:12.940949 master-0 kubenswrapper[4652]: I0216 17:42:12.940905 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="607e6833-eab2-4429-ac81-a161c3525702" containerName="mariadb-database-create" Feb 16 17:42:12.940949 master-0 kubenswrapper[4652]: I0216 17:42:12.940937 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7068e65-9057-4efb-a478-53734617a8fe" containerName="mariadb-account-create-update" Feb 16 17:42:12.941044 master-0 kubenswrapper[4652]: I0216 17:42:12.940959 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="09b63a1e-c6ec-4046-bc54-585d4031c6ed" containerName="keystone-bootstrap" Feb 16 17:42:12.941844 master-0 kubenswrapper[4652]: I0216 17:42:12.941807 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:12.947224 master-0 kubenswrapper[4652]: I0216 17:42:12.947183 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 17:42:12.947445 master-0 kubenswrapper[4652]: I0216 17:42:12.947424 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 17:42:12.947613 master-0 kubenswrapper[4652]: I0216 17:42:12.947585 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 17:42:12.950456 master-0 kubenswrapper[4652]: I0216 17:42:12.949264 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 17:42:12.971486 master-0 kubenswrapper[4652]: I0216 17:42:12.971398 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9w7qn"] Feb 16 17:42:13.021786 master-0 kubenswrapper[4652]: I0216 17:42:13.021282 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-combined-ca-bundle\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.021786 master-0 kubenswrapper[4652]: I0216 17:42:13.021398 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-credential-keys\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.021786 master-0 kubenswrapper[4652]: I0216 17:42:13.021560 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-fernet-keys\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.023274 master-0 kubenswrapper[4652]: I0216 17:42:13.021820 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-scripts\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.023274 master-0 kubenswrapper[4652]: I0216 17:42:13.021909 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-config-data\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.023274 master-0 kubenswrapper[4652]: I0216 17:42:13.021968 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twblf\" (UniqueName: \"kubernetes.io/projected/70d550d3-576a-460e-9595-6ade1d630c47-kube-api-access-twblf\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.125314 master-0 kubenswrapper[4652]: I0216 17:42:13.124296 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-fernet-keys\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.125314 master-0 kubenswrapper[4652]: I0216 17:42:13.124411 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-scripts\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.125314 master-0 kubenswrapper[4652]: I0216 17:42:13.124450 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-config-data\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.125314 master-0 kubenswrapper[4652]: I0216 17:42:13.124490 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twblf\" (UniqueName: \"kubernetes.io/projected/70d550d3-576a-460e-9595-6ade1d630c47-kube-api-access-twblf\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.125314 master-0 kubenswrapper[4652]: I0216 17:42:13.124551 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-combined-ca-bundle\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.125314 master-0 kubenswrapper[4652]: I0216 17:42:13.124612 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-credential-keys\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.128959 master-0 kubenswrapper[4652]: I0216 17:42:13.128903 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-credential-keys\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.132954 master-0 kubenswrapper[4652]: I0216 17:42:13.132881 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-fernet-keys\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.134148 master-0 kubenswrapper[4652]: I0216 17:42:13.134109 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-combined-ca-bundle\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.139740 master-0 kubenswrapper[4652]: I0216 17:42:13.139668 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-config-data\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.142459 master-0 kubenswrapper[4652]: I0216 17:42:13.142418 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twblf\" (UniqueName: \"kubernetes.io/projected/70d550d3-576a-460e-9595-6ade1d630c47-kube-api-access-twblf\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.146139 master-0 kubenswrapper[4652]: I0216 17:42:13.146091 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-scripts\") pod \"keystone-bootstrap-9w7qn\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.363716 master-0 kubenswrapper[4652]: I0216 17:42:13.363193 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:13.692717 master-0 kubenswrapper[4652]: I0216 17:42:13.692669 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"206bcb88-0042-48ac-a9cc-8a121b9fdb42","Type":"ContainerStarted","Data":"1ced99ce195946034eef55ee5e6062d9862949436bf20696711534985f9fee6f"} Feb 16 17:42:13.692717 master-0 kubenswrapper[4652]: I0216 17:42:13.692714 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"206bcb88-0042-48ac-a9cc-8a121b9fdb42","Type":"ContainerStarted","Data":"8363e92ba9815dcbcf8ebbe3399c5cfcb1619260ad50dd78888ab854a75d3199"} Feb 16 17:42:13.701350 master-0 kubenswrapper[4652]: I0216 17:42:13.697524 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"2e22863f-9673-436b-a912-4253af989909","Type":"ContainerStarted","Data":"0f8c346fff61ca37cc528b477dd49715c231378cd4531e518a9edd3021a4053e"} Feb 16 17:42:13.701350 master-0 kubenswrapper[4652]: I0216 17:42:13.697585 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"2e22863f-9673-436b-a912-4253af989909","Type":"ContainerStarted","Data":"d9b6daade23927535be95680f6b2b4418bfc793638dbf6698592c627f1aee79a"} Feb 16 17:42:13.740156 master-0 kubenswrapper[4652]: I0216 17:42:13.740063 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-50e08-default-internal-api-0" podStartSLOduration=6.740045776 podStartE2EDuration="6.740045776s" podCreationTimestamp="2026-02-16 17:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:13.736182283 +0000 UTC m=+1091.124350799" watchObservedRunningTime="2026-02-16 17:42:13.740045776 +0000 UTC m=+1091.128214312" Feb 16 17:42:13.840573 master-0 kubenswrapper[4652]: I0216 17:42:13.840485 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-9w7qn"] Feb 16 17:42:13.873378 master-0 kubenswrapper[4652]: W0216 17:42:13.864766 4652 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d550d3_576a_460e_9595_6ade1d630c47.slice/crio-3c64ef637b109ac41f99fa15ae830e067af2d102d87770686fb958dfa69a6826 WatchSource:0}: Error finding container 3c64ef637b109ac41f99fa15ae830e067af2d102d87770686fb958dfa69a6826: Status 404 returned error can't find the container with id 3c64ef637b109ac41f99fa15ae830e067af2d102d87770686fb958dfa69a6826 Feb 16 17:42:14.707676 master-0 kubenswrapper[4652]: I0216 17:42:14.707581 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"2e22863f-9673-436b-a912-4253af989909","Type":"ContainerStarted","Data":"2de5d5807e37bcd83a9045ad621e3dc5b56e8339e932c7d094dda1e6b76d4731"} Feb 16 17:42:14.712405 master-0 kubenswrapper[4652]: I0216 17:42:14.712354 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9w7qn" event={"ID":"70d550d3-576a-460e-9595-6ade1d630c47","Type":"ContainerStarted","Data":"31b99946c6bee2983a671733289b9481b3866d105c9a507d1cd3c9ad117064f7"} Feb 16 17:42:14.712405 master-0 kubenswrapper[4652]: I0216 17:42:14.712397 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9w7qn" event={"ID":"70d550d3-576a-460e-9595-6ade1d630c47","Type":"ContainerStarted","Data":"3c64ef637b109ac41f99fa15ae830e067af2d102d87770686fb958dfa69a6826"} Feb 16 17:42:14.755274 master-0 kubenswrapper[4652]: I0216 17:42:14.754845 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-50e08-default-external-api-0" podStartSLOduration=6.754825459 podStartE2EDuration="6.754825459s" podCreationTimestamp="2026-02-16 17:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:14.732636444 +0000 UTC m=+1092.120804970" watchObservedRunningTime="2026-02-16 17:42:14.754825459 +0000 UTC m=+1092.142993975" Feb 16 17:42:14.774505 master-0 kubenswrapper[4652]: I0216 17:42:14.774445 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b63a1e-c6ec-4046-bc54-585d4031c6ed" path="/var/lib/kubelet/pods/09b63a1e-c6ec-4046-bc54-585d4031c6ed/volumes" Feb 16 17:42:14.777612 master-0 kubenswrapper[4652]: I0216 17:42:14.777545 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-9w7qn" podStartSLOduration=2.777531737 podStartE2EDuration="2.777531737s" podCreationTimestamp="2026-02-16 17:42:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:14.766712077 +0000 UTC m=+1092.154880593" watchObservedRunningTime="2026-02-16 17:42:14.777531737 +0000 UTC m=+1092.165700253" Feb 16 17:42:15.311399 master-0 kubenswrapper[4652]: I0216 17:42:15.311322 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:15.380437 master-0 kubenswrapper[4652]: I0216 17:42:15.380365 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:42:15.456497 master-0 kubenswrapper[4652]: I0216 17:42:15.456346 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-db-sync-ndjf5"] Feb 16 17:42:15.458942 master-0 kubenswrapper[4652]: I0216 17:42:15.458890 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.462655 master-0 kubenswrapper[4652]: I0216 17:42:15.462603 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 16 17:42:15.463947 master-0 kubenswrapper[4652]: I0216 17:42:15.463915 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-scripts" Feb 16 17:42:15.477282 master-0 kubenswrapper[4652]: I0216 17:42:15.470680 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-ndjf5"] Feb 16 17:42:15.485276 master-0 kubenswrapper[4652]: I0216 17:42:15.484396 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-scripts\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.485276 master-0 kubenswrapper[4652]: I0216 17:42:15.484485 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-config-data\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.485276 master-0 kubenswrapper[4652]: I0216 17:42:15.484541 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e5005365-36c1-44e2-be02-84737aa7a60a-etc-podinfo\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.485276 master-0 kubenswrapper[4652]: I0216 17:42:15.484579 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lqxh\" (UniqueName: \"kubernetes.io/projected/e5005365-36c1-44e2-be02-84737aa7a60a-kube-api-access-7lqxh\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.485276 master-0 kubenswrapper[4652]: I0216 17:42:15.484629 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e5005365-36c1-44e2-be02-84737aa7a60a-config-data-merged\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.485276 master-0 kubenswrapper[4652]: I0216 17:42:15.484646 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-combined-ca-bundle\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.587024 master-0 kubenswrapper[4652]: I0216 17:42:15.586868 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e5005365-36c1-44e2-be02-84737aa7a60a-etc-podinfo\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.587024 master-0 kubenswrapper[4652]: I0216 17:42:15.586969 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7lqxh\" (UniqueName: \"kubernetes.io/projected/e5005365-36c1-44e2-be02-84737aa7a60a-kube-api-access-7lqxh\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.587282 master-0 kubenswrapper[4652]: I0216 17:42:15.587050 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e5005365-36c1-44e2-be02-84737aa7a60a-config-data-merged\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.587282 master-0 kubenswrapper[4652]: I0216 17:42:15.587077 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-combined-ca-bundle\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.587282 master-0 kubenswrapper[4652]: I0216 17:42:15.587121 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-scripts\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.587282 master-0 kubenswrapper[4652]: I0216 17:42:15.587219 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-config-data\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.588672 master-0 kubenswrapper[4652]: I0216 17:42:15.588108 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e5005365-36c1-44e2-be02-84737aa7a60a-config-data-merged\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.591532 master-0 kubenswrapper[4652]: I0216 17:42:15.591495 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-scripts\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.593337 master-0 kubenswrapper[4652]: I0216 17:42:15.593296 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-combined-ca-bundle\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.600876 master-0 kubenswrapper[4652]: I0216 17:42:15.598988 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-config-data\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.600876 master-0 kubenswrapper[4652]: I0216 17:42:15.599703 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e5005365-36c1-44e2-be02-84737aa7a60a-etc-podinfo\") pod \"ironic-db-sync-ndjf5\" 
(UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.604935 master-0 kubenswrapper[4652]: I0216 17:42:15.604884 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lqxh\" (UniqueName: \"kubernetes.io/projected/e5005365-36c1-44e2-be02-84737aa7a60a-kube-api-access-7lqxh\") pod \"ironic-db-sync-ndjf5\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:15.730845 master-0 kubenswrapper[4652]: I0216 17:42:15.730787 4652 generic.go:334] "Generic (PLEG): container finished" podID="4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" containerID="66c99613ddcc757ca3e590f3a5ebfd9250c3e880cf12ba1c7adc8a58b754987a" exitCode=0 Feb 16 17:42:15.731487 master-0 kubenswrapper[4652]: I0216 17:42:15.730859 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mw67q" event={"ID":"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a","Type":"ContainerDied","Data":"66c99613ddcc757ca3e590f3a5ebfd9250c3e880cf12ba1c7adc8a58b754987a"} Feb 16 17:42:15.731487 master-0 kubenswrapper[4652]: I0216 17:42:15.731100 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-50e08-default-internal-api-0" podUID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerName="glance-log" containerID="cri-o://8363e92ba9815dcbcf8ebbe3399c5cfcb1619260ad50dd78888ab854a75d3199" gracePeriod=30 Feb 16 17:42:15.731487 master-0 kubenswrapper[4652]: I0216 17:42:15.731397 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-50e08-default-internal-api-0" podUID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerName="glance-httpd" containerID="cri-o://1ced99ce195946034eef55ee5e6062d9862949436bf20696711534985f9fee6f" gracePeriod=30 Feb 16 17:42:15.798576 master-0 kubenswrapper[4652]: I0216 17:42:15.798516 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:16.384813 master-0 kubenswrapper[4652]: I0216 17:42:16.383210 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:42:16.743813 master-0 kubenswrapper[4652]: I0216 17:42:16.743754 4652 generic.go:334] "Generic (PLEG): container finished" podID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerID="1ced99ce195946034eef55ee5e6062d9862949436bf20696711534985f9fee6f" exitCode=0 Feb 16 17:42:16.743813 master-0 kubenswrapper[4652]: I0216 17:42:16.743791 4652 generic.go:334] "Generic (PLEG): container finished" podID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerID="8363e92ba9815dcbcf8ebbe3399c5cfcb1619260ad50dd78888ab854a75d3199" exitCode=143 Feb 16 17:42:16.744487 master-0 kubenswrapper[4652]: I0216 17:42:16.743837 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"206bcb88-0042-48ac-a9cc-8a121b9fdb42","Type":"ContainerDied","Data":"1ced99ce195946034eef55ee5e6062d9862949436bf20696711534985f9fee6f"} Feb 16 17:42:16.744487 master-0 kubenswrapper[4652]: I0216 17:42:16.743898 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"206bcb88-0042-48ac-a9cc-8a121b9fdb42","Type":"ContainerDied","Data":"8363e92ba9815dcbcf8ebbe3399c5cfcb1619260ad50dd78888ab854a75d3199"} Feb 16 17:42:16.744487 master-0 kubenswrapper[4652]: I0216 17:42:16.744016 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-50e08-default-external-api-0" podUID="2e22863f-9673-436b-a912-4253af989909" containerName="glance-log" containerID="cri-o://0f8c346fff61ca37cc528b477dd49715c231378cd4531e518a9edd3021a4053e" gracePeriod=30 Feb 16 17:42:16.744487 master-0 kubenswrapper[4652]: I0216 17:42:16.744075 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-50e08-default-external-api-0" podUID="2e22863f-9673-436b-a912-4253af989909" containerName="glance-httpd" containerID="cri-o://2de5d5807e37bcd83a9045ad621e3dc5b56e8339e932c7d094dda1e6b76d4731" gracePeriod=30 Feb 16 17:42:17.400634 master-0 kubenswrapper[4652]: I0216 17:42:17.400571 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-4rvpk"] Feb 16 17:42:17.400875 master-0 kubenswrapper[4652]: I0216 17:42:17.400843 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" podUID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" containerName="dnsmasq-dns" containerID="cri-o://d2405f65f5a9e898a93938632be47f9bfa5859d60eb354f495245df0535a938b" gracePeriod=10 Feb 16 17:42:17.763926 master-0 kubenswrapper[4652]: I0216 17:42:17.763708 4652 generic.go:334] "Generic (PLEG): container finished" podID="70d550d3-576a-460e-9595-6ade1d630c47" containerID="31b99946c6bee2983a671733289b9481b3866d105c9a507d1cd3c9ad117064f7" exitCode=0 Feb 16 17:42:17.763926 master-0 kubenswrapper[4652]: I0216 17:42:17.763798 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9w7qn" event={"ID":"70d550d3-576a-460e-9595-6ade1d630c47","Type":"ContainerDied","Data":"31b99946c6bee2983a671733289b9481b3866d105c9a507d1cd3c9ad117064f7"} Feb 16 17:42:17.768947 master-0 kubenswrapper[4652]: I0216 17:42:17.768905 4652 generic.go:334] "Generic (PLEG): container finished" podID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" 
containerID="d2405f65f5a9e898a93938632be47f9bfa5859d60eb354f495245df0535a938b" exitCode=0 Feb 16 17:42:17.769076 master-0 kubenswrapper[4652]: I0216 17:42:17.768984 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" event={"ID":"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742","Type":"ContainerDied","Data":"d2405f65f5a9e898a93938632be47f9bfa5859d60eb354f495245df0535a938b"} Feb 16 17:42:17.771372 master-0 kubenswrapper[4652]: I0216 17:42:17.771332 4652 generic.go:334] "Generic (PLEG): container finished" podID="2e22863f-9673-436b-a912-4253af989909" containerID="2de5d5807e37bcd83a9045ad621e3dc5b56e8339e932c7d094dda1e6b76d4731" exitCode=0 Feb 16 17:42:17.771372 master-0 kubenswrapper[4652]: I0216 17:42:17.771370 4652 generic.go:334] "Generic (PLEG): container finished" podID="2e22863f-9673-436b-a912-4253af989909" containerID="0f8c346fff61ca37cc528b477dd49715c231378cd4531e518a9edd3021a4053e" exitCode=143 Feb 16 17:42:17.771531 master-0 kubenswrapper[4652]: I0216 17:42:17.771397 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"2e22863f-9673-436b-a912-4253af989909","Type":"ContainerDied","Data":"2de5d5807e37bcd83a9045ad621e3dc5b56e8339e932c7d094dda1e6b76d4731"} Feb 16 17:42:17.771531 master-0 kubenswrapper[4652]: I0216 17:42:17.771448 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"2e22863f-9673-436b-a912-4253af989909","Type":"ContainerDied","Data":"0f8c346fff61ca37cc528b477dd49715c231378cd4531e518a9edd3021a4053e"} Feb 16 17:42:21.669124 master-0 kubenswrapper[4652]: I0216 17:42:21.669032 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" podUID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.150:5353: connect: connection refused" Feb 16 17:42:23.618055 master-0 kubenswrapper[4652]: I0216 17:42:23.618015 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:23.637704 master-0 kubenswrapper[4652]: I0216 17:42:23.637657 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:23.673119 master-0 kubenswrapper[4652]: I0216 17:42:23.673069 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-logs\") pod \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " Feb 16 17:42:23.673337 master-0 kubenswrapper[4652]: I0216 17:42:23.673217 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-combined-ca-bundle\") pod \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " Feb 16 17:42:23.673337 master-0 kubenswrapper[4652]: I0216 17:42:23.673282 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-config-data\") pod \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " Feb 16 17:42:23.673337 master-0 kubenswrapper[4652]: I0216 17:42:23.673309 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twblf\" (UniqueName: \"kubernetes.io/projected/70d550d3-576a-460e-9595-6ade1d630c47-kube-api-access-twblf\") pod \"70d550d3-576a-460e-9595-6ade1d630c47\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " Feb 16 17:42:23.673337 master-0 kubenswrapper[4652]: I0216 17:42:23.673327 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-fernet-keys\") pod \"70d550d3-576a-460e-9595-6ade1d630c47\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " Feb 16 17:42:23.673531 master-0 kubenswrapper[4652]: I0216 17:42:23.673390 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzhnz\" (UniqueName: \"kubernetes.io/projected/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-kube-api-access-vzhnz\") pod \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " Feb 16 17:42:23.673531 master-0 kubenswrapper[4652]: I0216 17:42:23.673410 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-scripts\") pod \"70d550d3-576a-460e-9595-6ade1d630c47\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " Feb 16 17:42:23.673531 master-0 kubenswrapper[4652]: I0216 17:42:23.673452 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-config-data\") pod \"70d550d3-576a-460e-9595-6ade1d630c47\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " Feb 16 17:42:23.673531 master-0 kubenswrapper[4652]: I0216 17:42:23.673504 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-logs" (OuterVolumeSpecName: "logs") pod "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" (UID: "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:42:23.673704 master-0 kubenswrapper[4652]: I0216 17:42:23.673539 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-combined-ca-bundle\") pod \"70d550d3-576a-460e-9595-6ade1d630c47\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " Feb 16 17:42:23.673704 master-0 kubenswrapper[4652]: I0216 17:42:23.673566 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-scripts\") pod \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\" (UID: \"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a\") " Feb 16 17:42:23.673704 master-0 kubenswrapper[4652]: I0216 17:42:23.673628 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-credential-keys\") pod \"70d550d3-576a-460e-9595-6ade1d630c47\" (UID: \"70d550d3-576a-460e-9595-6ade1d630c47\") " Feb 16 17:42:23.674066 master-0 kubenswrapper[4652]: I0216 17:42:23.674035 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.677043 master-0 kubenswrapper[4652]: I0216 17:42:23.677007 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "70d550d3-576a-460e-9595-6ade1d630c47" (UID: "70d550d3-576a-460e-9595-6ade1d630c47"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:23.681959 master-0 kubenswrapper[4652]: I0216 17:42:23.681912 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-scripts" (OuterVolumeSpecName: "scripts") pod "70d550d3-576a-460e-9595-6ade1d630c47" (UID: "70d550d3-576a-460e-9595-6ade1d630c47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:23.681959 master-0 kubenswrapper[4652]: I0216 17:42:23.681949 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d550d3-576a-460e-9595-6ade1d630c47-kube-api-access-twblf" (OuterVolumeSpecName: "kube-api-access-twblf") pod "70d550d3-576a-460e-9595-6ade1d630c47" (UID: "70d550d3-576a-460e-9595-6ade1d630c47"). InnerVolumeSpecName "kube-api-access-twblf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:23.682119 master-0 kubenswrapper[4652]: I0216 17:42:23.681958 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "70d550d3-576a-460e-9595-6ade1d630c47" (UID: "70d550d3-576a-460e-9595-6ade1d630c47"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:23.682476 master-0 kubenswrapper[4652]: I0216 17:42:23.682141 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-kube-api-access-vzhnz" (OuterVolumeSpecName: "kube-api-access-vzhnz") pod "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" (UID: "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a"). InnerVolumeSpecName "kube-api-access-vzhnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:23.688298 master-0 kubenswrapper[4652]: I0216 17:42:23.686569 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-scripts" (OuterVolumeSpecName: "scripts") pod "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" (UID: "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:23.702503 master-0 kubenswrapper[4652]: I0216 17:42:23.702318 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70d550d3-576a-460e-9595-6ade1d630c47" (UID: "70d550d3-576a-460e-9595-6ade1d630c47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:23.708412 master-0 kubenswrapper[4652]: I0216 17:42:23.708373 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-config-data" (OuterVolumeSpecName: "config-data") pod "70d550d3-576a-460e-9595-6ade1d630c47" (UID: "70d550d3-576a-460e-9595-6ade1d630c47"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:23.712417 master-0 kubenswrapper[4652]: I0216 17:42:23.712364 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-config-data" (OuterVolumeSpecName: "config-data") pod "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" (UID: "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:23.712417 master-0 kubenswrapper[4652]: I0216 17:42:23.712373 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" (UID: "4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775375 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775408 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775419 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twblf\" (UniqueName: \"kubernetes.io/projected/70d550d3-576a-460e-9595-6ade1d630c47-kube-api-access-twblf\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775431 4652 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-fernet-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775441 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzhnz\" (UniqueName: \"kubernetes.io/projected/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-kube-api-access-vzhnz\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775451 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775459 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775467 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775475 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.775648 master-0 kubenswrapper[4652]: I0216 17:42:23.775482 4652 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/70d550d3-576a-460e-9595-6ade1d630c47-credential-keys\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:23.842176 master-0 kubenswrapper[4652]: I0216 17:42:23.842130 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-9w7qn" event={"ID":"70d550d3-576a-460e-9595-6ade1d630c47","Type":"ContainerDied","Data":"3c64ef637b109ac41f99fa15ae830e067af2d102d87770686fb958dfa69a6826"} Feb 16 17:42:23.842304 master-0 kubenswrapper[4652]: I0216 17:42:23.842180 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c64ef637b109ac41f99fa15ae830e067af2d102d87770686fb958dfa69a6826" Feb 16 17:42:23.842304 master-0 kubenswrapper[4652]: I0216 17:42:23.842235 4652 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-bootstrap-9w7qn" Feb 16 17:42:23.849310 master-0 kubenswrapper[4652]: I0216 17:42:23.849258 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-mw67q" event={"ID":"4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a","Type":"ContainerDied","Data":"f4370716fb3d46892bdc3c7c1420386ea47961d00778c527e39c754de6d57ece"} Feb 16 17:42:23.849411 master-0 kubenswrapper[4652]: I0216 17:42:23.849316 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4370716fb3d46892bdc3c7c1420386ea47961d00778c527e39c754de6d57ece" Feb 16 17:42:23.849411 master-0 kubenswrapper[4652]: I0216 17:42:23.849400 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-mw67q" Feb 16 17:42:23.856220 master-0 kubenswrapper[4652]: I0216 17:42:23.856157 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" Feb 16 17:42:23.981411 master-0 kubenswrapper[4652]: I0216 17:42:23.981272 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-config\") pod \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " Feb 16 17:42:23.981411 master-0 kubenswrapper[4652]: I0216 17:42:23.981325 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-nb\") pod \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " Feb 16 17:42:23.981689 master-0 kubenswrapper[4652]: I0216 17:42:23.981493 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftjzq\" (UniqueName: \"kubernetes.io/projected/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-kube-api-access-ftjzq\") pod \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " Feb 16 17:42:23.981689 master-0 kubenswrapper[4652]: I0216 17:42:23.981580 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-dns-svc\") pod \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " Feb 16 17:42:23.981689 master-0 kubenswrapper[4652]: I0216 17:42:23.981614 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-sb\") pod \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\" (UID: \"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742\") " Feb 16 17:42:23.990679 master-0 kubenswrapper[4652]: I0216 17:42:23.990623 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-kube-api-access-ftjzq" (OuterVolumeSpecName: "kube-api-access-ftjzq") pod "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" (UID: "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742"). InnerVolumeSpecName "kube-api-access-ftjzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:24.038366 master-0 kubenswrapper[4652]: I0216 17:42:24.038289 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-config" (OuterVolumeSpecName: "config") pod "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" (UID: "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:24.045683 master-0 kubenswrapper[4652]: I0216 17:42:24.045629 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" (UID: "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:24.045874 master-0 kubenswrapper[4652]: I0216 17:42:24.045707 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" (UID: "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:24.049777 master-0 kubenswrapper[4652]: I0216 17:42:24.049749 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" (UID: "6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:24.056928 master-0 kubenswrapper[4652]: I0216 17:42:24.056853 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:24.084066 master-0 kubenswrapper[4652]: I0216 17:42:24.083971 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.084066 master-0 kubenswrapper[4652]: I0216 17:42:24.084007 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.084066 master-0 kubenswrapper[4652]: I0216 17:42:24.084021 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.084066 master-0 kubenswrapper[4652]: I0216 17:42:24.084030 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.084066 master-0 kubenswrapper[4652]: I0216 17:42:24.084040 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftjzq\" (UniqueName: \"kubernetes.io/projected/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742-kube-api-access-ftjzq\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.185652 master-0 kubenswrapper[4652]: I0216 17:42:24.184963 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-config-data\") pod \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " Feb 16 17:42:24.185652 master-0 kubenswrapper[4652]: I0216 17:42:24.185058 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-httpd-run\") pod \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " Feb 16 17:42:24.185652 master-0 kubenswrapper[4652]: I0216 17:42:24.185217 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " Feb 16 17:42:24.185652 master-0 kubenswrapper[4652]: I0216 17:42:24.185330 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-scripts\") pod \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " Feb 16 17:42:24.185652 master-0 kubenswrapper[4652]: I0216 17:42:24.185371 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-logs\") pod \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " Feb 16 17:42:24.185652 master-0 kubenswrapper[4652]: I0216 17:42:24.185403 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcqr7\" (UniqueName: \"kubernetes.io/projected/206bcb88-0042-48ac-a9cc-8a121b9fdb42-kube-api-access-mcqr7\") pod 
\"206bcb88-0042-48ac-a9cc-8a121b9fdb42\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " Feb 16 17:42:24.185652 master-0 kubenswrapper[4652]: I0216 17:42:24.185502 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-combined-ca-bundle\") pod \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\" (UID: \"206bcb88-0042-48ac-a9cc-8a121b9fdb42\") " Feb 16 17:42:24.187224 master-0 kubenswrapper[4652]: I0216 17:42:24.187184 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-logs" (OuterVolumeSpecName: "logs") pod "206bcb88-0042-48ac-a9cc-8a121b9fdb42" (UID: "206bcb88-0042-48ac-a9cc-8a121b9fdb42"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:42:24.189684 master-0 kubenswrapper[4652]: I0216 17:42:24.189654 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "206bcb88-0042-48ac-a9cc-8a121b9fdb42" (UID: "206bcb88-0042-48ac-a9cc-8a121b9fdb42"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:42:24.189866 master-0 kubenswrapper[4652]: I0216 17:42:24.189831 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-scripts" (OuterVolumeSpecName: "scripts") pod "206bcb88-0042-48ac-a9cc-8a121b9fdb42" (UID: "206bcb88-0042-48ac-a9cc-8a121b9fdb42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:24.189960 master-0 kubenswrapper[4652]: I0216 17:42:24.189934 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:24.190438 master-0 kubenswrapper[4652]: I0216 17:42:24.190383 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206bcb88-0042-48ac-a9cc-8a121b9fdb42-kube-api-access-mcqr7" (OuterVolumeSpecName: "kube-api-access-mcqr7") pod "206bcb88-0042-48ac-a9cc-8a121b9fdb42" (UID: "206bcb88-0042-48ac-a9cc-8a121b9fdb42"). InnerVolumeSpecName "kube-api-access-mcqr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:24.210197 master-0 kubenswrapper[4652]: I0216 17:42:24.210145 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364" (OuterVolumeSpecName: "glance") pod "206bcb88-0042-48ac-a9cc-8a121b9fdb42" (UID: "206bcb88-0042-48ac-a9cc-8a121b9fdb42"). InnerVolumeSpecName "pvc-f5bb6936-02e9-48af-847a-b5f88beeba22". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:42:24.219960 master-0 kubenswrapper[4652]: I0216 17:42:24.219906 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "206bcb88-0042-48ac-a9cc-8a121b9fdb42" (UID: "206bcb88-0042-48ac-a9cc-8a121b9fdb42"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:24.236616 master-0 kubenswrapper[4652]: I0216 17:42:24.236462 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-config-data" (OuterVolumeSpecName: "config-data") pod "206bcb88-0042-48ac-a9cc-8a121b9fdb42" (UID: "206bcb88-0042-48ac-a9cc-8a121b9fdb42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:24.289406 master-0 kubenswrapper[4652]: I0216 17:42:24.289345 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"2e22863f-9673-436b-a912-4253af989909\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " Feb 16 17:42:24.289624 master-0 kubenswrapper[4652]: I0216 17:42:24.289430 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-config-data\") pod \"2e22863f-9673-436b-a912-4253af989909\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " Feb 16 17:42:24.289624 master-0 kubenswrapper[4652]: I0216 17:42:24.289532 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvh9g\" (UniqueName: \"kubernetes.io/projected/2e22863f-9673-436b-a912-4253af989909-kube-api-access-qvh9g\") pod \"2e22863f-9673-436b-a912-4253af989909\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " Feb 16 17:42:24.289718 master-0 kubenswrapper[4652]: I0216 17:42:24.289684 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-logs\") pod \"2e22863f-9673-436b-a912-4253af989909\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " Feb 16 17:42:24.289785 master-0 kubenswrapper[4652]: I0216 17:42:24.289746 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-scripts\") pod \"2e22863f-9673-436b-a912-4253af989909\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " Feb 16 17:42:24.289916 master-0 kubenswrapper[4652]: I0216 17:42:24.289827 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-httpd-run\") pod \"2e22863f-9673-436b-a912-4253af989909\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " Feb 16 17:42:24.289975 master-0 kubenswrapper[4652]: I0216 17:42:24.289947 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-combined-ca-bundle\") pod \"2e22863f-9673-436b-a912-4253af989909\" (UID: \"2e22863f-9673-436b-a912-4253af989909\") " Feb 16 17:42:24.290526 master-0 kubenswrapper[4652]: I0216 17:42:24.290470 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2e22863f-9673-436b-a912-4253af989909" (UID: "2e22863f-9673-436b-a912-4253af989909"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:42:24.290643 master-0 kubenswrapper[4652]: I0216 17:42:24.290555 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-logs" (OuterVolumeSpecName: "logs") pod "2e22863f-9673-436b-a912-4253af989909" (UID: "2e22863f-9673-436b-a912-4253af989909"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:42:24.291662 master-0 kubenswrapper[4652]: I0216 17:42:24.291578 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.291662 master-0 kubenswrapper[4652]: I0216 17:42:24.291659 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.292049 master-0 kubenswrapper[4652]: I0216 17:42:24.291678 4652 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.292049 master-0 kubenswrapper[4652]: I0216 17:42:24.291763 4652 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") on node \"master-0\" " Feb 16 17:42:24.292049 master-0 kubenswrapper[4652]: I0216 17:42:24.291786 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/206bcb88-0042-48ac-a9cc-8a121b9fdb42-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.292049 master-0 kubenswrapper[4652]: I0216 17:42:24.291844 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/206bcb88-0042-48ac-a9cc-8a121b9fdb42-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.292049 master-0 kubenswrapper[4652]: I0216 17:42:24.291868 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.292049 master-0 kubenswrapper[4652]: I0216 17:42:24.291921 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcqr7\" (UniqueName: \"kubernetes.io/projected/206bcb88-0042-48ac-a9cc-8a121b9fdb42-kube-api-access-mcqr7\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.292049 master-0 kubenswrapper[4652]: I0216 17:42:24.291943 4652 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2e22863f-9673-436b-a912-4253af989909-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.292990 master-0 kubenswrapper[4652]: I0216 17:42:24.292947 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-scripts" (OuterVolumeSpecName: "scripts") pod "2e22863f-9673-436b-a912-4253af989909" (UID: "2e22863f-9673-436b-a912-4253af989909"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:24.293629 master-0 kubenswrapper[4652]: I0216 17:42:24.293554 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e22863f-9673-436b-a912-4253af989909-kube-api-access-qvh9g" (OuterVolumeSpecName: "kube-api-access-qvh9g") pod "2e22863f-9673-436b-a912-4253af989909" (UID: "2e22863f-9673-436b-a912-4253af989909"). InnerVolumeSpecName "kube-api-access-qvh9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:24.321332 master-0 kubenswrapper[4652]: I0216 17:42:24.319684 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5" (OuterVolumeSpecName: "glance") pod "2e22863f-9673-436b-a912-4253af989909" (UID: "2e22863f-9673-436b-a912-4253af989909"). InnerVolumeSpecName "pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:42:24.329760 master-0 kubenswrapper[4652]: I0216 17:42:24.329629 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e22863f-9673-436b-a912-4253af989909" (UID: "2e22863f-9673-436b-a912-4253af989909"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:24.342977 master-0 kubenswrapper[4652]: I0216 17:42:24.342924 4652 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 17:42:24.343196 master-0 kubenswrapper[4652]: I0216 17:42:24.343102 4652 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f5bb6936-02e9-48af-847a-b5f88beeba22" (UniqueName: "kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364") on node "master-0" Feb 16 17:42:24.374498 master-0 kubenswrapper[4652]: I0216 17:42:24.374429 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-config-data" (OuterVolumeSpecName: "config-data") pod "2e22863f-9673-436b-a912-4253af989909" (UID: "2e22863f-9673-436b-a912-4253af989909"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:24.391676 master-0 kubenswrapper[4652]: I0216 17:42:24.391613 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-db-sync-ndjf5"] Feb 16 17:42:24.393730 master-0 kubenswrapper[4652]: I0216 17:42:24.393675 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.393730 master-0 kubenswrapper[4652]: I0216 17:42:24.393715 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.393909 master-0 kubenswrapper[4652]: I0216 17:42:24.393756 4652 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") on node \"master-0\" " Feb 16 17:42:24.393909 master-0 kubenswrapper[4652]: I0216 17:42:24.393772 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e22863f-9673-436b-a912-4253af989909-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.393909 master-0 kubenswrapper[4652]: I0216 17:42:24.393784 4652 reconciler_common.go:293] "Volume detached for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.393909 master-0 kubenswrapper[4652]: I0216 17:42:24.393796 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvh9g\" (UniqueName: \"kubernetes.io/projected/2e22863f-9673-436b-a912-4253af989909-kube-api-access-qvh9g\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.430622 master-0 kubenswrapper[4652]: I0216 17:42:24.430579 4652 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:42:24.430849 master-0 kubenswrapper[4652]: I0216 17:42:24.430830 4652 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028" (UniqueName: "kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5") on node "master-0" Feb 16 17:42:24.498026 master-0 kubenswrapper[4652]: I0216 17:42:24.497919 4652 reconciler_common.go:293] "Volume detached for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:24.840406 master-0 kubenswrapper[4652]: I0216 17:42:24.840317 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5559c64944-9qfgd"] Feb 16 17:42:24.840958 master-0 kubenswrapper[4652]: E0216 17:42:24.840914 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" containerName="placement-db-sync" Feb 16 17:42:24.840958 master-0 kubenswrapper[4652]: I0216 17:42:24.840932 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" containerName="placement-db-sync" Feb 16 17:42:24.840958 master-0 kubenswrapper[4652]: E0216 17:42:24.840952 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e22863f-9673-436b-a912-4253af989909" containerName="glance-httpd" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.840963 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e22863f-9673-436b-a912-4253af989909" containerName="glance-httpd" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: E0216 17:42:24.840978 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerName="glance-httpd" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.840988 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerName="glance-httpd" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: E0216 17:42:24.841006 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerName="glance-log" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841013 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerName="glance-log" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: E0216 17:42:24.841040 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d550d3-576a-460e-9595-6ade1d630c47" containerName="keystone-bootstrap" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841048 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d550d3-576a-460e-9595-6ade1d630c47" containerName="keystone-bootstrap" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: E0216 17:42:24.841058 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e22863f-9673-436b-a912-4253af989909" containerName="glance-log" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841066 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e22863f-9673-436b-a912-4253af989909" containerName="glance-log" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: E0216 17:42:24.841077 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" containerName="dnsmasq-dns" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841085 
4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" containerName="dnsmasq-dns" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: E0216 17:42:24.841100 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" containerName="init" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841107 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" containerName="init" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841328 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerName="glance-log" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841348 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" containerName="placement-db-sync" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841387 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" containerName="glance-httpd" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841398 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e22863f-9673-436b-a912-4253af989909" containerName="glance-httpd" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841405 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d550d3-576a-460e-9595-6ade1d630c47" containerName="keystone-bootstrap" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841422 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e22863f-9673-436b-a912-4253af989909" containerName="glance-log" Feb 16 17:42:24.841829 master-0 kubenswrapper[4652]: I0216 17:42:24.841429 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" containerName="dnsmasq-dns" Feb 16 17:42:24.842515 master-0 kubenswrapper[4652]: I0216 17:42:24.842491 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:24.845416 master-0 kubenswrapper[4652]: I0216 17:42:24.845210 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 17:42:24.845416 master-0 kubenswrapper[4652]: I0216 17:42:24.845241 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 17:42:24.845416 master-0 kubenswrapper[4652]: I0216 17:42:24.845210 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 17:42:24.845416 master-0 kubenswrapper[4652]: I0216 17:42:24.845283 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 17:42:24.856698 master-0 kubenswrapper[4652]: I0216 17:42:24.856644 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5559c64944-9qfgd"] Feb 16 17:42:24.875289 master-0 kubenswrapper[4652]: I0216 17:42:24.874667 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" event={"ID":"6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742","Type":"ContainerDied","Data":"62d84f57e0b1d72961534ff38b9c8d57398efc9ce89f46151b97a83cc51aaa34"} Feb 16 17:42:24.875289 master-0 kubenswrapper[4652]: I0216 17:42:24.874735 4652 scope.go:117] "RemoveContainer" containerID="d2405f65f5a9e898a93938632be47f9bfa5859d60eb354f495245df0535a938b" Feb 16 17:42:24.875289 master-0 kubenswrapper[4652]: I0216 17:42:24.874902 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fd49994df-4rvpk" Feb 16 17:42:24.890596 master-0 kubenswrapper[4652]: I0216 17:42:24.889711 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-db-sync-5mcjg" event={"ID":"c9405c7d-2ad3-46cf-b8e4-4c91feead991","Type":"ContainerStarted","Data":"355f2f1cf3208235cfa2edd5deb6abf54dc635abacb4d7f7bf980e853bb6a8b4"} Feb 16 17:42:24.905183 master-0 kubenswrapper[4652]: I0216 17:42:24.905137 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-public-tls-certs\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:24.905598 master-0 kubenswrapper[4652]: I0216 17:42:24.905542 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-scripts\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:24.905888 master-0 kubenswrapper[4652]: I0216 17:42:24.905863 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-internal-tls-certs\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:24.906049 master-0 kubenswrapper[4652]: I0216 17:42:24.906030 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-combined-ca-bundle\") pod 
\"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:24.906196 master-0 kubenswrapper[4652]: I0216 17:42:24.906182 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06840359-14e1-46d7-b74a-a3acd120905b-logs\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:24.906495 master-0 kubenswrapper[4652]: I0216 17:42:24.906462 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbdb4\" (UniqueName: \"kubernetes.io/projected/06840359-14e1-46d7-b74a-a3acd120905b-kube-api-access-wbdb4\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:24.906573 master-0 kubenswrapper[4652]: I0216 17:42:24.906501 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-config-data\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:24.911444 master-0 kubenswrapper[4652]: I0216 17:42:24.911395 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"2e22863f-9673-436b-a912-4253af989909","Type":"ContainerDied","Data":"d9b6daade23927535be95680f6b2b4418bfc793638dbf6698592c627f1aee79a"} Feb 16 17:42:24.911783 master-0 kubenswrapper[4652]: I0216 17:42:24.911766 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:24.924058 master-0 kubenswrapper[4652]: I0216 17:42:24.924004 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-95c564f-wdb5n"] Feb 16 17:42:24.926497 master-0 kubenswrapper[4652]: I0216 17:42:24.925946 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ndjf5" event={"ID":"e5005365-36c1-44e2-be02-84737aa7a60a","Type":"ContainerStarted","Data":"32b6a640cf270fd339afc54c06d5c12c0399b0dcac619bafdd3601be3a4ca224"} Feb 16 17:42:24.926497 master-0 kubenswrapper[4652]: I0216 17:42:24.926044 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:24.930583 master-0 kubenswrapper[4652]: I0216 17:42:24.930283 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 16 17:42:24.931009 master-0 kubenswrapper[4652]: I0216 17:42:24.930883 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 17:42:24.931009 master-0 kubenswrapper[4652]: I0216 17:42:24.930901 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 17:42:24.931009 master-0 kubenswrapper[4652]: I0216 17:42:24.930927 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 17:42:24.935662 master-0 kubenswrapper[4652]: I0216 17:42:24.933674 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 16 17:42:24.937568 master-0 kubenswrapper[4652]: I0216 17:42:24.937532 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:24.938420 master-0 kubenswrapper[4652]: I0216 17:42:24.938381 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-4rvpk"] Feb 16 17:42:24.938507 master-0 kubenswrapper[4652]: I0216 17:42:24.938436 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"206bcb88-0042-48ac-a9cc-8a121b9fdb42","Type":"ContainerDied","Data":"f5a7912e89b7a7563d3f2d2e0937a5a96b4006ec817677dbcd47d40bf02641b8"} Feb 16 17:42:24.942576 master-0 kubenswrapper[4652]: I0216 17:42:24.941841 4652 scope.go:117] "RemoveContainer" containerID="7ef9550e515c13adf316cefdb9949d5d301b65f144d2755c976c47339b1ecd6e" Feb 16 17:42:24.966701 master-0 kubenswrapper[4652]: I0216 17:42:24.965669 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fd49994df-4rvpk"] Feb 16 17:42:24.969241 master-0 kubenswrapper[4652]: I0216 17:42:24.968937 4652 scope.go:117] "RemoveContainer" containerID="2de5d5807e37bcd83a9045ad621e3dc5b56e8339e932c7d094dda1e6b76d4731" Feb 16 17:42:24.977426 master-0 kubenswrapper[4652]: I0216 17:42:24.975233 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-95c564f-wdb5n"] Feb 16 17:42:24.977426 master-0 kubenswrapper[4652]: I0216 17:42:24.976152 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c34a6-db-sync-5mcjg" podStartSLOduration=3.422995008 podStartE2EDuration="19.9761332s" podCreationTimestamp="2026-02-16 17:42:05 +0000 UTC" firstStartedPulling="2026-02-16 17:42:07.023897014 +0000 UTC m=+1084.412065530" lastFinishedPulling="2026-02-16 17:42:23.577035206 +0000 UTC m=+1100.965203722" observedRunningTime="2026-02-16 17:42:24.935613403 +0000 UTC m=+1102.323781919" watchObservedRunningTime="2026-02-16 17:42:24.9761332 +0000 UTC m=+1102.364301716" Feb 16 17:42:25.008870 master-0 kubenswrapper[4652]: I0216 17:42:25.008824 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-combined-ca-bundle\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.008870 master-0 kubenswrapper[4652]: I0216 17:42:25.008870 4652 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-combined-ca-bundle\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.008898 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-credential-keys\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.008925 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06840359-14e1-46d7-b74a-a3acd120905b-logs\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.008982 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-internal-tls-certs\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.009006 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-config-data\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.009025 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbdb4\" (UniqueName: \"kubernetes.io/projected/06840359-14e1-46d7-b74a-a3acd120905b-kube-api-access-wbdb4\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.009052 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-config-data\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.009085 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6k8g\" (UniqueName: \"kubernetes.io/projected/6a4fbd29-1529-4b16-b05f-bd412e329c3f-kube-api-access-d6k8g\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.009103 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-fernet-keys\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.009208 master-0 
kubenswrapper[4652]: I0216 17:42:25.009142 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-public-tls-certs\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.009173 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-public-tls-certs\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.009208 master-0 kubenswrapper[4652]: I0216 17:42:25.009204 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-scripts\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.009857 master-0 kubenswrapper[4652]: I0216 17:42:25.009317 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-scripts\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.009857 master-0 kubenswrapper[4652]: I0216 17:42:25.009341 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-internal-tls-certs\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.012725 master-0 kubenswrapper[4652]: I0216 17:42:25.012681 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-scripts\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.013290 master-0 kubenswrapper[4652]: I0216 17:42:25.013076 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06840359-14e1-46d7-b74a-a3acd120905b-logs\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.015925 master-0 kubenswrapper[4652]: I0216 17:42:25.015885 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-public-tls-certs\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.018611 master-0 kubenswrapper[4652]: I0216 17:42:25.018180 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-config-data\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.021052 master-0 kubenswrapper[4652]: I0216 17:42:25.021008 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-internal-tls-certs\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.028147 master-0 kubenswrapper[4652]: I0216 17:42:25.027390 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-combined-ca-bundle\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.045804 master-0 kubenswrapper[4652]: I0216 17:42:25.044674 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbdb4\" (UniqueName: \"kubernetes.io/projected/06840359-14e1-46d7-b74a-a3acd120905b-kube-api-access-wbdb4\") pod \"placement-5559c64944-9qfgd\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.056089 master-0 kubenswrapper[4652]: I0216 17:42:25.056005 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:25.093277 master-0 kubenswrapper[4652]: I0216 17:42:25.092345 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:25.115530 master-0 kubenswrapper[4652]: I0216 17:42:25.113261 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-internal-tls-certs\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.115530 master-0 kubenswrapper[4652]: I0216 17:42:25.113296 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-config-data\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.115530 master-0 kubenswrapper[4652]: I0216 17:42:25.113339 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6k8g\" (UniqueName: \"kubernetes.io/projected/6a4fbd29-1529-4b16-b05f-bd412e329c3f-kube-api-access-d6k8g\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.115530 master-0 kubenswrapper[4652]: I0216 17:42:25.113358 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-fernet-keys\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.115530 master-0 kubenswrapper[4652]: I0216 17:42:25.113599 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-public-tls-certs\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.115530 master-0 kubenswrapper[4652]: I0216 17:42:25.113867 4652 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-scripts\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.115530 master-0 kubenswrapper[4652]: I0216 17:42:25.113920 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-combined-ca-bundle\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.115530 master-0 kubenswrapper[4652]: I0216 17:42:25.113944 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-credential-keys\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.120331 master-0 kubenswrapper[4652]: I0216 17:42:25.120125 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:25.122626 master-0 kubenswrapper[4652]: I0216 17:42:25.122592 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.123907 master-0 kubenswrapper[4652]: I0216 17:42:25.123841 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-credential-keys\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.126920 master-0 kubenswrapper[4652]: I0216 17:42:25.125268 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-internal-tls-certs\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.127772 master-0 kubenswrapper[4652]: I0216 17:42:25.127732 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-config-data\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.132205 master-0 kubenswrapper[4652]: I0216 17:42:25.132163 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 17:42:25.132776 master-0 kubenswrapper[4652]: I0216 17:42:25.132736 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-public-tls-certs\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.132972 master-0 kubenswrapper[4652]: I0216 17:42:25.132944 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 17:42:25.133486 master-0 kubenswrapper[4652]: I0216 17:42:25.133454 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-50e08-default-external-config-data" Feb 16 17:42:25.133586 master-0 
kubenswrapper[4652]: I0216 17:42:25.133565 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:42:25.134787 master-0 kubenswrapper[4652]: I0216 17:42:25.134727 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-scripts\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.135392 master-0 kubenswrapper[4652]: I0216 17:42:25.135362 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-fernet-keys\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.135994 master-0 kubenswrapper[4652]: I0216 17:42:25.135961 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a4fbd29-1529-4b16-b05f-bd412e329c3f-combined-ca-bundle\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.141346 master-0 kubenswrapper[4652]: I0216 17:42:25.140061 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:42:25.142842 master-0 kubenswrapper[4652]: I0216 17:42:25.142819 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6k8g\" (UniqueName: \"kubernetes.io/projected/6a4fbd29-1529-4b16-b05f-bd412e329c3f-kube-api-access-d6k8g\") pod \"keystone-95c564f-wdb5n\" (UID: \"6a4fbd29-1529-4b16-b05f-bd412e329c3f\") " pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.154345 master-0 kubenswrapper[4652]: I0216 17:42:25.152864 4652 scope.go:117] "RemoveContainer" containerID="0f8c346fff61ca37cc528b477dd49715c231378cd4531e518a9edd3021a4053e" Feb 16 17:42:25.168545 master-0 kubenswrapper[4652]: I0216 17:42:25.168493 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:25.171809 master-0 kubenswrapper[4652]: I0216 17:42:25.171768 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:25.187015 master-0 kubenswrapper[4652]: I0216 17:42:25.186974 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:42:25.192568 master-0 kubenswrapper[4652]: I0216 17:42:25.192531 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.193098 master-0 kubenswrapper[4652]: I0216 17:42:25.193045 4652 scope.go:117] "RemoveContainer" containerID="1ced99ce195946034eef55ee5e6062d9862949436bf20696711534985f9fee6f" Feb 16 17:42:25.195320 master-0 kubenswrapper[4652]: I0216 17:42:25.195286 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-50e08-default-internal-config-data" Feb 16 17:42:25.195320 master-0 kubenswrapper[4652]: I0216 17:42:25.195309 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 17:42:25.201900 master-0 kubenswrapper[4652]: I0216 17:42:25.201856 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:42:25.217367 master-0 kubenswrapper[4652]: I0216 17:42:25.216433 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.217367 master-0 kubenswrapper[4652]: I0216 17:42:25.216536 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.217367 master-0 kubenswrapper[4652]: I0216 17:42:25.216609 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.217367 master-0 kubenswrapper[4652]: I0216 17:42:25.216642 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v6rd\" (UniqueName: \"kubernetes.io/projected/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-kube-api-access-7v6rd\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.217367 master-0 kubenswrapper[4652]: I0216 17:42:25.216700 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.217367 master-0 kubenswrapper[4652]: I0216 17:42:25.216755 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.217367 master-0 kubenswrapper[4652]: I0216 17:42:25.216920 4652 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-public-tls-certs\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.217367 master-0 kubenswrapper[4652]: I0216 17:42:25.217045 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.273446 master-0 kubenswrapper[4652]: I0216 17:42:25.273418 4652 scope.go:117] "RemoveContainer" containerID="8363e92ba9815dcbcf8ebbe3399c5cfcb1619260ad50dd78888ab854a75d3199" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.319804 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.319859 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v6rd\" (UniqueName: \"kubernetes.io/projected/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-kube-api-access-7v6rd\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.319896 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.319928 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-internal-tls-certs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.319957 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.319980 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-combined-ca-bundle\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320010 4652 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320081 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpj8x\" (UniqueName: \"kubernetes.io/projected/67dcf429-d644-435b-8edb-e08198064dfb-kube-api-access-mpj8x\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320107 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-config-data\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320159 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-logs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320186 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-public-tls-certs\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320226 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-scripts\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320316 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320344 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320373 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-httpd-run\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.320421 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.321290 master-0 kubenswrapper[4652]: I0216 17:42:25.321222 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.322217 master-0 kubenswrapper[4652]: I0216 17:42:25.321755 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.324807 master-0 kubenswrapper[4652]: I0216 17:42:25.324612 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.325280 master-0 kubenswrapper[4652]: I0216 17:42:25.325235 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.329195 master-0 kubenswrapper[4652]: I0216 17:42:25.329160 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:42:25.329459 master-0 kubenswrapper[4652]: I0216 17:42:25.329200 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/3d8bc9531c98396b6e6ea0108c18f808bdb9e170b0cc5e329df6a02a3996a78b/globalmount\"" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.344334 master-0 kubenswrapper[4652]: I0216 17:42:25.335040 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.344334 master-0 kubenswrapper[4652]: I0216 17:42:25.341414 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-public-tls-certs\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.352221 master-0 kubenswrapper[4652]: I0216 17:42:25.352100 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v6rd\" (UniqueName: \"kubernetes.io/projected/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-kube-api-access-7v6rd\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:25.422763 master-0 kubenswrapper[4652]: I0216 17:42:25.422712 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-config-data\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.422996 master-0 kubenswrapper[4652]: I0216 17:42:25.422980 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpj8x\" (UniqueName: \"kubernetes.io/projected/67dcf429-d644-435b-8edb-e08198064dfb-kube-api-access-mpj8x\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.423111 master-0 kubenswrapper[4652]: I0216 17:42:25.423099 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-logs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.423232 master-0 kubenswrapper[4652]: I0216 17:42:25.423209 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-scripts\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.423430 master-0 kubenswrapper[4652]: I0216 17:42:25.423413 4652 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-httpd-run\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.423633 master-0 kubenswrapper[4652]: I0216 17:42:25.423606 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-internal-tls-certs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.423801 master-0 kubenswrapper[4652]: I0216 17:42:25.423777 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-combined-ca-bundle\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.423900 master-0 kubenswrapper[4652]: I0216 17:42:25.423887 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.426077 master-0 kubenswrapper[4652]: I0216 17:42:25.425807 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-httpd-run\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.426700 master-0 kubenswrapper[4652]: I0216 17:42:25.426508 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-logs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.427096 master-0 kubenswrapper[4652]: I0216 17:42:25.426952 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:42:25.427096 master-0 kubenswrapper[4652]: I0216 17:42:25.426980 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6145192c64db548fddf9bb3cc8141db5764e5395e391d0e15bf39805d4ff5e26/globalmount\"" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.428257 master-0 kubenswrapper[4652]: I0216 17:42:25.427908 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:25.431221 master-0 kubenswrapper[4652]: I0216 17:42:25.431147 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-internal-tls-certs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.433550 master-0 kubenswrapper[4652]: I0216 17:42:25.433488 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-combined-ca-bundle\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.435101 master-0 kubenswrapper[4652]: I0216 17:42:25.435084 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-config-data\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.436699 master-0 kubenswrapper[4652]: I0216 17:42:25.436664 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-scripts\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.442100 master-0 kubenswrapper[4652]: I0216 17:42:25.442018 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpj8x\" (UniqueName: \"kubernetes.io/projected/67dcf429-d644-435b-8edb-e08198064dfb-kube-api-access-mpj8x\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:25.608202 master-0 kubenswrapper[4652]: I0216 17:42:25.606539 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6869cdf564-cp8xm"] Feb 16 17:42:25.612280 master-0 kubenswrapper[4652]: I0216 17:42:25.611044 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.624914 master-0 kubenswrapper[4652]: I0216 17:42:25.624865 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6869cdf564-cp8xm"] Feb 16 17:42:25.695349 master-0 kubenswrapper[4652]: I0216 17:42:25.695314 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5559c64944-9qfgd"] Feb 16 17:42:25.742421 master-0 kubenswrapper[4652]: I0216 17:42:25.742371 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-internal-tls-certs\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.742765 master-0 kubenswrapper[4652]: I0216 17:42:25.742744 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-config-data\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.742835 master-0 kubenswrapper[4652]: I0216 17:42:25.742782 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-public-tls-certs\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.742903 master-0 kubenswrapper[4652]: I0216 17:42:25.742880 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5t8b\" (UniqueName: \"kubernetes.io/projected/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-kube-api-access-x5t8b\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.743103 master-0 kubenswrapper[4652]: I0216 17:42:25.743006 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-combined-ca-bundle\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.743459 master-0 kubenswrapper[4652]: I0216 17:42:25.743442 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-scripts\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.743506 master-0 kubenswrapper[4652]: I0216 17:42:25.743479 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-logs\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.849197 master-0 kubenswrapper[4652]: I0216 17:42:25.849142 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-internal-tls-certs\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.849956 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-config-data\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.850350 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-public-tls-certs\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.850497 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5t8b\" (UniqueName: \"kubernetes.io/projected/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-kube-api-access-x5t8b\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.850711 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-combined-ca-bundle\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.850938 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-scripts\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.851581 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-logs\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.851923 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-logs\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.853416 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-internal-tls-certs\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.853906 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-combined-ca-bundle\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.854408 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-public-tls-certs\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.854471 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-scripts\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.859619 master-0 kubenswrapper[4652]: I0216 17:42:25.858589 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-config-data\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.870070 master-0 kubenswrapper[4652]: I0216 17:42:25.870000 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5t8b\" (UniqueName: \"kubernetes.io/projected/4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0-kube-api-access-x5t8b\") pod \"placement-6869cdf564-cp8xm\" (UID: \"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0\") " pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.950033 master-0 kubenswrapper[4652]: I0216 17:42:25.940768 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:25.950033 master-0 kubenswrapper[4652]: I0216 17:42:25.949616 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-95c564f-wdb5n"] Feb 16 17:42:25.956733 master-0 kubenswrapper[4652]: I0216 17:42:25.956668 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5559c64944-9qfgd" event={"ID":"06840359-14e1-46d7-b74a-a3acd120905b","Type":"ContainerStarted","Data":"f89656633e0498ec2792dfe6087952c09d2db54e9d1069b5c453e1871a20a408"} Feb 16 17:42:25.971904 master-0 kubenswrapper[4652]: W0216 17:42:25.971859 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a4fbd29_1529_4b16_b05f_bd412e329c3f.slice/crio-7a7c370fa8ab0abd8ecb2ffc85bb50c46ae4985e84f526d34e1f59a639c2f78c WatchSource:0}: Error finding container 7a7c370fa8ab0abd8ecb2ffc85bb50c46ae4985e84f526d34e1f59a639c2f78c: Status 404 returned error can't find the container with id 7a7c370fa8ab0abd8ecb2ffc85bb50c46ae4985e84f526d34e1f59a639c2f78c Feb 16 17:42:26.444433 master-0 kubenswrapper[4652]: I0216 17:42:26.444391 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6869cdf564-cp8xm"] Feb 16 17:42:26.615396 master-0 kubenswrapper[4652]: I0216 17:42:26.615343 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:26.698727 master-0 kubenswrapper[4652]: I0216 17:42:26.698613 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:26.761322 master-0 kubenswrapper[4652]: I0216 17:42:26.761269 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206bcb88-0042-48ac-a9cc-8a121b9fdb42" path="/var/lib/kubelet/pods/206bcb88-0042-48ac-a9cc-8a121b9fdb42/volumes" Feb 16 17:42:26.762150 master-0 kubenswrapper[4652]: I0216 17:42:26.762115 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e22863f-9673-436b-a912-4253af989909" path="/var/lib/kubelet/pods/2e22863f-9673-436b-a912-4253af989909/volumes" Feb 16 17:42:26.762877 master-0 kubenswrapper[4652]: I0216 17:42:26.762845 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742" path="/var/lib/kubelet/pods/6c3cf9bf-0a39-4767-ab99-aeaf8ec3a742/volumes" Feb 16 17:42:26.986162 master-0 kubenswrapper[4652]: I0216 17:42:26.986039 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-95c564f-wdb5n" event={"ID":"6a4fbd29-1529-4b16-b05f-bd412e329c3f","Type":"ContainerStarted","Data":"f5e7270d4e6d1a4c7f906196cffeb6005c1563a72526c6950de3a8670ca34fbf"} Feb 16 17:42:26.986162 master-0 kubenswrapper[4652]: I0216 17:42:26.986095 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-95c564f-wdb5n" event={"ID":"6a4fbd29-1529-4b16-b05f-bd412e329c3f","Type":"ContainerStarted","Data":"7a7c370fa8ab0abd8ecb2ffc85bb50c46ae4985e84f526d34e1f59a639c2f78c"} Feb 16 17:42:26.986815 master-0 kubenswrapper[4652]: I0216 17:42:26.986785 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:26.992783 master-0 kubenswrapper[4652]: I0216 17:42:26.992724 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6869cdf564-cp8xm" event={"ID":"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0","Type":"ContainerStarted","Data":"de53ab2a71b33ae9e792c6c804cb93e389cc8c51fbab5aae44b6a70ae84655f4"} Feb 16 17:42:26.992783 master-0 kubenswrapper[4652]: I0216 17:42:26.992783 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6869cdf564-cp8xm" event={"ID":"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0","Type":"ContainerStarted","Data":"14cf81fef43ea12959b6902d0bdfb9c2c62aaaf6ead3b8c1cd1e652923d17638"} Feb 16 17:42:26.995922 master-0 kubenswrapper[4652]: I0216 17:42:26.995882 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5559c64944-9qfgd" event={"ID":"06840359-14e1-46d7-b74a-a3acd120905b","Type":"ContainerStarted","Data":"a953984f730749c4669fd2456a04fc762aaa2b368617c7a5ec1858a57e604a8b"} Feb 16 17:42:26.996059 master-0 kubenswrapper[4652]: I0216 17:42:26.995932 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5559c64944-9qfgd" event={"ID":"06840359-14e1-46d7-b74a-a3acd120905b","Type":"ContainerStarted","Data":"a548065352cd4d8365baa3bfdf711167608f8b840f55133535592bb1ed4fb564"} Feb 16 17:42:26.996574 master-0 kubenswrapper[4652]: I0216 17:42:26.996537 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:27.047277 master-0 kubenswrapper[4652]: I0216 17:42:27.046637 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-95c564f-wdb5n" podStartSLOduration=3.046615501 podStartE2EDuration="3.046615501s" podCreationTimestamp="2026-02-16 17:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:27.02380608 +0000 UTC m=+1104.411974596" watchObservedRunningTime="2026-02-16 17:42:27.046615501 +0000 UTC m=+1104.434784017" Feb 16 17:42:27.065294 master-0 kubenswrapper[4652]: I0216 17:42:27.065180 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5559c64944-9qfgd" podStartSLOduration=3.065157548 podStartE2EDuration="3.065157548s" podCreationTimestamp="2026-02-16 17:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:27.049303773 +0000 UTC m=+1104.437472299" watchObservedRunningTime="2026-02-16 17:42:27.065157548 +0000 UTC m=+1104.453326064" Feb 16 17:42:28.008221 master-0 kubenswrapper[4652]: I0216 17:42:28.005842 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:42:28.349975 master-0 kubenswrapper[4652]: I0216 17:42:28.347891 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:28.449980 master-0 kubenswrapper[4652]: I0216 17:42:28.449916 4652 trace.go:236] Trace[690235965]: "Calculate volume metrics of mysql-db for pod openstack/openstack-cell1-galera-0" (16-Feb-2026 17:42:26.134) (total time: 2315ms): Feb 16 17:42:28.449980 master-0 kubenswrapper[4652]: Trace[690235965]: [2.315385585s] [2.315385585s] END Feb 16 17:42:28.523458 master-0 kubenswrapper[4652]: I0216 17:42:28.523389 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:29.793039 master-0 kubenswrapper[4652]: I0216 17:42:29.792989 4652 trace.go:236] Trace[1049866373]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (16-Feb-2026 17:42:26.647) (total time: 3145ms): Feb 16 17:42:29.793039 master-0 kubenswrapper[4652]: Trace[1049866373]: [3.145319463s] [3.145319463s] END Feb 16 17:42:31.551769 master-0 kubenswrapper[4652]: I0216 17:42:31.551691 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:42:31.570767 master-0 kubenswrapper[4652]: W0216 17:42:31.570639 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b6c637_c97f_4d6e_b233_6c6e1a54cc95.slice/crio-1ec31d9bafaaf086cfc60e24d11417b56c744f1116b7f115c763c3efbdb7a781 WatchSource:0}: Error finding container 1ec31d9bafaaf086cfc60e24d11417b56c744f1116b7f115c763c3efbdb7a781: Status 404 returned error can't find the container with id 1ec31d9bafaaf086cfc60e24d11417b56c744f1116b7f115c763c3efbdb7a781 Feb 16 17:42:32.132426 master-0 kubenswrapper[4652]: I0216 17:42:32.132301 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95","Type":"ContainerStarted","Data":"2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf"} Feb 16 17:42:32.132426 master-0 kubenswrapper[4652]: I0216 17:42:32.132374 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95","Type":"ContainerStarted","Data":"1ec31d9bafaaf086cfc60e24d11417b56c744f1116b7f115c763c3efbdb7a781"} Feb 16 17:42:32.135083 master-0 kubenswrapper[4652]: I0216 17:42:32.135023 4652 generic.go:334] "Generic (PLEG): container finished" podID="e5005365-36c1-44e2-be02-84737aa7a60a" containerID="93dc276590cb13f099e03549689a110e1c330753736978dd58323e394f463f8a" exitCode=0 Feb 16 17:42:32.135175 master-0 kubenswrapper[4652]: I0216 17:42:32.135091 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ndjf5" event={"ID":"e5005365-36c1-44e2-be02-84737aa7a60a","Type":"ContainerDied","Data":"93dc276590cb13f099e03549689a110e1c330753736978dd58323e394f463f8a"} Feb 16 17:42:32.140432 master-0 kubenswrapper[4652]: I0216 17:42:32.140312 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6869cdf564-cp8xm" event={"ID":"4b5d6e1b-09c0-4b4e-8a3b-2504d0c3a5d0","Type":"ContainerStarted","Data":"98b70c74d6f8bb770fccecb88737aa84a2d6249e98f4a4a7a2d17eee05fc2635"} Feb 16 17:42:32.142555 master-0 kubenswrapper[4652]: I0216 17:42:32.141426 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:32.142555 master-0 kubenswrapper[4652]: I0216 17:42:32.141599 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:32.209216 master-0 kubenswrapper[4652]: I0216 17:42:32.207245 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6869cdf564-cp8xm" podStartSLOduration=7.207224575 podStartE2EDuration="7.207224575s" podCreationTimestamp="2026-02-16 17:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 17:42:32.197228877 +0000 UTC m=+1109.585397403" watchObservedRunningTime="2026-02-16 17:42:32.207224575 +0000 UTC m=+1109.595393131" Feb 16 17:42:32.348614 master-0 kubenswrapper[4652]: E0216 17:42:32.348560 4652 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 16 17:42:32.348614 master-0 kubenswrapper[4652]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e5005365-36c1-44e2-be02-84737aa7a60a/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Feb 16 17:42:32.348614 master-0 kubenswrapper[4652]: > podSandboxID="32b6a640cf270fd339afc54c06d5c12c0399b0dcac619bafdd3601be3a4ca224" Feb 16 17:42:32.348763 master-0 kubenswrapper[4652]: E0216 17:42:32.348719 4652 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 16 17:42:32.348763 master-0 kubenswrapper[4652]: container &Container{Name:ironic-db-sync,Image:quay.io/podified-antelope-centos9/openstack-ironic-conductor@sha256:1f519a69686478381fe122716a13d116612a9b6eaeb47ab00ef4cd82b93468bf,Command:[/bin/bash],Args:[-c /usr/local/bin/container-scripts/dbsync.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-podinfo,ReadOnly:false,MountPath:/etc/podinfo,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lqxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-db-sync-ndjf5_openstack(e5005365-36c1-44e2-be02-84737aa7a60a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e5005365-36c1-44e2-be02-84737aa7a60a/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory Feb 16 17:42:32.348763 master-0 kubenswrapper[4652]: > logger="UnhandledError" Feb 16 17:42:32.351369 master-0 kubenswrapper[4652]: 
E0216 17:42:32.351326 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-db-sync\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e5005365-36c1-44e2-be02-84737aa7a60a/volume-subpaths/config-data/ironic-db-sync/3` to `var/lib/kolla/config_files/config.json`: No such file or directory\\n\"" pod="openstack/ironic-db-sync-ndjf5" podUID="e5005365-36c1-44e2-be02-84737aa7a60a" Feb 16 17:42:32.461597 master-0 kubenswrapper[4652]: I0216 17:42:32.461436 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:42:32.475057 master-0 kubenswrapper[4652]: W0216 17:42:32.474724 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67dcf429_d644_435b_8edb_e08198064dfb.slice/crio-d38c349d591ef395ce194f3f72ba8a27ec32122317d0210bd8cbe86ed6538b5d WatchSource:0}: Error finding container d38c349d591ef395ce194f3f72ba8a27ec32122317d0210bd8cbe86ed6538b5d: Status 404 returned error can't find the container with id d38c349d591ef395ce194f3f72ba8a27ec32122317d0210bd8cbe86ed6538b5d Feb 16 17:42:33.153992 master-0 kubenswrapper[4652]: I0216 17:42:33.153869 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"67dcf429-d644-435b-8edb-e08198064dfb","Type":"ContainerStarted","Data":"076fa119e8cae849e305bb8138aa038ea39286ea3cb353a183aef6ac4148ad49"} Feb 16 17:42:33.153992 master-0 kubenswrapper[4652]: I0216 17:42:33.153918 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"67dcf429-d644-435b-8edb-e08198064dfb","Type":"ContainerStarted","Data":"d38c349d591ef395ce194f3f72ba8a27ec32122317d0210bd8cbe86ed6538b5d"} Feb 16 17:42:33.157995 master-0 kubenswrapper[4652]: I0216 17:42:33.157783 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95","Type":"ContainerStarted","Data":"31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be"} Feb 16 17:42:33.190488 master-0 kubenswrapper[4652]: I0216 17:42:33.190276 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-50e08-default-external-api-0" podStartSLOduration=9.190259396 podStartE2EDuration="9.190259396s" podCreationTimestamp="2026-02-16 17:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:33.188191201 +0000 UTC m=+1110.576359717" watchObservedRunningTime="2026-02-16 17:42:33.190259396 +0000 UTC m=+1110.578427912" Feb 16 17:42:33.374261 master-0 kubenswrapper[4652]: I0216 17:42:33.374190 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:34.169230 master-0 kubenswrapper[4652]: I0216 17:42:34.169183 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ndjf5" event={"ID":"e5005365-36c1-44e2-be02-84737aa7a60a","Type":"ContainerStarted","Data":"c1c837aa92bbf8d10ed9025bffe0c521bc1a557d9b0e1ef931701d4432c0a8af"}
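
The CreateContainerError above is a subPath mount failure: for a volumeMount that declares a subPath (here db-sync-config.json out of the config-data secret, mounted at /var/lib/kolla/config_files/config.json), the kubelet stages a bind target under the pod directory and the OCI runtime then mounts that staging path into the container; here the runtime returned "No such file or directory" for the bind mount, with the destination reported relative to the container's root filesystem (hence no leading slash). The failure was transient: the same ironic-db-sync container reaches ContainerStarted at 17:42:34.169 in the record above. A minimal sketch of how that staging path is composed, assuming only the layout visible in the log (illustrative, not kubelet source):

    // Sketch: reconstructing the subPath staging path seen in the error above.
    // Layout taken from the log: /var/lib/kubelet/pods/<podUID>/volume-subpaths/
    // <volume>/<container>/<mountIndex>. Illustrative only, not kubelet code.
    package main

    import (
        "fmt"
        "path/filepath"
        "strconv"
    )

    func subPathStagingPath(kubeletRoot, podUID, volume, container string, mountIndex int) string {
        return filepath.Join(kubeletRoot, "pods", podUID, "volume-subpaths",
            volume, container, strconv.Itoa(mountIndex))
    }

    func main() {
        // Values copied verbatim from the failing ironic-db-sync mount.
        fmt.Println(subPathStagingPath("/var/lib/kubelet",
            "e5005365-36c1-44e2-be02-84737aa7a60a",
            "config-data", "ironic-db-sync", 3))
    }

Running it prints exactly the path from the error, which is handy when checking on the node whether the staged file existed at failure time.
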
event={"ID":"67dcf429-d644-435b-8edb-e08198064dfb","Type":"ContainerStarted","Data":"81692546bc5049a915cfd0b82e0a924cf6a246e940f0254c5f82ef23ef987a76"} Feb 16 17:42:34.192984 master-0 kubenswrapper[4652]: I0216 17:42:34.192916 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-db-sync-ndjf5" podStartSLOduration=12.483686066 podStartE2EDuration="19.192895293s" podCreationTimestamp="2026-02-16 17:42:15 +0000 UTC" firstStartedPulling="2026-02-16 17:42:24.407229069 +0000 UTC m=+1101.795397585" lastFinishedPulling="2026-02-16 17:42:31.116438296 +0000 UTC m=+1108.504606812" observedRunningTime="2026-02-16 17:42:34.186387899 +0000 UTC m=+1111.574556435" watchObservedRunningTime="2026-02-16 17:42:34.192895293 +0000 UTC m=+1111.581063809" Feb 16 17:42:34.215339 master-0 kubenswrapper[4652]: I0216 17:42:34.215242 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-50e08-default-internal-api-0" podStartSLOduration=9.215219202 podStartE2EDuration="9.215219202s" podCreationTimestamp="2026-02-16 17:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:34.204663639 +0000 UTC m=+1111.592832165" watchObservedRunningTime="2026-02-16 17:42:34.215219202 +0000 UTC m=+1111.603387728" Feb 16 17:42:36.204948 master-0 kubenswrapper[4652]: I0216 17:42:36.204865 4652 generic.go:334] "Generic (PLEG): container finished" podID="c9405c7d-2ad3-46cf-b8e4-4c91feead991" containerID="355f2f1cf3208235cfa2edd5deb6abf54dc635abacb4d7f7bf980e853bb6a8b4" exitCode=0 Feb 16 17:42:36.204948 master-0 kubenswrapper[4652]: I0216 17:42:36.204925 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-db-sync-5mcjg" event={"ID":"c9405c7d-2ad3-46cf-b8e4-4c91feead991","Type":"ContainerDied","Data":"355f2f1cf3208235cfa2edd5deb6abf54dc635abacb4d7f7bf980e853bb6a8b4"} Feb 16 17:42:36.699477 master-0 kubenswrapper[4652]: I0216 17:42:36.699400 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:36.699477 master-0 kubenswrapper[4652]: I0216 17:42:36.699484 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:36.730213 master-0 kubenswrapper[4652]: I0216 17:42:36.730140 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:36.741538 master-0 kubenswrapper[4652]: I0216 17:42:36.741093 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:37.216175 master-0 kubenswrapper[4652]: I0216 17:42:37.216111 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:37.216175 master-0 kubenswrapper[4652]: I0216 17:42:37.216173 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:37.628923 master-0 kubenswrapper[4652]: I0216 17:42:37.628865 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:37.781285 master-0 kubenswrapper[4652]: I0216 17:42:37.781231 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-combined-ca-bundle\") pod \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " Feb 16 17:42:37.781504 master-0 kubenswrapper[4652]: I0216 17:42:37.781405 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-scripts\") pod \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " Feb 16 17:42:37.781504 master-0 kubenswrapper[4652]: I0216 17:42:37.781427 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-db-sync-config-data\") pod \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " Feb 16 17:42:37.781504 master-0 kubenswrapper[4652]: I0216 17:42:37.781459 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcdn9\" (UniqueName: \"kubernetes.io/projected/c9405c7d-2ad3-46cf-b8e4-4c91feead991-kube-api-access-fcdn9\") pod \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " Feb 16 17:42:37.781667 master-0 kubenswrapper[4652]: I0216 17:42:37.781603 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9405c7d-2ad3-46cf-b8e4-4c91feead991-etc-machine-id\") pod \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " Feb 16 17:42:37.781726 master-0 kubenswrapper[4652]: I0216 17:42:37.781670 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-config-data\") pod \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\" (UID: \"c9405c7d-2ad3-46cf-b8e4-4c91feead991\") " Feb 16 17:42:37.781913 master-0 kubenswrapper[4652]: I0216 17:42:37.781788 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c9405c7d-2ad3-46cf-b8e4-4c91feead991-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c9405c7d-2ad3-46cf-b8e4-4c91feead991" (UID: "c9405c7d-2ad3-46cf-b8e4-4c91feead991"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:37.782333 master-0 kubenswrapper[4652]: I0216 17:42:37.782308 4652 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c9405c7d-2ad3-46cf-b8e4-4c91feead991-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:37.784706 master-0 kubenswrapper[4652]: I0216 17:42:37.784611 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-scripts" (OuterVolumeSpecName: "scripts") pod "c9405c7d-2ad3-46cf-b8e4-4c91feead991" (UID: "c9405c7d-2ad3-46cf-b8e4-4c91feead991"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:37.785336 master-0 kubenswrapper[4652]: I0216 17:42:37.785244 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9405c7d-2ad3-46cf-b8e4-4c91feead991-kube-api-access-fcdn9" (OuterVolumeSpecName: "kube-api-access-fcdn9") pod "c9405c7d-2ad3-46cf-b8e4-4c91feead991" (UID: "c9405c7d-2ad3-46cf-b8e4-4c91feead991"). InnerVolumeSpecName "kube-api-access-fcdn9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:37.805633 master-0 kubenswrapper[4652]: I0216 17:42:37.805526 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c9405c7d-2ad3-46cf-b8e4-4c91feead991" (UID: "c9405c7d-2ad3-46cf-b8e4-4c91feead991"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:37.806399 master-0 kubenswrapper[4652]: I0216 17:42:37.806372 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9405c7d-2ad3-46cf-b8e4-4c91feead991" (UID: "c9405c7d-2ad3-46cf-b8e4-4c91feead991"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:37.834580 master-0 kubenswrapper[4652]: I0216 17:42:37.834494 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-config-data" (OuterVolumeSpecName: "config-data") pod "c9405c7d-2ad3-46cf-b8e4-4c91feead991" (UID: "c9405c7d-2ad3-46cf-b8e4-4c91feead991"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:37.884660 master-0 kubenswrapper[4652]: I0216 17:42:37.884617 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:37.884898 master-0 kubenswrapper[4652]: I0216 17:42:37.884883 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:37.885002 master-0 kubenswrapper[4652]: I0216 17:42:37.884985 4652 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-db-sync-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:37.885096 master-0 kubenswrapper[4652]: I0216 17:42:37.885082 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcdn9\" (UniqueName: \"kubernetes.io/projected/c9405c7d-2ad3-46cf-b8e4-4c91feead991-kube-api-access-fcdn9\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:37.885179 master-0 kubenswrapper[4652]: I0216 17:42:37.885167 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9405c7d-2ad3-46cf-b8e4-4c91feead991-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:38.226572 master-0 kubenswrapper[4652]: I0216 17:42:38.226540 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-db-sync-5mcjg" Feb 16 17:42:38.227298 master-0 kubenswrapper[4652]: I0216 17:42:38.226498 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-db-sync-5mcjg" event={"ID":"c9405c7d-2ad3-46cf-b8e4-4c91feead991","Type":"ContainerDied","Data":"55e563ae69f2964b622cf9c2b9690241e8e0d549a18b4750c5938bc832c1cf89"} Feb 16 17:42:38.227384 master-0 kubenswrapper[4652]: I0216 17:42:38.227333 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55e563ae69f2964b622cf9c2b9690241e8e0d549a18b4750c5938bc832c1cf89" Feb 16 17:42:38.524575 master-0 kubenswrapper[4652]: I0216 17:42:38.524436 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:38.525188 master-0 kubenswrapper[4652]: I0216 17:42:38.525162 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:38.631278 master-0 kubenswrapper[4652]: I0216 17:42:38.615844 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:38.645380 master-0 kubenswrapper[4652]: I0216 17:42:38.638030 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:38.893277 master-0 kubenswrapper[4652]: I0216 17:42:38.889403 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c34a6-scheduler-0"] Feb 16 17:42:38.893277 master-0 kubenswrapper[4652]: E0216 17:42:38.889905 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9405c7d-2ad3-46cf-b8e4-4c91feead991" containerName="cinder-c34a6-db-sync" Feb 16 17:42:38.893277 master-0 kubenswrapper[4652]: I0216 17:42:38.889918 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9405c7d-2ad3-46cf-b8e4-4c91feead991" containerName="cinder-c34a6-db-sync" Feb 16 17:42:38.893277 master-0 kubenswrapper[4652]: I0216 17:42:38.890159 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9405c7d-2ad3-46cf-b8e4-4c91feead991" containerName="cinder-c34a6-db-sync" Feb 16 17:42:38.893277 master-0 kubenswrapper[4652]: I0216 17:42:38.891300 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:38.893846 master-0 kubenswrapper[4652]: I0216 17:42:38.893668 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-config-data" Feb 16 17:42:38.893846 master-0 kubenswrapper[4652]: I0216 17:42:38.893835 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-scheduler-config-data" Feb 16 17:42:38.895431 master-0 kubenswrapper[4652]: I0216 17:42:38.895239 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-scripts" Feb 16 17:42:38.931872 master-0 kubenswrapper[4652]: I0216 17:42:38.931811 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c34a6-volume-lvm-iscsi-0"] Feb 16 17:42:38.939531 master-0 kubenswrapper[4652]: I0216 17:42:38.939071 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:38.942938 master-0 kubenswrapper[4652]: I0216 17:42:38.942895 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-volume-lvm-iscsi-config-data" Feb 16 17:42:38.966474 master-0 kubenswrapper[4652]: I0216 17:42:38.966420 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-scheduler-0"] Feb 16 17:42:39.019524 master-0 kubenswrapper[4652]: I0216 17:42:39.019363 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-volume-lvm-iscsi-0"] Feb 16 17:42:39.029561 master-0 kubenswrapper[4652]: I0216 17:42:39.029501 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c34a6-backup-0"] Feb 16 17:42:39.032424 master-0 kubenswrapper[4652]: I0216 17:42:39.032379 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.035004 master-0 kubenswrapper[4652]: I0216 17:42:39.034974 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-backup-config-data" Feb 16 17:42:39.044856 master-0 kubenswrapper[4652]: I0216 17:42:39.044785 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.044856 master-0 kubenswrapper[4652]: I0216 17:42:39.044837 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58xqt\" (UniqueName: \"kubernetes.io/projected/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-kube-api-access-58xqt\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.044868 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-lib-modules\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.044884 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-etc-machine-id\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.044913 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-scripts\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.044932 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-lib-cinder\") pod 
\"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.044996 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-sys\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.045022 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-iscsi\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.045053 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdwfv\" (UniqueName: \"kubernetes.io/projected/7430db54-9280-474e-b84f-bddab08df2d2-kube-api-access-hdwfv\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.045097 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-nvme\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.045122 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-brick\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045221 master-0 kubenswrapper[4652]: I0216 17:42:39.045193 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-combined-ca-bundle\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045647 master-0 kubenswrapper[4652]: I0216 17:42:39.045239 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-scripts\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045647 master-0 kubenswrapper[4652]: I0216 17:42:39.045314 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.045647 master-0 kubenswrapper[4652]: I0216 17:42:39.045345 
4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-run\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045647 master-0 kubenswrapper[4652]: I0216 17:42:39.045368 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.045647 master-0 kubenswrapper[4652]: I0216 17:42:39.045403 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-dev\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.046670 master-0 kubenswrapper[4652]: I0216 17:42:39.046635 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data-custom\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.046737 master-0 kubenswrapper[4652]: I0216 17:42:39.046689 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data-custom\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.046737 master-0 kubenswrapper[4652]: I0216 17:42:39.046716 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-machine-id\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.046833 master-0 kubenswrapper[4652]: I0216 17:42:39.046740 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-combined-ca-bundle\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.060850 master-0 kubenswrapper[4652]: I0216 17:42:39.060726 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-backup-0"] Feb 16 17:42:39.148916 master-0 kubenswrapper[4652]: I0216 17:42:39.148853 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-dev\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.148916 master-0 kubenswrapper[4652]: I0216 17:42:39.148914 4652 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-scripts\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149220 master-0 kubenswrapper[4652]: I0216 17:42:39.148985 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-iscsi\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149220 master-0 kubenswrapper[4652]: I0216 17:42:39.149120 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data-custom\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.149220 master-0 kubenswrapper[4652]: I0216 17:42:39.149187 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data-custom\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149220 master-0 kubenswrapper[4652]: I0216 17:42:39.149212 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-combined-ca-bundle\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.149406 master-0 kubenswrapper[4652]: I0216 17:42:39.149228 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-machine-id\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149406 master-0 kubenswrapper[4652]: I0216 17:42:39.149265 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-machine-id\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149406 master-0 kubenswrapper[4652]: I0216 17:42:39.149300 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-combined-ca-bundle\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149406 master-0 kubenswrapper[4652]: I0216 17:42:39.149320 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149406 master-0 
kubenswrapper[4652]: I0216 17:42:39.149343 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58xqt\" (UniqueName: \"kubernetes.io/projected/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-kube-api-access-58xqt\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.149406 master-0 kubenswrapper[4652]: I0216 17:42:39.149368 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-lib-modules\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149406 master-0 kubenswrapper[4652]: I0216 17:42:39.149385 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data-custom\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149406 master-0 kubenswrapper[4652]: I0216 17:42:39.149405 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-etc-machine-id\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149432 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-run\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149449 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-scripts\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149465 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-lib-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149508 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-sys\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149526 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-nvme\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149696 
master-0 kubenswrapper[4652]: I0216 17:42:39.149549 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-iscsi\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149569 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-dev\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149595 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdwfv\" (UniqueName: \"kubernetes.io/projected/7430db54-9280-474e-b84f-bddab08df2d2-kube-api-access-hdwfv\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149613 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-nvme\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149633 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149649 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-brick\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149668 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.149696 master-0 kubenswrapper[4652]: I0216 17:42:39.149698 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-combined-ca-bundle\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.149724 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tjf6\" (UniqueName: \"kubernetes.io/projected/f1339fd1-e639-4057-bb76-3ad09c3000fe-kube-api-access-2tjf6\") pod \"cinder-c34a6-backup-0\" (UID: 
\"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.149750 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-lib-modules\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.149769 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-lib-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.149788 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-brick\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.149805 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-sys\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.149823 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-scripts\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.149841 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.149867 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-run\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.149885 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.150207 master-0 kubenswrapper[4652]: I0216 17:42:39.150171 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" 
(UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.150687 master-0 kubenswrapper[4652]: I0216 17:42:39.150217 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-dev\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.150735 master-0 kubenswrapper[4652]: I0216 17:42:39.150697 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-iscsi\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.151380 master-0 kubenswrapper[4652]: I0216 17:42:39.150884 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-lib-modules\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.151380 master-0 kubenswrapper[4652]: I0216 17:42:39.150948 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-run\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.151380 master-0 kubenswrapper[4652]: I0216 17:42:39.151092 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-lib-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.151380 master-0 kubenswrapper[4652]: I0216 17:42:39.151135 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-etc-machine-id\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.151380 master-0 kubenswrapper[4652]: I0216 17:42:39.151159 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-machine-id\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.151649 master-0 kubenswrapper[4652]: I0216 17:42:39.151620 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-sys\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.151788 master-0 kubenswrapper[4652]: I0216 17:42:39.151761 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-nvme\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " 
pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.152010 master-0 kubenswrapper[4652]: I0216 17:42:39.151967 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-brick\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.155152 master-0 kubenswrapper[4652]: I0216 17:42:39.154531 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-scripts\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.155152 master-0 kubenswrapper[4652]: I0216 17:42:39.154815 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data-custom\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.155152 master-0 kubenswrapper[4652]: I0216 17:42:39.155110 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-scripts\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.155528 master-0 kubenswrapper[4652]: I0216 17:42:39.155289 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data-custom\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.156021 master-0 kubenswrapper[4652]: I0216 17:42:39.156004 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-combined-ca-bundle\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.157733 master-0 kubenswrapper[4652]: I0216 17:42:39.157715 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.165070 master-0 kubenswrapper[4652]: I0216 17:42:39.164972 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-combined-ca-bundle\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.166089 master-0 kubenswrapper[4652]: I0216 17:42:39.165972 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" 
Feb 16 17:42:39.210913 master-0 kubenswrapper[4652]: I0216 17:42:39.210862 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58xqt\" (UniqueName: \"kubernetes.io/projected/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-kube-api-access-58xqt\") pod \"cinder-c34a6-scheduler-0\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.219957 master-0 kubenswrapper[4652]: I0216 17:42:39.219885 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dd98456c9-m47zr"] Feb 16 17:42:39.220950 master-0 kubenswrapper[4652]: I0216 17:42:39.220912 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdwfv\" (UniqueName: \"kubernetes.io/projected/7430db54-9280-474e-b84f-bddab08df2d2-kube-api-access-hdwfv\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.223857 master-0 kubenswrapper[4652]: I0216 17:42:39.223808 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.240887 master-0 kubenswrapper[4652]: I0216 17:42:39.240841 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:39.240887 master-0 kubenswrapper[4652]: I0216 17:42:39.240883 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:42:39.253411 master-0 kubenswrapper[4652]: I0216 17:42:39.253331 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd98456c9-m47zr"] Feb 16 17:42:39.254047 master-0 kubenswrapper[4652]: I0216 17:42:39.253988 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.256138 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-iscsi\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.256477 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-machine-id\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.256608 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-combined-ca-bundle\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.256732 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data-custom\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.256845 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-run\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.258785 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-nvme\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.258862 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-dev\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.258960 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.259191 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 
17:42:39.259297 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tjf6\" (UniqueName: \"kubernetes.io/projected/f1339fd1-e639-4057-bb76-3ad09c3000fe-kube-api-access-2tjf6\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.259349 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-lib-modules\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.259388 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-lib-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.259422 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-brick\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.259857 master-0 kubenswrapper[4652]: I0216 17:42:39.259456 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-sys\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.260602 master-0 kubenswrapper[4652]: I0216 17:42:39.260041 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-scripts\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.262480 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-dev\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.262582 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-iscsi\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.262611 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-machine-id\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.263155 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-lib-modules\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.263483 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-brick\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.264413 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-lib-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.264441 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-run\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.265273 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-scripts\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.265377 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-nvme\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.265436 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.271407 master-0 kubenswrapper[4652]: I0216 17:42:39.265471 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-sys\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.277040 master-0 kubenswrapper[4652]: I0216 17:42:39.276992 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-combined-ca-bundle\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.285308 master-0 kubenswrapper[4652]: I0216 17:42:39.282114 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:39.289765 master-0 kubenswrapper[4652]: I0216 17:42:39.287750 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tjf6\" (UniqueName: \"kubernetes.io/projected/f1339fd1-e639-4057-bb76-3ad09c3000fe-kube-api-access-2tjf6\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.289765 master-0 kubenswrapper[4652]: I0216 17:42:39.288805 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data-custom\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.299235 master-0 kubenswrapper[4652]: I0216 17:42:39.297024 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data\") pod \"cinder-c34a6-backup-0\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.352656 master-0 kubenswrapper[4652]: I0216 17:42:39.351971 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:39.404464 master-0 kubenswrapper[4652]: I0216 17:42:39.404400 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-config\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.404636 master-0 kubenswrapper[4652]: I0216 17:42:39.404475 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lr86\" (UniqueName: \"kubernetes.io/projected/de3fc756-8d16-4d85-8068-0d667549b93a-kube-api-access-8lr86\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.404900 master-0 kubenswrapper[4652]: I0216 17:42:39.404866 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.404971 master-0 kubenswrapper[4652]: I0216 17:42:39.404919 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-svc\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.405227 master-0 kubenswrapper[4652]: I0216 17:42:39.405199 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-swift-storage-0\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.405560 master-0 
kubenswrapper[4652]: I0216 17:42:39.405377 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.405772 master-0 kubenswrapper[4652]: I0216 17:42:39.405728 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:39.405887 master-0 kubenswrapper[4652]: I0216 17:42:39.405866 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:42:39.409850 master-0 kubenswrapper[4652]: I0216 17:42:39.409810 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:42:39.421905 master-0 kubenswrapper[4652]: I0216 17:42:39.421857 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c34a6-api-0"] Feb 16 17:42:39.437659 master-0 kubenswrapper[4652]: I0216 17:42:39.424613 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:39.437659 master-0 kubenswrapper[4652]: I0216 17:42:39.432854 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-api-config-data" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.509664 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-config\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.509725 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lr86\" (UniqueName: \"kubernetes.io/projected/de3fc756-8d16-4d85-8068-0d667549b93a-kube-api-access-8lr86\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.509787 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-scripts\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.509848 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg8lx\" (UniqueName: \"kubernetes.io/projected/ebf43401-bc7c-428b-bc43-30799491a116-kube-api-access-dg8lx\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.509886 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.509917 4652 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.509944 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-svc\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.509994 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf43401-bc7c-428b-bc43-30799491a116-logs\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.510034 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebf43401-bc7c-428b-bc43-30799491a116-etc-machine-id\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.510099 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-swift-storage-0\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.510126 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data-custom\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.510155 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-combined-ca-bundle\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.510229 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.511429 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-sb\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:39.517271 master-0 
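Two pods' volume handling interleaves above: VerifyControllerAttachedVolume records (reconciler_common.go:245) for cinder-c34a6-api-0 run between MountVolume started (reconciler_common.go:218) and SetUp succeeded (operation_generator.go:637) records for the dnsmasq pod. Every entry shares one shape, journal timestamp, host, unit[pid], then a klog header (severity letter, MMDD, wall-clock time, PID, file:line) and the message, which makes demultiplexing by source location or pod straightforward. A sketch of such a field splitter; the group names and input handling are illustrative assumptions:

    import re
    import sys

    # One journald/klog record per line:
    # "<Mon> <DD> <time> <host> <unit>[<pid>]: <sev><MMDD> <time> <pid> <file:line>] <msg>"
    RECORD = re.compile(
        r'^(?P<journal_ts>\w{3} \d{2} [\d:.]+) (?P<host>\S+) '
        r'(?P<unit>\w+)\[(?P<pid>\d+)\]: '
        r'(?P<severity>[IWEF])\d{4} (?P<klog_ts>[\d:.]+) \d+ '
        r'(?P<source>[\w.]+:\d+)\] (?P<message>.*)$'
    )

    for line in sys.stdin:
        m = RECORD.match(line.rstrip("\n"))
        if m:
            # Group by source location to see which kubelet subsystem is
            # talking: reconciler_common.go:245, kubelet.go:2542, etc.
            print(m["severity"], m["source"], "--", m["message"][:60])

Fed the records above, the splitter attributes them to reconciler_common.go, operation_generator.go, kubelet.go and prober_manager.go: the volume reconciler, the mount executor, the sync loop and the prober, respectively.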
Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.514071 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-svc\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr"
Feb 16 17:42:39.517271 master-0 kubenswrapper[4652]: I0216 17:42:39.514307 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-swift-storage-0\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr"
Feb 16 17:42:39.518186 master-0 kubenswrapper[4652]: I0216 17:42:39.517496 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-nb\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr"
Feb 16 17:42:39.533279 master-0 kubenswrapper[4652]: I0216 17:42:39.522616 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-api-0"]
Feb 16 17:42:39.553339 master-0 kubenswrapper[4652]: I0216 17:42:39.539518 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lr86\" (UniqueName: \"kubernetes.io/projected/de3fc756-8d16-4d85-8068-0d667549b93a-kube-api-access-8lr86\") pod \"dnsmasq-dns-7dd98456c9-m47zr\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " pod="openstack/dnsmasq-dns-7dd98456c9-m47zr"
Feb 16 17:42:39.617348 master-0 kubenswrapper[4652]: I0216 17:42:39.614139 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-scripts\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.617348 master-0 kubenswrapper[4652]: I0216 17:42:39.614939 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg8lx\" (UniqueName: \"kubernetes.io/projected/ebf43401-bc7c-428b-bc43-30799491a116-kube-api-access-dg8lx\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.617348 master-0 kubenswrapper[4652]: I0216 17:42:39.615334 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.617348 master-0 kubenswrapper[4652]: I0216 17:42:39.615376 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf43401-bc7c-428b-bc43-30799491a116-logs\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.617348 master-0 kubenswrapper[4652]: I0216 17:42:39.615402 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebf43401-bc7c-428b-bc43-30799491a116-etc-machine-id\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.617348 master-0 kubenswrapper[4652]: I0216 17:42:39.615461 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data-custom\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.617348 master-0 kubenswrapper[4652]: I0216 17:42:39.615481 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-combined-ca-bundle\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.617348 master-0 kubenswrapper[4652]: I0216 17:42:39.615880 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebf43401-bc7c-428b-bc43-30799491a116-etc-machine-id\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.617348 master-0 kubenswrapper[4652]: I0216 17:42:39.617117 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf43401-bc7c-428b-bc43-30799491a116-logs\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.621279 master-0 kubenswrapper[4652]: I0216 17:42:39.619557 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-scripts\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.627026 master-0 kubenswrapper[4652]: I0216 17:42:39.623812 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data-custom\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.674283 master-0 kubenswrapper[4652]: I0216 17:42:39.673892 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.674530 master-0 kubenswrapper[4652]: I0216 17:42:39.674493 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-combined-ca-bundle\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.749414 master-0 kubenswrapper[4652]: I0216 17:42:39.749058 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr"
Feb 16 17:42:39.863746 master-0 kubenswrapper[4652]: I0216 17:42:39.857750 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg8lx\" (UniqueName: \"kubernetes.io/projected/ebf43401-bc7c-428b-bc43-30799491a116-kube-api-access-dg8lx\") pod \"cinder-c34a6-api-0\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:39.981417 master-0 kubenswrapper[4652]: I0216 17:42:39.981010 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-scheduler-0"]
Feb 16 17:42:40.086558 master-0 kubenswrapper[4652]: I0216 17:42:40.086456 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:40.095893 master-0 kubenswrapper[4652]: I0216 17:42:40.094828 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-volume-lvm-iscsi-0"]
Feb 16 17:42:40.286907 master-0 kubenswrapper[4652]: I0216 17:42:40.286688 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-backup-0"]
Feb 16 17:42:40.295704 master-0 kubenswrapper[4652]: I0216 17:42:40.295643 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" event={"ID":"7430db54-9280-474e-b84f-bddab08df2d2","Type":"ContainerStarted","Data":"b77d650bf007b6292d41f4d599f8b338a4eae00583fe663954a73d2b47a0e27b"}
Feb 16 17:42:40.302880 master-0 kubenswrapper[4652]: I0216 17:42:40.302381 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-scheduler-0" event={"ID":"9e0296d1-dc25-4d4d-a617-3d1354eadb6f","Type":"ContainerStarted","Data":"c1305d2596f56ad20bf45cbe22fdf60d40abe90954cb3bc7e2bf7f54d6877a19"}
Feb 16 17:42:40.468178 master-0 kubenswrapper[4652]: I0216 17:42:40.468095 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dd98456c9-m47zr"]
Feb 16 17:42:40.622371 master-0 kubenswrapper[4652]: I0216 17:42:40.620003 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-api-0"]
Feb 16 17:42:41.314812 master-0 kubenswrapper[4652]: I0216 17:42:41.314759 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-api-0" event={"ID":"ebf43401-bc7c-428b-bc43-30799491a116","Type":"ContainerStarted","Data":"01d8704bde3862531ee1dd0b4d04175baa1638753845ff346db179b71d5d5b6c"}
Feb 16 17:42:41.328804 master-0 kubenswrapper[4652]: I0216 17:42:41.320786 4652 generic.go:334] "Generic (PLEG): container finished" podID="de3fc756-8d16-4d85-8068-0d667549b93a" containerID="33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601" exitCode=0
Feb 16 17:42:41.328804 master-0 kubenswrapper[4652]: I0216 17:42:41.320838 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" event={"ID":"de3fc756-8d16-4d85-8068-0d667549b93a","Type":"ContainerDied","Data":"33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601"}
Feb 16 17:42:41.328804 master-0 kubenswrapper[4652]: I0216 17:42:41.320857 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" event={"ID":"de3fc756-8d16-4d85-8068-0d667549b93a","Type":"ContainerStarted","Data":"81f470419328aa034c617d207a8df6fa116fd54965456b0b315a6ca519d07c6b"}
Feb 16 17:42:41.328804 master-0 kubenswrapper[4652]: I0216 17:42:41.326974 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:42:41.328804 master-0 kubenswrapper[4652]: I0216 17:42:41.326987 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 17:42:41.328804 master-0 kubenswrapper[4652]: I0216 17:42:41.327989 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-backup-0" event={"ID":"f1339fd1-e639-4057-bb76-3ad09c3000fe","Type":"ContainerStarted","Data":"1fe2246476947c524df4c99471343375b5da26b585327c3905ce9de173f7c50b"}
Feb 16 17:42:41.875599 master-0 kubenswrapper[4652]: I0216 17:42:41.875547 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-50e08-default-internal-api-0"
Feb 16 17:42:41.875814 master-0 kubenswrapper[4652]: I0216 17:42:41.875611 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-50e08-default-internal-api-0"
Feb 16 17:42:42.358489 master-0 kubenswrapper[4652]: I0216 17:42:42.358427 4652 generic.go:334] "Generic (PLEG): container finished" podID="a9aa2fd7-c127-4b90-973d-67f8be387ef6" containerID="7e9cf091a5f27ffb6fa78bcb63dc31fe17bd7187ecf9743206c305750a710cf4" exitCode=0
Feb 16 17:42:42.363868 master-0 kubenswrapper[4652]: I0216 17:42:42.358531 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-74cn5" event={"ID":"a9aa2fd7-c127-4b90-973d-67f8be387ef6","Type":"ContainerDied","Data":"7e9cf091a5f27ffb6fa78bcb63dc31fe17bd7187ecf9743206c305750a710cf4"}
Feb 16 17:42:42.383346 master-0 kubenswrapper[4652]: I0216 17:42:42.382993 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-scheduler-0" event={"ID":"9e0296d1-dc25-4d4d-a617-3d1354eadb6f","Type":"ContainerStarted","Data":"e4eeb696ba18e952a959884a793243b321b1c30a60e9da8f9265308dda2bc9d1"}
Feb 16 17:42:42.406854 master-0 kubenswrapper[4652]: I0216 17:42:42.405366 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-api-0" event={"ID":"ebf43401-bc7c-428b-bc43-30799491a116","Type":"ContainerStarted","Data":"e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623"}
Feb 16 17:42:42.412034 master-0 kubenswrapper[4652]: I0216 17:42:42.410519 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" event={"ID":"de3fc756-8d16-4d85-8068-0d667549b93a","Type":"ContainerStarted","Data":"3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f"}
Feb 16 17:42:42.412034 master-0 kubenswrapper[4652]: I0216 17:42:42.411712 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr"
Feb 16 17:42:42.429512 master-0 kubenswrapper[4652]: I0216 17:42:42.429464 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" event={"ID":"7430db54-9280-474e-b84f-bddab08df2d2","Type":"ContainerStarted","Data":"74d5e3a453b4b28f53163373eec0c47b54c469fe1ecbc08861bfe8f776973aa4"}
Feb 16 17:42:42.482935 master-0 kubenswrapper[4652]: I0216 17:42:42.482879 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" podStartSLOduration=3.482861202 podStartE2EDuration="3.482861202s" podCreationTimestamp="2026-02-16 17:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:42.453421413 +0000 UTC m=+1119.841589939" watchObservedRunningTime="2026-02-16 17:42:42.482861202 +0000 UTC m=+1119.871029708"
Feb 16 17:42:42.611963 master-0 kubenswrapper[4652]: I0216 17:42:42.611839 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c34a6-api-0"]
Feb 16 17:42:43.458385 master-0 kubenswrapper[4652]: I0216 17:42:43.458334 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-backup-0" event={"ID":"f1339fd1-e639-4057-bb76-3ad09c3000fe","Type":"ContainerStarted","Data":"f51ef18270d7ffa218605c7022402385de3d816737b77ce4de126043a6d58f45"}
Feb 16 17:42:43.458385 master-0 kubenswrapper[4652]: I0216 17:42:43.458385 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-backup-0" event={"ID":"f1339fd1-e639-4057-bb76-3ad09c3000fe","Type":"ContainerStarted","Data":"901a3cf6aefe6b0588fc42aaadc11445f06847028e31673918cd8ece3b9a6728"}
Feb 16 17:42:43.466472 master-0 kubenswrapper[4652]: I0216 17:42:43.466421 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" event={"ID":"7430db54-9280-474e-b84f-bddab08df2d2","Type":"ContainerStarted","Data":"94e4d2d7d338f607835bfdd0aab259b621394f3a20816db842efe8432c21ae55"}
Feb 16 17:42:43.470672 master-0 kubenswrapper[4652]: I0216 17:42:43.470605 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-scheduler-0" event={"ID":"9e0296d1-dc25-4d4d-a617-3d1354eadb6f","Type":"ContainerStarted","Data":"edf0c3be42c6e3861ca6d7f31f947964c75aa1e890616af1d2083c84e7e1d950"}
Feb 16 17:42:43.474271 master-0 kubenswrapper[4652]: I0216 17:42:43.474190 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-c34a6-api-0" podUID="ebf43401-bc7c-428b-bc43-30799491a116" containerName="cinder-c34a6-api-log" containerID="cri-o://e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623" gracePeriod=30
Feb 16 17:42:43.474663 master-0 kubenswrapper[4652]: I0216 17:42:43.474593 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-api-0" event={"ID":"ebf43401-bc7c-428b-bc43-30799491a116","Type":"ContainerStarted","Data":"bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417"}
Feb 16 17:42:43.478256 master-0 kubenswrapper[4652]: I0216 17:42:43.478195 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-c34a6-api-0"
Feb 16 17:42:43.478337 master-0 kubenswrapper[4652]: I0216 17:42:43.478221 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-c34a6-api-0" podUID="ebf43401-bc7c-428b-bc43-30799491a116" containerName="cinder-api" containerID="cri-o://bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417" gracePeriod=30
Feb 16 17:42:43.516389 master-0 kubenswrapper[4652]: I0216 17:42:43.516328 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c34a6-backup-0" podStartSLOduration=3.934719989 podStartE2EDuration="5.516310025s" podCreationTimestamp="2026-02-16 17:42:38 +0000 UTC" firstStartedPulling="2026-02-16 17:42:40.297859242 +0000 UTC m=+1117.686027758" lastFinishedPulling="2026-02-16 17:42:41.879449278 +0000 UTC m=+1119.267617794" observedRunningTime="2026-02-16 17:42:43.490530464 +0000 UTC m=+1120.878698990" watchObservedRunningTime="2026-02-16 17:42:43.516310025 +0000 UTC m=+1120.904478541"
Feb 16 17:42:43.565448 master-0 kubenswrapper[4652]: I0216 17:42:43.564651 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" podStartSLOduration=4.675156847 podStartE2EDuration="5.564582029s" podCreationTimestamp="2026-02-16 17:42:38 +0000 UTC" firstStartedPulling="2026-02-16 17:42:40.066643664 +0000 UTC m=+1117.454812180" lastFinishedPulling="2026-02-16 17:42:40.956068846 +0000 UTC m=+1118.344237362" observedRunningTime="2026-02-16 17:42:43.528591554 +0000 UTC m=+1120.916760070" watchObservedRunningTime="2026-02-16 17:42:43.564582029 +0000 UTC m=+1120.952750545"
Feb 16 17:42:43.588942 master-0 kubenswrapper[4652]: I0216 17:42:43.588871 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c34a6-scheduler-0" podStartSLOduration=4.713225398 podStartE2EDuration="5.58885339s" podCreationTimestamp="2026-02-16 17:42:38 +0000 UTC" firstStartedPulling="2026-02-16 17:42:40.009893313 +0000 UTC m=+1117.398061829" lastFinishedPulling="2026-02-16 17:42:40.885521305 +0000 UTC m=+1118.273689821" observedRunningTime="2026-02-16 17:42:43.578637116 +0000 UTC m=+1120.966805642" watchObservedRunningTime="2026-02-16 17:42:43.58885339 +0000 UTC m=+1120.977021916"
Feb 16 17:42:43.638844 master-0 kubenswrapper[4652]: I0216 17:42:43.638771 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c34a6-api-0" podStartSLOduration=4.638754087 podStartE2EDuration="4.638754087s" podCreationTimestamp="2026-02-16 17:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:43.620587081 +0000 UTC m=+1121.008755607" watchObservedRunningTime="2026-02-16 17:42:43.638754087 +0000 UTC m=+1121.026922593"
Feb 16 17:42:44.084153 master-0 kubenswrapper[4652]: I0216 17:42:44.084109 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-74cn5"
Feb 16 17:42:44.254986 master-0 kubenswrapper[4652]: I0216 17:42:44.254940 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-c34a6-scheduler-0"
Feb 16 17:42:44.265623 master-0 kubenswrapper[4652]: I0216 17:42:44.264463 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-combined-ca-bundle\") pod \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") "
Feb 16 17:42:44.265623 master-0 kubenswrapper[4652]: I0216 17:42:44.264754 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-config\") pod \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") "
Feb 16 17:42:44.265623 master-0 kubenswrapper[4652]: I0216 17:42:44.264804 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhqnj\" (UniqueName: \"kubernetes.io/projected/a9aa2fd7-c127-4b90-973d-67f8be387ef6-kube-api-access-lhqnj\") pod \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\" (UID: \"a9aa2fd7-c127-4b90-973d-67f8be387ef6\") "
Feb 16 17:42:44.274139 master-0 kubenswrapper[4652]: I0216 17:42:44.273785 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9aa2fd7-c127-4b90-973d-67f8be387ef6-kube-api-access-lhqnj" (OuterVolumeSpecName: "kube-api-access-lhqnj") pod "a9aa2fd7-c127-4b90-973d-67f8be387ef6" (UID: "a9aa2fd7-c127-4b90-973d-67f8be387ef6"). InnerVolumeSpecName "kube-api-access-lhqnj". PluginName "kubernetes.io/projected", VolumeGidValue ""
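The pod_startup_latency_tracker records above encode a consistent arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling); pods that pulled nothing report the all-zero 0001-01-01 pull timestamps and identical SLO and E2E values. Checking the cinder-c34a6-backup-0 record: 17:42:43.516310025 minus 17:42:38 is 5.516310025 s, and 5.516310025 minus (41.879449278 minus 40.297859242) is 3.934719989 s, exactly the logged values. The same check in a few lines (the m=+... suffixes are Go monotonic clock readings and are dropped):

    from datetime import datetime

    # Re-deriving the cinder-c34a6-backup-0 numbers above. Fractional seconds
    # are trimmed to microseconds for strptime's %f.
    def ts(text: str) -> datetime:
        return datetime.strptime(text[:26], "%Y-%m-%d %H:%M:%S.%f")

    created    = datetime.strptime("2026-02-16 17:42:38", "%Y-%m-%d %H:%M:%S")
    observed   = ts("2026-02-16 17:42:43.516310025")   # watchObservedRunningTime
    pull_start = ts("2026-02-16 17:42:40.297859242")   # firstStartedPulling
    pull_end   = ts("2026-02-16 17:42:41.879449278")   # lastFinishedPulling

    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_end - pull_start).total_seconds()
    print(f"E2E ~ {e2e:.6f}s, SLO ~ {slo:.6f}s")       # ~5.516310s and ~3.934720s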
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:44.286461 master-0 kubenswrapper[4652]: I0216 17:42:44.286402 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:44.301715 master-0 kubenswrapper[4652]: I0216 17:42:44.301655 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-config" (OuterVolumeSpecName: "config") pod "a9aa2fd7-c127-4b90-973d-67f8be387ef6" (UID: "a9aa2fd7-c127-4b90-973d-67f8be387ef6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:44.325276 master-0 kubenswrapper[4652]: I0216 17:42:44.324834 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9aa2fd7-c127-4b90-973d-67f8be387ef6" (UID: "a9aa2fd7-c127-4b90-973d-67f8be387ef6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:44.353245 master-0 kubenswrapper[4652]: I0216 17:42:44.352961 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:44.368010 master-0 kubenswrapper[4652]: I0216 17:42:44.367905 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.368010 master-0 kubenswrapper[4652]: I0216 17:42:44.367996 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhqnj\" (UniqueName: \"kubernetes.io/projected/a9aa2fd7-c127-4b90-973d-67f8be387ef6-kube-api-access-lhqnj\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.368010 master-0 kubenswrapper[4652]: I0216 17:42:44.368011 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9aa2fd7-c127-4b90-973d-67f8be387ef6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.447345 master-0 kubenswrapper[4652]: I0216 17:42:44.447306 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:44.517340 master-0 kubenswrapper[4652]: I0216 17:42:44.517277 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-74cn5" event={"ID":"a9aa2fd7-c127-4b90-973d-67f8be387ef6","Type":"ContainerDied","Data":"c7e6efe73f042dc37728ac35efecebdd666e61d51e0de99b9b36ea9967837a01"} Feb 16 17:42:44.517340 master-0 kubenswrapper[4652]: I0216 17:42:44.517326 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7e6efe73f042dc37728ac35efecebdd666e61d51e0de99b9b36ea9967837a01" Feb 16 17:42:44.517947 master-0 kubenswrapper[4652]: I0216 17:42:44.517387 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-74cn5" Feb 16 17:42:44.522821 master-0 kubenswrapper[4652]: I0216 17:42:44.522763 4652 generic.go:334] "Generic (PLEG): container finished" podID="ebf43401-bc7c-428b-bc43-30799491a116" containerID="bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417" exitCode=0 Feb 16 17:42:44.522821 master-0 kubenswrapper[4652]: I0216 17:42:44.522797 4652 generic.go:334] "Generic (PLEG): container finished" podID="ebf43401-bc7c-428b-bc43-30799491a116" containerID="e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623" exitCode=143 Feb 16 17:42:44.523271 master-0 kubenswrapper[4652]: I0216 17:42:44.523182 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-api-0" event={"ID":"ebf43401-bc7c-428b-bc43-30799491a116","Type":"ContainerDied","Data":"bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417"} Feb 16 17:42:44.523410 master-0 kubenswrapper[4652]: I0216 17:42:44.523389 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-api-0" event={"ID":"ebf43401-bc7c-428b-bc43-30799491a116","Type":"ContainerDied","Data":"e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623"} Feb 16 17:42:44.523529 master-0 kubenswrapper[4652]: I0216 17:42:44.523512 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-api-0" event={"ID":"ebf43401-bc7c-428b-bc43-30799491a116","Type":"ContainerDied","Data":"01d8704bde3862531ee1dd0b4d04175baa1638753845ff346db179b71d5d5b6c"} Feb 16 17:42:44.523644 master-0 kubenswrapper[4652]: I0216 17:42:44.523436 4652 scope.go:117] "RemoveContainer" containerID="bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417" Feb 16 17:42:44.523996 master-0 kubenswrapper[4652]: I0216 17:42:44.523294 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:44.559541 master-0 kubenswrapper[4652]: I0216 17:42:44.555571 4652 scope.go:117] "RemoveContainer" containerID="e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.576641 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data-custom\") pod \"ebf43401-bc7c-428b-bc43-30799491a116\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.576778 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebf43401-bc7c-428b-bc43-30799491a116-etc-machine-id\") pod \"ebf43401-bc7c-428b-bc43-30799491a116\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.576919 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg8lx\" (UniqueName: \"kubernetes.io/projected/ebf43401-bc7c-428b-bc43-30799491a116-kube-api-access-dg8lx\") pod \"ebf43401-bc7c-428b-bc43-30799491a116\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.576950 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data\") pod \"ebf43401-bc7c-428b-bc43-30799491a116\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.576993 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf43401-bc7c-428b-bc43-30799491a116-logs\") pod \"ebf43401-bc7c-428b-bc43-30799491a116\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.577037 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-combined-ca-bundle\") pod \"ebf43401-bc7c-428b-bc43-30799491a116\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.577121 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-scripts\") pod \"ebf43401-bc7c-428b-bc43-30799491a116\" (UID: \"ebf43401-bc7c-428b-bc43-30799491a116\") " Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.577473 4652 scope.go:117] "RemoveContainer" containerID="bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.577876 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf43401-bc7c-428b-bc43-30799491a116-logs" (OuterVolumeSpecName: "logs") pod "ebf43401-bc7c-428b-bc43-30799491a116" (UID: "ebf43401-bc7c-428b-bc43-30799491a116"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: E0216 17:42:44.578208 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417\": container with ID starting with bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417 not found: ID does not exist" containerID="bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.578281 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ebf43401-bc7c-428b-bc43-30799491a116-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.578239 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417"} err="failed to get container status \"bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417\": rpc error: code = NotFound desc = could not find container \"bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417\": container with ID starting with bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417 not found: ID does not exist" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.578346 4652 scope.go:117] "RemoveContainer" containerID="e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: E0216 17:42:44.580019 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623\": container with ID starting with e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623 not found: ID does not exist" containerID="e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.580043 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623"} err="failed to get container status \"e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623\": rpc error: code = NotFound desc = could not find container \"e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623\": container with ID starting with e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623 not found: ID does not exist" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.580063 4652 scope.go:117] "RemoveContainer" containerID="bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.580126 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ebf43401-bc7c-428b-bc43-30799491a116-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ebf43401-bc7c-428b-bc43-30799491a116" (UID: "ebf43401-bc7c-428b-bc43-30799491a116"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.580485 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417"} err="failed to get container status \"bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417\": rpc error: code = NotFound desc = could not find container \"bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417\": container with ID starting with bf7d44708375cc83f3644a9eae41ac44869d727fb1014733e42441e6130b2417 not found: ID does not exist" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.580523 4652 scope.go:117] "RemoveContainer" containerID="e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623" Feb 16 17:42:44.581343 master-0 kubenswrapper[4652]: I0216 17:42:44.580932 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623"} err="failed to get container status \"e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623\": rpc error: code = NotFound desc = could not find container \"e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623\": container with ID starting with e95af3a6503376cc1de05ca07d856281a005c050342807cdead3c8f2193c3623 not found: ID does not exist" Feb 16 17:42:44.582696 master-0 kubenswrapper[4652]: I0216 17:42:44.582121 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ebf43401-bc7c-428b-bc43-30799491a116" (UID: "ebf43401-bc7c-428b-bc43-30799491a116"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:44.583571 master-0 kubenswrapper[4652]: I0216 17:42:44.583526 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-scripts" (OuterVolumeSpecName: "scripts") pod "ebf43401-bc7c-428b-bc43-30799491a116" (UID: "ebf43401-bc7c-428b-bc43-30799491a116"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:44.594501 master-0 kubenswrapper[4652]: I0216 17:42:44.594436 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf43401-bc7c-428b-bc43-30799491a116-kube-api-access-dg8lx" (OuterVolumeSpecName: "kube-api-access-dg8lx") pod "ebf43401-bc7c-428b-bc43-30799491a116" (UID: "ebf43401-bc7c-428b-bc43-30799491a116"). InnerVolumeSpecName "kube-api-access-dg8lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:44.662346 master-0 kubenswrapper[4652]: I0216 17:42:44.661424 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data" (OuterVolumeSpecName: "config-data") pod "ebf43401-bc7c-428b-bc43-30799491a116" (UID: "ebf43401-bc7c-428b-bc43-30799491a116"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:44.664980 master-0 kubenswrapper[4652]: I0216 17:42:44.663268 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ebf43401-bc7c-428b-bc43-30799491a116" (UID: "ebf43401-bc7c-428b-bc43-30799491a116"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:44.798840 master-0 kubenswrapper[4652]: I0216 17:42:44.790201 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg8lx\" (UniqueName: \"kubernetes.io/projected/ebf43401-bc7c-428b-bc43-30799491a116-kube-api-access-dg8lx\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.798840 master-0 kubenswrapper[4652]: I0216 17:42:44.790236 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.798840 master-0 kubenswrapper[4652]: I0216 17:42:44.790260 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.798840 master-0 kubenswrapper[4652]: I0216 17:42:44.790275 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.798840 master-0 kubenswrapper[4652]: I0216 17:42:44.790284 4652 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ebf43401-bc7c-428b-bc43-30799491a116-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.798840 master-0 kubenswrapper[4652]: I0216 17:42:44.790302 4652 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ebf43401-bc7c-428b-bc43-30799491a116-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:44.870042 master-0 kubenswrapper[4652]: I0216 17:42:44.867322 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd98456c9-m47zr"] Feb 16 17:42:44.887838 master-0 kubenswrapper[4652]: I0216 17:42:44.887799 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-547dcb69f9-nqbv9"] Feb 16 17:42:44.888781 master-0 kubenswrapper[4652]: E0216 17:42:44.888762 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf43401-bc7c-428b-bc43-30799491a116" containerName="cinder-api" Feb 16 17:42:44.888883 master-0 kubenswrapper[4652]: I0216 17:42:44.888872 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf43401-bc7c-428b-bc43-30799491a116" containerName="cinder-api" Feb 16 17:42:44.888966 master-0 kubenswrapper[4652]: E0216 17:42:44.888955 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf43401-bc7c-428b-bc43-30799491a116" containerName="cinder-c34a6-api-log" Feb 16 17:42:44.889023 master-0 kubenswrapper[4652]: I0216 17:42:44.889014 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf43401-bc7c-428b-bc43-30799491a116" containerName="cinder-c34a6-api-log" Feb 16 17:42:44.889098 master-0 kubenswrapper[4652]: E0216 17:42:44.889089 4652 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a9aa2fd7-c127-4b90-973d-67f8be387ef6" containerName="neutron-db-sync" Feb 16 17:42:44.889166 master-0 kubenswrapper[4652]: I0216 17:42:44.889157 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9aa2fd7-c127-4b90-973d-67f8be387ef6" containerName="neutron-db-sync" Feb 16 17:42:44.889475 master-0 kubenswrapper[4652]: I0216 17:42:44.889462 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf43401-bc7c-428b-bc43-30799491a116" containerName="cinder-api" Feb 16 17:42:44.889558 master-0 kubenswrapper[4652]: I0216 17:42:44.889548 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf43401-bc7c-428b-bc43-30799491a116" containerName="cinder-c34a6-api-log" Feb 16 17:42:44.889635 master-0 kubenswrapper[4652]: I0216 17:42:44.889626 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9aa2fd7-c127-4b90-973d-67f8be387ef6" containerName="neutron-db-sync" Feb 16 17:42:44.890947 master-0 kubenswrapper[4652]: I0216 17:42:44.890929 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:44.945786 master-0 kubenswrapper[4652]: I0216 17:42:44.945729 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-547dcb69f9-nqbv9"] Feb 16 17:42:45.004071 master-0 kubenswrapper[4652]: I0216 17:42:45.004004 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-sb\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.004329 master-0 kubenswrapper[4652]: I0216 17:42:45.004105 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2tfn\" (UniqueName: \"kubernetes.io/projected/f1048855-86d3-4f6a-b538-a53b51711bce-kube-api-access-w2tfn\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.004329 master-0 kubenswrapper[4652]: I0216 17:42:45.004131 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-swift-storage-0\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.004329 master-0 kubenswrapper[4652]: I0216 17:42:45.004184 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-nb\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.004329 master-0 kubenswrapper[4652]: I0216 17:42:45.004303 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-svc\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.004520 master-0 kubenswrapper[4652]: I0216 17:42:45.004364 4652 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-config\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.035521 master-0 kubenswrapper[4652]: I0216 17:42:45.034395 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c34a6-api-0"] Feb 16 17:42:45.054499 master-0 kubenswrapper[4652]: I0216 17:42:45.053896 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c34a6-api-0"] Feb 16 17:42:45.065465 master-0 kubenswrapper[4652]: I0216 17:42:45.065325 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-66f9d86cdb-h58xd"] Feb 16 17:42:45.073011 master-0 kubenswrapper[4652]: I0216 17:42:45.072164 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.074138 master-0 kubenswrapper[4652]: I0216 17:42:45.074106 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 17:42:45.074573 master-0 kubenswrapper[4652]: I0216 17:42:45.074490 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 17:42:45.074781 master-0 kubenswrapper[4652]: I0216 17:42:45.074664 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 17:42:45.082129 master-0 kubenswrapper[4652]: I0216 17:42:45.082096 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c34a6-api-0"] Feb 16 17:42:45.084430 master-0 kubenswrapper[4652]: I0216 17:42:45.084390 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.086388 master-0 kubenswrapper[4652]: I0216 17:42:45.086368 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 17:42:45.086611 master-0 kubenswrapper[4652]: I0216 17:42:45.086588 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-api-config-data" Feb 16 17:42:45.086776 master-0 kubenswrapper[4652]: I0216 17:42:45.086761 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 17:42:45.092405 master-0 kubenswrapper[4652]: I0216 17:42:45.092328 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66f9d86cdb-h58xd"] Feb 16 17:42:45.104745 master-0 kubenswrapper[4652]: I0216 17:42:45.104689 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-api-0"] Feb 16 17:42:45.106375 master-0 kubenswrapper[4652]: I0216 17:42:45.106329 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2tfn\" (UniqueName: \"kubernetes.io/projected/f1048855-86d3-4f6a-b538-a53b51711bce-kube-api-access-w2tfn\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.106450 master-0 kubenswrapper[4652]: I0216 17:42:45.106383 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-swift-storage-0\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.106450 master-0 kubenswrapper[4652]: I0216 17:42:45.106439 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-nb\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.106521 master-0 kubenswrapper[4652]: I0216 17:42:45.106513 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-svc\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.106572 master-0 kubenswrapper[4652]: I0216 17:42:45.106556 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-config\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.106616 master-0 kubenswrapper[4652]: I0216 17:42:45.106607 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-sb\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.107483 master-0 kubenswrapper[4652]: I0216 17:42:45.107454 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-nb\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.107939 master-0 kubenswrapper[4652]: I0216 17:42:45.107899 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-svc\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.110847 master-0 kubenswrapper[4652]: I0216 17:42:45.110790 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-swift-storage-0\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.116608 master-0 kubenswrapper[4652]: I0216 17:42:45.116565 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-sb\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.117428 master-0 kubenswrapper[4652]: I0216 17:42:45.117397 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-config\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.141179 master-0 kubenswrapper[4652]: I0216 17:42:45.141122 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2tfn\" (UniqueName: \"kubernetes.io/projected/f1048855-86d3-4f6a-b538-a53b51711bce-kube-api-access-w2tfn\") pod \"dnsmasq-dns-547dcb69f9-nqbv9\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.210271 master-0 kubenswrapper[4652]: I0216 17:42:45.210187 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7604569-ce06-4b64-873c-445738b11cff-logs\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.210271 master-0 kubenswrapper[4652]: I0216 17:42:45.210277 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-httpd-config\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.210599 master-0 kubenswrapper[4652]: I0216 17:42:45.210328 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7604569-ce06-4b64-873c-445738b11cff-etc-machine-id\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.210599 master-0 kubenswrapper[4652]: I0216 17:42:45.210353 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-internal-tls-certs\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.210599 master-0 kubenswrapper[4652]: I0216 17:42:45.210539 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9qds\" (UniqueName: \"kubernetes.io/projected/c7604569-ce06-4b64-873c-445738b11cff-kube-api-access-l9qds\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.210751 master-0 kubenswrapper[4652]: I0216 17:42:45.210630 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-config\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.210751 master-0 kubenswrapper[4652]: I0216 17:42:45.210735 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-public-tls-certs\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.210883 master-0 kubenswrapper[4652]: I0216 17:42:45.210762 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-combined-ca-bundle\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.210940 master-0 kubenswrapper[4652]: I0216 17:42:45.210912 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-scripts\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.211429 master-0 kubenswrapper[4652]: I0216 17:42:45.211031 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-config-data-custom\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.211429 master-0 kubenswrapper[4652]: I0216 17:42:45.211089 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-config-data\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.211429 master-0 kubenswrapper[4652]: I0216 17:42:45.211199 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-combined-ca-bundle\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.211429 master-0 kubenswrapper[4652]: I0216 17:42:45.211317 4652 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4flz\" (UniqueName: \"kubernetes.io/projected/d4f31917-de6f-4a2d-a7ec-14023e52f58d-kube-api-access-x4flz\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.211429 master-0 kubenswrapper[4652]: I0216 17:42:45.211414 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-ovndb-tls-certs\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.266885 master-0 kubenswrapper[4652]: I0216 17:42:45.266816 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314138 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-scripts\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314224 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-config-data-custom\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314271 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-config-data\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314350 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-combined-ca-bundle\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314420 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4flz\" (UniqueName: \"kubernetes.io/projected/d4f31917-de6f-4a2d-a7ec-14023e52f58d-kube-api-access-x4flz\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314468 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-ovndb-tls-certs\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314524 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7604569-ce06-4b64-873c-445738b11cff-logs\") pod \"cinder-c34a6-api-0\" (UID: 
\"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314557 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-httpd-config\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314605 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7604569-ce06-4b64-873c-445738b11cff-etc-machine-id\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314639 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-internal-tls-certs\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314683 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9qds\" (UniqueName: \"kubernetes.io/projected/c7604569-ce06-4b64-873c-445738b11cff-kube-api-access-l9qds\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314752 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-config\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314803 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-public-tls-certs\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.314842 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-combined-ca-bundle\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.316046 master-0 kubenswrapper[4652]: I0216 17:42:45.315974 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7604569-ce06-4b64-873c-445738b11cff-logs\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.318427 master-0 kubenswrapper[4652]: I0216 17:42:45.318368 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-combined-ca-bundle\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.322202 master-0 kubenswrapper[4652]: 
I0216 17:42:45.319333 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-scripts\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.322202 master-0 kubenswrapper[4652]: I0216 17:42:45.319776 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7604569-ce06-4b64-873c-445738b11cff-etc-machine-id\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.325700 master-0 kubenswrapper[4652]: I0216 17:42:45.322476 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-ovndb-tls-certs\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.325700 master-0 kubenswrapper[4652]: I0216 17:42:45.324525 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-combined-ca-bundle\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.337991 master-0 kubenswrapper[4652]: I0216 17:42:45.329197 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-config-data-custom\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.337991 master-0 kubenswrapper[4652]: I0216 17:42:45.329199 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-httpd-config\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.337991 master-0 kubenswrapper[4652]: I0216 17:42:45.329538 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-config\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.337991 master-0 kubenswrapper[4652]: I0216 17:42:45.329637 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-public-tls-certs\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.337991 master-0 kubenswrapper[4652]: I0216 17:42:45.336670 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4flz\" (UniqueName: \"kubernetes.io/projected/d4f31917-de6f-4a2d-a7ec-14023e52f58d-kube-api-access-x4flz\") pod \"neutron-66f9d86cdb-h58xd\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.337991 master-0 kubenswrapper[4652]: I0216 17:42:45.336750 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-internal-tls-certs\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.340346 master-0 kubenswrapper[4652]: I0216 17:42:45.339089 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9qds\" (UniqueName: \"kubernetes.io/projected/c7604569-ce06-4b64-873c-445738b11cff-kube-api-access-l9qds\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.340346 master-0 kubenswrapper[4652]: I0216 17:42:45.340029 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7604569-ce06-4b64-873c-445738b11cff-config-data\") pod \"cinder-c34a6-api-0\" (UID: \"c7604569-ce06-4b64-873c-445738b11cff\") " pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.409350 master-0 kubenswrapper[4652]: I0216 17:42:45.409001 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:42:45.429948 master-0 kubenswrapper[4652]: I0216 17:42:45.429446 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:45.542135 master-0 kubenswrapper[4652]: I0216 17:42:45.541733 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" podUID="de3fc756-8d16-4d85-8068-0d667549b93a" containerName="dnsmasq-dns" containerID="cri-o://3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f" gracePeriod=10 Feb 16 17:42:45.796736 master-0 kubenswrapper[4652]: I0216 17:42:45.796689 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-547dcb69f9-nqbv9"] Feb 16 17:42:45.983501 master-0 kubenswrapper[4652]: I0216 17:42:45.982342 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-api-0"] Feb 16 17:42:46.187110 master-0 kubenswrapper[4652]: I0216 17:42:46.187042 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-66f9d86cdb-h58xd"] Feb 16 17:42:46.218815 master-0 kubenswrapper[4652]: W0216 17:42:46.218759 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4f31917_de6f_4a2d_a7ec_14023e52f58d.slice/crio-c32aeb76122c0938b82f34b69a7287e62d3455243fef801010c5cfe9f22a7c19 WatchSource:0}: Error finding container c32aeb76122c0938b82f34b69a7287e62d3455243fef801010c5cfe9f22a7c19: Status 404 returned error can't find the container with id c32aeb76122c0938b82f34b69a7287e62d3455243fef801010c5cfe9f22a7c19 Feb 16 17:42:46.237116 master-0 kubenswrapper[4652]: I0216 17:42:46.237069 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:46.345317 master-0 kubenswrapper[4652]: I0216 17:42:46.345257 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-svc\") pod \"de3fc756-8d16-4d85-8068-0d667549b93a\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " Feb 16 17:42:46.345558 master-0 kubenswrapper[4652]: I0216 17:42:46.345360 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-nb\") pod \"de3fc756-8d16-4d85-8068-0d667549b93a\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " Feb 16 17:42:46.345558 master-0 kubenswrapper[4652]: I0216 17:42:46.345402 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-config\") pod \"de3fc756-8d16-4d85-8068-0d667549b93a\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " Feb 16 17:42:46.345558 master-0 kubenswrapper[4652]: I0216 17:42:46.345436 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-swift-storage-0\") pod \"de3fc756-8d16-4d85-8068-0d667549b93a\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " Feb 16 17:42:46.345683 master-0 kubenswrapper[4652]: I0216 17:42:46.345633 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lr86\" (UniqueName: \"kubernetes.io/projected/de3fc756-8d16-4d85-8068-0d667549b93a-kube-api-access-8lr86\") pod \"de3fc756-8d16-4d85-8068-0d667549b93a\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " Feb 16 17:42:46.345771 master-0 kubenswrapper[4652]: I0216 17:42:46.345737 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-sb\") pod \"de3fc756-8d16-4d85-8068-0d667549b93a\" (UID: \"de3fc756-8d16-4d85-8068-0d667549b93a\") " Feb 16 17:42:46.350342 master-0 kubenswrapper[4652]: I0216 17:42:46.350289 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de3fc756-8d16-4d85-8068-0d667549b93a-kube-api-access-8lr86" (OuterVolumeSpecName: "kube-api-access-8lr86") pod "de3fc756-8d16-4d85-8068-0d667549b93a" (UID: "de3fc756-8d16-4d85-8068-0d667549b93a"). InnerVolumeSpecName "kube-api-access-8lr86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:46.436140 master-0 kubenswrapper[4652]: I0216 17:42:46.436009 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de3fc756-8d16-4d85-8068-0d667549b93a" (UID: "de3fc756-8d16-4d85-8068-0d667549b93a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:46.451422 master-0 kubenswrapper[4652]: I0216 17:42:46.449996 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lr86\" (UniqueName: \"kubernetes.io/projected/de3fc756-8d16-4d85-8068-0d667549b93a-kube-api-access-8lr86\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:46.451422 master-0 kubenswrapper[4652]: I0216 17:42:46.450036 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:46.546591 master-0 kubenswrapper[4652]: I0216 17:42:46.546537 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "de3fc756-8d16-4d85-8068-0d667549b93a" (UID: "de3fc756-8d16-4d85-8068-0d667549b93a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:46.547374 master-0 kubenswrapper[4652]: I0216 17:42:46.547215 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de3fc756-8d16-4d85-8068-0d667549b93a" (UID: "de3fc756-8d16-4d85-8068-0d667549b93a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:46.549547 master-0 kubenswrapper[4652]: I0216 17:42:46.549514 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de3fc756-8d16-4d85-8068-0d667549b93a" (UID: "de3fc756-8d16-4d85-8068-0d667549b93a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:46.550061 master-0 kubenswrapper[4652]: I0216 17:42:46.549926 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-config" (OuterVolumeSpecName: "config") pod "de3fc756-8d16-4d85-8068-0d667549b93a" (UID: "de3fc756-8d16-4d85-8068-0d667549b93a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:46.551905 master-0 kubenswrapper[4652]: I0216 17:42:46.551851 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:46.551905 master-0 kubenswrapper[4652]: I0216 17:42:46.551892 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:46.551905 master-0 kubenswrapper[4652]: I0216 17:42:46.551901 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:46.552167 master-0 kubenswrapper[4652]: I0216 17:42:46.551912 4652 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/de3fc756-8d16-4d85-8068-0d667549b93a-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:46.565383 master-0 kubenswrapper[4652]: I0216 17:42:46.565298 4652 generic.go:334] "Generic (PLEG): container finished" podID="de3fc756-8d16-4d85-8068-0d667549b93a" containerID="3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f" exitCode=0 Feb 16 17:42:46.565504 master-0 kubenswrapper[4652]: I0216 17:42:46.565380 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" event={"ID":"de3fc756-8d16-4d85-8068-0d667549b93a","Type":"ContainerDied","Data":"3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f"} Feb 16 17:42:46.565504 master-0 kubenswrapper[4652]: I0216 17:42:46.565449 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" event={"ID":"de3fc756-8d16-4d85-8068-0d667549b93a","Type":"ContainerDied","Data":"81f470419328aa034c617d207a8df6fa116fd54965456b0b315a6ca519d07c6b"} Feb 16 17:42:46.565504 master-0 kubenswrapper[4652]: I0216 17:42:46.565468 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dd98456c9-m47zr" Feb 16 17:42:46.566043 master-0 kubenswrapper[4652]: I0216 17:42:46.565472 4652 scope.go:117] "RemoveContainer" containerID="3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f" Feb 16 17:42:46.578303 master-0 kubenswrapper[4652]: I0216 17:42:46.577664 4652 generic.go:334] "Generic (PLEG): container finished" podID="f1048855-86d3-4f6a-b538-a53b51711bce" containerID="625a3d084c36034a7e26b3f97434383c5e8e5aac1e54d3059e865a7ba8559aac" exitCode=0 Feb 16 17:42:46.578303 master-0 kubenswrapper[4652]: I0216 17:42:46.577788 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" event={"ID":"f1048855-86d3-4f6a-b538-a53b51711bce","Type":"ContainerDied","Data":"625a3d084c36034a7e26b3f97434383c5e8e5aac1e54d3059e865a7ba8559aac"} Feb 16 17:42:46.578303 master-0 kubenswrapper[4652]: I0216 17:42:46.577816 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" event={"ID":"f1048855-86d3-4f6a-b538-a53b51711bce","Type":"ContainerStarted","Data":"73b729266acb21df6f54861ed13545fe83c93043040ccca0c63cd758c61cedcd"} Feb 16 17:42:46.585428 master-0 kubenswrapper[4652]: I0216 17:42:46.585198 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-api-0" event={"ID":"c7604569-ce06-4b64-873c-445738b11cff","Type":"ContainerStarted","Data":"c9583a9e4ae9d825bf3477d4568c4a9640fd63d2641979ecb8d993a97fbdcb15"} Feb 16 17:42:46.588053 master-0 kubenswrapper[4652]: I0216 17:42:46.588007 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f9d86cdb-h58xd" event={"ID":"d4f31917-de6f-4a2d-a7ec-14023e52f58d","Type":"ContainerStarted","Data":"c32aeb76122c0938b82f34b69a7287e62d3455243fef801010c5cfe9f22a7c19"} Feb 16 17:42:46.622074 master-0 kubenswrapper[4652]: I0216 17:42:46.622031 4652 scope.go:117] "RemoveContainer" containerID="33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601" Feb 16 17:42:46.684273 master-0 kubenswrapper[4652]: I0216 17:42:46.681217 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dd98456c9-m47zr"] Feb 16 17:42:46.696654 master-0 kubenswrapper[4652]: I0216 17:42:46.696604 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dd98456c9-m47zr"] Feb 16 17:42:46.765322 master-0 kubenswrapper[4652]: I0216 17:42:46.763134 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de3fc756-8d16-4d85-8068-0d667549b93a" path="/var/lib/kubelet/pods/de3fc756-8d16-4d85-8068-0d667549b93a/volumes" Feb 16 17:42:46.766415 master-0 kubenswrapper[4652]: I0216 17:42:46.766379 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf43401-bc7c-428b-bc43-30799491a116" path="/var/lib/kubelet/pods/ebf43401-bc7c-428b-bc43-30799491a116/volumes" Feb 16 17:42:46.831951 master-0 kubenswrapper[4652]: I0216 17:42:46.831914 4652 scope.go:117] "RemoveContainer" containerID="3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f" Feb 16 17:42:46.832538 master-0 kubenswrapper[4652]: E0216 17:42:46.832487 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f\": container with ID starting with 3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f not found: ID does not exist" containerID="3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f" 
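[Editor's note] The NotFound error above is benign: the scope.go RemoveContainer path ran after CRI-O had already removed container 3a7f934dd838…, so the status lookup finds nothing, and the "DeleteContainer returned error" message that follows reports the same already-gone condition. A minimal Go sketch of this idempotent-cleanup pattern, assuming a gRPC-backed runtime and a hypothetical removeIfPresent helper (this is an illustration, not the kubelet's actual code):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIfPresent is a hypothetical helper, not kubelet source: it wraps
    // whatever RPC actually deletes a container and treats a gRPC NotFound
    // from the runtime as success, because NotFound here means the container
    // is already gone and the cleanup has nothing left to do.
    func removeIfPresent(id string, remove func(string) error) error {
    	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
    		return fmt.Errorf("removing container %s: %w", id, err)
    	}
    	return nil
    }

    func main() {
    	// Simulate the runtime's answer seen in the log: the ID no longer exists.
    	notFound := status.Error(codes.NotFound, "ID does not exist")
    	err := removeIfPresent("3a7f934dd838", func(string) error { return notFound })
    	fmt.Println("cleanup error:", err) // <nil>: already-removed is a no-op
    }

Read this way, the paired E-level "ContainerStatus from runtime service failed" and I-level "DeleteContainer returned error" lines are noise from a race between two cleanup paths, not a failure that leaves state behind.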
Feb 16 17:42:46.832604 master-0 kubenswrapper[4652]: I0216 17:42:46.832545 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f"} err="failed to get container status \"3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f\": rpc error: code = NotFound desc = could not find container \"3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f\": container with ID starting with 3a7f934dd8388b3603b04795bc4376a4aec9538130209d42195f3c97ec82650f not found: ID does not exist" Feb 16 17:42:46.832672 master-0 kubenswrapper[4652]: I0216 17:42:46.832602 4652 scope.go:117] "RemoveContainer" containerID="33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601" Feb 16 17:42:46.833341 master-0 kubenswrapper[4652]: E0216 17:42:46.833307 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601\": container with ID starting with 33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601 not found: ID does not exist" containerID="33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601" Feb 16 17:42:46.833421 master-0 kubenswrapper[4652]: I0216 17:42:46.833346 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601"} err="failed to get container status \"33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601\": rpc error: code = NotFound desc = could not find container \"33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601\": container with ID starting with 33208c88d9087b475e020c337f2186446aaa37c22db51a2745702e258916b601 not found: ID does not exist" Feb 16 17:42:47.605185 master-0 kubenswrapper[4652]: I0216 17:42:47.604903 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" event={"ID":"f1048855-86d3-4f6a-b538-a53b51711bce","Type":"ContainerStarted","Data":"fe0cbd5fe2da30213b7f2a95b245500724cc0e206ab0c7a595299ce83f31936f"} Feb 16 17:42:47.606575 master-0 kubenswrapper[4652]: I0216 17:42:47.606527 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:47.608462 master-0 kubenswrapper[4652]: I0216 17:42:47.608407 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-api-0" event={"ID":"c7604569-ce06-4b64-873c-445738b11cff","Type":"ContainerStarted","Data":"d2b2065ad38bb2797b68b9b7a162496bda4cb6fd7b9ad021e60f4d53ff6d7ace"} Feb 16 17:42:47.612327 master-0 kubenswrapper[4652]: I0216 17:42:47.612163 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f9d86cdb-h58xd" event={"ID":"d4f31917-de6f-4a2d-a7ec-14023e52f58d","Type":"ContainerStarted","Data":"3d0602731bf458ec2a01be157b2e02706750d5d14698c9fbe4f0b1a62587c519"} Feb 16 17:42:47.612327 master-0 kubenswrapper[4652]: I0216 17:42:47.612202 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f9d86cdb-h58xd" event={"ID":"d4f31917-de6f-4a2d-a7ec-14023e52f58d","Type":"ContainerStarted","Data":"5fc807d82c3e5673a6f374ae1a575834bfeb59f1ee48fd03180d2278dec790d1"} Feb 16 17:42:47.613132 master-0 kubenswrapper[4652]: I0216 17:42:47.613100 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 
17:42:48.106714 master-0 kubenswrapper[4652]: I0216 17:42:48.106213 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" podStartSLOduration=4.106181311 podStartE2EDuration="4.106181311s" podCreationTimestamp="2026-02-16 17:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:48.077505812 +0000 UTC m=+1125.465674338" watchObservedRunningTime="2026-02-16 17:42:48.106181311 +0000 UTC m=+1125.494349817" Feb 16 17:42:48.525880 master-0 kubenswrapper[4652]: I0216 17:42:48.525810 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-66f9d86cdb-h58xd" podStartSLOduration=4.525785619 podStartE2EDuration="4.525785619s" podCreationTimestamp="2026-02-16 17:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:48.523486037 +0000 UTC m=+1125.911654573" watchObservedRunningTime="2026-02-16 17:42:48.525785619 +0000 UTC m=+1125.913954135" Feb 16 17:42:48.629230 master-0 kubenswrapper[4652]: I0216 17:42:48.629133 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-api-0" event={"ID":"c7604569-ce06-4b64-873c-445738b11cff","Type":"ContainerStarted","Data":"bdcce87abe8254a3887cf7d46b582b2a5981e9382596f0e3b08df9f1678be6d5"} Feb 16 17:42:49.456819 master-0 kubenswrapper[4652]: I0216 17:42:49.456751 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:49.520975 master-0 kubenswrapper[4652]: I0216 17:42:49.520855 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:49.550450 master-0 kubenswrapper[4652]: I0216 17:42:49.550394 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:49.640640 master-0 kubenswrapper[4652]: I0216 17:42:49.640589 4652 generic.go:334] "Generic (PLEG): container finished" podID="e5005365-36c1-44e2-be02-84737aa7a60a" containerID="c1c837aa92bbf8d10ed9025bffe0c521bc1a557d9b0e1ef931701d4432c0a8af" exitCode=0 Feb 16 17:42:49.641287 master-0 kubenswrapper[4652]: I0216 17:42:49.640736 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ndjf5" event={"ID":"e5005365-36c1-44e2-be02-84737aa7a60a","Type":"ContainerDied","Data":"c1c837aa92bbf8d10ed9025bffe0c521bc1a557d9b0e1ef931701d4432c0a8af"} Feb 16 17:42:49.641569 master-0 kubenswrapper[4652]: I0216 17:42:49.641541 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:50.875916 master-0 kubenswrapper[4652]: I0216 17:42:50.875823 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c34a6-api-0" podStartSLOduration=6.875801503 podStartE2EDuration="6.875801503s" podCreationTimestamp="2026-02-16 17:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:49.032662547 +0000 UTC m=+1126.420831073" watchObservedRunningTime="2026-02-16 17:42:50.875801503 +0000 UTC m=+1128.263970019" Feb 16 17:42:51.033149 master-0 kubenswrapper[4652]: I0216 17:42:51.032623 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c34a6-scheduler-0"] Feb 
16 17:42:51.033149 master-0 kubenswrapper[4652]: I0216 17:42:51.032907 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-c34a6-scheduler-0" podUID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerName="cinder-scheduler" containerID="cri-o://e4eeb696ba18e952a959884a793243b321b1c30a60e9da8f9265308dda2bc9d1" gracePeriod=30 Feb 16 17:42:51.033149 master-0 kubenswrapper[4652]: I0216 17:42:51.032971 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-c34a6-scheduler-0" podUID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerName="probe" containerID="cri-o://edf0c3be42c6e3861ca6d7f31f947964c75aa1e890616af1d2083c84e7e1d950" gracePeriod=30 Feb 16 17:42:51.183155 master-0 kubenswrapper[4652]: I0216 17:42:51.177593 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:51.290427 master-0 kubenswrapper[4652]: I0216 17:42:51.290224 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-scripts\") pod \"e5005365-36c1-44e2-be02-84737aa7a60a\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " Feb 16 17:42:51.290427 master-0 kubenswrapper[4652]: I0216 17:42:51.290344 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-config-data\") pod \"e5005365-36c1-44e2-be02-84737aa7a60a\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " Feb 16 17:42:51.290427 master-0 kubenswrapper[4652]: I0216 17:42:51.290389 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lqxh\" (UniqueName: \"kubernetes.io/projected/e5005365-36c1-44e2-be02-84737aa7a60a-kube-api-access-7lqxh\") pod \"e5005365-36c1-44e2-be02-84737aa7a60a\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " Feb 16 17:42:51.290735 master-0 kubenswrapper[4652]: I0216 17:42:51.290500 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-combined-ca-bundle\") pod \"e5005365-36c1-44e2-be02-84737aa7a60a\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " Feb 16 17:42:51.290735 master-0 kubenswrapper[4652]: I0216 17:42:51.290538 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e5005365-36c1-44e2-be02-84737aa7a60a-etc-podinfo\") pod \"e5005365-36c1-44e2-be02-84737aa7a60a\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " Feb 16 17:42:51.290735 master-0 kubenswrapper[4652]: I0216 17:42:51.290606 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e5005365-36c1-44e2-be02-84737aa7a60a-config-data-merged\") pod \"e5005365-36c1-44e2-be02-84737aa7a60a\" (UID: \"e5005365-36c1-44e2-be02-84737aa7a60a\") " Feb 16 17:42:51.291220 master-0 kubenswrapper[4652]: I0216 17:42:51.291174 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5005365-36c1-44e2-be02-84737aa7a60a-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "e5005365-36c1-44e2-be02-84737aa7a60a" (UID: "e5005365-36c1-44e2-be02-84737aa7a60a"). InnerVolumeSpecName "config-data-merged". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:42:51.293970 master-0 kubenswrapper[4652]: I0216 17:42:51.293918 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5005365-36c1-44e2-be02-84737aa7a60a-kube-api-access-7lqxh" (OuterVolumeSpecName: "kube-api-access-7lqxh") pod "e5005365-36c1-44e2-be02-84737aa7a60a" (UID: "e5005365-36c1-44e2-be02-84737aa7a60a"). InnerVolumeSpecName "kube-api-access-7lqxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:51.295122 master-0 kubenswrapper[4652]: I0216 17:42:51.295057 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e5005365-36c1-44e2-be02-84737aa7a60a-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "e5005365-36c1-44e2-be02-84737aa7a60a" (UID: "e5005365-36c1-44e2-be02-84737aa7a60a"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 17:42:51.295387 master-0 kubenswrapper[4652]: I0216 17:42:51.295346 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-scripts" (OuterVolumeSpecName: "scripts") pod "e5005365-36c1-44e2-be02-84737aa7a60a" (UID: "e5005365-36c1-44e2-be02-84737aa7a60a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:51.369307 master-0 kubenswrapper[4652]: I0216 17:42:51.366835 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-config-data" (OuterVolumeSpecName: "config-data") pod "e5005365-36c1-44e2-be02-84737aa7a60a" (UID: "e5005365-36c1-44e2-be02-84737aa7a60a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:51.392976 master-0 kubenswrapper[4652]: I0216 17:42:51.392908 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:51.392976 master-0 kubenswrapper[4652]: I0216 17:42:51.392964 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:51.392976 master-0 kubenswrapper[4652]: I0216 17:42:51.392983 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lqxh\" (UniqueName: \"kubernetes.io/projected/e5005365-36c1-44e2-be02-84737aa7a60a-kube-api-access-7lqxh\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:51.393314 master-0 kubenswrapper[4652]: I0216 17:42:51.392994 4652 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/e5005365-36c1-44e2-be02-84737aa7a60a-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:51.393314 master-0 kubenswrapper[4652]: I0216 17:42:51.393006 4652 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/e5005365-36c1-44e2-be02-84737aa7a60a-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:51.398416 master-0 kubenswrapper[4652]: I0216 17:42:51.398375 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5005365-36c1-44e2-be02-84737aa7a60a" (UID: "e5005365-36c1-44e2-be02-84737aa7a60a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:51.494906 master-0 kubenswrapper[4652]: I0216 17:42:51.494769 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5005365-36c1-44e2-be02-84737aa7a60a-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:51.664880 master-0 kubenswrapper[4652]: I0216 17:42:51.664720 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-db-sync-ndjf5" event={"ID":"e5005365-36c1-44e2-be02-84737aa7a60a","Type":"ContainerDied","Data":"32b6a640cf270fd339afc54c06d5c12c0399b0dcac619bafdd3601be3a4ca224"} Feb 16 17:42:51.664880 master-0 kubenswrapper[4652]: I0216 17:42:51.664768 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32b6a640cf270fd339afc54c06d5c12c0399b0dcac619bafdd3601be3a4ca224" Feb 16 17:42:51.664880 master-0 kubenswrapper[4652]: I0216 17:42:51.664736 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-db-sync-ndjf5" Feb 16 17:42:51.942967 master-0 kubenswrapper[4652]: I0216 17:42:51.942905 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c34a6-backup-0"] Feb 16 17:42:51.943548 master-0 kubenswrapper[4652]: I0216 17:42:51.943307 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-c34a6-backup-0" podUID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerName="cinder-backup" containerID="cri-o://901a3cf6aefe6b0588fc42aaadc11445f06847028e31673918cd8ece3b9a6728" gracePeriod=30 Feb 16 17:42:51.943619 master-0 kubenswrapper[4652]: I0216 17:42:51.943579 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-c34a6-backup-0" podUID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerName="probe" containerID="cri-o://f51ef18270d7ffa218605c7022402385de3d816737b77ce4de126043a6d58f45" gracePeriod=30 Feb 16 17:42:52.164277 master-0 kubenswrapper[4652]: I0216 17:42:52.164184 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-859ff674f7-llnnx"] Feb 16 17:42:52.166214 master-0 kubenswrapper[4652]: E0216 17:42:52.166190 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5005365-36c1-44e2-be02-84737aa7a60a" containerName="init" Feb 16 17:42:52.166363 master-0 kubenswrapper[4652]: I0216 17:42:52.166349 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5005365-36c1-44e2-be02-84737aa7a60a" containerName="init" Feb 16 17:42:52.166777 master-0 kubenswrapper[4652]: E0216 17:42:52.166761 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5005365-36c1-44e2-be02-84737aa7a60a" containerName="ironic-db-sync" Feb 16 17:42:52.166921 master-0 kubenswrapper[4652]: I0216 17:42:52.166907 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5005365-36c1-44e2-be02-84737aa7a60a" containerName="ironic-db-sync" Feb 16 17:42:52.167049 master-0 kubenswrapper[4652]: E0216 17:42:52.167035 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3fc756-8d16-4d85-8068-0d667549b93a" containerName="dnsmasq-dns" Feb 16 17:42:52.167133 master-0 kubenswrapper[4652]: I0216 17:42:52.167121 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3fc756-8d16-4d85-8068-0d667549b93a" containerName="dnsmasq-dns" Feb 16 17:42:52.167281 master-0 kubenswrapper[4652]: E0216 17:42:52.167267 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de3fc756-8d16-4d85-8068-0d667549b93a" containerName="init" Feb 16 17:42:52.167450 master-0 kubenswrapper[4652]: I0216 17:42:52.167437 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="de3fc756-8d16-4d85-8068-0d667549b93a" containerName="init" Feb 16 17:42:52.168378 master-0 kubenswrapper[4652]: I0216 17:42:52.168357 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="de3fc756-8d16-4d85-8068-0d667549b93a" containerName="dnsmasq-dns" Feb 16 17:42:52.168572 master-0 kubenswrapper[4652]: I0216 17:42:52.168555 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5005365-36c1-44e2-be02-84737aa7a60a" containerName="ironic-db-sync" Feb 16 17:42:52.171352 master-0 kubenswrapper[4652]: I0216 17:42:52.171271 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.174397 master-0 kubenswrapper[4652]: I0216 17:42:52.174359 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 17:42:52.175182 master-0 kubenswrapper[4652]: I0216 17:42:52.175154 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 17:42:52.215428 master-0 kubenswrapper[4652]: I0216 17:42:52.215355 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-ovndb-tls-certs\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.215658 master-0 kubenswrapper[4652]: I0216 17:42:52.215435 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-internal-tls-certs\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.215658 master-0 kubenswrapper[4652]: I0216 17:42:52.215528 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-httpd-config\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.215658 master-0 kubenswrapper[4652]: I0216 17:42:52.215624 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-config\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.215933 master-0 kubenswrapper[4652]: I0216 17:42:52.215770 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-public-tls-certs\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.215933 master-0 kubenswrapper[4652]: I0216 17:42:52.215834 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-combined-ca-bundle\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.215933 master-0 kubenswrapper[4652]: I0216 17:42:52.215873 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp2cw\" (UniqueName: \"kubernetes.io/projected/ff7081eb-1b20-430a-a1ee-fa889b9acefd-kube-api-access-qp2cw\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.313359 master-0 kubenswrapper[4652]: I0216 17:42:52.313158 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c34a6-volume-lvm-iscsi-0"] Feb 16 17:42:52.313581 master-0 kubenswrapper[4652]: I0216 
17:42:52.313551 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" podUID="7430db54-9280-474e-b84f-bddab08df2d2" containerName="cinder-volume" containerID="cri-o://74d5e3a453b4b28f53163373eec0c47b54c469fe1ecbc08861bfe8f776973aa4" gracePeriod=30 Feb 16 17:42:52.317415 master-0 kubenswrapper[4652]: I0216 17:42:52.313735 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" podUID="7430db54-9280-474e-b84f-bddab08df2d2" containerName="probe" containerID="cri-o://94e4d2d7d338f607835bfdd0aab259b621394f3a20816db842efe8432c21ae55" gracePeriod=30 Feb 16 17:42:52.320086 master-0 kubenswrapper[4652]: I0216 17:42:52.318308 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-config\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.320086 master-0 kubenswrapper[4652]: I0216 17:42:52.318409 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-public-tls-certs\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.320086 master-0 kubenswrapper[4652]: I0216 17:42:52.319012 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-combined-ca-bundle\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.320086 master-0 kubenswrapper[4652]: I0216 17:42:52.319050 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp2cw\" (UniqueName: \"kubernetes.io/projected/ff7081eb-1b20-430a-a1ee-fa889b9acefd-kube-api-access-qp2cw\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.320086 master-0 kubenswrapper[4652]: I0216 17:42:52.319123 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-ovndb-tls-certs\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.320086 master-0 kubenswrapper[4652]: I0216 17:42:52.319145 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-internal-tls-certs\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.320086 master-0 kubenswrapper[4652]: I0216 17:42:52.319203 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-httpd-config\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.329779 master-0 kubenswrapper[4652]: I0216 17:42:52.327832 4652 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/neutron-859ff674f7-llnnx"] Feb 16 17:42:52.329779 master-0 kubenswrapper[4652]: I0216 17:42:52.329195 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-config\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.331096 master-0 kubenswrapper[4652]: I0216 17:42:52.330927 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-httpd-config\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.333520 master-0 kubenswrapper[4652]: I0216 17:42:52.333446 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-ovndb-tls-certs\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.340623 master-0 kubenswrapper[4652]: I0216 17:42:52.340230 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-public-tls-certs\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.358219 master-0 kubenswrapper[4652]: I0216 17:42:52.358120 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-combined-ca-bundle\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.368289 master-0 kubenswrapper[4652]: I0216 17:42:52.366959 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ff7081eb-1b20-430a-a1ee-fa889b9acefd-internal-tls-certs\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.682014 master-0 kubenswrapper[4652]: I0216 17:42:52.681938 4652 generic.go:334] "Generic (PLEG): container finished" podID="7430db54-9280-474e-b84f-bddab08df2d2" containerID="74d5e3a453b4b28f53163373eec0c47b54c469fe1ecbc08861bfe8f776973aa4" exitCode=0 Feb 16 17:42:52.682694 master-0 kubenswrapper[4652]: I0216 17:42:52.682037 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" event={"ID":"7430db54-9280-474e-b84f-bddab08df2d2","Type":"ContainerDied","Data":"74d5e3a453b4b28f53163373eec0c47b54c469fe1ecbc08861bfe8f776973aa4"} Feb 16 17:42:52.685400 master-0 kubenswrapper[4652]: I0216 17:42:52.685333 4652 generic.go:334] "Generic (PLEG): container finished" podID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerID="edf0c3be42c6e3861ca6d7f31f947964c75aa1e890616af1d2083c84e7e1d950" exitCode=0 Feb 16 17:42:52.685400 master-0 kubenswrapper[4652]: I0216 17:42:52.685365 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-scheduler-0" event={"ID":"9e0296d1-dc25-4d4d-a617-3d1354eadb6f","Type":"ContainerDied","Data":"edf0c3be42c6e3861ca6d7f31f947964c75aa1e890616af1d2083c84e7e1d950"} Feb 16 
17:42:52.774461 master-0 kubenswrapper[4652]: I0216 17:42:52.774365 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp2cw\" (UniqueName: \"kubernetes.io/projected/ff7081eb-1b20-430a-a1ee-fa889b9acefd-kube-api-access-qp2cw\") pod \"neutron-859ff674f7-llnnx\" (UID: \"ff7081eb-1b20-430a-a1ee-fa889b9acefd\") " pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:52.803354 master-0 kubenswrapper[4652]: I0216 17:42:52.791535 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:53.692282 master-0 kubenswrapper[4652]: I0216 17:42:53.688944 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-859ff674f7-llnnx"] Feb 16 17:42:53.700308 master-0 kubenswrapper[4652]: I0216 17:42:53.698920 4652 generic.go:334] "Generic (PLEG): container finished" podID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerID="f51ef18270d7ffa218605c7022402385de3d816737b77ce4de126043a6d58f45" exitCode=0 Feb 16 17:42:53.700308 master-0 kubenswrapper[4652]: I0216 17:42:53.698993 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-backup-0" event={"ID":"f1339fd1-e639-4057-bb76-3ad09c3000fe","Type":"ContainerDied","Data":"f51ef18270d7ffa218605c7022402385de3d816737b77ce4de126043a6d58f45"} Feb 16 17:42:53.702052 master-0 kubenswrapper[4652]: I0216 17:42:53.701757 4652 generic.go:334] "Generic (PLEG): container finished" podID="7430db54-9280-474e-b84f-bddab08df2d2" containerID="94e4d2d7d338f607835bfdd0aab259b621394f3a20816db842efe8432c21ae55" exitCode=0 Feb 16 17:42:53.702052 master-0 kubenswrapper[4652]: I0216 17:42:53.701810 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" event={"ID":"7430db54-9280-474e-b84f-bddab08df2d2","Type":"ContainerDied","Data":"94e4d2d7d338f607835bfdd0aab259b621394f3a20816db842efe8432c21ae55"} Feb 16 17:42:54.104017 master-0 kubenswrapper[4652]: I0216 17:42:54.102823 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-create-m4w4d"] Feb 16 17:42:54.104412 master-0 kubenswrapper[4652]: I0216 17:42:54.104371 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:54.240840 master-0 kubenswrapper[4652]: I0216 17:42:54.240622 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-m4w4d"] Feb 16 17:42:54.250552 master-0 kubenswrapper[4652]: I0216 17:42:54.250377 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-neutron-agent-6975fcc79b-5wclc"] Feb 16 17:42:54.252894 master-0 kubenswrapper[4652]: I0216 17:42:54.251906 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82jrc\" (UniqueName: \"kubernetes.io/projected/109e606b-e77f-4512-957a-77f228cd55ed-kube-api-access-82jrc\") pod \"ironic-inspector-db-create-m4w4d\" (UID: \"109e606b-e77f-4512-957a-77f228cd55ed\") " pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:54.252894 master-0 kubenswrapper[4652]: I0216 17:42:54.252397 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/109e606b-e77f-4512-957a-77f228cd55ed-operator-scripts\") pod \"ironic-inspector-db-create-m4w4d\" (UID: \"109e606b-e77f-4512-957a-77f228cd55ed\") " pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:54.263646 master-0 kubenswrapper[4652]: I0216 17:42:54.262860 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.269147 master-0 kubenswrapper[4652]: I0216 17:42:54.269064 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-ironic-neutron-agent-config-data" Feb 16 17:42:54.317617 master-0 kubenswrapper[4652]: I0216 17:42:54.317558 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-6975fcc79b-5wclc"] Feb 16 17:42:54.374135 master-0 kubenswrapper[4652]: I0216 17:42:54.358928 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8f3751fd-c328-4914-8e15-a14ad13a527d-config\") pod \"ironic-neutron-agent-6975fcc79b-5wclc\" (UID: \"8f3751fd-c328-4914-8e15-a14ad13a527d\") " pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.374135 master-0 kubenswrapper[4652]: I0216 17:42:54.359003 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvsk\" (UniqueName: \"kubernetes.io/projected/8f3751fd-c328-4914-8e15-a14ad13a527d-kube-api-access-rpvsk\") pod \"ironic-neutron-agent-6975fcc79b-5wclc\" (UID: \"8f3751fd-c328-4914-8e15-a14ad13a527d\") " pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.374135 master-0 kubenswrapper[4652]: I0216 17:42:54.359054 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/109e606b-e77f-4512-957a-77f228cd55ed-operator-scripts\") pod \"ironic-inspector-db-create-m4w4d\" (UID: \"109e606b-e77f-4512-957a-77f228cd55ed\") " pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:54.374135 master-0 kubenswrapper[4652]: I0216 17:42:54.360001 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82jrc\" (UniqueName: \"kubernetes.io/projected/109e606b-e77f-4512-957a-77f228cd55ed-kube-api-access-82jrc\") pod \"ironic-inspector-db-create-m4w4d\" (UID: \"109e606b-e77f-4512-957a-77f228cd55ed\") " 
pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:54.374135 master-0 kubenswrapper[4652]: I0216 17:42:54.360114 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3751fd-c328-4914-8e15-a14ad13a527d-combined-ca-bundle\") pod \"ironic-neutron-agent-6975fcc79b-5wclc\" (UID: \"8f3751fd-c328-4914-8e15-a14ad13a527d\") " pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.374135 master-0 kubenswrapper[4652]: I0216 17:42:54.361515 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/109e606b-e77f-4512-957a-77f228cd55ed-operator-scripts\") pod \"ironic-inspector-db-create-m4w4d\" (UID: \"109e606b-e77f-4512-957a-77f228cd55ed\") " pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:54.374135 master-0 kubenswrapper[4652]: I0216 17:42:54.361932 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-e5ec-account-create-update-nr7fv"] Feb 16 17:42:54.374135 master-0 kubenswrapper[4652]: I0216 17:42:54.363327 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:42:54.374628 master-0 kubenswrapper[4652]: I0216 17:42:54.374494 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-db-secret" Feb 16 17:42:54.398562 master-0 kubenswrapper[4652]: I0216 17:42:54.398501 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82jrc\" (UniqueName: \"kubernetes.io/projected/109e606b-e77f-4512-957a-77f228cd55ed-kube-api-access-82jrc\") pod \"ironic-inspector-db-create-m4w4d\" (UID: \"109e606b-e77f-4512-957a-77f228cd55ed\") " pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:54.398795 master-0 kubenswrapper[4652]: I0216 17:42:54.398575 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-e5ec-account-create-update-nr7fv"] Feb 16 17:42:54.410910 master-0 kubenswrapper[4652]: I0216 17:42:54.410867 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547dcb69f9-nqbv9"] Feb 16 17:42:54.411301 master-0 kubenswrapper[4652]: I0216 17:42:54.411213 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" podUID="f1048855-86d3-4f6a-b538-a53b51711bce" containerName="dnsmasq-dns" containerID="cri-o://fe0cbd5fe2da30213b7f2a95b245500724cc0e206ab0c7a595299ce83f31936f" gracePeriod=10 Feb 16 17:42:54.412728 master-0 kubenswrapper[4652]: I0216 17:42:54.412690 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:54.413188 master-0 kubenswrapper[4652]: I0216 17:42:54.413130 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:54.435445 master-0 kubenswrapper[4652]: I0216 17:42:54.435225 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:54.463065 master-0 kubenswrapper[4652]: I0216 17:42:54.462867 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-machine-id\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.463201 master-0 kubenswrapper[4652]: I0216 17:42:54.463088 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data-custom\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.463201 master-0 kubenswrapper[4652]: I0216 17:42:54.463163 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-brick\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.463201 master-0 kubenswrapper[4652]: I0216 17:42:54.463186 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-nvme\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.463379 master-0 kubenswrapper[4652]: I0216 17:42:54.463210 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-combined-ca-bundle\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.463379 master-0 kubenswrapper[4652]: I0216 17:42:54.463350 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-lib-cinder\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.463379 master-0 kubenswrapper[4652]: I0216 17:42:54.463367 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-sys\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.463407 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-dev\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.463482 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdwfv\" (UniqueName: \"kubernetes.io/projected/7430db54-9280-474e-b84f-bddab08df2d2-kube-api-access-hdwfv\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.463497 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-run\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.463497 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.463513 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-lib-modules\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.463571 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-cinder\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.463651 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-iscsi\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.463688 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.463737 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-scripts\") pod \"7430db54-9280-474e-b84f-bddab08df2d2\" (UID: \"7430db54-9280-474e-b84f-bddab08df2d2\") " Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.464365 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.464380 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-sys" (OuterVolumeSpecName: "sys") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.464403 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-dev" (OuterVolumeSpecName: "dev") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.464474 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.464488 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.464524 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-run" (OuterVolumeSpecName: "run") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.464572 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.464600 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.464783 master-0 kubenswrapper[4652]: I0216 17:42:54.464655 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465215 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3751fd-c328-4914-8e15-a14ad13a527d-combined-ca-bundle\") pod \"ironic-neutron-agent-6975fcc79b-5wclc\" (UID: \"8f3751fd-c328-4914-8e15-a14ad13a527d\") " pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465346 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205faff9-2936-475b-9fce-f11ab722187e-operator-scripts\") pod \"ironic-inspector-e5ec-account-create-update-nr7fv\" (UID: \"205faff9-2936-475b-9fce-f11ab722187e\") " pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465514 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8f3751fd-c328-4914-8e15-a14ad13a527d-config\") pod \"ironic-neutron-agent-6975fcc79b-5wclc\" (UID: \"8f3751fd-c328-4914-8e15-a14ad13a527d\") " pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465558 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpvsk\" (UniqueName: \"kubernetes.io/projected/8f3751fd-c328-4914-8e15-a14ad13a527d-kube-api-access-rpvsk\") pod \"ironic-neutron-agent-6975fcc79b-5wclc\" (UID: \"8f3751fd-c328-4914-8e15-a14ad13a527d\") " pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465639 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7rx\" (UniqueName: \"kubernetes.io/projected/205faff9-2936-475b-9fce-f11ab722187e-kube-api-access-sj7rx\") pod \"ironic-inspector-e5ec-account-create-update-nr7fv\" (UID: \"205faff9-2936-475b-9fce-f11ab722187e\") " pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465737 4652 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465749 4652 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-sys\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465757 4652 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-dev\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465766 4652 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-run\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465775 4652 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465783 4652 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465791 4652 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465799 4652 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465808 4652 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.466768 master-0 kubenswrapper[4652]: I0216 17:42:54.465817 4652 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7430db54-9280-474e-b84f-bddab08df2d2-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.468728 master-0 kubenswrapper[4652]: I0216 17:42:54.468685 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-scripts" (OuterVolumeSpecName: "scripts") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:54.471348 master-0 kubenswrapper[4652]: I0216 17:42:54.470766 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f3751fd-c328-4914-8e15-a14ad13a527d-combined-ca-bundle\") pod \"ironic-neutron-agent-6975fcc79b-5wclc\" (UID: \"8f3751fd-c328-4914-8e15-a14ad13a527d\") " pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.471348 master-0 kubenswrapper[4652]: I0216 17:42:54.470923 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:54.471348 master-0 kubenswrapper[4652]: I0216 17:42:54.471206 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7430db54-9280-474e-b84f-bddab08df2d2-kube-api-access-hdwfv" (OuterVolumeSpecName: "kube-api-access-hdwfv") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "kube-api-access-hdwfv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:54.471800 master-0 kubenswrapper[4652]: I0216 17:42:54.471735 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8f3751fd-c328-4914-8e15-a14ad13a527d-config\") pod \"ironic-neutron-agent-6975fcc79b-5wclc\" (UID: \"8f3751fd-c328-4914-8e15-a14ad13a527d\") " pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.546768 master-0 kubenswrapper[4652]: I0216 17:42:54.546708 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:54.568123 master-0 kubenswrapper[4652]: I0216 17:42:54.568046 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205faff9-2936-475b-9fce-f11ab722187e-operator-scripts\") pod \"ironic-inspector-e5ec-account-create-update-nr7fv\" (UID: \"205faff9-2936-475b-9fce-f11ab722187e\") " pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:42:54.568435 master-0 kubenswrapper[4652]: I0216 17:42:54.568371 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj7rx\" (UniqueName: \"kubernetes.io/projected/205faff9-2936-475b-9fce-f11ab722187e-kube-api-access-sj7rx\") pod \"ironic-inspector-e5ec-account-create-update-nr7fv\" (UID: \"205faff9-2936-475b-9fce-f11ab722187e\") " pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:42:54.568768 master-0 kubenswrapper[4652]: I0216 17:42:54.568625 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.568768 master-0 kubenswrapper[4652]: I0216 17:42:54.568671 4652 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.568768 master-0 kubenswrapper[4652]: I0216 17:42:54.568686 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.568768 master-0 kubenswrapper[4652]: I0216 17:42:54.568699 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdwfv\" (UniqueName: \"kubernetes.io/projected/7430db54-9280-474e-b84f-bddab08df2d2-kube-api-access-hdwfv\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.569053 master-0 kubenswrapper[4652]: I0216 17:42:54.569005 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205faff9-2936-475b-9fce-f11ab722187e-operator-scripts\") pod \"ironic-inspector-e5ec-account-create-update-nr7fv\" (UID: \"205faff9-2936-475b-9fce-f11ab722187e\") " pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:42:54.623346 master-0 kubenswrapper[4652]: I0216 17:42:54.584482 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data" (OuterVolumeSpecName: "config-data") pod "7430db54-9280-474e-b84f-bddab08df2d2" (UID: "7430db54-9280-474e-b84f-bddab08df2d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:54.671078 master-0 kubenswrapper[4652]: I0216 17:42:54.671024 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7430db54-9280-474e-b84f-bddab08df2d2-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:54.702430 master-0 kubenswrapper[4652]: I0216 17:42:54.700230 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj7rx\" (UniqueName: \"kubernetes.io/projected/205faff9-2936-475b-9fce-f11ab722187e-kube-api-access-sj7rx\") pod \"ironic-inspector-e5ec-account-create-update-nr7fv\" (UID: \"205faff9-2936-475b-9fce-f11ab722187e\") " pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:42:54.709078 master-0 kubenswrapper[4652]: I0216 17:42:54.709008 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpvsk\" (UniqueName: \"kubernetes.io/projected/8f3751fd-c328-4914-8e15-a14ad13a527d-kube-api-access-rpvsk\") pod \"ironic-neutron-agent-6975fcc79b-5wclc\" (UID: \"8f3751fd-c328-4914-8e15-a14ad13a527d\") " pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.726727 master-0 kubenswrapper[4652]: I0216 17:42:54.724771 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:42:54.732658 master-0 kubenswrapper[4652]: I0216 17:42:54.732596 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" event={"ID":"7430db54-9280-474e-b84f-bddab08df2d2","Type":"ContainerDied","Data":"b77d650bf007b6292d41f4d599f8b338a4eae00583fe663954a73d2b47a0e27b"} Feb 16 17:42:54.732658 master-0 kubenswrapper[4652]: I0216 17:42:54.732657 4652 scope.go:117] "RemoveContainer" containerID="94e4d2d7d338f607835bfdd0aab259b621394f3a20816db842efe8432c21ae55" Feb 16 17:42:54.732931 master-0 kubenswrapper[4652]: I0216 17:42:54.732848 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:54.764844 master-0 kubenswrapper[4652]: I0216 17:42:54.764785 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:42:54.776374 master-0 kubenswrapper[4652]: I0216 17:42:54.776158 4652 generic.go:334] "Generic (PLEG): container finished" podID="f1048855-86d3-4f6a-b538-a53b51711bce" containerID="fe0cbd5fe2da30213b7f2a95b245500724cc0e206ab0c7a595299ce83f31936f" exitCode=0 Feb 16 17:42:54.779355 master-0 kubenswrapper[4652]: I0216 17:42:54.779293 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:42:54.779355 master-0 kubenswrapper[4652]: I0216 17:42:54.779331 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-859ff674f7-llnnx" event={"ID":"ff7081eb-1b20-430a-a1ee-fa889b9acefd","Type":"ContainerStarted","Data":"72beb7bc6f2fce8e1d80d864918a7ddd7a0331c9f0ac1d738ac042893fce62e6"} Feb 16 17:42:54.779355 master-0 kubenswrapper[4652]: I0216 17:42:54.779348 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-859ff674f7-llnnx" event={"ID":"ff7081eb-1b20-430a-a1ee-fa889b9acefd","Type":"ContainerStarted","Data":"0b5789709b98a9fd1452f99db63c0895639c3bc339da9ccdfb586e6e17b4e4b2"} Feb 16 17:42:54.779355 master-0 kubenswrapper[4652]: I0216 17:42:54.779357 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-859ff674f7-llnnx" event={"ID":"ff7081eb-1b20-430a-a1ee-fa889b9acefd","Type":"ContainerStarted","Data":"1cf3c6c4604611141aad91631fca1d2d60ca633a6ffbd7bcaeb79e4a100f1398"} Feb 16 17:42:54.779554 master-0 kubenswrapper[4652]: I0216 17:42:54.779368 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" event={"ID":"f1048855-86d3-4f6a-b538-a53b51711bce","Type":"ContainerDied","Data":"fe0cbd5fe2da30213b7f2a95b245500724cc0e206ab0c7a595299ce83f31936f"} Feb 16 17:42:54.787962 master-0 kubenswrapper[4652]: I0216 17:42:54.787868 4652 scope.go:117] "RemoveContainer" containerID="74d5e3a453b4b28f53163373eec0c47b54c469fe1ecbc08861bfe8f776973aa4" Feb 16 17:42:55.198381 master-0 kubenswrapper[4652]: I0216 17:42:55.198237 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ffcb9997-88bvh"] Feb 16 17:42:55.206713 master-0 kubenswrapper[4652]: E0216 17:42:55.206566 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7430db54-9280-474e-b84f-bddab08df2d2" containerName="probe" Feb 16 17:42:55.206713 master-0 kubenswrapper[4652]: I0216 17:42:55.206613 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="7430db54-9280-474e-b84f-bddab08df2d2" containerName="probe" Feb 16 17:42:55.206713 master-0 kubenswrapper[4652]: E0216 17:42:55.206642 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7430db54-9280-474e-b84f-bddab08df2d2" containerName="cinder-volume" Feb 16 17:42:55.206713 master-0 kubenswrapper[4652]: I0216 17:42:55.206652 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="7430db54-9280-474e-b84f-bddab08df2d2" containerName="cinder-volume" Feb 16 17:42:55.207330 master-0 kubenswrapper[4652]: I0216 17:42:55.207220 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="7430db54-9280-474e-b84f-bddab08df2d2" containerName="cinder-volume" Feb 16 17:42:55.207330 master-0 kubenswrapper[4652]: I0216 17:42:55.207279 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="7430db54-9280-474e-b84f-bddab08df2d2" containerName="probe" Feb 16 17:42:55.216390 master-0 kubenswrapper[4652]: I0216 17:42:55.215550 4652 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.307994 master-0 kubenswrapper[4652]: I0216 17:42:55.304350 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ffcb9997-88bvh"] Feb 16 17:42:55.330492 master-0 kubenswrapper[4652]: I0216 17:42:55.330450 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-config\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.330717 master-0 kubenswrapper[4652]: I0216 17:42:55.330694 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-swift-storage-0\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.330826 master-0 kubenswrapper[4652]: I0216 17:42:55.330809 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-nb\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.331393 master-0 kubenswrapper[4652]: I0216 17:42:55.331370 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lbn5\" (UniqueName: \"kubernetes.io/projected/c8c13c9e-779c-40f4-b478-b6c3d5baf083-kube-api-access-7lbn5\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.337753 master-0 kubenswrapper[4652]: I0216 17:42:55.337696 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-svc\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.337984 master-0 kubenswrapper[4652]: I0216 17:42:55.337960 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-sb\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.350690 master-0 kubenswrapper[4652]: I0216 17:42:55.350643 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-create-m4w4d"] Feb 16 17:42:55.353322 master-0 kubenswrapper[4652]: I0216 17:42:55.353218 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-859ff674f7-llnnx" podStartSLOduration=4.353181323 podStartE2EDuration="4.353181323s" podCreationTimestamp="2026-02-16 17:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:55.093369309 +0000 UTC m=+1132.481537825" watchObservedRunningTime="2026-02-16 17:42:55.353181323 +0000 UTC m=+1132.741349839" Feb 16 17:42:55.419346 
master-0 kubenswrapper[4652]: I0216 17:42:55.416019 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c34a6-volume-lvm-iscsi-0"] Feb 16 17:42:55.476335 master-0 kubenswrapper[4652]: I0216 17:42:55.475918 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lbn5\" (UniqueName: \"kubernetes.io/projected/c8c13c9e-779c-40f4-b478-b6c3d5baf083-kube-api-access-7lbn5\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.476335 master-0 kubenswrapper[4652]: I0216 17:42:55.476094 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-svc\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.476335 master-0 kubenswrapper[4652]: I0216 17:42:55.476143 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-sb\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.476335 master-0 kubenswrapper[4652]: I0216 17:42:55.476205 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-config\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.476335 master-0 kubenswrapper[4652]: I0216 17:42:55.476235 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-swift-storage-0\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.477444 master-0 kubenswrapper[4652]: I0216 17:42:55.477371 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-nb\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.478936 master-0 kubenswrapper[4652]: I0216 17:42:55.478897 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-nb\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.482485 master-0 kubenswrapper[4652]: I0216 17:42:55.480861 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-sb\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.482485 master-0 kubenswrapper[4652]: I0216 17:42:55.481545 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-config\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.482485 master-0 kubenswrapper[4652]: I0216 17:42:55.482131 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-swift-storage-0\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.488459 master-0 kubenswrapper[4652]: I0216 17:42:55.488409 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:55.489774 master-0 kubenswrapper[4652]: I0216 17:42:55.489748 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-svc\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.550305 master-0 kubenswrapper[4652]: I0216 17:42:55.550231 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c34a6-volume-lvm-iscsi-0"] Feb 16 17:42:55.568343 master-0 kubenswrapper[4652]: I0216 17:42:55.568307 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lbn5\" (UniqueName: \"kubernetes.io/projected/c8c13c9e-779c-40f4-b478-b6c3d5baf083-kube-api-access-7lbn5\") pod \"dnsmasq-dns-85ffcb9997-88bvh\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.589749 master-0 kubenswrapper[4652]: I0216 17:42:55.589701 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-sb\") pod \"f1048855-86d3-4f6a-b538-a53b51711bce\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " Feb 16 17:42:55.590098 master-0 kubenswrapper[4652]: I0216 17:42:55.590079 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-swift-storage-0\") pod \"f1048855-86d3-4f6a-b538-a53b51711bce\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " Feb 16 17:42:55.590260 master-0 kubenswrapper[4652]: I0216 17:42:55.590226 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2tfn\" (UniqueName: \"kubernetes.io/projected/f1048855-86d3-4f6a-b538-a53b51711bce-kube-api-access-w2tfn\") pod \"f1048855-86d3-4f6a-b538-a53b51711bce\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " Feb 16 17:42:55.590492 master-0 kubenswrapper[4652]: I0216 17:42:55.590473 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-nb\") pod \"f1048855-86d3-4f6a-b538-a53b51711bce\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " Feb 16 17:42:55.590770 master-0 kubenswrapper[4652]: I0216 17:42:55.590752 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-config\") pod 
\"f1048855-86d3-4f6a-b538-a53b51711bce\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " Feb 16 17:42:55.590958 master-0 kubenswrapper[4652]: I0216 17:42:55.590942 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-svc\") pod \"f1048855-86d3-4f6a-b538-a53b51711bce\" (UID: \"f1048855-86d3-4f6a-b538-a53b51711bce\") " Feb 16 17:42:55.630387 master-0 kubenswrapper[4652]: I0216 17:42:55.623054 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1048855-86d3-4f6a-b538-a53b51711bce-kube-api-access-w2tfn" (OuterVolumeSpecName: "kube-api-access-w2tfn") pod "f1048855-86d3-4f6a-b538-a53b51711bce" (UID: "f1048855-86d3-4f6a-b538-a53b51711bce"). InnerVolumeSpecName "kube-api-access-w2tfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:55.694392 master-0 kubenswrapper[4652]: I0216 17:42:55.694334 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-79d877c778-jztbq"] Feb 16 17:42:55.694659 master-0 kubenswrapper[4652]: I0216 17:42:55.694382 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2tfn\" (UniqueName: \"kubernetes.io/projected/f1048855-86d3-4f6a-b538-a53b51711bce-kube-api-access-w2tfn\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:55.695311 master-0 kubenswrapper[4652]: E0216 17:42:55.695278 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1048855-86d3-4f6a-b538-a53b51711bce" containerName="dnsmasq-dns" Feb 16 17:42:55.695311 master-0 kubenswrapper[4652]: I0216 17:42:55.695308 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1048855-86d3-4f6a-b538-a53b51711bce" containerName="dnsmasq-dns" Feb 16 17:42:55.695384 master-0 kubenswrapper[4652]: E0216 17:42:55.695345 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1048855-86d3-4f6a-b538-a53b51711bce" containerName="init" Feb 16 17:42:55.695384 master-0 kubenswrapper[4652]: I0216 17:42:55.695355 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1048855-86d3-4f6a-b538-a53b51711bce" containerName="init" Feb 16 17:42:55.695746 master-0 kubenswrapper[4652]: I0216 17:42:55.695717 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1048855-86d3-4f6a-b538-a53b51711bce" containerName="dnsmasq-dns" Feb 16 17:42:55.698044 master-0 kubenswrapper[4652]: I0216 17:42:55.698007 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.703008 master-0 kubenswrapper[4652]: I0216 17:42:55.702948 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-transport" Feb 16 17:42:55.703629 master-0 kubenswrapper[4652]: I0216 17:42:55.703156 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-config-data" Feb 16 17:42:55.703629 master-0 kubenswrapper[4652]: I0216 17:42:55.703321 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 17:42:55.704365 master-0 kubenswrapper[4652]: I0216 17:42:55.704349 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-scripts" Feb 16 17:42:55.709128 master-0 kubenswrapper[4652]: I0216 17:42:55.709072 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-79d877c778-jztbq"] Feb 16 17:42:55.712533 master-0 kubenswrapper[4652]: I0216 17:42:55.712508 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-api-config-data" Feb 16 17:42:55.742753 master-0 kubenswrapper[4652]: I0216 17:42:55.741307 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c34a6-volume-lvm-iscsi-0"] Feb 16 17:42:55.743544 master-0 kubenswrapper[4652]: I0216 17:42:55.743517 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.751667 master-0 kubenswrapper[4652]: I0216 17:42:55.750835 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-volume-lvm-iscsi-config-data" Feb 16 17:42:55.778576 master-0 kubenswrapper[4652]: I0216 17:42:55.776452 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-volume-lvm-iscsi-0"] Feb 16 17:42:55.806735 master-0 kubenswrapper[4652]: I0216 17:42:55.806686 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" event={"ID":"f1048855-86d3-4f6a-b538-a53b51711bce","Type":"ContainerDied","Data":"73b729266acb21df6f54861ed13545fe83c93043040ccca0c63cd758c61cedcd"} Feb 16 17:42:55.806735 master-0 kubenswrapper[4652]: I0216 17:42:55.806733 4652 scope.go:117] "RemoveContainer" containerID="fe0cbd5fe2da30213b7f2a95b245500724cc0e206ab0c7a595299ce83f31936f" Feb 16 17:42:55.806954 master-0 kubenswrapper[4652]: I0216 17:42:55.806862 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" Feb 16 17:42:55.815228 master-0 kubenswrapper[4652]: I0216 17:42:55.815180 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.815436 master-0 kubenswrapper[4652]: I0216 17:42:55.815340 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-config-data-merged\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.815436 master-0 kubenswrapper[4652]: I0216 17:42:55.815378 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data-custom\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.815509 master-0 kubenswrapper[4652]: I0216 17:42:55.815451 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4v5x\" (UniqueName: \"kubernetes.io/projected/3c06307c-938d-45f7-b671-948d93bf0642-kube-api-access-w4v5x\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.815509 master-0 kubenswrapper[4652]: I0216 17:42:55.815481 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-logs\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.815577 master-0 kubenswrapper[4652]: I0216 17:42:55.815559 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c06307c-938d-45f7-b671-948d93bf0642-etc-podinfo\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.815667 master-0 kubenswrapper[4652]: I0216 17:42:55.815646 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-scripts\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.815768 master-0 kubenswrapper[4652]: I0216 17:42:55.815748 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-combined-ca-bundle\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.821493 master-0 kubenswrapper[4652]: I0216 17:42:55.821401 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" 
event={"ID":"205faff9-2936-475b-9fce-f11ab722187e","Type":"ContainerStarted","Data":"98d20acfbea441230135471cef24d2cf59ec707c4a6f9f5f001000f1302121bd"} Feb 16 17:42:55.831221 master-0 kubenswrapper[4652]: I0216 17:42:55.831167 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" event={"ID":"8f3751fd-c328-4914-8e15-a14ad13a527d","Type":"ContainerStarted","Data":"9efe085e90d40234cde8028fac38b8ad4e897bf9aac51081271917754cdaf258"} Feb 16 17:42:55.868462 master-0 kubenswrapper[4652]: I0216 17:42:55.865801 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:42:55.868462 master-0 kubenswrapper[4652]: I0216 17:42:55.866867 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-e5ec-account-create-update-nr7fv"] Feb 16 17:42:55.892007 master-0 kubenswrapper[4652]: I0216 17:42:55.891955 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-neutron-agent-6975fcc79b-5wclc"] Feb 16 17:42:55.896244 master-0 kubenswrapper[4652]: I0216 17:42:55.895665 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-m4w4d" event={"ID":"109e606b-e77f-4512-957a-77f228cd55ed","Type":"ContainerStarted","Data":"a20d7230e12bcd94893c1f24294186910a15dda72b7b32141bdb2aedb9a735a7"} Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918669 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-combined-ca-bundle\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918760 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-run\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918779 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-var-locks-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918800 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-config-data-custom\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918828 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-lib-modules\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918851 
4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-scripts\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918871 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-dev\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918936 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-etc-machine-id\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918953 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-sys\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918979 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-var-lib-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.918999 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-etc-iscsi\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919021 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919094 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-config-data-merged\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919111 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data-custom\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " 
pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919139 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4v5x\" (UniqueName: \"kubernetes.io/projected/3c06307c-938d-45f7-b671-948d93bf0642-kube-api-access-w4v5x\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919153 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-logs\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919171 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-combined-ca-bundle\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919203 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-etc-nvme\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919261 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c06307c-938d-45f7-b671-948d93bf0642-etc-podinfo\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919317 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jnz\" (UniqueName: \"kubernetes.io/projected/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-kube-api-access-d2jnz\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.919351 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-scripts\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.920282 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-config-data-merged\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.921619 4652 generic.go:334] "Generic (PLEG): container finished" podID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerID="e4eeb696ba18e952a959884a793243b321b1c30a60e9da8f9265308dda2bc9d1" exitCode=0 Feb 16 17:42:55.924400 master-0 
kubenswrapper[4652]: I0216 17:42:55.921786 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-logs\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.922201 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-scheduler-0" event={"ID":"9e0296d1-dc25-4d4d-a617-3d1354eadb6f","Type":"ContainerDied","Data":"e4eeb696ba18e952a959884a793243b321b1c30a60e9da8f9265308dda2bc9d1"} Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.922615 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-config-data\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.924400 master-0 kubenswrapper[4652]: I0216 17:42:55.922656 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-var-locks-brick\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:55.939533 master-0 kubenswrapper[4652]: I0216 17:42:55.936006 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.939533 master-0 kubenswrapper[4652]: I0216 17:42:55.938026 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-combined-ca-bundle\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.950550 master-0 kubenswrapper[4652]: I0216 17:42:55.941503 4652 scope.go:117] "RemoveContainer" containerID="625a3d084c36034a7e26b3f97434383c5e8e5aac1e54d3059e865a7ba8559aac" Feb 16 17:42:55.950550 master-0 kubenswrapper[4652]: I0216 17:42:55.942585 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data-custom\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.950550 master-0 kubenswrapper[4652]: I0216 17:42:55.945150 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c06307c-938d-45f7-b671-948d93bf0642-etc-podinfo\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.951079 master-0 kubenswrapper[4652]: I0216 17:42:55.951017 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-scripts\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " 
pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:55.963194 master-0 kubenswrapper[4652]: I0216 17:42:55.962870 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6869cdf564-cp8xm" Feb 16 17:42:56.027263 master-0 kubenswrapper[4652]: I0216 17:42:56.027173 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2jnz\" (UniqueName: \"kubernetes.io/projected/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-kube-api-access-d2jnz\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.027459 master-0 kubenswrapper[4652]: I0216 17:42:56.027298 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-config-data\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.027459 master-0 kubenswrapper[4652]: I0216 17:42:56.027327 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-var-locks-brick\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.027459 master-0 kubenswrapper[4652]: I0216 17:42:56.027422 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-run\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.027459 master-0 kubenswrapper[4652]: I0216 17:42:56.027447 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-var-locks-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027472 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-config-data-custom\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027514 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-lib-modules\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027539 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-scripts\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027569 4652 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-dev\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027681 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-etc-machine-id\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027701 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-sys\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027735 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-var-lib-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027762 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-etc-iscsi\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027873 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-combined-ca-bundle\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.028786 master-0 kubenswrapper[4652]: I0216 17:42:56.027911 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-etc-nvme\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 kubenswrapper[4652]: I0216 17:42:56.033505 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-var-lib-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 kubenswrapper[4652]: I0216 17:42:56.033555 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-var-locks-cinder\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 
kubenswrapper[4652]: I0216 17:42:56.033621 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-var-locks-brick\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 kubenswrapper[4652]: I0216 17:42:56.034077 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-run\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 kubenswrapper[4652]: I0216 17:42:56.034303 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-etc-machine-id\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 kubenswrapper[4652]: I0216 17:42:56.034338 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-sys\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 kubenswrapper[4652]: I0216 17:42:56.034369 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-etc-iscsi\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 kubenswrapper[4652]: I0216 17:42:56.034646 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-etc-nvme\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 kubenswrapper[4652]: I0216 17:42:56.034687 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-lib-modules\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.037176 master-0 kubenswrapper[4652]: I0216 17:42:56.035151 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-dev\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.058284 master-0 kubenswrapper[4652]: I0216 17:42:56.049048 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4v5x\" (UniqueName: \"kubernetes.io/projected/3c06307c-938d-45f7-b671-948d93bf0642-kube-api-access-w4v5x\") pod \"ironic-79d877c778-jztbq\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:56.058284 master-0 kubenswrapper[4652]: I0216 17:42:56.057970 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-scripts\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.085700 master-0 kubenswrapper[4652]: I0216 17:42:56.085122 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2jnz\" (UniqueName: \"kubernetes.io/projected/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-kube-api-access-d2jnz\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.088235 master-0 kubenswrapper[4652]: I0216 17:42:56.088154 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f1048855-86d3-4f6a-b538-a53b51711bce" (UID: "f1048855-86d3-4f6a-b538-a53b51711bce"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:56.096188 master-0 kubenswrapper[4652]: I0216 17:42:56.096062 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5559c64944-9qfgd"] Feb 16 17:42:56.100155 master-0 kubenswrapper[4652]: I0216 17:42:56.100037 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5559c64944-9qfgd" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-log" containerID="cri-o://a953984f730749c4669fd2456a04fc762aaa2b368617c7a5ec1858a57e604a8b" gracePeriod=30 Feb 16 17:42:56.100731 master-0 kubenswrapper[4652]: I0216 17:42:56.100684 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-5559c64944-9qfgd" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-api" containerID="cri-o://a548065352cd4d8365baa3bfdf711167608f8b840f55133535592bb1ed4fb564" gracePeriod=30 Feb 16 17:42:56.109671 master-0 kubenswrapper[4652]: I0216 17:42:56.109584 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-config" (OuterVolumeSpecName: "config") pod "f1048855-86d3-4f6a-b538-a53b51711bce" (UID: "f1048855-86d3-4f6a-b538-a53b51711bce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:56.128947 master-0 kubenswrapper[4652]: I0216 17:42:56.128654 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/placement-5559c64944-9qfgd" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-log" probeResult="failure" output="Get \"https://10.128.0.183:8778/\": EOF" Feb 16 17:42:56.128947 master-0 kubenswrapper[4652]: I0216 17:42:56.128903 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-5559c64944-9qfgd" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-api" probeResult="failure" output="Get \"https://10.128.0.183:8778/\": EOF" Feb 16 17:42:56.129419 master-0 kubenswrapper[4652]: I0216 17:42:56.129391 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-5559c64944-9qfgd" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-log" probeResult="failure" output="Get \"https://10.128.0.183:8778/\": EOF" Feb 16 17:42:56.133372 master-0 kubenswrapper[4652]: I0216 17:42:56.132633 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/placement-5559c64944-9qfgd" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-api" probeResult="failure" output="Get \"https://10.128.0.183:8778/\": EOF" Feb 16 17:42:56.143180 master-0 kubenswrapper[4652]: I0216 17:42:56.143106 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-5559c64944-9qfgd" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-log" probeResult="failure" output="Get \"https://10.128.0.183:8778/\": EOF" Feb 16 17:42:56.153511 master-0 kubenswrapper[4652]: I0216 17:42:56.144509 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-5559c64944-9qfgd" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-api" probeResult="failure" output="Get \"https://10.128.0.183:8778/\": read tcp 10.128.0.2:56598->10.128.0.183:8778: read: connection reset by peer" Feb 16 17:42:56.153511 master-0 kubenswrapper[4652]: I0216 17:42:56.144722 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f1048855-86d3-4f6a-b538-a53b51711bce" (UID: "f1048855-86d3-4f6a-b538-a53b51711bce"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:56.153511 master-0 kubenswrapper[4652]: I0216 17:42:56.147011 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-combined-ca-bundle\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.153511 master-0 kubenswrapper[4652]: I0216 17:42:56.147784 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f1048855-86d3-4f6a-b538-a53b51711bce" (UID: "f1048855-86d3-4f6a-b538-a53b51711bce"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:56.155933 master-0 kubenswrapper[4652]: I0216 17:42:56.155877 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-config-data\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.158378 master-0 kubenswrapper[4652]: I0216 17:42:56.157383 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:56.158378 master-0 kubenswrapper[4652]: I0216 17:42:56.157675 4652 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:56.158378 master-0 kubenswrapper[4652]: I0216 17:42:56.157909 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:56.158378 master-0 kubenswrapper[4652]: I0216 17:42:56.158148 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:56.190488 master-0 kubenswrapper[4652]: I0216 17:42:56.189789 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c5ba513b-d2ca-4d7c-b419-6e8009ebe299-config-data-custom\") pod \"cinder-c34a6-volume-lvm-iscsi-0\" (UID: \"c5ba513b-d2ca-4d7c-b419-6e8009ebe299\") " pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.234079 master-0 kubenswrapper[4652]: I0216 17:42:56.233837 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1048855-86d3-4f6a-b538-a53b51711bce" (UID: "f1048855-86d3-4f6a-b538-a53b51711bce"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:56.265562 master-0 kubenswrapper[4652]: I0216 17:42:56.265486 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1048855-86d3-4f6a-b538-a53b51711bce-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:56.277425 master-0 kubenswrapper[4652]: I0216 17:42:56.277382 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:42:56.286579 master-0 kubenswrapper[4652]: I0216 17:42:56.286534 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-conductor-0"] Feb 16 17:42:56.290413 master-0 kubenswrapper[4652]: I0216 17:42:56.290086 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Feb 16 17:42:56.292959 master-0 kubenswrapper[4652]: I0216 17:42:56.292922 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-scripts" Feb 16 17:42:56.293140 master-0 kubenswrapper[4652]: I0216 17:42:56.293120 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-conductor-config-data" Feb 16 17:42:56.311573 master-0 kubenswrapper[4652]: I0216 17:42:56.311530 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 16 17:42:56.319892 master-0 kubenswrapper[4652]: I0216 17:42:56.317821 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:42:56.513914 master-0 kubenswrapper[4652]: I0216 17:42:56.513851 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.514136 master-0 kubenswrapper[4652]: I0216 17:42:56.513925 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-scripts\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.514136 master-0 kubenswrapper[4652]: I0216 17:42:56.513946 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln592\" (UniqueName: \"kubernetes.io/projected/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-kube-api-access-ln592\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.514136 master-0 kubenswrapper[4652]: I0216 17:42:56.513982 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.514136 master-0 kubenswrapper[4652]: I0216 17:42:56.514013 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.514136 master-0 kubenswrapper[4652]: I0216 17:42:56.514038 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-config-data\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.514136 master-0 kubenswrapper[4652]: I0216 17:42:56.514064 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-92a843c2-baa6-47c4-82c5-cbc6baff27b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2461c95d-53cb-4e27-bce6-40d7883bfcbd\") pod \"ironic-conductor-0\" (UID: 
\"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.514136 master-0 kubenswrapper[4652]: I0216 17:42:56.514103 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.617605 master-0 kubenswrapper[4652]: I0216 17:42:56.616324 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.617605 master-0 kubenswrapper[4652]: I0216 17:42:56.616496 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.617605 master-0 kubenswrapper[4652]: I0216 17:42:56.616541 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-scripts\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.617605 master-0 kubenswrapper[4652]: I0216 17:42:56.616565 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln592\" (UniqueName: \"kubernetes.io/projected/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-kube-api-access-ln592\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.617605 master-0 kubenswrapper[4652]: I0216 17:42:56.616613 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.617605 master-0 kubenswrapper[4652]: I0216 17:42:56.616656 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.617605 master-0 kubenswrapper[4652]: I0216 17:42:56.616687 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-config-data\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.617605 master-0 kubenswrapper[4652]: I0216 17:42:56.616725 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-92a843c2-baa6-47c4-82c5-cbc6baff27b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2461c95d-53cb-4e27-bce6-40d7883bfcbd\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" 
Feb 16 17:42:56.617605 master-0 kubenswrapper[4652]: I0216 17:42:56.617054 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-config-data-merged\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.624363 master-0 kubenswrapper[4652]: I0216 17:42:56.624133 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:42:56.624363 master-0 kubenswrapper[4652]: I0216 17:42:56.624198 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-92a843c2-baa6-47c4-82c5-cbc6baff27b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2461c95d-53cb-4e27-bce6-40d7883bfcbd\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/8a94044110c3b5cf044bb5b68049421ec1bc76f7fc943b3ef548f36b7820445f/globalmount\"" pod="openstack/ironic-conductor-0" Feb 16 17:42:56.631421 master-0 kubenswrapper[4652]: I0216 17:42:56.630725 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-combined-ca-bundle\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.631421 master-0 kubenswrapper[4652]: I0216 17:42:56.631145 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-config-data-custom\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.649709 master-0 kubenswrapper[4652]: I0216 17:42:56.638283 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-scripts\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.658162 master-0 kubenswrapper[4652]: I0216 17:42:56.654790 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-etc-podinfo\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.666293 master-0 kubenswrapper[4652]: I0216 17:42:56.665380 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ffcb9997-88bvh"] Feb 16 17:42:56.673959 master-0 kubenswrapper[4652]: I0216 17:42:56.667002 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-config-data\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:56.673959 master-0 kubenswrapper[4652]: I0216 17:42:56.672303 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln592\" (UniqueName: \"kubernetes.io/projected/7d01cc93-4bd4-4091-92a8-1c9a7e035c3e-kube-api-access-ln592\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 
17:42:56.720773 master-0 kubenswrapper[4652]: W0216 17:42:56.720481 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8c13c9e_779c_40f4_b478_b6c3d5baf083.slice/crio-18c1b82dace0d6289c60411d913b41368dd8611c6bef95229f063eb3fe54c4ba WatchSource:0}: Error finding container 18c1b82dace0d6289c60411d913b41368dd8611c6bef95229f063eb3fe54c4ba: Status 404 returned error can't find the container with id 18c1b82dace0d6289c60411d913b41368dd8611c6bef95229f063eb3fe54c4ba Feb 16 17:42:56.733136 master-0 kubenswrapper[4652]: I0216 17:42:56.733050 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-547dcb69f9-nqbv9"] Feb 16 17:42:56.763394 master-0 kubenswrapper[4652]: I0216 17:42:56.758701 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:56.821342 master-0 kubenswrapper[4652]: I0216 17:42:56.821233 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-scripts\") pod \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " Feb 16 17:42:56.821586 master-0 kubenswrapper[4652]: I0216 17:42:56.821382 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data\") pod \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " Feb 16 17:42:56.821586 master-0 kubenswrapper[4652]: I0216 17:42:56.821405 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-etc-machine-id\") pod \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " Feb 16 17:42:56.821586 master-0 kubenswrapper[4652]: I0216 17:42:56.821471 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data-custom\") pod \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " Feb 16 17:42:56.821586 master-0 kubenswrapper[4652]: I0216 17:42:56.821530 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58xqt\" (UniqueName: \"kubernetes.io/projected/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-kube-api-access-58xqt\") pod \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " Feb 16 17:42:56.822195 master-0 kubenswrapper[4652]: I0216 17:42:56.821688 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-combined-ca-bundle\") pod \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\" (UID: \"9e0296d1-dc25-4d4d-a617-3d1354eadb6f\") " Feb 16 17:42:56.824673 master-0 kubenswrapper[4652]: I0216 17:42:56.824434 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9e0296d1-dc25-4d4d-a617-3d1354eadb6f" (UID: "9e0296d1-dc25-4d4d-a617-3d1354eadb6f"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:56.826524 master-0 kubenswrapper[4652]: I0216 17:42:56.826486 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-scripts" (OuterVolumeSpecName: "scripts") pod "9e0296d1-dc25-4d4d-a617-3d1354eadb6f" (UID: "9e0296d1-dc25-4d4d-a617-3d1354eadb6f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:56.828145 master-0 kubenswrapper[4652]: I0216 17:42:56.828106 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9e0296d1-dc25-4d4d-a617-3d1354eadb6f" (UID: "9e0296d1-dc25-4d4d-a617-3d1354eadb6f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:56.835581 master-0 kubenswrapper[4652]: I0216 17:42:56.835521 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7430db54-9280-474e-b84f-bddab08df2d2" path="/var/lib/kubelet/pods/7430db54-9280-474e-b84f-bddab08df2d2/volumes" Feb 16 17:42:56.837907 master-0 kubenswrapper[4652]: I0216 17:42:56.837734 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-547dcb69f9-nqbv9"] Feb 16 17:42:56.839739 master-0 kubenswrapper[4652]: I0216 17:42:56.839599 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-kube-api-access-58xqt" (OuterVolumeSpecName: "kube-api-access-58xqt") pod "9e0296d1-dc25-4d4d-a617-3d1354eadb6f" (UID: "9e0296d1-dc25-4d4d-a617-3d1354eadb6f"). InnerVolumeSpecName "kube-api-access-58xqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:56.924853 master-0 kubenswrapper[4652]: I0216 17:42:56.924813 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:56.924853 master-0 kubenswrapper[4652]: I0216 17:42:56.924847 4652 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:56.924853 master-0 kubenswrapper[4652]: I0216 17:42:56.924856 4652 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:56.925131 master-0 kubenswrapper[4652]: I0216 17:42:56.924867 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58xqt\" (UniqueName: \"kubernetes.io/projected/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-kube-api-access-58xqt\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:56.929934 master-0 kubenswrapper[4652]: I0216 17:42:56.929799 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e0296d1-dc25-4d4d-a617-3d1354eadb6f" (UID: "9e0296d1-dc25-4d4d-a617-3d1354eadb6f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:56.970321 master-0 kubenswrapper[4652]: I0216 17:42:56.970107 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data" (OuterVolumeSpecName: "config-data") pod "9e0296d1-dc25-4d4d-a617-3d1354eadb6f" (UID: "9e0296d1-dc25-4d4d-a617-3d1354eadb6f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:56.982880 master-0 kubenswrapper[4652]: I0216 17:42:56.982073 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" event={"ID":"205faff9-2936-475b-9fce-f11ab722187e","Type":"ContainerStarted","Data":"3d455baac623cdb0d7613eacaf1b287025e2b8ec11b914b50afcc83dea9e618c"} Feb 16 17:42:57.008469 master-0 kubenswrapper[4652]: I0216 17:42:57.008328 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" podStartSLOduration=3.008237358 podStartE2EDuration="3.008237358s" podCreationTimestamp="2026-02-16 17:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:42:57.005434013 +0000 UTC m=+1134.393602529" watchObservedRunningTime="2026-02-16 17:42:57.008237358 +0000 UTC m=+1134.396405874" Feb 16 17:42:57.016270 master-0 kubenswrapper[4652]: I0216 17:42:57.016108 4652 generic.go:334] "Generic (PLEG): container finished" podID="06840359-14e1-46d7-b74a-a3acd120905b" containerID="a953984f730749c4669fd2456a04fc762aaa2b368617c7a5ec1858a57e604a8b" exitCode=143 Feb 16 17:42:57.016270 master-0 kubenswrapper[4652]: I0216 17:42:57.016192 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5559c64944-9qfgd" event={"ID":"06840359-14e1-46d7-b74a-a3acd120905b","Type":"ContainerDied","Data":"a953984f730749c4669fd2456a04fc762aaa2b368617c7a5ec1858a57e604a8b"} Feb 16 17:42:57.018406 master-0 kubenswrapper[4652]: I0216 17:42:57.018132 4652 generic.go:334] "Generic (PLEG): container finished" podID="109e606b-e77f-4512-957a-77f228cd55ed" containerID="e412aa24735fea9f02dc30f13130564eec6490783445260c3a7795c839754a9a" exitCode=0 Feb 16 17:42:57.018406 master-0 kubenswrapper[4652]: I0216 17:42:57.018169 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-m4w4d" event={"ID":"109e606b-e77f-4512-957a-77f228cd55ed","Type":"ContainerDied","Data":"e412aa24735fea9f02dc30f13130564eec6490783445260c3a7795c839754a9a"} Feb 16 17:42:57.029805 master-0 kubenswrapper[4652]: I0216 17:42:57.029752 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" event={"ID":"c8c13c9e-779c-40f4-b478-b6c3d5baf083","Type":"ContainerStarted","Data":"18c1b82dace0d6289c60411d913b41368dd8611c6bef95229f063eb3fe54c4ba"} Feb 16 17:42:57.030953 master-0 kubenswrapper[4652]: I0216 17:42:57.030912 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.031024 master-0 kubenswrapper[4652]: I0216 17:42:57.030967 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e0296d1-dc25-4d4d-a617-3d1354eadb6f-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.042856 
master-0 kubenswrapper[4652]: I0216 17:42:57.042793 4652 generic.go:334] "Generic (PLEG): container finished" podID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerID="901a3cf6aefe6b0588fc42aaadc11445f06847028e31673918cd8ece3b9a6728" exitCode=0 Feb 16 17:42:57.043081 master-0 kubenswrapper[4652]: I0216 17:42:57.042973 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-backup-0" event={"ID":"f1339fd1-e639-4057-bb76-3ad09c3000fe","Type":"ContainerDied","Data":"901a3cf6aefe6b0588fc42aaadc11445f06847028e31673918cd8ece3b9a6728"} Feb 16 17:42:57.084354 master-0 kubenswrapper[4652]: I0216 17:42:57.084307 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-scheduler-0" event={"ID":"9e0296d1-dc25-4d4d-a617-3d1354eadb6f","Type":"ContainerDied","Data":"c1305d2596f56ad20bf45cbe22fdf60d40abe90954cb3bc7e2bf7f54d6877a19"} Feb 16 17:42:57.084568 master-0 kubenswrapper[4652]: I0216 17:42:57.084364 4652 scope.go:117] "RemoveContainer" containerID="edf0c3be42c6e3861ca6d7f31f947964c75aa1e890616af1d2083c84e7e1d950" Feb 16 17:42:57.084709 master-0 kubenswrapper[4652]: I0216 17:42:57.084689 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.158049 master-0 kubenswrapper[4652]: I0216 17:42:57.157570 4652 scope.go:117] "RemoveContainer" containerID="e4eeb696ba18e952a959884a793243b321b1c30a60e9da8f9265308dda2bc9d1" Feb 16 17:42:57.261551 master-0 kubenswrapper[4652]: I0216 17:42:57.261369 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c34a6-scheduler-0"] Feb 16 17:42:57.284766 master-0 kubenswrapper[4652]: I0216 17:42:57.284697 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c34a6-scheduler-0"] Feb 16 17:42:57.311422 master-0 kubenswrapper[4652]: I0216 17:42:57.310987 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c34a6-scheduler-0"] Feb 16 17:42:57.311653 master-0 kubenswrapper[4652]: E0216 17:42:57.311524 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerName="probe" Feb 16 17:42:57.311653 master-0 kubenswrapper[4652]: I0216 17:42:57.311540 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerName="probe" Feb 16 17:42:57.311653 master-0 kubenswrapper[4652]: E0216 17:42:57.311562 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerName="cinder-scheduler" Feb 16 17:42:57.311653 master-0 kubenswrapper[4652]: I0216 17:42:57.311568 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerName="cinder-scheduler" Feb 16 17:42:57.311841 master-0 kubenswrapper[4652]: I0216 17:42:57.311792 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerName="cinder-scheduler" Feb 16 17:42:57.311841 master-0 kubenswrapper[4652]: I0216 17:42:57.311812 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" containerName="probe" Feb 16 17:42:57.313967 master-0 kubenswrapper[4652]: I0216 17:42:57.313935 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.318015 master-0 kubenswrapper[4652]: I0216 17:42:57.317812 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-scheduler-config-data" Feb 16 17:42:57.336114 master-0 kubenswrapper[4652]: I0216 17:42:57.336040 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-scheduler-0"] Feb 16 17:42:57.360489 master-0 kubenswrapper[4652]: I0216 17:42:57.360358 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-combined-ca-bundle\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.360699 master-0 kubenswrapper[4652]: I0216 17:42:57.360512 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-etc-machine-id\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.360756 master-0 kubenswrapper[4652]: I0216 17:42:57.360732 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-config-data-custom\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.360921 master-0 kubenswrapper[4652]: I0216 17:42:57.360859 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drhj9\" (UniqueName: \"kubernetes.io/projected/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-kube-api-access-drhj9\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.360964 master-0 kubenswrapper[4652]: I0216 17:42:57.360951 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-scripts\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.361156 master-0 kubenswrapper[4652]: I0216 17:42:57.361119 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-config-data\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.462895 master-0 kubenswrapper[4652]: I0216 17:42:57.462824 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-config-data-custom\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.463055 master-0 kubenswrapper[4652]: I0216 17:42:57.462904 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drhj9\" (UniqueName: 
\"kubernetes.io/projected/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-kube-api-access-drhj9\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.463055 master-0 kubenswrapper[4652]: I0216 17:42:57.462932 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-scripts\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.463055 master-0 kubenswrapper[4652]: I0216 17:42:57.463002 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-config-data\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.463055 master-0 kubenswrapper[4652]: I0216 17:42:57.463036 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-combined-ca-bundle\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.463200 master-0 kubenswrapper[4652]: I0216 17:42:57.463063 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-etc-machine-id\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.463200 master-0 kubenswrapper[4652]: I0216 17:42:57.463150 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-etc-machine-id\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.468466 master-0 kubenswrapper[4652]: I0216 17:42:57.468437 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-scripts\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.471679 master-0 kubenswrapper[4652]: I0216 17:42:57.471643 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-config-data-custom\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.473420 master-0 kubenswrapper[4652]: I0216 17:42:57.473362 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-combined-ca-bundle\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.496522 master-0 kubenswrapper[4652]: I0216 17:42:57.496453 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-config-data\") pod 
\"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.513211 master-0 kubenswrapper[4652]: I0216 17:42:57.513081 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-79d877c778-jztbq"] Feb 16 17:42:57.523438 master-0 kubenswrapper[4652]: I0216 17:42:57.523338 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drhj9\" (UniqueName: \"kubernetes.io/projected/0f6e10ee-00f9-4c6e-b67e-3f631e8c7363-kube-api-access-drhj9\") pod \"cinder-c34a6-scheduler-0\" (UID: \"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363\") " pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.532417 master-0 kubenswrapper[4652]: I0216 17:42:57.532378 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-volume-lvm-iscsi-0"] Feb 16 17:42:57.736281 master-0 kubenswrapper[4652]: I0216 17:42:57.734848 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:57.740275 master-0 kubenswrapper[4652]: I0216 17:42:57.739509 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.798964 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-run\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799072 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-iscsi\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799105 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799127 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-machine-id\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799198 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-lib-cinder\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799232 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tjf6\" (UniqueName: \"kubernetes.io/projected/f1339fd1-e639-4057-bb76-3ad09c3000fe-kube-api-access-2tjf6\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799294 4652 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-nvme\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799335 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-cinder\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799376 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-dev\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799426 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data-custom\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799452 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-sys\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799527 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-scripts\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799577 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-brick\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799634 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-lib-modules\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.799687 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-combined-ca-bundle\") pod \"f1339fd1-e639-4057-bb76-3ad09c3000fe\" (UID: \"f1339fd1-e639-4057-bb76-3ad09c3000fe\") " Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.800367 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.800393 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.800432 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-run" (OuterVolumeSpecName: "run") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.800811 master-0 kubenswrapper[4652]: I0216 17:42:57.800454 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.801621 master-0 kubenswrapper[4652]: I0216 17:42:57.800831 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-lib-cinder" (OuterVolumeSpecName: "var-lib-cinder") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "var-lib-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.801621 master-0 kubenswrapper[4652]: I0216 17:42:57.800862 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.801621 master-0 kubenswrapper[4652]: I0216 17:42:57.801089 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-sys" (OuterVolumeSpecName: "sys") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.801621 master-0 kubenswrapper[4652]: I0216 17:42:57.801116 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-cinder" (OuterVolumeSpecName: "var-locks-cinder") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "var-locks-cinder". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.801621 master-0 kubenswrapper[4652]: I0216 17:42:57.801136 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-dev" (OuterVolumeSpecName: "dev") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.807277 master-0 kubenswrapper[4652]: I0216 17:42:57.805647 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 17:42:57.817317 master-0 kubenswrapper[4652]: I0216 17:42:57.816378 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:57.817317 master-0 kubenswrapper[4652]: I0216 17:42:57.817193 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1339fd1-e639-4057-bb76-3ad09c3000fe-kube-api-access-2tjf6" (OuterVolumeSpecName: "kube-api-access-2tjf6") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "kube-api-access-2tjf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:57.826271 master-0 kubenswrapper[4652]: I0216 17:42:57.822885 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-scripts" (OuterVolumeSpecName: "scripts") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826546 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826593 4652 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-brick\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826609 4652 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-lib-modules\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826621 4652 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-run\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826636 4652 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-iscsi\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826652 4652 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-machine-id\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826664 4652 reconciler_common.go:293] "Volume detached for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-lib-cinder\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826681 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tjf6\" (UniqueName: \"kubernetes.io/projected/f1339fd1-e639-4057-bb76-3ad09c3000fe-kube-api-access-2tjf6\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826694 4652 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-etc-nvme\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826709 4652 reconciler_common.go:293] "Volume detached for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-var-locks-cinder\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826721 4652 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-dev\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826735 4652 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.836282 master-0 kubenswrapper[4652]: I0216 17:42:57.826748 4652 reconciler_common.go:293] "Volume detached for volume \"sys\" 
(UniqueName: \"kubernetes.io/host-path/f1339fd1-e639-4057-bb76-3ad09c3000fe-sys\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:57.921433 master-0 kubenswrapper[4652]: I0216 17:42:57.921356 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:57.930812 master-0 kubenswrapper[4652]: I0216 17:42:57.930752 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:58.004587 master-0 kubenswrapper[4652]: I0216 17:42:58.004521 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data" (OuterVolumeSpecName: "config-data") pod "f1339fd1-e639-4057-bb76-3ad09c3000fe" (UID: "f1339fd1-e639-4057-bb76-3ad09c3000fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:42:58.033100 master-0 kubenswrapper[4652]: I0216 17:42:58.032658 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1339fd1-e639-4057-bb76-3ad09c3000fe-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:58.110270 master-0 kubenswrapper[4652]: I0216 17:42:58.107103 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-backup-0" event={"ID":"f1339fd1-e639-4057-bb76-3ad09c3000fe","Type":"ContainerDied","Data":"1fe2246476947c524df4c99471343375b5da26b585327c3905ce9de173f7c50b"} Feb 16 17:42:58.110270 master-0 kubenswrapper[4652]: I0216 17:42:58.107162 4652 scope.go:117] "RemoveContainer" containerID="f51ef18270d7ffa218605c7022402385de3d816737b77ce4de126043a6d58f45" Feb 16 17:42:58.110270 master-0 kubenswrapper[4652]: I0216 17:42:58.107308 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.112640 master-0 kubenswrapper[4652]: I0216 17:42:58.111512 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerStarted","Data":"c9616ceee950a2b7c407689ce98154b7a8c84f095b4ec4833e25d361a04f5db3"} Feb 16 17:42:58.126278 master-0 kubenswrapper[4652]: I0216 17:42:58.125595 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" event={"ID":"c5ba513b-d2ca-4d7c-b419-6e8009ebe299","Type":"ContainerStarted","Data":"5a345d7839bd0c200823736db7151c2cf85caa3cdad36bf0ed6b0bcd916e5a0e"} Feb 16 17:42:58.130271 master-0 kubenswrapper[4652]: I0216 17:42:58.129616 4652 generic.go:334] "Generic (PLEG): container finished" podID="205faff9-2936-475b-9fce-f11ab722187e" containerID="3d455baac623cdb0d7613eacaf1b287025e2b8ec11b914b50afcc83dea9e618c" exitCode=0 Feb 16 17:42:58.130271 master-0 kubenswrapper[4652]: I0216 17:42:58.130155 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" event={"ID":"205faff9-2936-475b-9fce-f11ab722187e","Type":"ContainerDied","Data":"3d455baac623cdb0d7613eacaf1b287025e2b8ec11b914b50afcc83dea9e618c"} Feb 16 17:42:58.143271 master-0 kubenswrapper[4652]: I0216 17:42:58.142726 4652 generic.go:334] "Generic (PLEG): container finished" podID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" containerID="d28138aa222c38b45c1131708c1b8b34c847f0a7d85745d32e2619d9a6bcba6e" exitCode=0 Feb 16 17:42:58.147276 master-0 kubenswrapper[4652]: I0216 17:42:58.144365 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" event={"ID":"c8c13c9e-779c-40f4-b478-b6c3d5baf083","Type":"ContainerDied","Data":"d28138aa222c38b45c1131708c1b8b34c847f0a7d85745d32e2619d9a6bcba6e"} Feb 16 17:42:58.169269 master-0 kubenswrapper[4652]: I0216 17:42:58.169070 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-92a843c2-baa6-47c4-82c5-cbc6baff27b6\" (UniqueName: \"kubernetes.io/csi/topolvm.io^2461c95d-53cb-4e27-bce6-40d7883bfcbd\") pod \"ironic-conductor-0\" (UID: \"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e\") " pod="openstack/ironic-conductor-0" Feb 16 17:42:58.211273 master-0 kubenswrapper[4652]: I0216 17:42:58.210838 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-conductor-0" Feb 16 17:42:58.232278 master-0 kubenswrapper[4652]: I0216 17:42:58.224721 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c34a6-backup-0"] Feb 16 17:42:58.301276 master-0 kubenswrapper[4652]: I0216 17:42:58.300328 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c34a6-backup-0"] Feb 16 17:42:58.316693 master-0 kubenswrapper[4652]: I0216 17:42:58.316638 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-95c564f-wdb5n" Feb 16 17:42:58.345686 master-0 kubenswrapper[4652]: I0216 17:42:58.345616 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c34a6-backup-0"] Feb 16 17:42:58.346763 master-0 kubenswrapper[4652]: E0216 17:42:58.346689 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerName="probe" Feb 16 17:42:58.346763 master-0 kubenswrapper[4652]: I0216 17:42:58.346717 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerName="probe" Feb 16 17:42:58.346763 master-0 kubenswrapper[4652]: E0216 17:42:58.346737 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerName="cinder-backup" Feb 16 17:42:58.346763 master-0 kubenswrapper[4652]: I0216 17:42:58.346746 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerName="cinder-backup" Feb 16 17:42:58.347288 master-0 kubenswrapper[4652]: I0216 17:42:58.347160 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerName="cinder-backup" Feb 16 17:42:58.347288 master-0 kubenswrapper[4652]: I0216 17:42:58.347189 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1339fd1-e639-4057-bb76-3ad09c3000fe" containerName="probe" Feb 16 17:42:58.349896 master-0 kubenswrapper[4652]: I0216 17:42:58.349807 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.354746 master-0 kubenswrapper[4652]: I0216 17:42:58.354326 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-c34a6-backup-config-data" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446282 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-lib-modules\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446356 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-dev\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446411 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvf9n\" (UniqueName: \"kubernetes.io/projected/580471cb-c8d4-40ec-86ed-7062a15b7d24-kube-api-access-dvf9n\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446440 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-config-data-custom\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446517 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-var-lib-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446578 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-etc-iscsi\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446700 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-config-data\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446773 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-scripts\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446846 4652 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-combined-ca-bundle\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446879 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-var-locks-brick\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446927 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-sys\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446963 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-run\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.446986 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-var-locks-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.447019 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-etc-nvme\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.447784 master-0 kubenswrapper[4652]: I0216 17:42:58.447102 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-etc-machine-id\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.476141 master-0 kubenswrapper[4652]: I0216 17:42:58.471323 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-backup-0"] Feb 16 17:42:58.548615 master-0 kubenswrapper[4652]: I0216 17:42:58.548574 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-config-data\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.548872 master-0 kubenswrapper[4652]: I0216 17:42:58.548857 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-scripts\") pod \"cinder-c34a6-backup-0\" (UID: 
\"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.549016 master-0 kubenswrapper[4652]: I0216 17:42:58.549003 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-combined-ca-bundle\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.549141 master-0 kubenswrapper[4652]: I0216 17:42:58.549089 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-var-locks-brick\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.549281 master-0 kubenswrapper[4652]: I0216 17:42:58.549265 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-sys\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.549395 master-0 kubenswrapper[4652]: I0216 17:42:58.549380 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-run\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.549473 master-0 kubenswrapper[4652]: I0216 17:42:58.549460 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-var-locks-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.549575 master-0 kubenswrapper[4652]: I0216 17:42:58.549558 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-etc-nvme\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.549718 master-0 kubenswrapper[4652]: I0216 17:42:58.549703 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-etc-machine-id\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.549818 master-0 kubenswrapper[4652]: I0216 17:42:58.549802 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-lib-modules\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.549922 master-0 kubenswrapper[4652]: I0216 17:42:58.549907 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-dev\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.550065 master-0 kubenswrapper[4652]: I0216 
17:42:58.550052 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvf9n\" (UniqueName: \"kubernetes.io/projected/580471cb-c8d4-40ec-86ed-7062a15b7d24-kube-api-access-dvf9n\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.550191 master-0 kubenswrapper[4652]: I0216 17:42:58.550175 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-config-data-custom\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.550328 master-0 kubenswrapper[4652]: I0216 17:42:58.550314 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-var-lib-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.550475 master-0 kubenswrapper[4652]: I0216 17:42:58.550461 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-dev\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.550540 master-0 kubenswrapper[4652]: I0216 17:42:58.550482 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-var-locks-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.550609 master-0 kubenswrapper[4652]: I0216 17:42:58.550544 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-run\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.550673 master-0 kubenswrapper[4652]: I0216 17:42:58.550576 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-var-lib-cinder\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.551622 master-0 kubenswrapper[4652]: I0216 17:42:58.551558 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-etc-nvme\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.551707 master-0 kubenswrapper[4652]: I0216 17:42:58.551682 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-etc-machine-id\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.551772 master-0 kubenswrapper[4652]: I0216 17:42:58.551753 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-lib-modules\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.551815 master-0 kubenswrapper[4652]: I0216 17:42:58.551799 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-etc-iscsi\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.552190 master-0 kubenswrapper[4652]: I0216 17:42:58.552148 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-etc-iscsi\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.557951 master-0 kubenswrapper[4652]: I0216 17:42:58.557900 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-combined-ca-bundle\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.558098 master-0 kubenswrapper[4652]: I0216 17:42:58.558001 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-var-locks-brick\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.558265 master-0 kubenswrapper[4652]: I0216 17:42:58.558235 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-scripts\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.558362 master-0 kubenswrapper[4652]: I0216 17:42:58.558291 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/580471cb-c8d4-40ec-86ed-7062a15b7d24-sys\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.562446 master-0 kubenswrapper[4652]: I0216 17:42:58.559781 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-config-data\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.572498 master-0 kubenswrapper[4652]: I0216 17:42:58.569404 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-5665b8875d-tx66w"] Feb 16 17:42:58.576697 master-0 kubenswrapper[4652]: I0216 17:42:58.576646 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/580471cb-c8d4-40ec-86ed-7062a15b7d24-config-data-custom\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.583302 master-0 kubenswrapper[4652]: I0216 17:42:58.578779 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.583302 master-0 kubenswrapper[4652]: I0216 17:42:58.580579 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5665b8875d-tx66w"] Feb 16 17:42:58.585903 master-0 kubenswrapper[4652]: I0216 17:42:58.583621 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-public-svc" Feb 16 17:42:58.585903 master-0 kubenswrapper[4652]: I0216 17:42:58.583785 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-internal-svc" Feb 16 17:42:58.586517 master-0 kubenswrapper[4652]: I0216 17:42:58.586468 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvf9n\" (UniqueName: \"kubernetes.io/projected/580471cb-c8d4-40ec-86ed-7062a15b7d24-kube-api-access-dvf9n\") pod \"cinder-c34a6-backup-0\" (UID: \"580471cb-c8d4-40ec-86ed-7062a15b7d24\") " pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.659385 master-0 kubenswrapper[4652]: I0216 17:42:58.659326 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-logs\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.659508 master-0 kubenswrapper[4652]: I0216 17:42:58.659407 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-etc-podinfo\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.659508 master-0 kubenswrapper[4652]: I0216 17:42:58.659482 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-scripts\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.659590 master-0 kubenswrapper[4652]: I0216 17:42:58.659533 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4gm\" (UniqueName: \"kubernetes.io/projected/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-kube-api-access-pp4gm\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.659590 master-0 kubenswrapper[4652]: I0216 17:42:58.659564 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-internal-tls-certs\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.659669 master-0 kubenswrapper[4652]: I0216 17:42:58.659613 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-public-tls-certs\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.660992 master-0 kubenswrapper[4652]: I0216 17:42:58.659732 4652 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-combined-ca-bundle\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.660992 master-0 kubenswrapper[4652]: I0216 17:42:58.659807 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-config-data-custom\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.660992 master-0 kubenswrapper[4652]: I0216 17:42:58.659867 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-config-data\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.660992 master-0 kubenswrapper[4652]: I0216 17:42:58.659898 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-config-data-merged\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.675214 master-0 kubenswrapper[4652]: I0216 17:42:58.675160 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:42:58.762079 master-0 kubenswrapper[4652]: I0216 17:42:58.762015 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-config-data\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.762079 master-0 kubenswrapper[4652]: I0216 17:42:58.762088 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-config-data-merged\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.762897 master-0 kubenswrapper[4652]: I0216 17:42:58.762165 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-logs\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.762897 master-0 kubenswrapper[4652]: I0216 17:42:58.762206 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-etc-podinfo\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.762897 master-0 kubenswrapper[4652]: I0216 17:42:58.762278 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-scripts\") pod 
\"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.762897 master-0 kubenswrapper[4652]: I0216 17:42:58.762334 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp4gm\" (UniqueName: \"kubernetes.io/projected/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-kube-api-access-pp4gm\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.762897 master-0 kubenswrapper[4652]: I0216 17:42:58.762361 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-internal-tls-certs\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.762897 master-0 kubenswrapper[4652]: I0216 17:42:58.762419 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-public-tls-certs\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.762897 master-0 kubenswrapper[4652]: I0216 17:42:58.762584 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-combined-ca-bundle\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.762897 master-0 kubenswrapper[4652]: I0216 17:42:58.762655 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-config-data-custom\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.767723 master-0 kubenswrapper[4652]: I0216 17:42:58.767669 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-config-data-custom\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.769835 master-0 kubenswrapper[4652]: I0216 17:42:58.769780 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-etc-podinfo\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.771013 master-0 kubenswrapper[4652]: I0216 17:42:58.770954 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e0296d1-dc25-4d4d-a617-3d1354eadb6f" path="/var/lib/kubelet/pods/9e0296d1-dc25-4d4d-a617-3d1354eadb6f/volumes" Feb 16 17:42:58.774281 master-0 kubenswrapper[4652]: I0216 17:42:58.771721 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1048855-86d3-4f6a-b538-a53b51711bce" path="/var/lib/kubelet/pods/f1048855-86d3-4f6a-b538-a53b51711bce/volumes" Feb 16 17:42:58.774281 master-0 kubenswrapper[4652]: I0216 17:42:58.772370 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="f1339fd1-e639-4057-bb76-3ad09c3000fe" path="/var/lib/kubelet/pods/f1339fd1-e639-4057-bb76-3ad09c3000fe/volumes" Feb 16 17:42:58.774281 master-0 kubenswrapper[4652]: I0216 17:42:58.772909 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-public-tls-certs\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.774281 master-0 kubenswrapper[4652]: I0216 17:42:58.773112 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-scripts\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.774281 master-0 kubenswrapper[4652]: I0216 17:42:58.773315 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-config-data-merged\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.774281 master-0 kubenswrapper[4652]: I0216 17:42:58.773494 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-logs\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.774281 master-0 kubenswrapper[4652]: I0216 17:42:58.773711 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 17:42:58.774711 master-0 kubenswrapper[4652]: I0216 17:42:58.774597 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-config-data\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.804278 master-0 kubenswrapper[4652]: I0216 17:42:58.782621 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-internal-tls-certs\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.804278 master-0 kubenswrapper[4652]: I0216 17:42:58.783879 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-combined-ca-bundle\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.804278 master-0 kubenswrapper[4652]: I0216 17:42:58.794615 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 17:42:58.804278 master-0 kubenswrapper[4652]: I0216 17:42:58.798357 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 17:42:58.815304 master-0 kubenswrapper[4652]: I0216 17:42:58.810438 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 17:42:58.817026 master-0 kubenswrapper[4652]: I0216 17:42:58.816946 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 17:42:58.829944 master-0 kubenswrapper[4652]: I0216 17:42:58.827329 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp4gm\" (UniqueName: \"kubernetes.io/projected/f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7-kube-api-access-pp4gm\") pod \"ironic-5665b8875d-tx66w\" (UID: \"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7\") " pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.851760 master-0 kubenswrapper[4652]: I0216 17:42:58.851714 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-c34a6-api-0" Feb 16 17:42:58.868127 master-0 kubenswrapper[4652]: I0216 17:42:58.868056 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5496a92a-9932-4b28-8c4e-69b754218e51-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.868462 master-0 kubenswrapper[4652]: I0216 17:42:58.868408 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5496a92a-9932-4b28-8c4e-69b754218e51-openstack-config-secret\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.868777 master-0 kubenswrapper[4652]: I0216 17:42:58.868614 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dmpf\" (UniqueName: \"kubernetes.io/projected/5496a92a-9932-4b28-8c4e-69b754218e51-kube-api-access-2dmpf\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.868777 master-0 kubenswrapper[4652]: I0216 17:42:58.868687 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5496a92a-9932-4b28-8c4e-69b754218e51-openstack-config\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.901065 master-0 kubenswrapper[4652]: I0216 17:42:58.901015 4652 scope.go:117] "RemoveContainer" containerID="901a3cf6aefe6b0588fc42aaadc11445f06847028e31673918cd8ece3b9a6728" Feb 16 17:42:58.972267 master-0 kubenswrapper[4652]: I0216 17:42:58.971833 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dmpf\" (UniqueName: \"kubernetes.io/projected/5496a92a-9932-4b28-8c4e-69b754218e51-kube-api-access-2dmpf\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.972267 master-0 kubenswrapper[4652]: I0216 17:42:58.971904 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/5496a92a-9932-4b28-8c4e-69b754218e51-openstack-config\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.972267 master-0 kubenswrapper[4652]: I0216 17:42:58.972090 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5496a92a-9932-4b28-8c4e-69b754218e51-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.972267 master-0 kubenswrapper[4652]: I0216 17:42:58.972198 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5496a92a-9932-4b28-8c4e-69b754218e51-openstack-config-secret\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.973993 master-0 kubenswrapper[4652]: I0216 17:42:58.973436 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5496a92a-9932-4b28-8c4e-69b754218e51-openstack-config\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.987274 master-0 kubenswrapper[4652]: I0216 17:42:58.977766 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5496a92a-9932-4b28-8c4e-69b754218e51-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:58.987274 master-0 kubenswrapper[4652]: I0216 17:42:58.978386 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:42:58.998274 master-0 kubenswrapper[4652]: I0216 17:42:58.990652 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:59.010800 master-0 kubenswrapper[4652]: I0216 17:42:59.010760 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5496a92a-9932-4b28-8c4e-69b754218e51-openstack-config-secret\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:59.095281 master-0 kubenswrapper[4652]: I0216 17:42:59.090207 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82jrc\" (UniqueName: \"kubernetes.io/projected/109e606b-e77f-4512-957a-77f228cd55ed-kube-api-access-82jrc\") pod \"109e606b-e77f-4512-957a-77f228cd55ed\" (UID: \"109e606b-e77f-4512-957a-77f228cd55ed\") " Feb 16 17:42:59.095281 master-0 kubenswrapper[4652]: I0216 17:42:59.090445 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/109e606b-e77f-4512-957a-77f228cd55ed-operator-scripts\") pod \"109e606b-e77f-4512-957a-77f228cd55ed\" (UID: \"109e606b-e77f-4512-957a-77f228cd55ed\") " Feb 16 17:42:59.110271 master-0 kubenswrapper[4652]: I0216 17:42:59.109729 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/109e606b-e77f-4512-957a-77f228cd55ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "109e606b-e77f-4512-957a-77f228cd55ed" (UID: "109e606b-e77f-4512-957a-77f228cd55ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:42:59.116192 master-0 kubenswrapper[4652]: I0216 17:42:59.116124 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/109e606b-e77f-4512-957a-77f228cd55ed-kube-api-access-82jrc" (OuterVolumeSpecName: "kube-api-access-82jrc") pod "109e606b-e77f-4512-957a-77f228cd55ed" (UID: "109e606b-e77f-4512-957a-77f228cd55ed"). InnerVolumeSpecName "kube-api-access-82jrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:42:59.199889 master-0 kubenswrapper[4652]: I0216 17:42:59.198032 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82jrc\" (UniqueName: \"kubernetes.io/projected/109e606b-e77f-4512-957a-77f228cd55ed-kube-api-access-82jrc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:59.199889 master-0 kubenswrapper[4652]: I0216 17:42:59.198074 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/109e606b-e77f-4512-957a-77f228cd55ed-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:42:59.207486 master-0 kubenswrapper[4652]: I0216 17:42:59.204652 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-create-m4w4d" event={"ID":"109e606b-e77f-4512-957a-77f228cd55ed","Type":"ContainerDied","Data":"a20d7230e12bcd94893c1f24294186910a15dda72b7b32141bdb2aedb9a735a7"} Feb 16 17:42:59.207486 master-0 kubenswrapper[4652]: I0216 17:42:59.204698 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a20d7230e12bcd94893c1f24294186910a15dda72b7b32141bdb2aedb9a735a7" Feb 16 17:42:59.207486 master-0 kubenswrapper[4652]: I0216 17:42:59.204761 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-create-m4w4d" Feb 16 17:42:59.318975 master-0 kubenswrapper[4652]: I0216 17:42:59.312698 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dmpf\" (UniqueName: \"kubernetes.io/projected/5496a92a-9932-4b28-8c4e-69b754218e51-kube-api-access-2dmpf\") pod \"openstackclient\" (UID: \"5496a92a-9932-4b28-8c4e-69b754218e51\") " pod="openstack/openstackclient" Feb 16 17:42:59.494777 master-0 kubenswrapper[4652]: I0216 17:42:59.480787 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 16 17:42:59.631028 master-0 kubenswrapper[4652]: I0216 17:42:59.630878 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-scheduler-0"] Feb 16 17:43:00.024776 master-0 kubenswrapper[4652]: I0216 17:43:00.008807 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-conductor-0"] Feb 16 17:43:00.249501 master-0 kubenswrapper[4652]: I0216 17:43:00.249353 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerStarted","Data":"ae788e942f9fc2e63d9b76ba77a56a89ad27f7f45db2f71cc32ab895547c7b26"} Feb 16 17:43:00.272650 master-0 kubenswrapper[4652]: I0216 17:43:00.272529 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-547dcb69f9-nqbv9" podUID="f1048855-86d3-4f6a-b538-a53b51711bce" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.193:5353: i/o timeout" Feb 16 17:43:00.272650 master-0 kubenswrapper[4652]: I0216 17:43:00.272629 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" event={"ID":"c5ba513b-d2ca-4d7c-b419-6e8009ebe299","Type":"ContainerStarted","Data":"3416fb0f1a98d65df44caf88dc1fb32c261ffc696cd57f1f0479c105704a04af"} Feb 16 17:43:00.279312 master-0 kubenswrapper[4652]: I0216 17:43:00.272671 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" event={"ID":"c5ba513b-d2ca-4d7c-b419-6e8009ebe299","Type":"ContainerStarted","Data":"8c318a5167698b42da606f94c6b83bdda3df51ac7a3e7541294f873166879efa"} Feb 16 17:43:00.279312 master-0 kubenswrapper[4652]: I0216 17:43:00.274872 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:43:00.298789 master-0 kubenswrapper[4652]: I0216 17:43:00.298647 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" event={"ID":"205faff9-2936-475b-9fce-f11ab722187e","Type":"ContainerDied","Data":"98d20acfbea441230135471cef24d2cf59ec707c4a6f9f5f001000f1302121bd"} Feb 16 17:43:00.298789 master-0 kubenswrapper[4652]: I0216 17:43:00.298724 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98d20acfbea441230135471cef24d2cf59ec707c4a6f9f5f001000f1302121bd" Feb 16 17:43:00.306695 master-0 kubenswrapper[4652]: I0216 17:43:00.306581 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" podStartSLOduration=5.306562563 podStartE2EDuration="5.306562563s" podCreationTimestamp="2026-02-16 17:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:00.302136344 +0000 UTC m=+1137.690304860" watchObservedRunningTime="2026-02-16 17:43:00.306562563 +0000 UTC m=+1137.694731079" Feb 16 17:43:00.325990 master-0 kubenswrapper[4652]: I0216 17:43:00.325922 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" event={"ID":"8f3751fd-c328-4914-8e15-a14ad13a527d","Type":"ContainerStarted","Data":"e07cfb5ea981209d4c967c50097928a505874d15bdd82edbb42a84d6c59ed438"} Feb 16 17:43:00.327081 master-0 kubenswrapper[4652]: I0216 17:43:00.327040 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:43:00.332326 master-0 kubenswrapper[4652]: I0216 17:43:00.332235 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj7rx\" (UniqueName: \"kubernetes.io/projected/205faff9-2936-475b-9fce-f11ab722187e-kube-api-access-sj7rx\") pod \"205faff9-2936-475b-9fce-f11ab722187e\" (UID: \"205faff9-2936-475b-9fce-f11ab722187e\") " Feb 16 17:43:00.333545 master-0 kubenswrapper[4652]: I0216 17:43:00.332513 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205faff9-2936-475b-9fce-f11ab722187e-operator-scripts\") pod \"205faff9-2936-475b-9fce-f11ab722187e\" (UID: \"205faff9-2936-475b-9fce-f11ab722187e\") " Feb 16 17:43:00.336590 master-0 kubenswrapper[4652]: I0216 17:43:00.336549 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/205faff9-2936-475b-9fce-f11ab722187e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "205faff9-2936-475b-9fce-f11ab722187e" (UID: "205faff9-2936-475b-9fce-f11ab722187e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:00.349270 master-0 kubenswrapper[4652]: I0216 17:43:00.344438 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205faff9-2936-475b-9fce-f11ab722187e-kube-api-access-sj7rx" (OuterVolumeSpecName: "kube-api-access-sj7rx") pod "205faff9-2936-475b-9fce-f11ab722187e" (UID: "205faff9-2936-475b-9fce-f11ab722187e"). InnerVolumeSpecName "kube-api-access-sj7rx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:00.349270 master-0 kubenswrapper[4652]: I0216 17:43:00.344618 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-scheduler-0" event={"ID":"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363","Type":"ContainerStarted","Data":"d672704656f41b8de6d9071291d88233201d8515432decd802e4dc8cd339694d"} Feb 16 17:43:00.367283 master-0 kubenswrapper[4652]: I0216 17:43:00.362344 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" event={"ID":"c8c13c9e-779c-40f4-b478-b6c3d5baf083","Type":"ContainerStarted","Data":"534daa907ffb97f56d8446b89ef6be36940f08e6a97d8e1503a7bc8b824b6aeb"} Feb 16 17:43:00.367283 master-0 kubenswrapper[4652]: I0216 17:43:00.363353 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:43:00.447266 master-0 kubenswrapper[4652]: I0216 17:43:00.442138 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj7rx\" (UniqueName: \"kubernetes.io/projected/205faff9-2936-475b-9fce-f11ab722187e-kube-api-access-sj7rx\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:00.447266 master-0 kubenswrapper[4652]: I0216 17:43:00.442171 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/205faff9-2936-475b-9fce-f11ab722187e-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:00.515431 master-0 kubenswrapper[4652]: I0216 17:43:00.501195 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" podStartSLOduration=2.650946271 podStartE2EDuration="6.50117727s" podCreationTimestamp="2026-02-16 17:42:54 +0000 UTC" firstStartedPulling="2026-02-16 17:42:55.687432933 +0000 UTC m=+1133.075601449" lastFinishedPulling="2026-02-16 17:42:59.537663932 +0000 UTC m=+1136.925832448" observedRunningTime="2026-02-16 17:43:00.376721613 +0000 UTC m=+1137.764890119" watchObservedRunningTime="2026-02-16 17:43:00.50117727 +0000 UTC m=+1137.889345786" Feb 16 17:43:00.515431 master-0 kubenswrapper[4652]: I0216 17:43:00.515204 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-5665b8875d-tx66w"] Feb 16 17:43:00.552082 master-0 kubenswrapper[4652]: I0216 17:43:00.551991 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" podStartSLOduration=6.551973451 podStartE2EDuration="6.551973451s" podCreationTimestamp="2026-02-16 17:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:00.404842977 +0000 UTC m=+1137.793011493" watchObservedRunningTime="2026-02-16 17:43:00.551973451 +0000 UTC m=+1137.940141967" Feb 16 17:43:00.562989 master-0 kubenswrapper[4652]: W0216 17:43:00.562942 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod580471cb_c8d4_40ec_86ed_7062a15b7d24.slice/crio-56509900dc75ddcebfa04df02c1555daad5acaec54b23ff06a40851c13baa5c1 WatchSource:0}: Error finding container 56509900dc75ddcebfa04df02c1555daad5acaec54b23ff06a40851c13baa5c1: Status 404 returned error can't find the container with id 56509900dc75ddcebfa04df02c1555daad5acaec54b23ff06a40851c13baa5c1 Feb 16 17:43:00.598646 master-0 kubenswrapper[4652]: I0216 17:43:00.591351 4652 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/openstackclient"] Feb 16 17:43:00.602388 master-0 kubenswrapper[4652]: I0216 17:43:00.601800 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c34a6-backup-0"] Feb 16 17:43:01.278592 master-0 kubenswrapper[4652]: I0216 17:43:01.278516 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:43:01.392484 master-0 kubenswrapper[4652]: I0216 17:43:01.392429 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5496a92a-9932-4b28-8c4e-69b754218e51","Type":"ContainerStarted","Data":"582c5cdfce280ed7722db769c27f35ece429619957f1c29077caf2b1df313602"} Feb 16 17:43:01.400694 master-0 kubenswrapper[4652]: I0216 17:43:01.400407 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-backup-0" event={"ID":"580471cb-c8d4-40ec-86ed-7062a15b7d24","Type":"ContainerStarted","Data":"6664fe34e8560e19c616ea9c59007f728d7de8dd3510c163d9c70e175067897a"} Feb 16 17:43:01.400694 master-0 kubenswrapper[4652]: I0216 17:43:01.400450 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-backup-0" event={"ID":"580471cb-c8d4-40ec-86ed-7062a15b7d24","Type":"ContainerStarted","Data":"56509900dc75ddcebfa04df02c1555daad5acaec54b23ff06a40851c13baa5c1"} Feb 16 17:43:01.402515 master-0 kubenswrapper[4652]: I0216 17:43:01.402459 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5665b8875d-tx66w" event={"ID":"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7","Type":"ContainerStarted","Data":"d4ecebcdc0065a4338ba65744b93ecb38ddb5fa6c0ed30c8695a86b11d0c1280"} Feb 16 17:43:01.413394 master-0 kubenswrapper[4652]: I0216 17:43:01.412911 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-scheduler-0" event={"ID":"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363","Type":"ContainerStarted","Data":"0418e1e8de78027e9b3a863735e447429f014aa5b19d0aba2cf13f446545dc4b"} Feb 16 17:43:01.415402 master-0 kubenswrapper[4652]: I0216 17:43:01.415359 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-e5ec-account-create-update-nr7fv" Feb 16 17:43:01.417943 master-0 kubenswrapper[4652]: I0216 17:43:01.417886 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerStarted","Data":"59f0265919847c12a53b50ff6fa3076ce49c01f010a9420066373075cbe180ea"} Feb 16 17:43:02.458358 master-0 kubenswrapper[4652]: I0216 17:43:02.454090 4652 generic.go:334] "Generic (PLEG): container finished" podID="06840359-14e1-46d7-b74a-a3acd120905b" containerID="a548065352cd4d8365baa3bfdf711167608f8b840f55133535592bb1ed4fb564" exitCode=0 Feb 16 17:43:02.458358 master-0 kubenswrapper[4652]: I0216 17:43:02.455150 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5559c64944-9qfgd" event={"ID":"06840359-14e1-46d7-b74a-a3acd120905b","Type":"ContainerDied","Data":"a548065352cd4d8365baa3bfdf711167608f8b840f55133535592bb1ed4fb564"} Feb 16 17:43:03.241309 master-0 kubenswrapper[4652]: I0216 17:43:03.241220 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:43:03.367223 master-0 kubenswrapper[4652]: I0216 17:43:03.367161 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-internal-tls-certs\") pod \"06840359-14e1-46d7-b74a-a3acd120905b\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " Feb 16 17:43:03.367435 master-0 kubenswrapper[4652]: I0216 17:43:03.367291 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-config-data\") pod \"06840359-14e1-46d7-b74a-a3acd120905b\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " Feb 16 17:43:03.367435 master-0 kubenswrapper[4652]: I0216 17:43:03.367379 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-scripts\") pod \"06840359-14e1-46d7-b74a-a3acd120905b\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " Feb 16 17:43:03.367544 master-0 kubenswrapper[4652]: I0216 17:43:03.367448 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-public-tls-certs\") pod \"06840359-14e1-46d7-b74a-a3acd120905b\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " Feb 16 17:43:03.367544 master-0 kubenswrapper[4652]: I0216 17:43:03.367513 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-combined-ca-bundle\") pod \"06840359-14e1-46d7-b74a-a3acd120905b\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " Feb 16 17:43:03.368106 master-0 kubenswrapper[4652]: I0216 17:43:03.367551 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbdb4\" (UniqueName: \"kubernetes.io/projected/06840359-14e1-46d7-b74a-a3acd120905b-kube-api-access-wbdb4\") pod \"06840359-14e1-46d7-b74a-a3acd120905b\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " Feb 16 17:43:03.368106 master-0 kubenswrapper[4652]: I0216 17:43:03.367693 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06840359-14e1-46d7-b74a-a3acd120905b-logs\") pod \"06840359-14e1-46d7-b74a-a3acd120905b\" (UID: \"06840359-14e1-46d7-b74a-a3acd120905b\") " Feb 16 17:43:03.368971 master-0 kubenswrapper[4652]: I0216 17:43:03.368929 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06840359-14e1-46d7-b74a-a3acd120905b-logs" (OuterVolumeSpecName: "logs") pod "06840359-14e1-46d7-b74a-a3acd120905b" (UID: "06840359-14e1-46d7-b74a-a3acd120905b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:03.394047 master-0 kubenswrapper[4652]: I0216 17:43:03.393954 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-scripts" (OuterVolumeSpecName: "scripts") pod "06840359-14e1-46d7-b74a-a3acd120905b" (UID: "06840359-14e1-46d7-b74a-a3acd120905b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:03.408611 master-0 kubenswrapper[4652]: I0216 17:43:03.408532 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06840359-14e1-46d7-b74a-a3acd120905b-kube-api-access-wbdb4" (OuterVolumeSpecName: "kube-api-access-wbdb4") pod "06840359-14e1-46d7-b74a-a3acd120905b" (UID: "06840359-14e1-46d7-b74a-a3acd120905b"). InnerVolumeSpecName "kube-api-access-wbdb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:03.450002 master-0 kubenswrapper[4652]: I0216 17:43:03.449945 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06840359-14e1-46d7-b74a-a3acd120905b" (UID: "06840359-14e1-46d7-b74a-a3acd120905b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:03.474043 master-0 kubenswrapper[4652]: I0216 17:43:03.473974 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-config-data" (OuterVolumeSpecName: "config-data") pod "06840359-14e1-46d7-b74a-a3acd120905b" (UID: "06840359-14e1-46d7-b74a-a3acd120905b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:03.474904 master-0 kubenswrapper[4652]: I0216 17:43:03.474853 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:03.474904 master-0 kubenswrapper[4652]: I0216 17:43:03.474886 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:03.475026 master-0 kubenswrapper[4652]: I0216 17:43:03.474909 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbdb4\" (UniqueName: \"kubernetes.io/projected/06840359-14e1-46d7-b74a-a3acd120905b-kube-api-access-wbdb4\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:03.475026 master-0 kubenswrapper[4652]: I0216 17:43:03.474923 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06840359-14e1-46d7-b74a-a3acd120905b-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:03.475026 master-0 kubenswrapper[4652]: I0216 17:43:03.474935 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:03.483262 master-0 kubenswrapper[4652]: I0216 17:43:03.483206 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerStarted","Data":"23d8a1448967c909af010a38174389e0c067d48578434e4854019f974b867cd4"} Feb 16 17:43:03.489981 master-0 kubenswrapper[4652]: I0216 17:43:03.489924 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-backup-0" event={"ID":"580471cb-c8d4-40ec-86ed-7062a15b7d24","Type":"ContainerStarted","Data":"a83813f0e27392c248061f8941359ed4f6dd96ea5f0e7c20ef26eb2b4dea5fff"} Feb 16 17:43:03.500823 master-0 kubenswrapper[4652]: I0216 17:43:03.500770 4652 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5665b8875d-tx66w" event={"ID":"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7","Type":"ContainerStarted","Data":"a95be001c7076407d3e9f1426e768cdf93dd116bd6694f8ff7a89e6a6644072a"} Feb 16 17:43:03.527876 master-0 kubenswrapper[4652]: I0216 17:43:03.526667 4652 generic.go:334] "Generic (PLEG): container finished" podID="8f3751fd-c328-4914-8e15-a14ad13a527d" containerID="e07cfb5ea981209d4c967c50097928a505874d15bdd82edbb42a84d6c59ed438" exitCode=1 Feb 16 17:43:03.527876 master-0 kubenswrapper[4652]: I0216 17:43:03.526821 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" event={"ID":"8f3751fd-c328-4914-8e15-a14ad13a527d","Type":"ContainerDied","Data":"e07cfb5ea981209d4c967c50097928a505874d15bdd82edbb42a84d6c59ed438"} Feb 16 17:43:03.527876 master-0 kubenswrapper[4652]: I0216 17:43:03.527773 4652 scope.go:117] "RemoveContainer" containerID="e07cfb5ea981209d4c967c50097928a505874d15bdd82edbb42a84d6c59ed438" Feb 16 17:43:03.532708 master-0 kubenswrapper[4652]: I0216 17:43:03.532563 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5559c64944-9qfgd" event={"ID":"06840359-14e1-46d7-b74a-a3acd120905b","Type":"ContainerDied","Data":"f89656633e0498ec2792dfe6087952c09d2db54e9d1069b5c453e1871a20a408"} Feb 16 17:43:03.532708 master-0 kubenswrapper[4652]: I0216 17:43:03.532610 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5559c64944-9qfgd" Feb 16 17:43:03.532901 master-0 kubenswrapper[4652]: I0216 17:43:03.532619 4652 scope.go:117] "RemoveContainer" containerID="a548065352cd4d8365baa3bfdf711167608f8b840f55133535592bb1ed4fb564" Feb 16 17:43:03.573537 master-0 kubenswrapper[4652]: I0216 17:43:03.573474 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "06840359-14e1-46d7-b74a-a3acd120905b" (UID: "06840359-14e1-46d7-b74a-a3acd120905b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:03.580209 master-0 kubenswrapper[4652]: I0216 17:43:03.579182 4652 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:03.635995 master-0 kubenswrapper[4652]: I0216 17:43:03.635936 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "06840359-14e1-46d7-b74a-a3acd120905b" (UID: "06840359-14e1-46d7-b74a-a3acd120905b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:03.637188 master-0 kubenswrapper[4652]: I0216 17:43:03.637145 4652 scope.go:117] "RemoveContainer" containerID="a953984f730749c4669fd2456a04fc762aaa2b368617c7a5ec1858a57e604a8b" Feb 16 17:43:03.656624 master-0 kubenswrapper[4652]: I0216 17:43:03.655862 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c34a6-backup-0" podStartSLOduration=5.655838872 podStartE2EDuration="5.655838872s" podCreationTimestamp="2026-02-16 17:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:03.586516484 +0000 UTC m=+1140.974685010" watchObservedRunningTime="2026-02-16 17:43:03.655838872 +0000 UTC m=+1141.044007398" Feb 16 17:43:03.675458 master-0 kubenswrapper[4652]: I0216 17:43:03.675426 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:43:03.686543 master-0 kubenswrapper[4652]: I0216 17:43:03.686510 4652 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/06840359-14e1-46d7-b74a-a3acd120905b-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:03.929120 master-0 kubenswrapper[4652]: I0216 17:43:03.929015 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-5559c64944-9qfgd"] Feb 16 17:43:03.943393 master-0 kubenswrapper[4652]: I0216 17:43:03.943338 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-5559c64944-9qfgd"] Feb 16 17:43:04.106573 master-0 kubenswrapper[4652]: E0216 17:43:04.106523 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d01cc93_4bd4_4091_92a8_1c9a7e035c3e.slice/crio-conmon-59f0265919847c12a53b50ff6fa3076ce49c01f010a9420066373075cbe180ea.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d01cc93_4bd4_4091_92a8_1c9a7e035c3e.slice/crio-59f0265919847c12a53b50ff6fa3076ce49c01f010a9420066373075cbe180ea.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06840359_14e1_46d7_b74a_a3acd120905b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06840359_14e1_46d7_b74a_a3acd120905b.slice/crio-f89656633e0498ec2792dfe6087952c09d2db54e9d1069b5c453e1871a20a408\": RecentStats: unable to find data in memory cache]" Feb 16 17:43:04.544376 master-0 kubenswrapper[4652]: I0216 17:43:04.544321 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c34a6-scheduler-0" event={"ID":"0f6e10ee-00f9-4c6e-b67e-3f631e8c7363","Type":"ContainerStarted","Data":"fabb52452c8a74fb1a05e10eddd1ce92e65397324bf7da199c5f1ab190329dd8"} Feb 16 17:43:04.548380 master-0 kubenswrapper[4652]: I0216 17:43:04.548323 4652 generic.go:334] "Generic (PLEG): container finished" podID="7d01cc93-4bd4-4091-92a8-1c9a7e035c3e" containerID="59f0265919847c12a53b50ff6fa3076ce49c01f010a9420066373075cbe180ea" exitCode=0 Feb 16 17:43:04.548496 master-0 kubenswrapper[4652]: I0216 17:43:04.548421 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" 
event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerDied","Data":"59f0265919847c12a53b50ff6fa3076ce49c01f010a9420066373075cbe180ea"} Feb 16 17:43:04.550370 master-0 kubenswrapper[4652]: I0216 17:43:04.550336 4652 generic.go:334] "Generic (PLEG): container finished" podID="3c06307c-938d-45f7-b671-948d93bf0642" containerID="23d8a1448967c909af010a38174389e0c067d48578434e4854019f974b867cd4" exitCode=1 Feb 16 17:43:04.550458 master-0 kubenswrapper[4652]: I0216 17:43:04.550423 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerDied","Data":"23d8a1448967c909af010a38174389e0c067d48578434e4854019f974b867cd4"} Feb 16 17:43:04.552723 master-0 kubenswrapper[4652]: I0216 17:43:04.552687 4652 generic.go:334] "Generic (PLEG): container finished" podID="f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7" containerID="a95be001c7076407d3e9f1426e768cdf93dd116bd6694f8ff7a89e6a6644072a" exitCode=0 Feb 16 17:43:04.552898 master-0 kubenswrapper[4652]: I0216 17:43:04.552783 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5665b8875d-tx66w" event={"ID":"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7","Type":"ContainerDied","Data":"a95be001c7076407d3e9f1426e768cdf93dd116bd6694f8ff7a89e6a6644072a"} Feb 16 17:43:04.556340 master-0 kubenswrapper[4652]: I0216 17:43:04.556190 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" event={"ID":"8f3751fd-c328-4914-8e15-a14ad13a527d","Type":"ContainerStarted","Data":"70b3db1f1537b2818404dc8535e036babbc2fcb4913f744900b4c25bf59d46e3"} Feb 16 17:43:04.557384 master-0 kubenswrapper[4652]: I0216 17:43:04.557305 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:43:04.578300 master-0 kubenswrapper[4652]: I0216 17:43:04.578079 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-c34a6-scheduler-0" podStartSLOduration=7.578058823 podStartE2EDuration="7.578058823s" podCreationTimestamp="2026-02-16 17:42:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:04.566470922 +0000 UTC m=+1141.954639438" watchObservedRunningTime="2026-02-16 17:43:04.578058823 +0000 UTC m=+1141.966227339" Feb 16 17:43:04.761843 master-0 kubenswrapper[4652]: I0216 17:43:04.761785 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06840359-14e1-46d7-b74a-a3acd120905b" path="/var/lib/kubelet/pods/06840359-14e1-46d7-b74a-a3acd120905b/volumes" Feb 16 17:43:05.149238 master-0 kubenswrapper[4652]: I0216 17:43:05.149211 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-d5dfcf8b4-6nncv"] Feb 16 17:43:05.149686 master-0 kubenswrapper[4652]: E0216 17:43:05.149669 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109e606b-e77f-4512-957a-77f228cd55ed" containerName="mariadb-database-create" Feb 16 17:43:05.149686 master-0 kubenswrapper[4652]: I0216 17:43:05.149685 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="109e606b-e77f-4512-957a-77f228cd55ed" containerName="mariadb-database-create" Feb 16 17:43:05.150752 master-0 kubenswrapper[4652]: E0216 17:43:05.149714 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-log" Feb 16 17:43:05.150752 master-0 
kubenswrapper[4652]: I0216 17:43:05.149720 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-log" Feb 16 17:43:05.150752 master-0 kubenswrapper[4652]: E0216 17:43:05.149737 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-api" Feb 16 17:43:05.150752 master-0 kubenswrapper[4652]: I0216 17:43:05.149742 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-api" Feb 16 17:43:05.150752 master-0 kubenswrapper[4652]: E0216 17:43:05.149753 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205faff9-2936-475b-9fce-f11ab722187e" containerName="mariadb-account-create-update" Feb 16 17:43:05.150752 master-0 kubenswrapper[4652]: I0216 17:43:05.149759 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="205faff9-2936-475b-9fce-f11ab722187e" containerName="mariadb-account-create-update" Feb 16 17:43:05.150752 master-0 kubenswrapper[4652]: I0216 17:43:05.149959 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="109e606b-e77f-4512-957a-77f228cd55ed" containerName="mariadb-database-create" Feb 16 17:43:05.150752 master-0 kubenswrapper[4652]: I0216 17:43:05.149976 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-api" Feb 16 17:43:05.150752 master-0 kubenswrapper[4652]: I0216 17:43:05.149986 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="06840359-14e1-46d7-b74a-a3acd120905b" containerName="placement-log" Feb 16 17:43:05.150752 master-0 kubenswrapper[4652]: I0216 17:43:05.150010 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="205faff9-2936-475b-9fce-f11ab722187e" containerName="mariadb-account-create-update" Feb 16 17:43:05.151117 master-0 kubenswrapper[4652]: I0216 17:43:05.151096 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.158966 master-0 kubenswrapper[4652]: I0216 17:43:05.158926 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 16 17:43:05.159173 master-0 kubenswrapper[4652]: I0216 17:43:05.159104 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 17:43:05.159300 master-0 kubenswrapper[4652]: I0216 17:43:05.159280 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 16 17:43:05.204886 master-0 kubenswrapper[4652]: I0216 17:43:05.204844 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d5dfcf8b4-6nncv"] Feb 16 17:43:05.223565 master-0 kubenswrapper[4652]: I0216 17:43:05.223518 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-public-tls-certs\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.224123 master-0 kubenswrapper[4652]: I0216 17:43:05.224100 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b1704ff-126d-475b-8511-17823ceee6b2-log-httpd\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.224224 master-0 kubenswrapper[4652]: I0216 17:43:05.224210 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-config-data\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.224425 master-0 kubenswrapper[4652]: I0216 17:43:05.224407 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-combined-ca-bundle\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.224538 master-0 kubenswrapper[4652]: I0216 17:43:05.224519 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b1704ff-126d-475b-8511-17823ceee6b2-etc-swift\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.224826 master-0 kubenswrapper[4652]: I0216 17:43:05.224805 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b1704ff-126d-475b-8511-17823ceee6b2-run-httpd\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.224972 master-0 kubenswrapper[4652]: I0216 17:43:05.224954 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-internal-tls-certs\") pod 
\"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.225197 master-0 kubenswrapper[4652]: I0216 17:43:05.225178 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mn8q\" (UniqueName: \"kubernetes.io/projected/7b1704ff-126d-475b-8511-17823ceee6b2-kube-api-access-5mn8q\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.327595 master-0 kubenswrapper[4652]: I0216 17:43:05.327478 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b1704ff-126d-475b-8511-17823ceee6b2-log-httpd\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.327821 master-0 kubenswrapper[4652]: I0216 17:43:05.327807 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-config-data\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.327919 master-0 kubenswrapper[4652]: I0216 17:43:05.327906 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-combined-ca-bundle\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.328013 master-0 kubenswrapper[4652]: I0216 17:43:05.328001 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b1704ff-126d-475b-8511-17823ceee6b2-etc-swift\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.328190 master-0 kubenswrapper[4652]: I0216 17:43:05.328177 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b1704ff-126d-475b-8511-17823ceee6b2-run-httpd\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.328301 master-0 kubenswrapper[4652]: I0216 17:43:05.328289 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-internal-tls-certs\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.328467 master-0 kubenswrapper[4652]: I0216 17:43:05.328450 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mn8q\" (UniqueName: \"kubernetes.io/projected/7b1704ff-126d-475b-8511-17823ceee6b2-kube-api-access-5mn8q\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.328644 master-0 kubenswrapper[4652]: I0216 17:43:05.328629 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-public-tls-certs\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.329624 master-0 kubenswrapper[4652]: I0216 17:43:05.328025 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b1704ff-126d-475b-8511-17823ceee6b2-log-httpd\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.330631 master-0 kubenswrapper[4652]: I0216 17:43:05.330598 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7b1704ff-126d-475b-8511-17823ceee6b2-run-httpd\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.336201 master-0 kubenswrapper[4652]: I0216 17:43:05.335934 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-public-tls-certs\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.336201 master-0 kubenswrapper[4652]: I0216 17:43:05.336148 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-internal-tls-certs\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.336695 master-0 kubenswrapper[4652]: I0216 17:43:05.336646 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-config-data\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.338236 master-0 kubenswrapper[4652]: I0216 17:43:05.338037 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7b1704ff-126d-475b-8511-17823ceee6b2-etc-swift\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.343981 master-0 kubenswrapper[4652]: I0216 17:43:05.343944 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b1704ff-126d-475b-8511-17823ceee6b2-combined-ca-bundle\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.362092 master-0 kubenswrapper[4652]: I0216 17:43:05.361999 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mn8q\" (UniqueName: \"kubernetes.io/projected/7b1704ff-126d-475b-8511-17823ceee6b2-kube-api-access-5mn8q\") pod \"swift-proxy-d5dfcf8b4-6nncv\" (UID: \"7b1704ff-126d-475b-8511-17823ceee6b2\") " pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.541732 master-0 kubenswrapper[4652]: I0216 17:43:05.535411 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:05.573759 master-0 kubenswrapper[4652]: I0216 17:43:05.573463 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5665b8875d-tx66w" event={"ID":"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7","Type":"ContainerStarted","Data":"e9d8f9d14dfe4660c6aa28c751696e9d7b744ca7b3f3371537a52cf64bfb949a"} Feb 16 17:43:05.573759 master-0 kubenswrapper[4652]: I0216 17:43:05.573519 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-5665b8875d-tx66w" event={"ID":"f0ecfbb5-6cca-4b6a-83a0-4d2be80e0be7","Type":"ContainerStarted","Data":"374b784190195f50c68ced1b9a3d2bc8dc9f7e4fead16832a182f2ca931d1b39"} Feb 16 17:43:05.574928 master-0 kubenswrapper[4652]: I0216 17:43:05.574903 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:43:05.591411 master-0 kubenswrapper[4652]: I0216 17:43:05.591293 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerStarted","Data":"5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0"} Feb 16 17:43:05.631468 master-0 kubenswrapper[4652]: I0216 17:43:05.631377 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-5665b8875d-tx66w" podStartSLOduration=5.197652061 podStartE2EDuration="7.631351548s" podCreationTimestamp="2026-02-16 17:42:58 +0000 UTC" firstStartedPulling="2026-02-16 17:43:00.434535883 +0000 UTC m=+1137.822704389" lastFinishedPulling="2026-02-16 17:43:02.86823536 +0000 UTC m=+1140.256403876" observedRunningTime="2026-02-16 17:43:05.612497132 +0000 UTC m=+1143.000665648" watchObservedRunningTime="2026-02-16 17:43:05.631351548 +0000 UTC m=+1143.019520084" Feb 16 17:43:05.868821 master-0 kubenswrapper[4652]: I0216 17:43:05.868530 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:43:05.952107 master-0 kubenswrapper[4652]: I0216 17:43:05.950584 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-997495b47-lhjkc"] Feb 16 17:43:05.953424 master-0 kubenswrapper[4652]: I0216 17:43:05.950981 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-997495b47-lhjkc" podUID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" containerName="dnsmasq-dns" containerID="cri-o://386676b15cb32a929b68b61ebacc8a6208451a2c271e0704bda2fd3ee92dcaa5" gracePeriod=10 Feb 16 17:43:06.190089 master-0 kubenswrapper[4652]: W0216 17:43:06.190013 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b1704ff_126d_475b_8511_17823ceee6b2.slice/crio-57bc4e3b4e6a7d049be5ed903da3b5f67d593790112780d31e0d66c9763d7d4f WatchSource:0}: Error finding container 57bc4e3b4e6a7d049be5ed903da3b5f67d593790112780d31e0d66c9763d7d4f: Status 404 returned error can't find the container with id 57bc4e3b4e6a7d049be5ed903da3b5f67d593790112780d31e0d66c9763d7d4f Feb 16 17:43:06.194821 master-0 kubenswrapper[4652]: I0216 17:43:06.194762 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-997495b47-lhjkc" podUID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.177:5353: connect: connection refused" Feb 16 17:43:06.202997 master-0 kubenswrapper[4652]: I0216 17:43:06.202940 
4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d5dfcf8b4-6nncv"] Feb 16 17:43:06.479479 master-0 kubenswrapper[4652]: I0216 17:43:06.479418 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" Feb 16 17:43:06.625216 master-0 kubenswrapper[4652]: I0216 17:43:06.625086 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" event={"ID":"7b1704ff-126d-475b-8511-17823ceee6b2","Type":"ContainerStarted","Data":"57bc4e3b4e6a7d049be5ed903da3b5f67d593790112780d31e0d66c9763d7d4f"} Feb 16 17:43:06.625216 master-0 kubenswrapper[4652]: I0216 17:43:06.625202 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:43:06.630325 master-0 kubenswrapper[4652]: I0216 17:43:06.629501 4652 generic.go:334] "Generic (PLEG): container finished" podID="3c06307c-938d-45f7-b671-948d93bf0642" containerID="5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0" exitCode=0 Feb 16 17:43:06.630325 master-0 kubenswrapper[4652]: I0216 17:43:06.629549 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerDied","Data":"5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0"} Feb 16 17:43:06.630325 master-0 kubenswrapper[4652]: I0216 17:43:06.629575 4652 scope.go:117] "RemoveContainer" containerID="23d8a1448967c909af010a38174389e0c067d48578434e4854019f974b867cd4" Feb 16 17:43:06.630325 master-0 kubenswrapper[4652]: I0216 17:43:06.630270 4652 scope.go:117] "RemoveContainer" containerID="23d8a1448967c909af010a38174389e0c067d48578434e4854019f974b867cd4" Feb 16 17:43:06.636854 master-0 kubenswrapper[4652]: I0216 17:43:06.636816 4652 generic.go:334] "Generic (PLEG): container finished" podID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" containerID="386676b15cb32a929b68b61ebacc8a6208451a2c271e0704bda2fd3ee92dcaa5" exitCode=0 Feb 16 17:43:06.638275 master-0 kubenswrapper[4652]: I0216 17:43:06.637942 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-997495b47-lhjkc" Feb 16 17:43:06.638275 master-0 kubenswrapper[4652]: I0216 17:43:06.638136 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-997495b47-lhjkc" event={"ID":"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3","Type":"ContainerDied","Data":"386676b15cb32a929b68b61ebacc8a6208451a2c271e0704bda2fd3ee92dcaa5"} Feb 16 17:43:06.736581 master-0 kubenswrapper[4652]: I0216 17:43:06.736539 4652 scope.go:117] "RemoveContainer" containerID="386676b15cb32a929b68b61ebacc8a6208451a2c271e0704bda2fd3ee92dcaa5" Feb 16 17:43:06.736669 master-0 kubenswrapper[4652]: E0216 17:43:06.736548 4652 log.go:32] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to delete container k8s_init_ironic-79d877c778-jztbq_openstack_3c06307c-938d-45f7-b671-948d93bf0642_0 in pod sandbox c9616ceee950a2b7c407689ce98154b7a8c84f095b4ec4833e25d361a04f5db3: identifier is not a container" containerID="23d8a1448967c909af010a38174389e0c067d48578434e4854019f974b867cd4" Feb 16 17:43:06.736669 master-0 kubenswrapper[4652]: E0216 17:43:06.736649 4652 kuberuntime_container.go:896] "Unhandled Error" err="failed to remove pod init container \"init\": rpc error: code = Unknown desc = failed to delete container k8s_init_ironic-79d877c778-jztbq_openstack_3c06307c-938d-45f7-b671-948d93bf0642_0 in pod sandbox c9616ceee950a2b7c407689ce98154b7a8c84f095b4ec4833e25d361a04f5db3: identifier is not a container; Skipping pod \"ironic-79d877c778-jztbq_openstack(3c06307c-938d-45f7-b671-948d93bf0642)\"" logger="UnhandledError" Feb 16 17:43:06.779226 master-0 kubenswrapper[4652]: I0216 17:43:06.779092 4652 scope.go:117] "RemoveContainer" containerID="5b77eae8f52143140c1240f9b433b52826f7d532b0156b521b127a534abda182" Feb 16 17:43:06.781750 master-0 kubenswrapper[4652]: I0216 17:43:06.781720 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-svc\") pod \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " Feb 16 17:43:06.781885 master-0 kubenswrapper[4652]: I0216 17:43:06.781825 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txcdj\" (UniqueName: \"kubernetes.io/projected/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-kube-api-access-txcdj\") pod \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " Feb 16 17:43:06.781999 master-0 kubenswrapper[4652]: I0216 17:43:06.781980 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-swift-storage-0\") pod \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " Feb 16 17:43:06.782087 master-0 kubenswrapper[4652]: I0216 17:43:06.782054 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-sb\") pod \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " Feb 16 17:43:06.782146 master-0 kubenswrapper[4652]: I0216 17:43:06.782109 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-config\") pod 
\"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " Feb 16 17:43:06.782229 master-0 kubenswrapper[4652]: I0216 17:43:06.782206 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-nb\") pod \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\" (UID: \"cd9a76cf-18a4-46a0-86ee-2e95889b1eb3\") " Feb 16 17:43:06.788684 master-0 kubenswrapper[4652]: I0216 17:43:06.788636 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-kube-api-access-txcdj" (OuterVolumeSpecName: "kube-api-access-txcdj") pod "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" (UID: "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3"). InnerVolumeSpecName "kube-api-access-txcdj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:06.887555 master-0 kubenswrapper[4652]: I0216 17:43:06.886116 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txcdj\" (UniqueName: \"kubernetes.io/projected/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-kube-api-access-txcdj\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:06.938737 master-0 kubenswrapper[4652]: I0216 17:43:06.938696 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" (UID: "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:06.947530 master-0 kubenswrapper[4652]: I0216 17:43:06.944552 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-config" (OuterVolumeSpecName: "config") pod "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" (UID: "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:06.955103 master-0 kubenswrapper[4652]: I0216 17:43:06.955053 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" (UID: "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:06.962281 master-0 kubenswrapper[4652]: I0216 17:43:06.958970 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" (UID: "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:06.962281 master-0 kubenswrapper[4652]: I0216 17:43:06.960936 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" (UID: "cd9a76cf-18a4-46a0-86ee-2e95889b1eb3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:06.989097 master-0 kubenswrapper[4652]: I0216 17:43:06.988904 4652 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:06.989097 master-0 kubenswrapper[4652]: I0216 17:43:06.988955 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:06.989097 master-0 kubenswrapper[4652]: I0216 17:43:06.988971 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:06.989097 master-0 kubenswrapper[4652]: I0216 17:43:06.988982 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:06.989097 master-0 kubenswrapper[4652]: I0216 17:43:06.988993 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:07.299928 master-0 kubenswrapper[4652]: I0216 17:43:07.299877 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-997495b47-lhjkc"] Feb 16 17:43:07.319870 master-0 kubenswrapper[4652]: I0216 17:43:07.319810 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-997495b47-lhjkc"] Feb 16 17:43:07.649810 master-0 kubenswrapper[4652]: I0216 17:43:07.649740 4652 generic.go:334] "Generic (PLEG): container finished" podID="8f3751fd-c328-4914-8e15-a14ad13a527d" containerID="70b3db1f1537b2818404dc8535e036babbc2fcb4913f744900b4c25bf59d46e3" exitCode=1 Feb 16 17:43:07.650378 master-0 kubenswrapper[4652]: I0216 17:43:07.649810 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" event={"ID":"8f3751fd-c328-4914-8e15-a14ad13a527d","Type":"ContainerDied","Data":"70b3db1f1537b2818404dc8535e036babbc2fcb4913f744900b4c25bf59d46e3"} Feb 16 17:43:07.650378 master-0 kubenswrapper[4652]: I0216 17:43:07.649873 4652 scope.go:117] "RemoveContainer" containerID="e07cfb5ea981209d4c967c50097928a505874d15bdd82edbb42a84d6c59ed438" Feb 16 17:43:07.650912 master-0 kubenswrapper[4652]: I0216 17:43:07.650884 4652 scope.go:117] "RemoveContainer" containerID="70b3db1f1537b2818404dc8535e036babbc2fcb4913f744900b4c25bf59d46e3" Feb 16 17:43:07.652058 master-0 kubenswrapper[4652]: E0216 17:43:07.651317 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-6975fcc79b-5wclc_openstack(8f3751fd-c328-4914-8e15-a14ad13a527d)\"" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" podUID="8f3751fd-c328-4914-8e15-a14ad13a527d" Feb 16 17:43:07.654616 master-0 kubenswrapper[4652]: I0216 17:43:07.654567 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" 
event={"ID":"7b1704ff-126d-475b-8511-17823ceee6b2","Type":"ContainerStarted","Data":"c9b2a6427d64295b42486cdd830fee2d713047d9618a6581101d9000c20ecdbd"} Feb 16 17:43:07.654726 master-0 kubenswrapper[4652]: I0216 17:43:07.654633 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" event={"ID":"7b1704ff-126d-475b-8511-17823ceee6b2","Type":"ContainerStarted","Data":"52dd9a3146613b4d916aa757608b09126177d3acdbfcb3593b141e763d3f51f6"} Feb 16 17:43:07.654726 master-0 kubenswrapper[4652]: I0216 17:43:07.654684 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:07.657728 master-0 kubenswrapper[4652]: I0216 17:43:07.657621 4652 generic.go:334] "Generic (PLEG): container finished" podID="3c06307c-938d-45f7-b671-948d93bf0642" containerID="a7cce0571c9b2678e58a27a4b5c307ee2868e47c4cfcfd92b431bcea63937812" exitCode=1 Feb 16 17:43:07.657728 master-0 kubenswrapper[4652]: I0216 17:43:07.657677 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerDied","Data":"a7cce0571c9b2678e58a27a4b5c307ee2868e47c4cfcfd92b431bcea63937812"} Feb 16 17:43:07.657728 master-0 kubenswrapper[4652]: I0216 17:43:07.657711 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerStarted","Data":"cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19"} Feb 16 17:43:07.660106 master-0 kubenswrapper[4652]: I0216 17:43:07.658329 4652 scope.go:117] "RemoveContainer" containerID="a7cce0571c9b2678e58a27a4b5c307ee2868e47c4cfcfd92b431bcea63937812" Feb 16 17:43:07.741586 master-0 kubenswrapper[4652]: I0216 17:43:07.741450 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:43:07.791032 master-0 kubenswrapper[4652]: I0216 17:43:07.790940 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" podStartSLOduration=2.790913096 podStartE2EDuration="2.790913096s" podCreationTimestamp="2026-02-16 17:43:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:07.738638905 +0000 UTC m=+1145.126807421" watchObservedRunningTime="2026-02-16 17:43:07.790913096 +0000 UTC m=+1145.179081612" Feb 16 17:43:07.988910 master-0 kubenswrapper[4652]: I0216 17:43:07.988810 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-c34a6-scheduler-0" Feb 16 17:43:08.564445 master-0 kubenswrapper[4652]: I0216 17:43:08.564312 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9rpkr"] Feb 16 17:43:08.564981 master-0 kubenswrapper[4652]: E0216 17:43:08.564954 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" containerName="init" Feb 16 17:43:08.564981 master-0 kubenswrapper[4652]: I0216 17:43:08.564980 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" containerName="init" Feb 16 17:43:08.565067 master-0 kubenswrapper[4652]: E0216 17:43:08.565000 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" containerName="dnsmasq-dns" Feb 16 17:43:08.565067 master-0 
kubenswrapper[4652]: I0216 17:43:08.565009 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" containerName="dnsmasq-dns" Feb 16 17:43:08.565328 master-0 kubenswrapper[4652]: I0216 17:43:08.565313 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" containerName="dnsmasq-dns" Feb 16 17:43:08.566072 master-0 kubenswrapper[4652]: I0216 17:43:08.566047 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:08.597326 master-0 kubenswrapper[4652]: I0216 17:43:08.597276 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9rpkr"] Feb 16 17:43:08.673020 master-0 kubenswrapper[4652]: I0216 17:43:08.672604 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:08.700684 master-0 kubenswrapper[4652]: I0216 17:43:08.700623 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-gp6kb"] Feb 16 17:43:08.703489 master-0 kubenswrapper[4652]: I0216 17:43:08.703445 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:08.754913 master-0 kubenswrapper[4652]: I0216 17:43:08.754860 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv46w\" (UniqueName: \"kubernetes.io/projected/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-kube-api-access-gv46w\") pod \"nova-api-db-create-9rpkr\" (UID: \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\") " pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:08.756065 master-0 kubenswrapper[4652]: I0216 17:43:08.755881 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-operator-scripts\") pod \"nova-api-db-create-9rpkr\" (UID: \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\") " pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:08.838461 master-0 kubenswrapper[4652]: I0216 17:43:08.838345 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9a76cf-18a4-46a0-86ee-2e95889b1eb3" path="/var/lib/kubelet/pods/cd9a76cf-18a4-46a0-86ee-2e95889b1eb3/volumes" Feb 16 17:43:08.839317 master-0 kubenswrapper[4652]: I0216 17:43:08.839283 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-gp6kb"] Feb 16 17:43:08.858333 master-0 kubenswrapper[4652]: I0216 17:43:08.858276 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c4sz\" (UniqueName: \"kubernetes.io/projected/089b8594-a539-4435-9573-6d904bce3901-kube-api-access-5c4sz\") pod \"nova-cell0-db-create-gp6kb\" (UID: \"089b8594-a539-4435-9573-6d904bce3901\") " pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:08.858777 master-0 kubenswrapper[4652]: I0216 17:43:08.858751 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-operator-scripts\") pod \"nova-api-db-create-9rpkr\" (UID: \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\") " pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:08.859095 master-0 kubenswrapper[4652]: I0216 17:43:08.859064 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-gv46w\" (UniqueName: \"kubernetes.io/projected/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-kube-api-access-gv46w\") pod \"nova-api-db-create-9rpkr\" (UID: \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\") " pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:08.859164 master-0 kubenswrapper[4652]: I0216 17:43:08.859128 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/089b8594-a539-4435-9573-6d904bce3901-operator-scripts\") pod \"nova-cell0-db-create-gp6kb\" (UID: \"089b8594-a539-4435-9573-6d904bce3901\") " pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:08.859764 master-0 kubenswrapper[4652]: I0216 17:43:08.859734 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-operator-scripts\") pod \"nova-api-db-create-9rpkr\" (UID: \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\") " pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:08.859979 master-0 kubenswrapper[4652]: I0216 17:43:08.859960 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-nrzvp"] Feb 16 17:43:08.861714 master-0 kubenswrapper[4652]: I0216 17:43:08.861694 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:08.886690 master-0 kubenswrapper[4652]: I0216 17:43:08.885393 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c6c9-account-create-update-xdl2v"] Feb 16 17:43:08.886690 master-0 kubenswrapper[4652]: I0216 17:43:08.885883 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv46w\" (UniqueName: \"kubernetes.io/projected/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-kube-api-access-gv46w\") pod \"nova-api-db-create-9rpkr\" (UID: \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\") " pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:08.887465 master-0 kubenswrapper[4652]: I0216 17:43:08.886915 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:08.891206 master-0 kubenswrapper[4652]: I0216 17:43:08.891158 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 17:43:08.897446 master-0 kubenswrapper[4652]: I0216 17:43:08.897371 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nrzvp"] Feb 16 17:43:08.905939 master-0 kubenswrapper[4652]: I0216 17:43:08.905879 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:08.911781 master-0 kubenswrapper[4652]: I0216 17:43:08.911726 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c6c9-account-create-update-xdl2v"] Feb 16 17:43:08.962551 master-0 kubenswrapper[4652]: I0216 17:43:08.962505 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c4sz\" (UniqueName: \"kubernetes.io/projected/089b8594-a539-4435-9573-6d904bce3901-kube-api-access-5c4sz\") pod \"nova-cell0-db-create-gp6kb\" (UID: \"089b8594-a539-4435-9573-6d904bce3901\") " pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:08.962667 master-0 kubenswrapper[4652]: I0216 17:43:08.962642 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfc1e76-e972-465d-bed5-92eb603c32a6-operator-scripts\") pod \"nova-cell1-db-create-nrzvp\" (UID: \"cbfc1e76-e972-465d-bed5-92eb603c32a6\") " pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:08.962924 master-0 kubenswrapper[4652]: I0216 17:43:08.962897 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/089b8594-a539-4435-9573-6d904bce3901-operator-scripts\") pod \"nova-cell0-db-create-gp6kb\" (UID: \"089b8594-a539-4435-9573-6d904bce3901\") " pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:08.962970 master-0 kubenswrapper[4652]: I0216 17:43:08.962923 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjphd\" (UniqueName: \"kubernetes.io/projected/cbfc1e76-e972-465d-bed5-92eb603c32a6-kube-api-access-mjphd\") pod \"nova-cell1-db-create-nrzvp\" (UID: \"cbfc1e76-e972-465d-bed5-92eb603c32a6\") " pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:08.963682 master-0 kubenswrapper[4652]: I0216 17:43:08.963651 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/089b8594-a539-4435-9573-6d904bce3901-operator-scripts\") pod \"nova-cell0-db-create-gp6kb\" (UID: \"089b8594-a539-4435-9573-6d904bce3901\") " pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:08.983730 master-0 kubenswrapper[4652]: I0216 17:43:08.983648 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c4sz\" (UniqueName: \"kubernetes.io/projected/089b8594-a539-4435-9573-6d904bce3901-kube-api-access-5c4sz\") pod \"nova-cell0-db-create-gp6kb\" (UID: \"089b8594-a539-4435-9573-6d904bce3901\") " pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:08.992351 master-0 kubenswrapper[4652]: I0216 17:43:08.989100 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-c34a6-backup-0" Feb 16 17:43:08.992351 master-0 kubenswrapper[4652]: I0216 17:43:08.989216 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b802-account-create-update-mqckv"] Feb 16 17:43:08.992351 master-0 kubenswrapper[4652]: I0216 17:43:08.990764 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:08.992723 master-0 kubenswrapper[4652]: I0216 17:43:08.992688 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 17:43:09.012940 master-0 kubenswrapper[4652]: I0216 17:43:09.012429 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b802-account-create-update-mqckv"] Feb 16 17:43:09.065544 master-0 kubenswrapper[4652]: I0216 17:43:09.065499 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjphd\" (UniqueName: \"kubernetes.io/projected/cbfc1e76-e972-465d-bed5-92eb603c32a6-kube-api-access-mjphd\") pod \"nova-cell1-db-create-nrzvp\" (UID: \"cbfc1e76-e972-465d-bed5-92eb603c32a6\") " pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:09.065926 master-0 kubenswrapper[4652]: I0216 17:43:09.065894 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfc1e76-e972-465d-bed5-92eb603c32a6-operator-scripts\") pod \"nova-cell1-db-create-nrzvp\" (UID: \"cbfc1e76-e972-465d-bed5-92eb603c32a6\") " pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:09.066077 master-0 kubenswrapper[4652]: I0216 17:43:09.066052 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a6eddf-9a5a-450c-b4f9-45fc556526dc-operator-scripts\") pod \"nova-api-c6c9-account-create-update-xdl2v\" (UID: \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\") " pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:09.066460 master-0 kubenswrapper[4652]: I0216 17:43:09.066432 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:09.066644 master-0 kubenswrapper[4652]: I0216 17:43:09.066597 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6cjl\" (UniqueName: \"kubernetes.io/projected/10a6eddf-9a5a-450c-b4f9-45fc556526dc-kube-api-access-j6cjl\") pod \"nova-api-c6c9-account-create-update-xdl2v\" (UID: \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\") " pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:09.067720 master-0 kubenswrapper[4652]: I0216 17:43:09.067685 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfc1e76-e972-465d-bed5-92eb603c32a6-operator-scripts\") pod \"nova-cell1-db-create-nrzvp\" (UID: \"cbfc1e76-e972-465d-bed5-92eb603c32a6\") " pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:09.092388 master-0 kubenswrapper[4652]: I0216 17:43:09.092140 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjphd\" (UniqueName: \"kubernetes.io/projected/cbfc1e76-e972-465d-bed5-92eb603c32a6-kube-api-access-mjphd\") pod \"nova-cell1-db-create-nrzvp\" (UID: \"cbfc1e76-e972-465d-bed5-92eb603c32a6\") " pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:09.176785 master-0 kubenswrapper[4652]: I0216 17:43:09.174633 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d9f2-account-create-update-r7xjk"] Feb 16 17:43:09.176785 master-0 kubenswrapper[4652]: I0216 17:43:09.175183 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a6eddf-9a5a-450c-b4f9-45fc556526dc-operator-scripts\") pod \"nova-api-c6c9-account-create-update-xdl2v\" (UID: \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\") " pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:09.176785 master-0 kubenswrapper[4652]: I0216 17:43:09.175283 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6cjl\" (UniqueName: \"kubernetes.io/projected/10a6eddf-9a5a-450c-b4f9-45fc556526dc-kube-api-access-j6cjl\") pod \"nova-api-c6c9-account-create-update-xdl2v\" (UID: \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\") " pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:09.176785 master-0 kubenswrapper[4652]: I0216 17:43:09.175750 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56eeae89-abbe-4bff-b750-6ad05532d328-operator-scripts\") pod \"nova-cell0-b802-account-create-update-mqckv\" (UID: \"56eeae89-abbe-4bff-b750-6ad05532d328\") " pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:09.176785 master-0 kubenswrapper[4652]: I0216 17:43:09.175793 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsx7h\" (UniqueName: \"kubernetes.io/projected/56eeae89-abbe-4bff-b750-6ad05532d328-kube-api-access-qsx7h\") pod \"nova-cell0-b802-account-create-update-mqckv\" (UID: \"56eeae89-abbe-4bff-b750-6ad05532d328\") " pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:09.176785 master-0 kubenswrapper[4652]: I0216 17:43:09.176032 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:09.176785 master-0 kubenswrapper[4652]: I0216 17:43:09.176715 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a6eddf-9a5a-450c-b4f9-45fc556526dc-operator-scripts\") pod \"nova-api-c6c9-account-create-update-xdl2v\" (UID: \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\") " pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:09.182170 master-0 kubenswrapper[4652]: I0216 17:43:09.182126 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 17:43:09.227884 master-0 kubenswrapper[4652]: I0216 17:43:09.222172 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6cjl\" (UniqueName: \"kubernetes.io/projected/10a6eddf-9a5a-450c-b4f9-45fc556526dc-kube-api-access-j6cjl\") pod \"nova-api-c6c9-account-create-update-xdl2v\" (UID: \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\") " pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:09.268207 master-0 kubenswrapper[4652]: I0216 17:43:09.268134 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:09.277664 master-0 kubenswrapper[4652]: I0216 17:43:09.277348 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:09.302681 master-0 kubenswrapper[4652]: I0216 17:43:09.302627 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d9f2-account-create-update-r7xjk"] Feb 16 17:43:09.310168 master-0 kubenswrapper[4652]: I0216 17:43:09.310097 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923ed270-e930-4edb-bd2e-8db0412c5334-operator-scripts\") pod \"nova-cell1-d9f2-account-create-update-r7xjk\" (UID: \"923ed270-e930-4edb-bd2e-8db0412c5334\") " pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:09.310639 master-0 kubenswrapper[4652]: I0216 17:43:09.310611 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h2lb\" (UniqueName: \"kubernetes.io/projected/923ed270-e930-4edb-bd2e-8db0412c5334-kube-api-access-6h2lb\") pod \"nova-cell1-d9f2-account-create-update-r7xjk\" (UID: \"923ed270-e930-4edb-bd2e-8db0412c5334\") " pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:09.310744 master-0 kubenswrapper[4652]: I0216 17:43:09.310712 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56eeae89-abbe-4bff-b750-6ad05532d328-operator-scripts\") pod \"nova-cell0-b802-account-create-update-mqckv\" (UID: \"56eeae89-abbe-4bff-b750-6ad05532d328\") " pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:09.310820 master-0 kubenswrapper[4652]: I0216 17:43:09.310796 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsx7h\" (UniqueName: \"kubernetes.io/projected/56eeae89-abbe-4bff-b750-6ad05532d328-kube-api-access-qsx7h\") pod \"nova-cell0-b802-account-create-update-mqckv\" (UID: \"56eeae89-abbe-4bff-b750-6ad05532d328\") " pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:09.312976 master-0 
kubenswrapper[4652]: I0216 17:43:09.312947 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56eeae89-abbe-4bff-b750-6ad05532d328-operator-scripts\") pod \"nova-cell0-b802-account-create-update-mqckv\" (UID: \"56eeae89-abbe-4bff-b750-6ad05532d328\") " pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:09.354124 master-0 kubenswrapper[4652]: I0216 17:43:09.349900 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsx7h\" (UniqueName: \"kubernetes.io/projected/56eeae89-abbe-4bff-b750-6ad05532d328-kube-api-access-qsx7h\") pod \"nova-cell0-b802-account-create-update-mqckv\" (UID: \"56eeae89-abbe-4bff-b750-6ad05532d328\") " pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:09.392430 master-0 kubenswrapper[4652]: I0216 17:43:09.383010 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:09.416274 master-0 kubenswrapper[4652]: I0216 17:43:09.415955 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h2lb\" (UniqueName: \"kubernetes.io/projected/923ed270-e930-4edb-bd2e-8db0412c5334-kube-api-access-6h2lb\") pod \"nova-cell1-d9f2-account-create-update-r7xjk\" (UID: \"923ed270-e930-4edb-bd2e-8db0412c5334\") " pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:09.417838 master-0 kubenswrapper[4652]: I0216 17:43:09.416842 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923ed270-e930-4edb-bd2e-8db0412c5334-operator-scripts\") pod \"nova-cell1-d9f2-account-create-update-r7xjk\" (UID: \"923ed270-e930-4edb-bd2e-8db0412c5334\") " pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:09.417838 master-0 kubenswrapper[4652]: I0216 17:43:09.417713 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923ed270-e930-4edb-bd2e-8db0412c5334-operator-scripts\") pod \"nova-cell1-d9f2-account-create-update-r7xjk\" (UID: \"923ed270-e930-4edb-bd2e-8db0412c5334\") " pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:09.442270 master-0 kubenswrapper[4652]: I0216 17:43:09.429355 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-db-sync-v5nmj"] Feb 16 17:43:09.442270 master-0 kubenswrapper[4652]: I0216 17:43:09.433492 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.442270 master-0 kubenswrapper[4652]: I0216 17:43:09.439911 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h2lb\" (UniqueName: \"kubernetes.io/projected/923ed270-e930-4edb-bd2e-8db0412c5334-kube-api-access-6h2lb\") pod \"nova-cell1-d9f2-account-create-update-r7xjk\" (UID: \"923ed270-e930-4edb-bd2e-8db0412c5334\") " pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:09.442270 master-0 kubenswrapper[4652]: I0216 17:43:09.441904 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 16 17:43:09.442586 master-0 kubenswrapper[4652]: I0216 17:43:09.442499 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 16 17:43:09.457569 master-0 kubenswrapper[4652]: I0216 17:43:09.453021 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-v5nmj"] Feb 16 17:43:09.519689 master-0 kubenswrapper[4652]: I0216 17:43:09.519624 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-combined-ca-bundle\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.519903 master-0 kubenswrapper[4652]: I0216 17:43:09.519733 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.519903 master-0 kubenswrapper[4652]: I0216 17:43:09.519751 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-scripts\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.520905 master-0 kubenswrapper[4652]: I0216 17:43:09.520424 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-config\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.520905 master-0 kubenswrapper[4652]: I0216 17:43:09.520650 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v66v4\" (UniqueName: \"kubernetes.io/projected/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-kube-api-access-v66v4\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.521281 master-0 kubenswrapper[4652]: I0216 17:43:09.521037 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-etc-podinfo\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " 
pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.521496 master-0 kubenswrapper[4652]: I0216 17:43:09.521461 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.623986 master-0 kubenswrapper[4652]: I0216 17:43:09.623872 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-etc-podinfo\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.623986 master-0 kubenswrapper[4652]: I0216 17:43:09.623938 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.624285 master-0 kubenswrapper[4652]: I0216 17:43:09.624010 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-combined-ca-bundle\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.624285 master-0 kubenswrapper[4652]: I0216 17:43:09.624099 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.624285 master-0 kubenswrapper[4652]: I0216 17:43:09.624122 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-scripts\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.624389 master-0 kubenswrapper[4652]: I0216 17:43:09.624338 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-config\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.624427 master-0 kubenswrapper[4652]: I0216 17:43:09.624381 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v66v4\" (UniqueName: \"kubernetes.io/projected/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-kube-api-access-v66v4\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.625340 master-0 kubenswrapper[4652]: I0216 17:43:09.625312 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" 
(UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.625590 master-0 kubenswrapper[4652]: I0216 17:43:09.625564 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.630324 master-0 kubenswrapper[4652]: I0216 17:43:09.630263 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-etc-podinfo\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.630883 master-0 kubenswrapper[4652]: I0216 17:43:09.630793 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-combined-ca-bundle\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.631786 master-0 kubenswrapper[4652]: I0216 17:43:09.631677 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-scripts\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.633686 master-0 kubenswrapper[4652]: I0216 17:43:09.633644 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-config\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.644517 master-0 kubenswrapper[4652]: I0216 17:43:09.644459 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:09.647876 master-0 kubenswrapper[4652]: I0216 17:43:09.647831 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v66v4\" (UniqueName: \"kubernetes.io/projected/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-kube-api-access-v66v4\") pod \"ironic-inspector-db-sync-v5nmj\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:09.726661 master-0 kubenswrapper[4652]: I0216 17:43:09.726404 4652 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:43:09.728110 master-0 kubenswrapper[4652]: I0216 17:43:09.727568 4652 scope.go:117] "RemoveContainer" containerID="70b3db1f1537b2818404dc8535e036babbc2fcb4913f744900b4c25bf59d46e3" Feb 16 17:43:09.728110 master-0 kubenswrapper[4652]: E0216 17:43:09.727821 4652 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ironic-neutron-agent\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ironic-neutron-agent pod=ironic-neutron-agent-6975fcc79b-5wclc_openstack(8f3751fd-c328-4914-8e15-a14ad13a527d)\"" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" podUID="8f3751fd-c328-4914-8e15-a14ad13a527d" Feb 16 17:43:09.755975 master-0 kubenswrapper[4652]: I0216 17:43:09.755917 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:11.019814 master-0 kubenswrapper[4652]: I0216 17:43:11.019782 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-5665b8875d-tx66w" Feb 16 17:43:11.164870 master-0 kubenswrapper[4652]: I0216 17:43:11.156892 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-79d877c778-jztbq"] Feb 16 17:43:11.318911 master-0 kubenswrapper[4652]: I0216 17:43:11.318750 4652 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:43:11.318911 master-0 kubenswrapper[4652]: I0216 17:43:11.318822 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:43:15.421674 master-0 kubenswrapper[4652]: I0216 17:43:15.416720 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:43:15.542238 master-0 kubenswrapper[4652]: I0216 17:43:15.542167 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:15.543852 master-0 kubenswrapper[4652]: I0216 17:43:15.543830 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d5dfcf8b4-6nncv" Feb 16 17:43:17.915295 master-0 kubenswrapper[4652]: I0216 17:43:17.911664 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c6c9-account-create-update-xdl2v"] Feb 16 17:43:17.915789 master-0 kubenswrapper[4652]: W0216 17:43:17.915654 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod10a6eddf_9a5a_450c_b4f9_45fc556526dc.slice/crio-92094f6bd9e4b49963b5d274a5625ac1182656ebb495fc87dec0ed1cbd1d8825 WatchSource:0}: Error finding container 92094f6bd9e4b49963b5d274a5625ac1182656ebb495fc87dec0ed1cbd1d8825: Status 404 returned error can't find the container with id 
92094f6bd9e4b49963b5d274a5625ac1182656ebb495fc87dec0ed1cbd1d8825 Feb 16 17:43:18.719389 master-0 kubenswrapper[4652]: I0216 17:43:18.719352 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9rpkr"] Feb 16 17:43:18.730809 master-0 kubenswrapper[4652]: I0216 17:43:18.730763 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-db-sync-v5nmj"] Feb 16 17:43:18.733984 master-0 kubenswrapper[4652]: W0216 17:43:18.733928 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod089b8594_a539_4435_9573_6d904bce3901.slice/crio-d7a2185d845a21f068224c37e5ecd9ca818dbf81bfd212ec0be2981bb6068952 WatchSource:0}: Error finding container d7a2185d845a21f068224c37e5ecd9ca818dbf81bfd212ec0be2981bb6068952: Status 404 returned error can't find the container with id d7a2185d845a21f068224c37e5ecd9ca818dbf81bfd212ec0be2981bb6068952 Feb 16 17:43:18.744176 master-0 kubenswrapper[4652]: I0216 17:43:18.743004 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-gp6kb"] Feb 16 17:43:18.745687 master-0 kubenswrapper[4652]: W0216 17:43:18.745646 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e120e62_7cc3_4d7e_8c1c_92d3f06302f1.slice/crio-c25348d5579562fa5909325281f82faf41b073f6dfc2180c6d380970e56ccd10 WatchSource:0}: Error finding container c25348d5579562fa5909325281f82faf41b073f6dfc2180c6d380970e56ccd10: Status 404 returned error can't find the container with id c25348d5579562fa5909325281f82faf41b073f6dfc2180c6d380970e56ccd10 Feb 16 17:43:18.775775 master-0 kubenswrapper[4652]: I0216 17:43:18.775710 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b802-account-create-update-mqckv"] Feb 16 17:43:18.775775 master-0 kubenswrapper[4652]: I0216 17:43:18.775758 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d9f2-account-create-update-r7xjk"] Feb 16 17:43:18.782272 master-0 kubenswrapper[4652]: I0216 17:43:18.782188 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-nrzvp"] Feb 16 17:43:18.925020 master-0 kubenswrapper[4652]: I0216 17:43:18.924955 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9rpkr" event={"ID":"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1","Type":"ContainerStarted","Data":"c25348d5579562fa5909325281f82faf41b073f6dfc2180c6d380970e56ccd10"} Feb 16 17:43:18.927173 master-0 kubenswrapper[4652]: I0216 17:43:18.927131 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" event={"ID":"923ed270-e930-4edb-bd2e-8db0412c5334","Type":"ContainerStarted","Data":"f6d3006e2a519eae31c099e9a97eda394915a81c44e5dcba9f4531938d8caee1"} Feb 16 17:43:18.931051 master-0 kubenswrapper[4652]: I0216 17:43:18.930951 4652 generic.go:334] "Generic (PLEG): container finished" podID="3c06307c-938d-45f7-b671-948d93bf0642" containerID="935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4" exitCode=1 Feb 16 17:43:18.931051 master-0 kubenswrapper[4652]: I0216 17:43:18.931020 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerDied","Data":"935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4"} Feb 16 17:43:18.931214 master-0 
kubenswrapper[4652]: I0216 17:43:18.931069 4652 scope.go:117] "RemoveContainer" containerID="a7cce0571c9b2678e58a27a4b5c307ee2868e47c4cfcfd92b431bcea63937812" Feb 16 17:43:18.931702 master-0 kubenswrapper[4652]: I0216 17:43:18.931566 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ironic-79d877c778-jztbq" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api-log" containerID="cri-o://cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19" gracePeriod=60 Feb 16 17:43:18.932810 master-0 kubenswrapper[4652]: I0216 17:43:18.932604 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6c9-account-create-update-xdl2v" event={"ID":"10a6eddf-9a5a-450c-b4f9-45fc556526dc","Type":"ContainerStarted","Data":"65a6743d2b5e994c1cccbc9246e093e1359151d732aaad73c40d5f184edebb8e"} Feb 16 17:43:18.932810 master-0 kubenswrapper[4652]: I0216 17:43:18.932644 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6c9-account-create-update-xdl2v" event={"ID":"10a6eddf-9a5a-450c-b4f9-45fc556526dc","Type":"ContainerStarted","Data":"92094f6bd9e4b49963b5d274a5625ac1182656ebb495fc87dec0ed1cbd1d8825"} Feb 16 17:43:18.937515 master-0 kubenswrapper[4652]: I0216 17:43:18.937470 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gp6kb" event={"ID":"089b8594-a539-4435-9573-6d904bce3901","Type":"ContainerStarted","Data":"d7a2185d845a21f068224c37e5ecd9ca818dbf81bfd212ec0be2981bb6068952"} Feb 16 17:43:18.938751 master-0 kubenswrapper[4652]: I0216 17:43:18.938707 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nrzvp" event={"ID":"cbfc1e76-e972-465d-bed5-92eb603c32a6","Type":"ContainerStarted","Data":"102c9c512914d9f59cfb624f52f55301e85b60941fa84f5962a5f1b9c1cf7319"} Feb 16 17:43:18.941656 master-0 kubenswrapper[4652]: I0216 17:43:18.941606 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5496a92a-9932-4b28-8c4e-69b754218e51","Type":"ContainerStarted","Data":"b2efa28b479c2381edc4b6af44475a34d0dcc50f76b41bf4bd6f2727e8df0043"} Feb 16 17:43:18.943740 master-0 kubenswrapper[4652]: I0216 17:43:18.943698 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b802-account-create-update-mqckv" event={"ID":"56eeae89-abbe-4bff-b750-6ad05532d328","Type":"ContainerStarted","Data":"9f8b38925d05f2b0a887579d212e5032b5817aecfc1e8685302336c74015fccc"} Feb 16 17:43:18.945374 master-0 kubenswrapper[4652]: I0216 17:43:18.945333 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-v5nmj" event={"ID":"9f793d28-5e22-4c0a-8f87-dabf1e4031a2","Type":"ContainerStarted","Data":"6605b0f635bdc818c561ba0f0d4ce366ab898091072f701441f022e5e02a4248"} Feb 16 17:43:19.717417 master-0 kubenswrapper[4652]: I0216 17:43:19.717379 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:43:19.833227 master-0 kubenswrapper[4652]: I0216 17:43:19.833175 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-logs\") pod \"3c06307c-938d-45f7-b671-948d93bf0642\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " Feb 16 17:43:19.833469 master-0 kubenswrapper[4652]: I0216 17:43:19.833365 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-config-data-merged\") pod \"3c06307c-938d-45f7-b671-948d93bf0642\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " Feb 16 17:43:19.833469 master-0 kubenswrapper[4652]: I0216 17:43:19.833400 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4v5x\" (UniqueName: \"kubernetes.io/projected/3c06307c-938d-45f7-b671-948d93bf0642-kube-api-access-w4v5x\") pod \"3c06307c-938d-45f7-b671-948d93bf0642\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " Feb 16 17:43:19.833469 master-0 kubenswrapper[4652]: I0216 17:43:19.833430 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c06307c-938d-45f7-b671-948d93bf0642-etc-podinfo\") pod \"3c06307c-938d-45f7-b671-948d93bf0642\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " Feb 16 17:43:19.834041 master-0 kubenswrapper[4652]: I0216 17:43:19.833515 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data-custom\") pod \"3c06307c-938d-45f7-b671-948d93bf0642\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " Feb 16 17:43:19.834041 master-0 kubenswrapper[4652]: I0216 17:43:19.833647 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-combined-ca-bundle\") pod \"3c06307c-938d-45f7-b671-948d93bf0642\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " Feb 16 17:43:19.834041 master-0 kubenswrapper[4652]: I0216 17:43:19.833708 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data\") pod \"3c06307c-938d-45f7-b671-948d93bf0642\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " Feb 16 17:43:19.834041 master-0 kubenswrapper[4652]: I0216 17:43:19.833722 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-logs" (OuterVolumeSpecName: "logs") pod "3c06307c-938d-45f7-b671-948d93bf0642" (UID: "3c06307c-938d-45f7-b671-948d93bf0642"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:19.834041 master-0 kubenswrapper[4652]: I0216 17:43:19.833805 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-scripts\") pod \"3c06307c-938d-45f7-b671-948d93bf0642\" (UID: \"3c06307c-938d-45f7-b671-948d93bf0642\") " Feb 16 17:43:19.834041 master-0 kubenswrapper[4652]: I0216 17:43:19.833960 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "3c06307c-938d-45f7-b671-948d93bf0642" (UID: "3c06307c-938d-45f7-b671-948d93bf0642"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:19.834695 master-0 kubenswrapper[4652]: I0216 17:43:19.834642 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:19.834695 master-0 kubenswrapper[4652]: I0216 17:43:19.834671 4652 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/3c06307c-938d-45f7-b671-948d93bf0642-config-data-merged\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:19.838027 master-0 kubenswrapper[4652]: I0216 17:43:19.837984 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c06307c-938d-45f7-b671-948d93bf0642-kube-api-access-w4v5x" (OuterVolumeSpecName: "kube-api-access-w4v5x") pod "3c06307c-938d-45f7-b671-948d93bf0642" (UID: "3c06307c-938d-45f7-b671-948d93bf0642"). InnerVolumeSpecName "kube-api-access-w4v5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:19.840427 master-0 kubenswrapper[4652]: I0216 17:43:19.840384 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3c06307c-938d-45f7-b671-948d93bf0642" (UID: "3c06307c-938d-45f7-b671-948d93bf0642"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:19.841429 master-0 kubenswrapper[4652]: I0216 17:43:19.841391 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3c06307c-938d-45f7-b671-948d93bf0642-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "3c06307c-938d-45f7-b671-948d93bf0642" (UID: "3c06307c-938d-45f7-b671-948d93bf0642"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 17:43:19.842381 master-0 kubenswrapper[4652]: I0216 17:43:19.842343 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-scripts" (OuterVolumeSpecName: "scripts") pod "3c06307c-938d-45f7-b671-948d93bf0642" (UID: "3c06307c-938d-45f7-b671-948d93bf0642"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:19.883447 master-0 kubenswrapper[4652]: I0216 17:43:19.883389 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data" (OuterVolumeSpecName: "config-data") pod "3c06307c-938d-45f7-b671-948d93bf0642" (UID: "3c06307c-938d-45f7-b671-948d93bf0642"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:19.906959 master-0 kubenswrapper[4652]: I0216 17:43:19.906906 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c06307c-938d-45f7-b671-948d93bf0642" (UID: "3c06307c-938d-45f7-b671-948d93bf0642"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:19.937125 master-0 kubenswrapper[4652]: I0216 17:43:19.937079 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:19.937125 master-0 kubenswrapper[4652]: I0216 17:43:19.937121 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:19.937627 master-0 kubenswrapper[4652]: I0216 17:43:19.937133 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:19.937627 master-0 kubenswrapper[4652]: I0216 17:43:19.937146 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4v5x\" (UniqueName: \"kubernetes.io/projected/3c06307c-938d-45f7-b671-948d93bf0642-kube-api-access-w4v5x\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:19.937627 master-0 kubenswrapper[4652]: I0216 17:43:19.937158 4652 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/3c06307c-938d-45f7-b671-948d93bf0642-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:19.937627 master-0 kubenswrapper[4652]: I0216 17:43:19.937169 4652 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c06307c-938d-45f7-b671-948d93bf0642-config-data-custom\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:19.958198 master-0 kubenswrapper[4652]: I0216 17:43:19.958168 4652 generic.go:334] "Generic (PLEG): container finished" podID="3c06307c-938d-45f7-b671-948d93bf0642" containerID="cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19" exitCode=143 Feb 16 17:43:19.958476 master-0 kubenswrapper[4652]: I0216 17:43:19.958241 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerDied","Data":"cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19"} Feb 16 17:43:19.958476 master-0 kubenswrapper[4652]: I0216 17:43:19.958280 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-79d877c778-jztbq" 
event={"ID":"3c06307c-938d-45f7-b671-948d93bf0642","Type":"ContainerDied","Data":"c9616ceee950a2b7c407689ce98154b7a8c84f095b4ec4833e25d361a04f5db3"} Feb 16 17:43:19.958476 master-0 kubenswrapper[4652]: I0216 17:43:19.958298 4652 scope.go:117] "RemoveContainer" containerID="935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4" Feb 16 17:43:19.958476 master-0 kubenswrapper[4652]: I0216 17:43:19.958332 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-79d877c778-jztbq" Feb 16 17:43:19.960205 master-0 kubenswrapper[4652]: I0216 17:43:19.960108 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b802-account-create-update-mqckv" event={"ID":"56eeae89-abbe-4bff-b750-6ad05532d328","Type":"ContainerStarted","Data":"ee1c12beb7a58edbe0f602211c575efc6aaaab58bf36d615acb9b0a7cee901cd"} Feb 16 17:43:19.961924 master-0 kubenswrapper[4652]: I0216 17:43:19.961896 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9rpkr" event={"ID":"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1","Type":"ContainerStarted","Data":"658c9b050a86d087bb3f165bfc7fa711923a531e37c12b0731f5919b44523936"} Feb 16 17:43:19.963832 master-0 kubenswrapper[4652]: I0216 17:43:19.963811 4652 generic.go:334] "Generic (PLEG): container finished" podID="10a6eddf-9a5a-450c-b4f9-45fc556526dc" containerID="65a6743d2b5e994c1cccbc9246e093e1359151d732aaad73c40d5f184edebb8e" exitCode=0 Feb 16 17:43:19.963917 master-0 kubenswrapper[4652]: I0216 17:43:19.963867 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6c9-account-create-update-xdl2v" event={"ID":"10a6eddf-9a5a-450c-b4f9-45fc556526dc","Type":"ContainerDied","Data":"65a6743d2b5e994c1cccbc9246e093e1359151d732aaad73c40d5f184edebb8e"} Feb 16 17:43:19.965636 master-0 kubenswrapper[4652]: I0216 17:43:19.965595 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" event={"ID":"923ed270-e930-4edb-bd2e-8db0412c5334","Type":"ContainerStarted","Data":"13fd43bd77d322847a52aa6bd1fa5f58c81f6717a66b3e4283779760f2e6091e"} Feb 16 17:43:19.967569 master-0 kubenswrapper[4652]: I0216 17:43:19.967520 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gp6kb" event={"ID":"089b8594-a539-4435-9573-6d904bce3901","Type":"ContainerStarted","Data":"73c336da51a1da01bedbe205fe291028451df89ebbbe16806dcb57e0436fa393"} Feb 16 17:43:19.970689 master-0 kubenswrapper[4652]: I0216 17:43:19.970639 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nrzvp" event={"ID":"cbfc1e76-e972-465d-bed5-92eb603c32a6","Type":"ContainerStarted","Data":"dba8d483125c52c368490a4b7a3f33ff3333d8d2c9f967bcfb2939b4aaa6eb45"} Feb 16 17:43:20.008601 master-0 kubenswrapper[4652]: I0216 17:43:20.002667 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=4.852843918 podStartE2EDuration="22.002645993s" podCreationTimestamp="2026-02-16 17:42:58 +0000 UTC" firstStartedPulling="2026-02-16 17:43:00.558231139 +0000 UTC m=+1137.946399655" lastFinishedPulling="2026-02-16 17:43:17.708033214 +0000 UTC m=+1155.096201730" observedRunningTime="2026-02-16 17:43:19.984213469 +0000 UTC m=+1157.372381985" watchObservedRunningTime="2026-02-16 17:43:20.002645993 +0000 UTC m=+1157.390814509" Feb 16 17:43:20.020212 master-0 kubenswrapper[4652]: I0216 17:43:20.020168 4652 scope.go:117] "RemoveContainer" 
containerID="cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19" Feb 16 17:43:20.130364 master-0 kubenswrapper[4652]: I0216 17:43:20.130317 4652 scope.go:117] "RemoveContainer" containerID="5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0" Feb 16 17:43:20.199119 master-0 kubenswrapper[4652]: I0216 17:43:20.198982 4652 scope.go:117] "RemoveContainer" containerID="935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4" Feb 16 17:43:20.199675 master-0 kubenswrapper[4652]: E0216 17:43:20.199632 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4\": container with ID starting with 935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4 not found: ID does not exist" containerID="935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4" Feb 16 17:43:20.199744 master-0 kubenswrapper[4652]: I0216 17:43:20.199687 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4"} err="failed to get container status \"935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4\": rpc error: code = NotFound desc = could not find container \"935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4\": container with ID starting with 935c63566f8fc8afbe48e83167b48a92b6f8348db325cecf759adea54d32c5a4 not found: ID does not exist" Feb 16 17:43:20.199744 master-0 kubenswrapper[4652]: I0216 17:43:20.199718 4652 scope.go:117] "RemoveContainer" containerID="cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19" Feb 16 17:43:20.200158 master-0 kubenswrapper[4652]: E0216 17:43:20.200098 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19\": container with ID starting with cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19 not found: ID does not exist" containerID="cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19" Feb 16 17:43:20.200217 master-0 kubenswrapper[4652]: I0216 17:43:20.200169 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19"} err="failed to get container status \"cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19\": rpc error: code = NotFound desc = could not find container \"cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19\": container with ID starting with cd9bc913af239533ee2867a888542e1e04f9fdda16ea7514de49a4a458095c19 not found: ID does not exist" Feb 16 17:43:20.200217 master-0 kubenswrapper[4652]: I0216 17:43:20.200210 4652 scope.go:117] "RemoveContainer" containerID="5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0" Feb 16 17:43:20.200830 master-0 kubenswrapper[4652]: E0216 17:43:20.200768 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0\": container with ID starting with 5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0 not found: ID does not exist" containerID="5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0" Feb 16 17:43:20.200830 master-0 kubenswrapper[4652]: I0216 
17:43:20.200801 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0"} err="failed to get container status \"5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0\": rpc error: code = NotFound desc = could not find container \"5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0\": container with ID starting with 5afa9efe404eb401af2419198c95576672e50384fea5fdfac7f5835d5fc2cbc0 not found: ID does not exist" Feb 16 17:43:20.449363 master-0 kubenswrapper[4652]: I0216 17:43:20.449293 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-c6c9-account-create-update-xdl2v" podStartSLOduration=12.449270745 podStartE2EDuration="12.449270745s" podCreationTimestamp="2026-02-16 17:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:20.435336282 +0000 UTC m=+1157.823504798" watchObservedRunningTime="2026-02-16 17:43:20.449270745 +0000 UTC m=+1157.837439271" Feb 16 17:43:20.518763 master-0 kubenswrapper[4652]: I0216 17:43:20.518695 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-79d877c778-jztbq"] Feb 16 17:43:20.549521 master-0 kubenswrapper[4652]: I0216 17:43:20.542965 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-79d877c778-jztbq"] Feb 16 17:43:20.549521 master-0 kubenswrapper[4652]: I0216 17:43:20.544783 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-nrzvp" podStartSLOduration=12.544762685 podStartE2EDuration="12.544762685s" podCreationTimestamp="2026-02-16 17:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:20.525032266 +0000 UTC m=+1157.913200782" watchObservedRunningTime="2026-02-16 17:43:20.544762685 +0000 UTC m=+1157.932931201" Feb 16 17:43:20.593385 master-0 kubenswrapper[4652]: I0216 17:43:20.593074 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-gp6kb" podStartSLOduration=12.59305215 podStartE2EDuration="12.59305215s" podCreationTimestamp="2026-02-16 17:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:20.561991547 +0000 UTC m=+1157.950160063" watchObservedRunningTime="2026-02-16 17:43:20.59305215 +0000 UTC m=+1157.981220676" Feb 16 17:43:20.602280 master-0 kubenswrapper[4652]: I0216 17:43:20.601739 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" podStartSLOduration=11.601720092 podStartE2EDuration="11.601720092s" podCreationTimestamp="2026-02-16 17:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:20.57963277 +0000 UTC m=+1157.967801286" watchObservedRunningTime="2026-02-16 17:43:20.601720092 +0000 UTC m=+1157.989888608" Feb 16 17:43:20.620390 master-0 kubenswrapper[4652]: I0216 17:43:20.620203 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-9rpkr" podStartSLOduration=12.620184387 podStartE2EDuration="12.620184387s" podCreationTimestamp="2026-02-16 17:43:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:20.594119678 +0000 UTC m=+1157.982288194" watchObservedRunningTime="2026-02-16 17:43:20.620184387 +0000 UTC m=+1158.008352903" Feb 16 17:43:20.648891 master-0 kubenswrapper[4652]: I0216 17:43:20.648763 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b802-account-create-update-mqckv" podStartSLOduration=12.648725772 podStartE2EDuration="12.648725772s" podCreationTimestamp="2026-02-16 17:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:20.622725295 +0000 UTC m=+1158.010893831" watchObservedRunningTime="2026-02-16 17:43:20.648725772 +0000 UTC m=+1158.036894288" Feb 16 17:43:20.761554 master-0 kubenswrapper[4652]: I0216 17:43:20.761389 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c06307c-938d-45f7-b671-948d93bf0642" path="/var/lib/kubelet/pods/3c06307c-938d-45f7-b671-948d93bf0642/volumes" Feb 16 17:43:20.982654 master-0 kubenswrapper[4652]: I0216 17:43:20.982606 4652 generic.go:334] "Generic (PLEG): container finished" podID="923ed270-e930-4edb-bd2e-8db0412c5334" containerID="13fd43bd77d322847a52aa6bd1fa5f58c81f6717a66b3e4283779760f2e6091e" exitCode=0 Feb 16 17:43:20.983443 master-0 kubenswrapper[4652]: I0216 17:43:20.982694 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" event={"ID":"923ed270-e930-4edb-bd2e-8db0412c5334","Type":"ContainerDied","Data":"13fd43bd77d322847a52aa6bd1fa5f58c81f6717a66b3e4283779760f2e6091e"} Feb 16 17:43:20.986021 master-0 kubenswrapper[4652]: I0216 17:43:20.985940 4652 generic.go:334] "Generic (PLEG): container finished" podID="089b8594-a539-4435-9573-6d904bce3901" containerID="73c336da51a1da01bedbe205fe291028451df89ebbbe16806dcb57e0436fa393" exitCode=0 Feb 16 17:43:20.986213 master-0 kubenswrapper[4652]: I0216 17:43:20.986062 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gp6kb" event={"ID":"089b8594-a539-4435-9573-6d904bce3901","Type":"ContainerDied","Data":"73c336da51a1da01bedbe205fe291028451df89ebbbe16806dcb57e0436fa393"} Feb 16 17:43:20.989169 master-0 kubenswrapper[4652]: I0216 17:43:20.989140 4652 generic.go:334] "Generic (PLEG): container finished" podID="cbfc1e76-e972-465d-bed5-92eb603c32a6" containerID="dba8d483125c52c368490a4b7a3f33ff3333d8d2c9f967bcfb2939b4aaa6eb45" exitCode=0 Feb 16 17:43:20.989285 master-0 kubenswrapper[4652]: I0216 17:43:20.989208 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nrzvp" event={"ID":"cbfc1e76-e972-465d-bed5-92eb603c32a6","Type":"ContainerDied","Data":"dba8d483125c52c368490a4b7a3f33ff3333d8d2c9f967bcfb2939b4aaa6eb45"} Feb 16 17:43:20.993151 master-0 kubenswrapper[4652]: I0216 17:43:20.993118 4652 generic.go:334] "Generic (PLEG): container finished" podID="56eeae89-abbe-4bff-b750-6ad05532d328" containerID="ee1c12beb7a58edbe0f602211c575efc6aaaab58bf36d615acb9b0a7cee901cd" exitCode=0 Feb 16 17:43:20.993210 master-0 kubenswrapper[4652]: I0216 17:43:20.993175 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b802-account-create-update-mqckv" event={"ID":"56eeae89-abbe-4bff-b750-6ad05532d328","Type":"ContainerDied","Data":"ee1c12beb7a58edbe0f602211c575efc6aaaab58bf36d615acb9b0a7cee901cd"} Feb 16 17:43:20.996803 
master-0 kubenswrapper[4652]: I0216 17:43:20.996747 4652 generic.go:334] "Generic (PLEG): container finished" podID="3e120e62-7cc3-4d7e-8c1c-92d3f06302f1" containerID="658c9b050a86d087bb3f165bfc7fa711923a531e37c12b0731f5919b44523936" exitCode=0 Feb 16 17:43:20.996860 master-0 kubenswrapper[4652]: I0216 17:43:20.996820 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9rpkr" event={"ID":"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1","Type":"ContainerDied","Data":"658c9b050a86d087bb3f165bfc7fa711923a531e37c12b0731f5919b44523936"} Feb 16 17:43:22.808090 master-0 kubenswrapper[4652]: I0216 17:43:22.807920 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-859ff674f7-llnnx" Feb 16 17:43:23.746189 master-0 kubenswrapper[4652]: I0216 17:43:23.746146 4652 scope.go:117] "RemoveContainer" containerID="70b3db1f1537b2818404dc8535e036babbc2fcb4913f744900b4c25bf59d46e3" Feb 16 17:43:25.139271 master-0 kubenswrapper[4652]: I0216 17:43:25.135597 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66f9d86cdb-h58xd"] Feb 16 17:43:25.139271 master-0 kubenswrapper[4652]: I0216 17:43:25.135877 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66f9d86cdb-h58xd" podUID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerName="neutron-api" containerID="cri-o://5fc807d82c3e5673a6f374ae1a575834bfeb59f1ee48fd03180d2278dec790d1" gracePeriod=30 Feb 16 17:43:25.139271 master-0 kubenswrapper[4652]: I0216 17:43:25.136494 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-66f9d86cdb-h58xd" podUID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerName="neutron-httpd" containerID="cri-o://3d0602731bf458ec2a01be157b2e02706750d5d14698c9fbe4f0b1a62587c519" gracePeriod=30 Feb 16 17:43:25.228385 master-0 kubenswrapper[4652]: I0216 17:43:25.228319 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:43:25.228694 master-0 kubenswrapper[4652]: I0216 17:43:25.228639 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-50e08-default-external-api-0" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-log" containerID="cri-o://2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf" gracePeriod=30 Feb 16 17:43:25.229407 master-0 kubenswrapper[4652]: I0216 17:43:25.229327 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-50e08-default-external-api-0" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-httpd" containerID="cri-o://31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be" gracePeriod=30 Feb 16 17:43:25.491723 master-0 kubenswrapper[4652]: I0216 17:43:25.491157 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:25.555355 master-0 kubenswrapper[4652]: I0216 17:43:25.555301 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:25.583842 master-0 kubenswrapper[4652]: I0216 17:43:25.583806 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:25.619735 master-0 kubenswrapper[4652]: I0216 17:43:25.619685 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:25.631279 master-0 kubenswrapper[4652]: I0216 17:43:25.631207 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923ed270-e930-4edb-bd2e-8db0412c5334-operator-scripts\") pod \"923ed270-e930-4edb-bd2e-8db0412c5334\" (UID: \"923ed270-e930-4edb-bd2e-8db0412c5334\") " Feb 16 17:43:25.631728 master-0 kubenswrapper[4652]: I0216 17:43:25.631693 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h2lb\" (UniqueName: \"kubernetes.io/projected/923ed270-e930-4edb-bd2e-8db0412c5334-kube-api-access-6h2lb\") pod \"923ed270-e930-4edb-bd2e-8db0412c5334\" (UID: \"923ed270-e930-4edb-bd2e-8db0412c5334\") " Feb 16 17:43:25.632482 master-0 kubenswrapper[4652]: I0216 17:43:25.632451 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/923ed270-e930-4edb-bd2e-8db0412c5334-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "923ed270-e930-4edb-bd2e-8db0412c5334" (UID: "923ed270-e930-4edb-bd2e-8db0412c5334"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:25.633775 master-0 kubenswrapper[4652]: I0216 17:43:25.633722 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/923ed270-e930-4edb-bd2e-8db0412c5334-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.643475 master-0 kubenswrapper[4652]: I0216 17:43:25.639236 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/923ed270-e930-4edb-bd2e-8db0412c5334-kube-api-access-6h2lb" (OuterVolumeSpecName: "kube-api-access-6h2lb") pod "923ed270-e930-4edb-bd2e-8db0412c5334" (UID: "923ed270-e930-4edb-bd2e-8db0412c5334"). InnerVolumeSpecName "kube-api-access-6h2lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:25.649628 master-0 kubenswrapper[4652]: I0216 17:43:25.649565 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:25.652950 master-0 kubenswrapper[4652]: I0216 17:43:25.652911 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:25.738273 master-0 kubenswrapper[4652]: I0216 17:43:25.735389 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56eeae89-abbe-4bff-b750-6ad05532d328-operator-scripts\") pod \"56eeae89-abbe-4bff-b750-6ad05532d328\" (UID: \"56eeae89-abbe-4bff-b750-6ad05532d328\") " Feb 16 17:43:25.738273 master-0 kubenswrapper[4652]: I0216 17:43:25.735510 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsx7h\" (UniqueName: \"kubernetes.io/projected/56eeae89-abbe-4bff-b750-6ad05532d328-kube-api-access-qsx7h\") pod \"56eeae89-abbe-4bff-b750-6ad05532d328\" (UID: \"56eeae89-abbe-4bff-b750-6ad05532d328\") " Feb 16 17:43:25.738273 master-0 kubenswrapper[4652]: I0216 17:43:25.735545 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv46w\" (UniqueName: \"kubernetes.io/projected/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-kube-api-access-gv46w\") pod \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\" (UID: \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\") " Feb 16 17:43:25.738273 master-0 kubenswrapper[4652]: I0216 17:43:25.737315 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a6eddf-9a5a-450c-b4f9-45fc556526dc-operator-scripts\") pod \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\" (UID: \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\") " Feb 16 17:43:25.738273 master-0 kubenswrapper[4652]: I0216 17:43:25.737586 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-operator-scripts\") pod \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\" (UID: \"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1\") " Feb 16 17:43:25.738273 master-0 kubenswrapper[4652]: I0216 17:43:25.737712 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6cjl\" (UniqueName: \"kubernetes.io/projected/10a6eddf-9a5a-450c-b4f9-45fc556526dc-kube-api-access-j6cjl\") pod \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\" (UID: \"10a6eddf-9a5a-450c-b4f9-45fc556526dc\") " Feb 16 17:43:25.742758 master-0 kubenswrapper[4652]: I0216 17:43:25.739455 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h2lb\" (UniqueName: \"kubernetes.io/projected/923ed270-e930-4edb-bd2e-8db0412c5334-kube-api-access-6h2lb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.743162 master-0 kubenswrapper[4652]: I0216 17:43:25.743121 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56eeae89-abbe-4bff-b750-6ad05532d328-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "56eeae89-abbe-4bff-b750-6ad05532d328" (UID: "56eeae89-abbe-4bff-b750-6ad05532d328"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:25.743839 master-0 kubenswrapper[4652]: I0216 17:43:25.743814 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10a6eddf-9a5a-450c-b4f9-45fc556526dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "10a6eddf-9a5a-450c-b4f9-45fc556526dc" (UID: "10a6eddf-9a5a-450c-b4f9-45fc556526dc"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:25.744196 master-0 kubenswrapper[4652]: I0216 17:43:25.744175 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e120e62-7cc3-4d7e-8c1c-92d3f06302f1" (UID: "3e120e62-7cc3-4d7e-8c1c-92d3f06302f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:25.746541 master-0 kubenswrapper[4652]: I0216 17:43:25.745008 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-kube-api-access-gv46w" (OuterVolumeSpecName: "kube-api-access-gv46w") pod "3e120e62-7cc3-4d7e-8c1c-92d3f06302f1" (UID: "3e120e62-7cc3-4d7e-8c1c-92d3f06302f1"). InnerVolumeSpecName "kube-api-access-gv46w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:25.746924 master-0 kubenswrapper[4652]: I0216 17:43:25.746720 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56eeae89-abbe-4bff-b750-6ad05532d328-kube-api-access-qsx7h" (OuterVolumeSpecName: "kube-api-access-qsx7h") pod "56eeae89-abbe-4bff-b750-6ad05532d328" (UID: "56eeae89-abbe-4bff-b750-6ad05532d328"). InnerVolumeSpecName "kube-api-access-qsx7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:25.746924 master-0 kubenswrapper[4652]: I0216 17:43:25.746813 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10a6eddf-9a5a-450c-b4f9-45fc556526dc-kube-api-access-j6cjl" (OuterVolumeSpecName: "kube-api-access-j6cjl") pod "10a6eddf-9a5a-450c-b4f9-45fc556526dc" (UID: "10a6eddf-9a5a-450c-b4f9-45fc556526dc"). InnerVolumeSpecName "kube-api-access-j6cjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:25.830326 master-0 kubenswrapper[4652]: I0216 17:43:25.829633 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:43:25.830326 master-0 kubenswrapper[4652]: I0216 17:43:25.829906 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-50e08-default-internal-api-0" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-log" containerID="cri-o://076fa119e8cae849e305bb8138aa038ea39286ea3cb353a183aef6ac4148ad49" gracePeriod=30 Feb 16 17:43:25.830585 master-0 kubenswrapper[4652]: I0216 17:43:25.830406 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-50e08-default-internal-api-0" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-httpd" containerID="cri-o://81692546bc5049a915cfd0b82e0a924cf6a246e940f0254c5f82ef23ef987a76" gracePeriod=30 Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.841933 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/089b8594-a539-4435-9573-6d904bce3901-operator-scripts\") pod \"089b8594-a539-4435-9573-6d904bce3901\" (UID: \"089b8594-a539-4435-9573-6d904bce3901\") " Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.842008 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfc1e76-e972-465d-bed5-92eb603c32a6-operator-scripts\") pod \"cbfc1e76-e972-465d-bed5-92eb603c32a6\" (UID: \"cbfc1e76-e972-465d-bed5-92eb603c32a6\") " Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.842091 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjphd\" (UniqueName: \"kubernetes.io/projected/cbfc1e76-e972-465d-bed5-92eb603c32a6-kube-api-access-mjphd\") pod \"cbfc1e76-e972-465d-bed5-92eb603c32a6\" (UID: \"cbfc1e76-e972-465d-bed5-92eb603c32a6\") " Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.842153 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c4sz\" (UniqueName: \"kubernetes.io/projected/089b8594-a539-4435-9573-6d904bce3901-kube-api-access-5c4sz\") pod \"089b8594-a539-4435-9573-6d904bce3901\" (UID: \"089b8594-a539-4435-9573-6d904bce3901\") " Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.842837 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.842855 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6cjl\" (UniqueName: \"kubernetes.io/projected/10a6eddf-9a5a-450c-b4f9-45fc556526dc-kube-api-access-j6cjl\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.842865 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/56eeae89-abbe-4bff-b750-6ad05532d328-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.842874 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsx7h\" (UniqueName: 
\"kubernetes.io/projected/56eeae89-abbe-4bff-b750-6ad05532d328-kube-api-access-qsx7h\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.842883 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv46w\" (UniqueName: \"kubernetes.io/projected/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1-kube-api-access-gv46w\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.845397 master-0 kubenswrapper[4652]: I0216 17:43:25.842891 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/10a6eddf-9a5a-450c-b4f9-45fc556526dc-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.851148 master-0 kubenswrapper[4652]: I0216 17:43:25.850491 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbfc1e76-e972-465d-bed5-92eb603c32a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cbfc1e76-e972-465d-bed5-92eb603c32a6" (UID: "cbfc1e76-e972-465d-bed5-92eb603c32a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:25.851148 master-0 kubenswrapper[4652]: I0216 17:43:25.850877 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/089b8594-a539-4435-9573-6d904bce3901-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "089b8594-a539-4435-9573-6d904bce3901" (UID: "089b8594-a539-4435-9573-6d904bce3901"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:25.854469 master-0 kubenswrapper[4652]: I0216 17:43:25.854411 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbfc1e76-e972-465d-bed5-92eb603c32a6-kube-api-access-mjphd" (OuterVolumeSpecName: "kube-api-access-mjphd") pod "cbfc1e76-e972-465d-bed5-92eb603c32a6" (UID: "cbfc1e76-e972-465d-bed5-92eb603c32a6"). InnerVolumeSpecName "kube-api-access-mjphd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:25.856558 master-0 kubenswrapper[4652]: I0216 17:43:25.856524 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/089b8594-a539-4435-9573-6d904bce3901-kube-api-access-5c4sz" (OuterVolumeSpecName: "kube-api-access-5c4sz") pod "089b8594-a539-4435-9573-6d904bce3901" (UID: "089b8594-a539-4435-9573-6d904bce3901"). InnerVolumeSpecName "kube-api-access-5c4sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:25.946737 master-0 kubenswrapper[4652]: I0216 17:43:25.946691 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjphd\" (UniqueName: \"kubernetes.io/projected/cbfc1e76-e972-465d-bed5-92eb603c32a6-kube-api-access-mjphd\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.946737 master-0 kubenswrapper[4652]: I0216 17:43:25.946723 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c4sz\" (UniqueName: \"kubernetes.io/projected/089b8594-a539-4435-9573-6d904bce3901-kube-api-access-5c4sz\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.946737 master-0 kubenswrapper[4652]: I0216 17:43:25.946737 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/089b8594-a539-4435-9573-6d904bce3901-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:25.946737 master-0 kubenswrapper[4652]: I0216 17:43:25.946746 4652 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cbfc1e76-e972-465d-bed5-92eb603c32a6-operator-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:26.054101 master-0 kubenswrapper[4652]: I0216 17:43:26.053973 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9rpkr" event={"ID":"3e120e62-7cc3-4d7e-8c1c-92d3f06302f1","Type":"ContainerDied","Data":"c25348d5579562fa5909325281f82faf41b073f6dfc2180c6d380970e56ccd10"} Feb 16 17:43:26.054101 master-0 kubenswrapper[4652]: I0216 17:43:26.054007 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9rpkr" Feb 16 17:43:26.054101 master-0 kubenswrapper[4652]: I0216 17:43:26.054016 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c25348d5579562fa5909325281f82faf41b073f6dfc2180c6d380970e56ccd10" Feb 16 17:43:26.055945 master-0 kubenswrapper[4652]: I0216 17:43:26.055915 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c6c9-account-create-update-xdl2v" event={"ID":"10a6eddf-9a5a-450c-b4f9-45fc556526dc","Type":"ContainerDied","Data":"92094f6bd9e4b49963b5d274a5625ac1182656ebb495fc87dec0ed1cbd1d8825"} Feb 16 17:43:26.055945 master-0 kubenswrapper[4652]: I0216 17:43:26.055939 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92094f6bd9e4b49963b5d274a5625ac1182656ebb495fc87dec0ed1cbd1d8825" Feb 16 17:43:26.056064 master-0 kubenswrapper[4652]: I0216 17:43:26.055971 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c6c9-account-create-update-xdl2v" Feb 16 17:43:26.069297 master-0 kubenswrapper[4652]: I0216 17:43:26.069238 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" event={"ID":"923ed270-e930-4edb-bd2e-8db0412c5334","Type":"ContainerDied","Data":"f6d3006e2a519eae31c099e9a97eda394915a81c44e5dcba9f4531938d8caee1"} Feb 16 17:43:26.069297 master-0 kubenswrapper[4652]: I0216 17:43:26.069295 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6d3006e2a519eae31c099e9a97eda394915a81c44e5dcba9f4531938d8caee1" Feb 16 17:43:26.069451 master-0 kubenswrapper[4652]: I0216 17:43:26.069358 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d9f2-account-create-update-r7xjk" Feb 16 17:43:26.072387 master-0 kubenswrapper[4652]: I0216 17:43:26.072319 4652 generic.go:334] "Generic (PLEG): container finished" podID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerID="3d0602731bf458ec2a01be157b2e02706750d5d14698c9fbe4f0b1a62587c519" exitCode=0 Feb 16 17:43:26.072488 master-0 kubenswrapper[4652]: I0216 17:43:26.072415 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f9d86cdb-h58xd" event={"ID":"d4f31917-de6f-4a2d-a7ec-14023e52f58d","Type":"ContainerDied","Data":"3d0602731bf458ec2a01be157b2e02706750d5d14698c9fbe4f0b1a62587c519"} Feb 16 17:43:26.076975 master-0 kubenswrapper[4652]: I0216 17:43:26.075269 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerStarted","Data":"fbadfdfcc27e4a352260ab5d0d5522b53ca0aa3355117c4aa2fcc37a416ac351"} Feb 16 17:43:26.078816 master-0 kubenswrapper[4652]: I0216 17:43:26.078774 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-nrzvp" event={"ID":"cbfc1e76-e972-465d-bed5-92eb603c32a6","Type":"ContainerDied","Data":"102c9c512914d9f59cfb624f52f55301e85b60941fa84f5962a5f1b9c1cf7319"} Feb 16 17:43:26.078816 master-0 kubenswrapper[4652]: I0216 17:43:26.078815 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="102c9c512914d9f59cfb624f52f55301e85b60941fa84f5962a5f1b9c1cf7319" Feb 16 17:43:26.078948 master-0 kubenswrapper[4652]: I0216 17:43:26.078862 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-nrzvp" Feb 16 17:43:26.086085 master-0 kubenswrapper[4652]: I0216 17:43:26.085999 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-v5nmj" event={"ID":"9f793d28-5e22-4c0a-8f87-dabf1e4031a2","Type":"ContainerStarted","Data":"f30bd4b174a0d986708740453f187fe3428c4634c9adccae928a5d0f3e8b57c4"} Feb 16 17:43:26.088581 master-0 kubenswrapper[4652]: I0216 17:43:26.088531 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" event={"ID":"8f3751fd-c328-4914-8e15-a14ad13a527d","Type":"ContainerStarted","Data":"2560b68b51efb99bdedb0e3df26d9a0f6fad6c57346b54b010f1c0b45a64e509"} Feb 16 17:43:26.091923 master-0 kubenswrapper[4652]: I0216 17:43:26.089547 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:43:26.095951 master-0 kubenswrapper[4652]: I0216 17:43:26.095904 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-gp6kb" Feb 16 17:43:26.097316 master-0 kubenswrapper[4652]: I0216 17:43:26.096569 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-gp6kb" event={"ID":"089b8594-a539-4435-9573-6d904bce3901","Type":"ContainerDied","Data":"d7a2185d845a21f068224c37e5ecd9ca818dbf81bfd212ec0be2981bb6068952"} Feb 16 17:43:26.097316 master-0 kubenswrapper[4652]: I0216 17:43:26.096612 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7a2185d845a21f068224c37e5ecd9ca818dbf81bfd212ec0be2981bb6068952" Feb 16 17:43:26.102440 master-0 kubenswrapper[4652]: I0216 17:43:26.102394 4652 generic.go:334] "Generic (PLEG): container finished" podID="67dcf429-d644-435b-8edb-e08198064dfb" containerID="076fa119e8cae849e305bb8138aa038ea39286ea3cb353a183aef6ac4148ad49" exitCode=143 Feb 16 17:43:26.105316 master-0 kubenswrapper[4652]: I0216 17:43:26.102760 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"67dcf429-d644-435b-8edb-e08198064dfb","Type":"ContainerDied","Data":"076fa119e8cae849e305bb8138aa038ea39286ea3cb353a183aef6ac4148ad49"} Feb 16 17:43:26.107839 master-0 kubenswrapper[4652]: I0216 17:43:26.107211 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b802-account-create-update-mqckv" event={"ID":"56eeae89-abbe-4bff-b750-6ad05532d328","Type":"ContainerDied","Data":"9f8b38925d05f2b0a887579d212e5032b5817aecfc1e8685302336c74015fccc"} Feb 16 17:43:26.107839 master-0 kubenswrapper[4652]: I0216 17:43:26.107273 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f8b38925d05f2b0a887579d212e5032b5817aecfc1e8685302336c74015fccc" Feb 16 17:43:26.107839 master-0 kubenswrapper[4652]: I0216 17:43:26.107325 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-b802-account-create-update-mqckv" Feb 16 17:43:26.111035 master-0 kubenswrapper[4652]: I0216 17:43:26.110951 4652 generic.go:334] "Generic (PLEG): container finished" podID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerID="2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf" exitCode=143 Feb 16 17:43:26.111035 master-0 kubenswrapper[4652]: I0216 17:43:26.110990 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95","Type":"ContainerDied","Data":"2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf"} Feb 16 17:43:26.169039 master-0 kubenswrapper[4652]: I0216 17:43:26.167277 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-db-sync-v5nmj" podStartSLOduration=10.595610012 podStartE2EDuration="17.16723599s" podCreationTimestamp="2026-02-16 17:43:09 +0000 UTC" firstStartedPulling="2026-02-16 17:43:18.73080303 +0000 UTC m=+1156.118971546" lastFinishedPulling="2026-02-16 17:43:25.302429008 +0000 UTC m=+1162.690597524" observedRunningTime="2026-02-16 17:43:26.146945996 +0000 UTC m=+1163.535114542" watchObservedRunningTime="2026-02-16 17:43:26.16723599 +0000 UTC m=+1163.555404506" Feb 16 17:43:28.135591 master-0 kubenswrapper[4652]: I0216 17:43:28.135541 4652 generic.go:334] "Generic (PLEG): container finished" podID="9f793d28-5e22-4c0a-8f87-dabf1e4031a2" containerID="f30bd4b174a0d986708740453f187fe3428c4634c9adccae928a5d0f3e8b57c4" exitCode=0 Feb 16 17:43:28.136113 master-0 kubenswrapper[4652]: I0216 17:43:28.135626 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-v5nmj" event={"ID":"9f793d28-5e22-4c0a-8f87-dabf1e4031a2","Type":"ContainerDied","Data":"f30bd4b174a0d986708740453f187fe3428c4634c9adccae928a5d0f3e8b57c4"} Feb 16 17:43:28.139127 master-0 kubenswrapper[4652]: I0216 17:43:28.139090 4652 generic.go:334] "Generic (PLEG): container finished" podID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerID="5fc807d82c3e5673a6f374ae1a575834bfeb59f1ee48fd03180d2278dec790d1" exitCode=0 Feb 16 17:43:28.139322 master-0 kubenswrapper[4652]: I0216 17:43:28.139131 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f9d86cdb-h58xd" event={"ID":"d4f31917-de6f-4a2d-a7ec-14023e52f58d","Type":"ContainerDied","Data":"5fc807d82c3e5673a6f374ae1a575834bfeb59f1ee48fd03180d2278dec790d1"} Feb 16 17:43:28.392797 master-0 kubenswrapper[4652]: I0216 17:43:28.390978 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-50e08-default-external-api-0" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.185:9292/healthcheck\": read tcp 10.128.0.2:60426->10.128.0.185:9292: read: connection reset by peer" Feb 16 17:43:28.392797 master-0 kubenswrapper[4652]: I0216 17:43:28.391000 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-50e08-default-external-api-0" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.185:9292/healthcheck\": read tcp 10.128.0.2:60418->10.128.0.185:9292: read: connection reset by peer" Feb 16 17:43:28.604551 master-0 kubenswrapper[4652]: I0216 17:43:28.604494 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:43:28.720386 master-0 kubenswrapper[4652]: I0216 17:43:28.717240 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-httpd-config\") pod \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " Feb 16 17:43:28.720386 master-0 kubenswrapper[4652]: I0216 17:43:28.717329 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-ovndb-tls-certs\") pod \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " Feb 16 17:43:28.720386 master-0 kubenswrapper[4652]: I0216 17:43:28.717369 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-config\") pod \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " Feb 16 17:43:28.720386 master-0 kubenswrapper[4652]: I0216 17:43:28.717405 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4flz\" (UniqueName: \"kubernetes.io/projected/d4f31917-de6f-4a2d-a7ec-14023e52f58d-kube-api-access-x4flz\") pod \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " Feb 16 17:43:28.720386 master-0 kubenswrapper[4652]: I0216 17:43:28.717511 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-combined-ca-bundle\") pod \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\" (UID: \"d4f31917-de6f-4a2d-a7ec-14023e52f58d\") " Feb 16 17:43:28.723652 master-0 kubenswrapper[4652]: I0216 17:43:28.723577 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d4f31917-de6f-4a2d-a7ec-14023e52f58d" (UID: "d4f31917-de6f-4a2d-a7ec-14023e52f58d"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:28.725702 master-0 kubenswrapper[4652]: I0216 17:43:28.724366 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4f31917-de6f-4a2d-a7ec-14023e52f58d-kube-api-access-x4flz" (OuterVolumeSpecName: "kube-api-access-x4flz") pod "d4f31917-de6f-4a2d-a7ec-14023e52f58d" (UID: "d4f31917-de6f-4a2d-a7ec-14023e52f58d"). InnerVolumeSpecName "kube-api-access-x4flz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:28.789904 master-0 kubenswrapper[4652]: I0216 17:43:28.789822 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-config" (OuterVolumeSpecName: "config") pod "d4f31917-de6f-4a2d-a7ec-14023e52f58d" (UID: "d4f31917-de6f-4a2d-a7ec-14023e52f58d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:28.820886 master-0 kubenswrapper[4652]: I0216 17:43:28.820776 4652 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-httpd-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:28.820886 master-0 kubenswrapper[4652]: I0216 17:43:28.820814 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:28.820886 master-0 kubenswrapper[4652]: I0216 17:43:28.820824 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4flz\" (UniqueName: \"kubernetes.io/projected/d4f31917-de6f-4a2d-a7ec-14023e52f58d-kube-api-access-x4flz\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:28.826725 master-0 kubenswrapper[4652]: I0216 17:43:28.826212 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4f31917-de6f-4a2d-a7ec-14023e52f58d" (UID: "d4f31917-de6f-4a2d-a7ec-14023e52f58d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:28.871402 master-0 kubenswrapper[4652]: I0216 17:43:28.863463 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d4f31917-de6f-4a2d-a7ec-14023e52f58d" (UID: "d4f31917-de6f-4a2d-a7ec-14023e52f58d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:28.925269 master-0 kubenswrapper[4652]: I0216 17:43:28.923545 4652 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-ovndb-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:28.925269 master-0 kubenswrapper[4652]: I0216 17:43:28.923583 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4f31917-de6f-4a2d-a7ec-14023e52f58d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:28.978671 master-0 kubenswrapper[4652]: I0216 17:43:28.977692 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-50e08-default-internal-api-0" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.128.0.186:9292/healthcheck\": read tcp 10.128.0.2:54448->10.128.0.186:9292: read: connection reset by peer" Feb 16 17:43:28.978671 master-0 kubenswrapper[4652]: I0216 17:43:28.977802 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-50e08-default-internal-api-0" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-log" probeResult="failure" output="Get \"https://10.128.0.186:9292/healthcheck\": read tcp 10.128.0.2:54434->10.128.0.186:9292: read: connection reset by peer" Feb 16 17:43:29.002607 master-0 kubenswrapper[4652]: I0216 17:43:29.002557 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.128143 master-0 kubenswrapper[4652]: I0216 17:43:29.128066 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v6rd\" (UniqueName: \"kubernetes.io/projected/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-kube-api-access-7v6rd\") pod \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " Feb 16 17:43:29.128143 master-0 kubenswrapper[4652]: I0216 17:43:29.128132 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-httpd-run\") pod \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " Feb 16 17:43:29.128561 master-0 kubenswrapper[4652]: I0216 17:43:29.128162 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-combined-ca-bundle\") pod \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " Feb 16 17:43:29.132522 master-0 kubenswrapper[4652]: I0216 17:43:29.131928 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" (UID: "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:29.132522 master-0 kubenswrapper[4652]: I0216 17:43:29.132427 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-public-tls-certs\") pod \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " Feb 16 17:43:29.132522 master-0 kubenswrapper[4652]: I0216 17:43:29.132523 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-scripts\") pod \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " Feb 16 17:43:29.132929 master-0 kubenswrapper[4652]: I0216 17:43:29.132554 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-config-data\") pod \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " Feb 16 17:43:29.132929 master-0 kubenswrapper[4652]: I0216 17:43:29.132580 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-logs\") pod \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " Feb 16 17:43:29.132929 master-0 kubenswrapper[4652]: I0216 17:43:29.132923 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\" (UID: \"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95\") " Feb 16 17:43:29.133639 master-0 kubenswrapper[4652]: I0216 17:43:29.133509 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-kube-api-access-7v6rd" (OuterVolumeSpecName: "kube-api-access-7v6rd") pod "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" (UID: "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95"). InnerVolumeSpecName "kube-api-access-7v6rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:29.134278 master-0 kubenswrapper[4652]: I0216 17:43:29.133831 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v6rd\" (UniqueName: \"kubernetes.io/projected/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-kube-api-access-7v6rd\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.134278 master-0 kubenswrapper[4652]: I0216 17:43:29.133851 4652 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.134776 master-0 kubenswrapper[4652]: I0216 17:43:29.134489 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-logs" (OuterVolumeSpecName: "logs") pod "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" (UID: "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:29.138762 master-0 kubenswrapper[4652]: I0216 17:43:29.138547 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-scripts" (OuterVolumeSpecName: "scripts") pod "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" (UID: "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.153740 master-0 kubenswrapper[4652]: I0216 17:43:29.153685 4652 generic.go:334] "Generic (PLEG): container finished" podID="67dcf429-d644-435b-8edb-e08198064dfb" containerID="81692546bc5049a915cfd0b82e0a924cf6a246e940f0254c5f82ef23ef987a76" exitCode=0 Feb 16 17:43:29.153954 master-0 kubenswrapper[4652]: I0216 17:43:29.153766 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"67dcf429-d644-435b-8edb-e08198064dfb","Type":"ContainerDied","Data":"81692546bc5049a915cfd0b82e0a924cf6a246e940f0254c5f82ef23ef987a76"} Feb 16 17:43:29.155185 master-0 kubenswrapper[4652]: I0216 17:43:29.155062 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5" (OuterVolumeSpecName: "glance") pod "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" (UID: "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95"). InnerVolumeSpecName "pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:43:29.156358 master-0 kubenswrapper[4652]: I0216 17:43:29.156152 4652 generic.go:334] "Generic (PLEG): container finished" podID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerID="31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be" exitCode=0 Feb 16 17:43:29.156358 master-0 kubenswrapper[4652]: I0216 17:43:29.156194 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95","Type":"ContainerDied","Data":"31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be"} Feb 16 17:43:29.156358 master-0 kubenswrapper[4652]: I0216 17:43:29.156236 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.156358 master-0 kubenswrapper[4652]: I0216 17:43:29.156270 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"f9b6c637-c97f-4d6e-b233-6c6e1a54cc95","Type":"ContainerDied","Data":"1ec31d9bafaaf086cfc60e24d11417b56c744f1116b7f115c763c3efbdb7a781"} Feb 16 17:43:29.156358 master-0 kubenswrapper[4652]: I0216 17:43:29.156296 4652 scope.go:117] "RemoveContainer" containerID="31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be" Feb 16 17:43:29.159768 master-0 kubenswrapper[4652]: I0216 17:43:29.159714 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-66f9d86cdb-h58xd" event={"ID":"d4f31917-de6f-4a2d-a7ec-14023e52f58d","Type":"ContainerDied","Data":"c32aeb76122c0938b82f34b69a7287e62d3455243fef801010c5cfe9f22a7c19"} Feb 16 17:43:29.159879 master-0 kubenswrapper[4652]: I0216 17:43:29.159729 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-66f9d86cdb-h58xd" Feb 16 17:43:29.193181 master-0 kubenswrapper[4652]: I0216 17:43:29.193059 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" (UID: "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.203838 master-0 kubenswrapper[4652]: I0216 17:43:29.203788 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" (UID: "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.220497 master-0 kubenswrapper[4652]: I0216 17:43:29.220403 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-config-data" (OuterVolumeSpecName: "config-data") pod "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" (UID: "f9b6c637-c97f-4d6e-b233-6c6e1a54cc95"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.237544 master-0 kubenswrapper[4652]: I0216 17:43:29.236667 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.237544 master-0 kubenswrapper[4652]: I0216 17:43:29.236706 4652 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-public-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.237544 master-0 kubenswrapper[4652]: I0216 17:43:29.236719 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.237544 master-0 kubenswrapper[4652]: I0216 17:43:29.236730 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.237544 master-0 kubenswrapper[4652]: I0216 17:43:29.236741 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.237544 master-0 kubenswrapper[4652]: I0216 17:43:29.236789 4652 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") on node \"master-0\" " Feb 16 17:43:29.294201 master-0 kubenswrapper[4652]: I0216 17:43:29.293688 4652 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 17:43:29.294201 master-0 kubenswrapper[4652]: I0216 17:43:29.293935 4652 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028" (UniqueName: "kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5") on node "master-0" Feb 16 17:43:29.314205 master-0 kubenswrapper[4652]: I0216 17:43:29.313379 4652 scope.go:117] "RemoveContainer" containerID="2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf" Feb 16 17:43:29.319808 master-0 kubenswrapper[4652]: I0216 17:43:29.319746 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-66f9d86cdb-h58xd"] Feb 16 17:43:29.335609 master-0 kubenswrapper[4652]: I0216 17:43:29.334842 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-66f9d86cdb-h58xd"] Feb 16 17:43:29.338266 master-0 kubenswrapper[4652]: I0216 17:43:29.338219 4652 scope.go:117] "RemoveContainer" containerID="31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be" Feb 16 17:43:29.339073 master-0 kubenswrapper[4652]: E0216 17:43:29.339027 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be\": container with ID starting with 31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be not found: ID does not exist" containerID="31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be" Feb 16 17:43:29.339137 master-0 kubenswrapper[4652]: I0216 17:43:29.339088 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be"} err="failed to get container status \"31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be\": rpc error: code = NotFound desc = could not find container \"31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be\": container with ID starting with 31c93779bdd8a5bb7167d32420acdff15bffcf01001b6ed861d4a2a2c1c4c2be not found: ID does not exist" Feb 16 17:43:29.339137 master-0 kubenswrapper[4652]: I0216 17:43:29.339126 4652 scope.go:117] "RemoveContainer" containerID="2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf" Feb 16 17:43:29.339621 master-0 kubenswrapper[4652]: E0216 17:43:29.339590 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf\": container with ID starting with 2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf not found: ID does not exist" containerID="2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf" Feb 16 17:43:29.339696 master-0 kubenswrapper[4652]: I0216 17:43:29.339626 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf"} err="failed to get container status \"2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf\": rpc error: code = NotFound desc = could not find container \"2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf\": container with ID starting with 2557431c0c136ca2439fed860ea95097d8ceaeb286271c01f87bb1d349766acf not found: ID does not exist" Feb 16 17:43:29.339696 master-0 kubenswrapper[4652]: I0216 17:43:29.339676 4652 scope.go:117] "RemoveContainer" 
containerID="3d0602731bf458ec2a01be157b2e02706750d5d14698c9fbe4f0b1a62587c519" Feb 16 17:43:29.340622 master-0 kubenswrapper[4652]: I0216 17:43:29.340599 4652 reconciler_common.go:293] "Volume detached for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.372224 master-0 kubenswrapper[4652]: I0216 17:43:29.372183 4652 scope.go:117] "RemoveContainer" containerID="5fc807d82c3e5673a6f374ae1a575834bfeb59f1ee48fd03180d2278dec790d1" Feb 16 17:43:29.519992 master-0 kubenswrapper[4652]: I0216 17:43:29.519930 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:43:29.537970 master-0 kubenswrapper[4652]: I0216 17:43:29.536260 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:43:29.562009 master-0 kubenswrapper[4652]: I0216 17:43:29.561694 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562414 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerName="neutron-api" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562441 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerName="neutron-api" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562456 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-httpd" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562463 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-httpd" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562480 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="init" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562488 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="init" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562496 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10a6eddf-9a5a-450c-b4f9-45fc556526dc" containerName="mariadb-account-create-update" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562503 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="10a6eddf-9a5a-450c-b4f9-45fc556526dc" containerName="mariadb-account-create-update" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562518 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-log" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562526 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-log" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562543 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562550 4652 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562561 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="089b8594-a539-4435-9573-6d904bce3901" containerName="mariadb-database-create" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562568 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="089b8594-a539-4435-9573-6d904bce3901" containerName="mariadb-database-create" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562585 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56eeae89-abbe-4bff-b750-6ad05532d328" containerName="mariadb-account-create-update" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562592 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="56eeae89-abbe-4bff-b750-6ad05532d328" containerName="mariadb-account-create-update" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562604 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerName="neutron-httpd" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562611 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerName="neutron-httpd" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562620 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="923ed270-e930-4edb-bd2e-8db0412c5334" containerName="mariadb-account-create-update" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562628 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="923ed270-e930-4edb-bd2e-8db0412c5334" containerName="mariadb-account-create-update" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562640 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbfc1e76-e972-465d-bed5-92eb603c32a6" containerName="mariadb-database-create" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562648 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbfc1e76-e972-465d-bed5-92eb603c32a6" containerName="mariadb-database-create" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562671 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api-log" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562678 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api-log" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562694 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e120e62-7cc3-4d7e-8c1c-92d3f06302f1" containerName="mariadb-database-create" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562701 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e120e62-7cc3-4d7e-8c1c-92d3f06302f1" containerName="mariadb-database-create" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.562716 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562723 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api" Feb 16 17:43:29.564398 
master-0 kubenswrapper[4652]: I0216 17:43:29.562975 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-httpd" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.562991 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="10a6eddf-9a5a-450c-b4f9-45fc556526dc" containerName="mariadb-account-create-update" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563004 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e120e62-7cc3-4d7e-8c1c-92d3f06302f1" containerName="mariadb-database-create" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563020 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbfc1e76-e972-465d-bed5-92eb603c32a6" containerName="mariadb-database-create" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563031 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="923ed270-e930-4edb-bd2e-8db0412c5334" containerName="mariadb-account-create-update" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563042 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="56eeae89-abbe-4bff-b750-6ad05532d328" containerName="mariadb-account-create-update" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563059 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" containerName="glance-log" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563073 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563089 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api-log" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563103 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="ironic-api" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563118 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerName="neutron-api" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563125 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="089b8594-a539-4435-9573-6d904bce3901" containerName="mariadb-database-create" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563141 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" containerName="neutron-httpd" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: E0216 17:43:29.563438 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="init" Feb 16 17:43:29.564398 master-0 kubenswrapper[4652]: I0216 17:43:29.563450 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c06307c-938d-45f7-b671-948d93bf0642" containerName="init" Feb 16 17:43:29.565823 master-0 kubenswrapper[4652]: I0216 17:43:29.565169 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.570382 master-0 kubenswrapper[4652]: I0216 17:43:29.568594 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 17:43:29.570878 master-0 kubenswrapper[4652]: I0216 17:43:29.570851 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-50e08-default-external-config-data" Feb 16 17:43:29.582123 master-0 kubenswrapper[4652]: I0216 17:43:29.582075 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:43:29.676939 master-0 kubenswrapper[4652]: I0216 17:43:29.676892 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:29.686452 master-0 kubenswrapper[4652]: I0216 17:43:29.684377 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:29.757181 master-0 kubenswrapper[4652]: I0216 17:43:29.757121 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.757181 master-0 kubenswrapper[4652]: I0216 17:43:29.757186 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.772347 master-0 kubenswrapper[4652]: I0216 17:43:29.757368 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.772698 master-0 kubenswrapper[4652]: I0216 17:43:29.772675 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27t88\" (UniqueName: \"kubernetes.io/projected/17cef5f2-9564-4a9b-b067-89f498cf4a07-kube-api-access-27t88\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.775239 master-0 kubenswrapper[4652]: I0216 17:43:29.775206 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17cef5f2-9564-4a9b-b067-89f498cf4a07-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.775451 master-0 kubenswrapper[4652]: I0216 17:43:29.775431 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: 
\"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.775822 master-0 kubenswrapper[4652]: I0216 17:43:29.775744 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-public-tls-certs\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.776000 master-0 kubenswrapper[4652]: I0216 17:43:29.775982 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17cef5f2-9564-4a9b-b067-89f498cf4a07-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.793224 master-0 kubenswrapper[4652]: I0216 17:43:29.793187 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-neutron-agent-6975fcc79b-5wclc" Feb 16 17:43:29.877414 master-0 kubenswrapper[4652]: I0216 17:43:29.877045 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-internal-tls-certs\") pod \"67dcf429-d644-435b-8edb-e08198064dfb\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " Feb 16 17:43:29.877414 master-0 kubenswrapper[4652]: I0216 17:43:29.877130 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v66v4\" (UniqueName: \"kubernetes.io/projected/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-kube-api-access-v66v4\") pod \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " Feb 16 17:43:29.877414 master-0 kubenswrapper[4652]: I0216 17:43:29.877175 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic\") pod \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " Feb 16 17:43:29.877414 master-0 kubenswrapper[4652]: I0216 17:43:29.877273 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpj8x\" (UniqueName: \"kubernetes.io/projected/67dcf429-d644-435b-8edb-e08198064dfb-kube-api-access-mpj8x\") pod \"67dcf429-d644-435b-8edb-e08198064dfb\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " Feb 16 17:43:29.877414 master-0 kubenswrapper[4652]: I0216 17:43:29.877350 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-combined-ca-bundle\") pod \"67dcf429-d644-435b-8edb-e08198064dfb\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " Feb 16 17:43:29.877414 master-0 kubenswrapper[4652]: I0216 17:43:29.877382 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-httpd-run\") pod \"67dcf429-d644-435b-8edb-e08198064dfb\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " Feb 16 17:43:29.877955 master-0 kubenswrapper[4652]: I0216 17:43:29.877521 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" 
(UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"67dcf429-d644-435b-8edb-e08198064dfb\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " Feb 16 17:43:29.877955 master-0 kubenswrapper[4652]: I0216 17:43:29.877566 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-combined-ca-bundle\") pod \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " Feb 16 17:43:29.877955 master-0 kubenswrapper[4652]: I0216 17:43:29.877649 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-config\") pod \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " Feb 16 17:43:29.877955 master-0 kubenswrapper[4652]: I0216 17:43:29.877697 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-scripts\") pod \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " Feb 16 17:43:29.877955 master-0 kubenswrapper[4652]: I0216 17:43:29.877774 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-logs\") pod \"67dcf429-d644-435b-8edb-e08198064dfb\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " Feb 16 17:43:29.877955 master-0 kubenswrapper[4652]: I0216 17:43:29.877800 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-config-data\") pod \"67dcf429-d644-435b-8edb-e08198064dfb\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " Feb 16 17:43:29.877955 master-0 kubenswrapper[4652]: I0216 17:43:29.877845 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " Feb 16 17:43:29.877955 master-0 kubenswrapper[4652]: I0216 17:43:29.877903 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-etc-podinfo\") pod \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\" (UID: \"9f793d28-5e22-4c0a-8f87-dabf1e4031a2\") " Feb 16 17:43:29.877955 master-0 kubenswrapper[4652]: I0216 17:43:29.877931 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-scripts\") pod \"67dcf429-d644-435b-8edb-e08198064dfb\" (UID: \"67dcf429-d644-435b-8edb-e08198064dfb\") " Feb 16 17:43:29.878428 master-0 kubenswrapper[4652]: I0216 17:43:29.878396 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.878499 master-0 kubenswrapper[4652]: I0216 17:43:29.878450 4652 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27t88\" (UniqueName: \"kubernetes.io/projected/17cef5f2-9564-4a9b-b067-89f498cf4a07-kube-api-access-27t88\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.878591 master-0 kubenswrapper[4652]: I0216 17:43:29.878564 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17cef5f2-9564-4a9b-b067-89f498cf4a07-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.878652 master-0 kubenswrapper[4652]: I0216 17:43:29.878620 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.878723 master-0 kubenswrapper[4652]: I0216 17:43:29.878699 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-public-tls-certs\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.878780 master-0 kubenswrapper[4652]: I0216 17:43:29.878761 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17cef5f2-9564-4a9b-b067-89f498cf4a07-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.878834 master-0 kubenswrapper[4652]: I0216 17:43:29.878793 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.878834 master-0 kubenswrapper[4652]: I0216 17:43:29.878823 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.883571 master-0 kubenswrapper[4652]: I0216 17:43:29.883109 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/17cef5f2-9564-4a9b-b067-89f498cf4a07-httpd-run\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.883920 master-0 kubenswrapper[4652]: I0216 17:43:29.883884 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "9f793d28-5e22-4c0a-8f87-dabf1e4031a2" (UID: 
"9f793d28-5e22-4c0a-8f87-dabf1e4031a2"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 17:43:29.884482 master-0 kubenswrapper[4652]: I0216 17:43:29.884460 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-logs" (OuterVolumeSpecName: "logs") pod "67dcf429-d644-435b-8edb-e08198064dfb" (UID: "67dcf429-d644-435b-8edb-e08198064dfb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:29.884593 master-0 kubenswrapper[4652]: I0216 17:43:29.884472 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "9f793d28-5e22-4c0a-8f87-dabf1e4031a2" (UID: "9f793d28-5e22-4c0a-8f87-dabf1e4031a2"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:29.884785 master-0 kubenswrapper[4652]: I0216 17:43:29.884758 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "67dcf429-d644-435b-8edb-e08198064dfb" (UID: "67dcf429-d644-435b-8edb-e08198064dfb"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:29.884874 master-0 kubenswrapper[4652]: I0216 17:43:29.884745 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "9f793d28-5e22-4c0a-8f87-dabf1e4031a2" (UID: "9f793d28-5e22-4c0a-8f87-dabf1e4031a2"). InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:29.885028 master-0 kubenswrapper[4652]: I0216 17:43:29.884990 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-kube-api-access-v66v4" (OuterVolumeSpecName: "kube-api-access-v66v4") pod "9f793d28-5e22-4c0a-8f87-dabf1e4031a2" (UID: "9f793d28-5e22-4c0a-8f87-dabf1e4031a2"). InnerVolumeSpecName "kube-api-access-v66v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:29.885142 master-0 kubenswrapper[4652]: I0216 17:43:29.885117 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17cef5f2-9564-4a9b-b067-89f498cf4a07-logs\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.886109 master-0 kubenswrapper[4652]: I0216 17:43:29.886020 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67dcf429-d644-435b-8edb-e08198064dfb-kube-api-access-mpj8x" (OuterVolumeSpecName: "kube-api-access-mpj8x") pod "67dcf429-d644-435b-8edb-e08198064dfb" (UID: "67dcf429-d644-435b-8edb-e08198064dfb"). InnerVolumeSpecName "kube-api-access-mpj8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:29.886183 master-0 kubenswrapper[4652]: I0216 17:43:29.886073 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 17:43:29.886529 master-0 kubenswrapper[4652]: I0216 17:43:29.886412 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/3d8bc9531c98396b6e6ea0108c18f808bdb9e170b0cc5e329df6a02a3996a78b/globalmount\"" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.887620 master-0 kubenswrapper[4652]: I0216 17:43:29.887536 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-combined-ca-bundle\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.887620 master-0 kubenswrapper[4652]: I0216 17:43:29.887555 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-scripts" (OuterVolumeSpecName: "scripts") pod "67dcf429-d644-435b-8edb-e08198064dfb" (UID: "67dcf429-d644-435b-8edb-e08198064dfb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.887849 master-0 kubenswrapper[4652]: I0216 17:43:29.887823 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-config-data\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.891049 master-0 kubenswrapper[4652]: I0216 17:43:29.890986 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-scripts" (OuterVolumeSpecName: "scripts") pod "9f793d28-5e22-4c0a-8f87-dabf1e4031a2" (UID: "9f793d28-5e22-4c0a-8f87-dabf1e4031a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.891492 master-0 kubenswrapper[4652]: I0216 17:43:29.891463 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-public-tls-certs\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.892310 master-0 kubenswrapper[4652]: I0216 17:43:29.892281 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/17cef5f2-9564-4a9b-b067-89f498cf4a07-scripts\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.906448 master-0 kubenswrapper[4652]: I0216 17:43:29.906148 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364" (OuterVolumeSpecName: "glance") pod "67dcf429-d644-435b-8edb-e08198064dfb" (UID: "67dcf429-d644-435b-8edb-e08198064dfb"). InnerVolumeSpecName "pvc-f5bb6936-02e9-48af-847a-b5f88beeba22". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 17:43:29.913972 master-0 kubenswrapper[4652]: I0216 17:43:29.913235 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-config" (OuterVolumeSpecName: "config") pod "9f793d28-5e22-4c0a-8f87-dabf1e4031a2" (UID: "9f793d28-5e22-4c0a-8f87-dabf1e4031a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.913972 master-0 kubenswrapper[4652]: I0216 17:43:29.913702 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9f793d28-5e22-4c0a-8f87-dabf1e4031a2" (UID: "9f793d28-5e22-4c0a-8f87-dabf1e4031a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.917545 master-0 kubenswrapper[4652]: I0216 17:43:29.917446 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67dcf429-d644-435b-8edb-e08198064dfb" (UID: "67dcf429-d644-435b-8edb-e08198064dfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.948643 master-0 kubenswrapper[4652]: I0216 17:43:29.948560 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27t88\" (UniqueName: \"kubernetes.io/projected/17cef5f2-9564-4a9b-b067-89f498cf4a07-kube-api-access-27t88\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:29.960794 master-0 kubenswrapper[4652]: I0216 17:43:29.960733 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "67dcf429-d644-435b-8edb-e08198064dfb" (UID: "67dcf429-d644-435b-8edb-e08198064dfb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.964959 master-0 kubenswrapper[4652]: I0216 17:43:29.964886 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-config-data" (OuterVolumeSpecName: "config-data") pod "67dcf429-d644-435b-8edb-e08198064dfb" (UID: "67dcf429-d644-435b-8edb-e08198064dfb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981143 4652 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-internal-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981190 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v66v4\" (UniqueName: \"kubernetes.io/projected/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-kube-api-access-v66v4\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981202 4652 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981212 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpj8x\" (UniqueName: \"kubernetes.io/projected/67dcf429-d644-435b-8edb-e08198064dfb-kube-api-access-mpj8x\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981221 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981232 4652 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-httpd-run\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981285 4652 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") on node \"master-0\" " Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981297 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981307 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981318 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981326 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/67dcf429-d644-435b-8edb-e08198064dfb-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981334 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981346 4652 
reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981355 4652 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/9f793d28-5e22-4c0a-8f87-dabf1e4031a2-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:29.981575 master-0 kubenswrapper[4652]: I0216 17:43:29.981364 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67dcf429-d644-435b-8edb-e08198064dfb-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:30.005134 master-0 kubenswrapper[4652]: I0216 17:43:30.005100 4652 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 17:43:30.005353 master-0 kubenswrapper[4652]: I0216 17:43:30.005340 4652 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f5bb6936-02e9-48af-847a-b5f88beeba22" (UniqueName: "kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364") on node "master-0" Feb 16 17:43:30.083303 master-0 kubenswrapper[4652]: I0216 17:43:30.083208 4652 reconciler_common.go:293] "Volume detached for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:30.176128 master-0 kubenswrapper[4652]: I0216 17:43:30.176081 4652 generic.go:334] "Generic (PLEG): container finished" podID="7d01cc93-4bd4-4091-92a8-1c9a7e035c3e" containerID="fbadfdfcc27e4a352260ab5d0d5522b53ca0aa3355117c4aa2fcc37a416ac351" exitCode=0 Feb 16 17:43:30.176128 master-0 kubenswrapper[4652]: I0216 17:43:30.176139 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerDied","Data":"fbadfdfcc27e4a352260ab5d0d5522b53ca0aa3355117c4aa2fcc37a416ac351"} Feb 16 17:43:30.183436 master-0 kubenswrapper[4652]: I0216 17:43:30.183394 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"67dcf429-d644-435b-8edb-e08198064dfb","Type":"ContainerDied","Data":"d38c349d591ef395ce194f3f72ba8a27ec32122317d0210bd8cbe86ed6538b5d"} Feb 16 17:43:30.183562 master-0 kubenswrapper[4652]: I0216 17:43:30.183451 4652 scope.go:117] "RemoveContainer" containerID="81692546bc5049a915cfd0b82e0a924cf6a246e940f0254c5f82ef23ef987a76" Feb 16 17:43:30.183688 master-0 kubenswrapper[4652]: I0216 17:43:30.183663 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.191260 master-0 kubenswrapper[4652]: I0216 17:43:30.191188 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-db-sync-v5nmj" event={"ID":"9f793d28-5e22-4c0a-8f87-dabf1e4031a2","Type":"ContainerDied","Data":"6605b0f635bdc818c561ba0f0d4ce366ab898091072f701441f022e5e02a4248"} Feb 16 17:43:30.191396 master-0 kubenswrapper[4652]: I0216 17:43:30.191266 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6605b0f635bdc818c561ba0f0d4ce366ab898091072f701441f022e5e02a4248" Feb 16 17:43:30.191396 master-0 kubenswrapper[4652]: I0216 17:43:30.191366 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-db-sync-v5nmj" Feb 16 17:43:30.230264 master-0 kubenswrapper[4652]: I0216 17:43:30.230163 4652 scope.go:117] "RemoveContainer" containerID="076fa119e8cae849e305bb8138aa038ea39286ea3cb353a183aef6ac4148ad49" Feb 16 17:43:30.258522 master-0 kubenswrapper[4652]: I0216 17:43:30.258425 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:43:30.284645 master-0 kubenswrapper[4652]: I0216 17:43:30.284597 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:43:30.323802 master-0 kubenswrapper[4652]: I0216 17:43:30.323733 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:43:30.324194 master-0 kubenswrapper[4652]: E0216 17:43:30.324171 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-log" Feb 16 17:43:30.324194 master-0 kubenswrapper[4652]: I0216 17:43:30.324188 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-log" Feb 16 17:43:30.324333 master-0 kubenswrapper[4652]: E0216 17:43:30.324222 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-httpd" Feb 16 17:43:30.324333 master-0 kubenswrapper[4652]: I0216 17:43:30.324230 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-httpd" Feb 16 17:43:30.324333 master-0 kubenswrapper[4652]: E0216 17:43:30.324276 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f793d28-5e22-4c0a-8f87-dabf1e4031a2" containerName="ironic-inspector-db-sync" Feb 16 17:43:30.324333 master-0 kubenswrapper[4652]: I0216 17:43:30.324284 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f793d28-5e22-4c0a-8f87-dabf1e4031a2" containerName="ironic-inspector-db-sync" Feb 16 17:43:30.324525 master-0 kubenswrapper[4652]: I0216 17:43:30.324483 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-log" Feb 16 17:43:30.324525 master-0 kubenswrapper[4652]: I0216 17:43:30.324506 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f793d28-5e22-4c0a-8f87-dabf1e4031a2" containerName="ironic-inspector-db-sync" Feb 16 17:43:30.324525 master-0 kubenswrapper[4652]: I0216 17:43:30.324517 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="67dcf429-d644-435b-8edb-e08198064dfb" containerName="glance-httpd" Feb 16 17:43:30.327767 master-0 kubenswrapper[4652]: I0216 
17:43:30.327717 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.329706 master-0 kubenswrapper[4652]: I0216 17:43:30.329655 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-50e08-default-internal-config-data" Feb 16 17:43:30.336346 master-0 kubenswrapper[4652]: I0216 17:43:30.332596 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 17:43:30.381722 master-0 kubenswrapper[4652]: I0216 17:43:30.381677 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:43:30.393468 master-0 kubenswrapper[4652]: I0216 17:43:30.393197 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n4l2r"] Feb 16 17:43:30.396031 master-0 kubenswrapper[4652]: I0216 17:43:30.395986 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.398395 master-0 kubenswrapper[4652]: I0216 17:43:30.397538 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 17:43:30.398395 master-0 kubenswrapper[4652]: I0216 17:43:30.397812 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 17:43:30.405883 master-0 kubenswrapper[4652]: I0216 17:43:30.405812 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n4l2r"] Feb 16 17:43:30.494930 master-0 kubenswrapper[4652]: I0216 17:43:30.494743 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.494930 master-0 kubenswrapper[4652]: I0216 17:43:30.494817 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p5tk\" (UniqueName: \"kubernetes.io/projected/a6a250cc-de97-4949-8016-70a1eb0c64a4-kube-api-access-8p5tk\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.494930 master-0 kubenswrapper[4652]: I0216 17:43:30.494843 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-config-data\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.495322 master-0 kubenswrapper[4652]: I0216 17:43:30.495018 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-combined-ca-bundle\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.495322 master-0 kubenswrapper[4652]: I0216 17:43:30.495087 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-kjs7p\" (UniqueName: \"kubernetes.io/projected/526542d1-0383-49a5-9190-2389a0aef5f1-kube-api-access-kjs7p\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.495322 master-0 kubenswrapper[4652]: I0216 17:43:30.495165 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/526542d1-0383-49a5-9190-2389a0aef5f1-httpd-run\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.495322 master-0 kubenswrapper[4652]: I0216 17:43:30.495225 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.495322 master-0 kubenswrapper[4652]: I0216 17:43:30.495285 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526542d1-0383-49a5-9190-2389a0aef5f1-logs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.495526 master-0 kubenswrapper[4652]: I0216 17:43:30.495388 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-scripts\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.495526 master-0 kubenswrapper[4652]: I0216 17:43:30.495414 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-scripts\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.495526 master-0 kubenswrapper[4652]: I0216 17:43:30.495469 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-internal-tls-certs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.495686 master-0 kubenswrapper[4652]: I0216 17:43:30.495570 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-config-data\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.597632 master-0 kubenswrapper[4652]: I0216 17:43:30.597578 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-config-data\") pod 
\"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.597842 master-0 kubenswrapper[4652]: I0216 17:43:30.597737 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.597842 master-0 kubenswrapper[4652]: I0216 17:43:30.597805 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p5tk\" (UniqueName: \"kubernetes.io/projected/a6a250cc-de97-4949-8016-70a1eb0c64a4-kube-api-access-8p5tk\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.597842 master-0 kubenswrapper[4652]: I0216 17:43:30.597827 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-config-data\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.597962 master-0 kubenswrapper[4652]: I0216 17:43:30.597859 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-combined-ca-bundle\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.597962 master-0 kubenswrapper[4652]: I0216 17:43:30.597885 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjs7p\" (UniqueName: \"kubernetes.io/projected/526542d1-0383-49a5-9190-2389a0aef5f1-kube-api-access-kjs7p\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.598572 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/526542d1-0383-49a5-9190-2389a0aef5f1-httpd-run\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.598674 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.598717 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526542d1-0383-49a5-9190-2389a0aef5f1-logs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.598828 4652 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-scripts\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.598849 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-scripts\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.598915 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-internal-tls-certs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.599051 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/526542d1-0383-49a5-9190-2389a0aef5f1-httpd-run\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.600145 4652 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.600174 4652 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/topolvm.io/6145192c64db548fddf9bb3cc8141db5764e5395e391d0e15bf39805d4ff5e26/globalmount\"" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.600998 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/526542d1-0383-49a5-9190-2389a0aef5f1-logs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.601500 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.601994 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-scripts\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.603949 master-0 
kubenswrapper[4652]: I0216 17:43:30.602044 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-combined-ca-bundle\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.602506 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-config-data\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.603384 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-internal-tls-certs\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.603949 master-0 kubenswrapper[4652]: I0216 17:43:30.603884 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-config-data\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.606844 master-0 kubenswrapper[4652]: I0216 17:43:30.606813 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/526542d1-0383-49a5-9190-2389a0aef5f1-scripts\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.616356 master-0 kubenswrapper[4652]: I0216 17:43:30.615089 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjs7p\" (UniqueName: \"kubernetes.io/projected/526542d1-0383-49a5-9190-2389a0aef5f1-kube-api-access-kjs7p\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:30.622919 master-0 kubenswrapper[4652]: I0216 17:43:30.622795 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p5tk\" (UniqueName: \"kubernetes.io/projected/a6a250cc-de97-4949-8016-70a1eb0c64a4-kube-api-access-8p5tk\") pod \"nova-cell0-conductor-db-sync-n4l2r\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.721746 master-0 kubenswrapper[4652]: I0216 17:43:30.721667 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:30.760312 master-0 kubenswrapper[4652]: I0216 17:43:30.760261 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67dcf429-d644-435b-8edb-e08198064dfb" path="/var/lib/kubelet/pods/67dcf429-d644-435b-8edb-e08198064dfb/volumes" Feb 16 17:43:30.761337 master-0 kubenswrapper[4652]: I0216 17:43:30.761074 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4f31917-de6f-4a2d-a7ec-14023e52f58d" path="/var/lib/kubelet/pods/d4f31917-de6f-4a2d-a7ec-14023e52f58d/volumes" Feb 16 17:43:30.762059 master-0 kubenswrapper[4652]: I0216 17:43:30.762006 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b6c637-c97f-4d6e-b233-6c6e1a54cc95" path="/var/lib/kubelet/pods/f9b6c637-c97f-4d6e-b233-6c6e1a54cc95/volumes" Feb 16 17:43:30.765067 master-0 kubenswrapper[4652]: I0216 17:43:30.765034 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-50a698bd-ab97-4c8c-b97e-21fad86d1028\" (UniqueName: \"kubernetes.io/csi/topolvm.io^6e003579-705b-4dbf-a055-11d79423c0f5\") pod \"glance-50e08-default-external-api-0\" (UID: \"17cef5f2-9564-4a9b-b067-89f498cf4a07\") " pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:30.871155 master-0 kubenswrapper[4652]: I0216 17:43:30.871034 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:31.170864 master-0 kubenswrapper[4652]: I0216 17:43:31.169085 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n4l2r"] Feb 16 17:43:31.225151 master-0 kubenswrapper[4652]: I0216 17:43:31.225069 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n4l2r" event={"ID":"a6a250cc-de97-4949-8016-70a1eb0c64a4","Type":"ContainerStarted","Data":"585682da8968fba9f6c66c8e509644fa4bda48fcc00a772d589fa4909171dfd3"} Feb 16 17:43:31.513438 master-0 kubenswrapper[4652]: I0216 17:43:31.513377 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-external-api-0"] Feb 16 17:43:32.246370 master-0 kubenswrapper[4652]: I0216 17:43:32.245995 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"17cef5f2-9564-4a9b-b067-89f498cf4a07","Type":"ContainerStarted","Data":"4d9626a69f4456f69fefd92f8d125292044c85470fb91a7b7b0fd0fb5cac834d"} Feb 16 17:43:32.246370 master-0 kubenswrapper[4652]: I0216 17:43:32.246054 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"17cef5f2-9564-4a9b-b067-89f498cf4a07","Type":"ContainerStarted","Data":"81edba1e225eff64ada92bb70a2346a715a7079be717c6ec1c6e0952482bd4de"} Feb 16 17:43:32.530356 master-0 kubenswrapper[4652]: I0216 17:43:32.517639 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7897cfb75c-d6qs4"] Feb 16 17:43:32.530356 master-0 kubenswrapper[4652]: I0216 17:43:32.520924 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.549008 master-0 kubenswrapper[4652]: I0216 17:43:32.547312 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7897cfb75c-d6qs4"] Feb 16 17:43:32.644922 master-0 kubenswrapper[4652]: I0216 17:43:32.644462 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 17:43:32.654403 master-0 kubenswrapper[4652]: I0216 17:43:32.653789 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-swift-storage-0\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.654403 master-0 kubenswrapper[4652]: I0216 17:43:32.653924 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-config\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.654403 master-0 kubenswrapper[4652]: I0216 17:43:32.654001 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-sb\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.654403 master-0 kubenswrapper[4652]: I0216 17:43:32.654026 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-svc\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.655362 master-0 kubenswrapper[4652]: I0216 17:43:32.655277 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-nb\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.655362 master-0 kubenswrapper[4652]: I0216 17:43:32.655348 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdjh4\" (UniqueName: \"kubernetes.io/projected/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-kube-api-access-mdjh4\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.673787 master-0 kubenswrapper[4652]: I0216 17:43:32.673670 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 17:43:32.679948 master-0 kubenswrapper[4652]: I0216 17:43:32.678695 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 16 17:43:32.679948 master-0 kubenswrapper[4652]: I0216 17:43:32.678889 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 16 17:43:32.679948 master-0 kubenswrapper[4652]: I0216 17:43:32.679212 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 16 17:43:32.680355 master-0 kubenswrapper[4652]: I0216 17:43:32.680282 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 17:43:32.757608 master-0 kubenswrapper[4652]: I0216 17:43:32.757539 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.757608 master-0 kubenswrapper[4652]: I0216 17:43:32.757601 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-config\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.757859 master-0 kubenswrapper[4652]: I0216 17:43:32.757633 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.757859 master-0 kubenswrapper[4652]: I0216 17:43:32.757787 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-sb\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.757859 master-0 kubenswrapper[4652]: I0216 17:43:32.757820 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-svc\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.758154 master-0 kubenswrapper[4652]: I0216 17:43:32.757960 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8kpf\" (UniqueName: \"kubernetes.io/projected/479c302f-f49d-4694-8f6f-fbb8c458db29-kube-api-access-b8kpf\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.760574 master-0 kubenswrapper[4652]: I0216 17:43:32.758031 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-nb\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " 
pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.760574 master-0 kubenswrapper[4652]: I0216 17:43:32.759060 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-svc\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.760574 master-0 kubenswrapper[4652]: I0216 17:43:32.759079 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdjh4\" (UniqueName: \"kubernetes.io/projected/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-kube-api-access-mdjh4\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.760574 master-0 kubenswrapper[4652]: I0216 17:43:32.759135 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.760574 master-0 kubenswrapper[4652]: I0216 17:43:32.759220 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/479c302f-f49d-4694-8f6f-fbb8c458db29-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.760574 master-0 kubenswrapper[4652]: I0216 17:43:32.759718 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-scripts\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.760574 master-0 kubenswrapper[4652]: I0216 17:43:32.759794 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-config\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.760574 master-0 kubenswrapper[4652]: I0216 17:43:32.759824 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-swift-storage-0\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.761364 master-0 kubenswrapper[4652]: I0216 17:43:32.760923 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-config\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.761784 master-0 kubenswrapper[4652]: I0216 17:43:32.761519 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-sb\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: 
\"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.762291 master-0 kubenswrapper[4652]: I0216 17:43:32.761875 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-swift-storage-0\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.762291 master-0 kubenswrapper[4652]: I0216 17:43:32.762262 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-nb\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.787338 master-0 kubenswrapper[4652]: I0216 17:43:32.787284 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdjh4\" (UniqueName: \"kubernetes.io/projected/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-kube-api-access-mdjh4\") pod \"dnsmasq-dns-7897cfb75c-d6qs4\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.859883 master-0 kubenswrapper[4652]: I0216 17:43:32.859816 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:32.863414 master-0 kubenswrapper[4652]: I0216 17:43:32.863379 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8kpf\" (UniqueName: \"kubernetes.io/projected/479c302f-f49d-4694-8f6f-fbb8c458db29-kube-api-access-b8kpf\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.864161 master-0 kubenswrapper[4652]: I0216 17:43:32.864116 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.864406 master-0 kubenswrapper[4652]: I0216 17:43:32.864274 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/479c302f-f49d-4694-8f6f-fbb8c458db29-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.864483 master-0 kubenswrapper[4652]: I0216 17:43:32.864459 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-scripts\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.879272 master-0 kubenswrapper[4652]: I0216 17:43:32.864557 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-config\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.879272 master-0 kubenswrapper[4652]: I0216 17:43:32.864645 4652 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.879272 master-0 kubenswrapper[4652]: I0216 17:43:32.864684 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.879272 master-0 kubenswrapper[4652]: I0216 17:43:32.864704 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.879272 master-0 kubenswrapper[4652]: I0216 17:43:32.866285 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.879272 master-0 kubenswrapper[4652]: I0216 17:43:32.870607 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-scripts\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.879272 master-0 kubenswrapper[4652]: I0216 17:43:32.870727 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.879272 master-0 kubenswrapper[4652]: I0216 17:43:32.872052 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/479c302f-f49d-4694-8f6f-fbb8c458db29-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.879272 master-0 kubenswrapper[4652]: I0216 17:43:32.872557 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-config\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:32.893354 master-0 kubenswrapper[4652]: I0216 17:43:32.893300 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8kpf\" (UniqueName: \"kubernetes.io/projected/479c302f-f49d-4694-8f6f-fbb8c458db29-kube-api-access-b8kpf\") pod \"ironic-inspector-0\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:33.020371 master-0 kubenswrapper[4652]: I0216 17:43:33.017013 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 17:43:33.259326 master-0 kubenswrapper[4652]: I0216 17:43:33.259192 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-external-api-0" event={"ID":"17cef5f2-9564-4a9b-b067-89f498cf4a07","Type":"ContainerStarted","Data":"e557f839a299b196d6a79e1f6dae6ccd1fb481d5f21c2039643cf74e49cf661b"} Feb 16 17:43:33.321187 master-0 kubenswrapper[4652]: I0216 17:43:33.321105 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-50e08-default-external-api-0" podStartSLOduration=4.321083795 podStartE2EDuration="4.321083795s" podCreationTimestamp="2026-02-16 17:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:33.2902885 +0000 UTC m=+1170.678457016" watchObservedRunningTime="2026-02-16 17:43:33.321083795 +0000 UTC m=+1170.709252311" Feb 16 17:43:34.363543 master-0 kubenswrapper[4652]: I0216 17:43:34.363477 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f5bb6936-02e9-48af-847a-b5f88beeba22\" (UniqueName: \"kubernetes.io/csi/topolvm.io^abd00595-a8e9-41e2-ad41-796f41623364\") pod \"glance-50e08-default-internal-api-0\" (UID: \"526542d1-0383-49a5-9190-2389a0aef5f1\") " pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:34.582014 master-0 kubenswrapper[4652]: I0216 17:43:34.581953 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:35.709097 master-0 kubenswrapper[4652]: I0216 17:43:35.708999 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7897cfb75c-d6qs4"] Feb 16 17:43:35.976067 master-0 kubenswrapper[4652]: I0216 17:43:35.975954 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 17:43:39.916257 master-0 kubenswrapper[4652]: I0216 17:43:39.916165 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-50e08-default-internal-api-0"] Feb 16 17:43:39.969832 master-0 kubenswrapper[4652]: W0216 17:43:39.969785 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod526542d1_0383_49a5_9190_2389a0aef5f1.slice/crio-5bd2e4db13cae99f640125980d570f93f87120d295929faaf8e0b4e813a8b052 WatchSource:0}: Error finding container 5bd2e4db13cae99f640125980d570f93f87120d295929faaf8e0b4e813a8b052: Status 404 returned error can't find the container with id 5bd2e4db13cae99f640125980d570f93f87120d295929faaf8e0b4e813a8b052 Feb 16 17:43:40.341278 master-0 kubenswrapper[4652]: I0216 17:43:40.340766 4652 generic.go:334] "Generic (PLEG): container finished" podID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" containerID="6d2e4b65bfc30127f391b236cf89c05d5501422a565232ae207ba2ecc3bd8163" exitCode=0 Feb 16 17:43:40.341278 master-0 kubenswrapper[4652]: I0216 17:43:40.340830 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" event={"ID":"a6f438fb-b03b-4b1e-9334-6438bb21c7eb","Type":"ContainerDied","Data":"6d2e4b65bfc30127f391b236cf89c05d5501422a565232ae207ba2ecc3bd8163"} Feb 16 17:43:40.341278 master-0 kubenswrapper[4652]: I0216 17:43:40.340895 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" 
event={"ID":"a6f438fb-b03b-4b1e-9334-6438bb21c7eb","Type":"ContainerStarted","Data":"821b124a12eb132dcc64d6eca3bd3bf42d421a972e264aa6056f36c855fc44ad"} Feb 16 17:43:40.342759 master-0 kubenswrapper[4652]: I0216 17:43:40.342447 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"526542d1-0383-49a5-9190-2389a0aef5f1","Type":"ContainerStarted","Data":"5bd2e4db13cae99f640125980d570f93f87120d295929faaf8e0b4e813a8b052"} Feb 16 17:43:40.353389 master-0 kubenswrapper[4652]: I0216 17:43:40.353343 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n4l2r" event={"ID":"a6a250cc-de97-4949-8016-70a1eb0c64a4","Type":"ContainerStarted","Data":"90b0fe5e32687b9703b39979a42107cb0abf7b6484c8243fe2ce9c4e35307ce7"} Feb 16 17:43:40.390820 master-0 kubenswrapper[4652]: I0216 17:43:40.390766 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-n4l2r" podStartSLOduration=1.460227994 podStartE2EDuration="10.390700702s" podCreationTimestamp="2026-02-16 17:43:30 +0000 UTC" firstStartedPulling="2026-02-16 17:43:31.168477303 +0000 UTC m=+1168.556645819" lastFinishedPulling="2026-02-16 17:43:40.098950011 +0000 UTC m=+1177.487118527" observedRunningTime="2026-02-16 17:43:40.38317501 +0000 UTC m=+1177.771343526" watchObservedRunningTime="2026-02-16 17:43:40.390700702 +0000 UTC m=+1177.778869218" Feb 16 17:43:40.728333 master-0 kubenswrapper[4652]: I0216 17:43:40.728285 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 17:43:40.871753 master-0 kubenswrapper[4652]: I0216 17:43:40.871669 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:40.874066 master-0 kubenswrapper[4652]: I0216 17:43:40.874027 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:40.920611 master-0 kubenswrapper[4652]: I0216 17:43:40.920550 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:40.924823 master-0 kubenswrapper[4652]: I0216 17:43:40.924781 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:41.375093 master-0 kubenswrapper[4652]: I0216 17:43:41.375037 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"526542d1-0383-49a5-9190-2389a0aef5f1","Type":"ContainerStarted","Data":"7712f69eeda911476fea16ccd82ba767108c6775d2a1835026799febecd87657"} Feb 16 17:43:41.375093 master-0 kubenswrapper[4652]: I0216 17:43:41.375092 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-50e08-default-internal-api-0" event={"ID":"526542d1-0383-49a5-9190-2389a0aef5f1","Type":"ContainerStarted","Data":"ecfe72e9c5d1a2eb1d6df3d11361dd886706ede55eab6f08b288e42a2609a9a8"} Feb 16 17:43:41.378879 master-0 kubenswrapper[4652]: I0216 17:43:41.378835 4652 generic.go:334] "Generic (PLEG): container finished" podID="479c302f-f49d-4694-8f6f-fbb8c458db29" containerID="b29503229dcf794985ec7bd63ba3aed098e215e433e2478025ba3541e90cdeb8" exitCode=0 Feb 16 17:43:41.379014 master-0 kubenswrapper[4652]: I0216 17:43:41.378986 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" 
event={"ID":"479c302f-f49d-4694-8f6f-fbb8c458db29","Type":"ContainerDied","Data":"b29503229dcf794985ec7bd63ba3aed098e215e433e2478025ba3541e90cdeb8"} Feb 16 17:43:41.379070 master-0 kubenswrapper[4652]: I0216 17:43:41.379016 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"479c302f-f49d-4694-8f6f-fbb8c458db29","Type":"ContainerStarted","Data":"297c6d554ccda40dd30b43d39eb53700e8b8830951a58aa0dbadcf5d74d20d61"} Feb 16 17:43:41.384127 master-0 kubenswrapper[4652]: I0216 17:43:41.384014 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" event={"ID":"a6f438fb-b03b-4b1e-9334-6438bb21c7eb","Type":"ContainerStarted","Data":"e27da12b5f25e7b046e89c7ef896ef258499d158b9da026bdab1d7b7c6121fb1"} Feb 16 17:43:41.385139 master-0 kubenswrapper[4652]: I0216 17:43:41.385104 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:41.388981 master-0 kubenswrapper[4652]: I0216 17:43:41.388942 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerStarted","Data":"b48630d0a30601fcdecd0da702d94769e0b915a321f4b94dfc435e98b64c586e"} Feb 16 17:43:41.390894 master-0 kubenswrapper[4652]: I0216 17:43:41.390800 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:41.390894 master-0 kubenswrapper[4652]: I0216 17:43:41.390865 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:41.398575 master-0 kubenswrapper[4652]: I0216 17:43:41.398491 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-50e08-default-internal-api-0" podStartSLOduration=11.398473976 podStartE2EDuration="11.398473976s" podCreationTimestamp="2026-02-16 17:43:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:41.39676347 +0000 UTC m=+1178.784932006" watchObservedRunningTime="2026-02-16 17:43:41.398473976 +0000 UTC m=+1178.786642492" Feb 16 17:43:41.470504 master-0 kubenswrapper[4652]: I0216 17:43:41.470397 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" podStartSLOduration=9.470372303 podStartE2EDuration="9.470372303s" podCreationTimestamp="2026-02-16 17:43:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:41.465897913 +0000 UTC m=+1178.854066439" watchObservedRunningTime="2026-02-16 17:43:41.470372303 +0000 UTC m=+1178.858540839" Feb 16 17:43:41.999705 master-0 kubenswrapper[4652]: I0216 17:43:41.999661 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 17:43:42.166366 master-0 kubenswrapper[4652]: I0216 17:43:42.166202 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8kpf\" (UniqueName: \"kubernetes.io/projected/479c302f-f49d-4694-8f6f-fbb8c458db29-kube-api-access-b8kpf\") pod \"479c302f-f49d-4694-8f6f-fbb8c458db29\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " Feb 16 17:43:42.166623 master-0 kubenswrapper[4652]: I0216 17:43:42.166588 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-combined-ca-bundle\") pod \"479c302f-f49d-4694-8f6f-fbb8c458db29\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " Feb 16 17:43:42.166679 master-0 kubenswrapper[4652]: I0216 17:43:42.166643 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic\") pod \"479c302f-f49d-4694-8f6f-fbb8c458db29\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " Feb 16 17:43:42.167629 master-0 kubenswrapper[4652]: I0216 17:43:42.167598 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/479c302f-f49d-4694-8f6f-fbb8c458db29-etc-podinfo\") pod \"479c302f-f49d-4694-8f6f-fbb8c458db29\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " Feb 16 17:43:42.167724 master-0 kubenswrapper[4652]: I0216 17:43:42.167715 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"479c302f-f49d-4694-8f6f-fbb8c458db29\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " Feb 16 17:43:42.167817 master-0 kubenswrapper[4652]: I0216 17:43:42.167786 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-scripts\") pod \"479c302f-f49d-4694-8f6f-fbb8c458db29\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " Feb 16 17:43:42.167874 master-0 kubenswrapper[4652]: I0216 17:43:42.167825 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-config\") pod \"479c302f-f49d-4694-8f6f-fbb8c458db29\" (UID: \"479c302f-f49d-4694-8f6f-fbb8c458db29\") " Feb 16 17:43:42.168267 master-0 kubenswrapper[4652]: I0216 17:43:42.168197 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic-inspector-dhcp-hostsdir" (OuterVolumeSpecName: "var-lib-ironic-inspector-dhcp-hostsdir") pod "479c302f-f49d-4694-8f6f-fbb8c458db29" (UID: "479c302f-f49d-4694-8f6f-fbb8c458db29"). InnerVolumeSpecName "var-lib-ironic-inspector-dhcp-hostsdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:42.168529 master-0 kubenswrapper[4652]: I0216 17:43:42.168475 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic" (OuterVolumeSpecName: "var-lib-ironic") pod "479c302f-f49d-4694-8f6f-fbb8c458db29" (UID: "479c302f-f49d-4694-8f6f-fbb8c458db29"). 
InnerVolumeSpecName "var-lib-ironic". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:43:42.169228 master-0 kubenswrapper[4652]: I0216 17:43:42.169202 4652 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic-inspector-dhcp-hostsdir\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:42.170766 master-0 kubenswrapper[4652]: I0216 17:43:42.170556 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/479c302f-f49d-4694-8f6f-fbb8c458db29-kube-api-access-b8kpf" (OuterVolumeSpecName: "kube-api-access-b8kpf") pod "479c302f-f49d-4694-8f6f-fbb8c458db29" (UID: "479c302f-f49d-4694-8f6f-fbb8c458db29"). InnerVolumeSpecName "kube-api-access-b8kpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:42.171511 master-0 kubenswrapper[4652]: I0216 17:43:42.171455 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-scripts" (OuterVolumeSpecName: "scripts") pod "479c302f-f49d-4694-8f6f-fbb8c458db29" (UID: "479c302f-f49d-4694-8f6f-fbb8c458db29"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:42.172755 master-0 kubenswrapper[4652]: I0216 17:43:42.172720 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-config" (OuterVolumeSpecName: "config") pod "479c302f-f49d-4694-8f6f-fbb8c458db29" (UID: "479c302f-f49d-4694-8f6f-fbb8c458db29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:42.189962 master-0 kubenswrapper[4652]: I0216 17:43:42.189883 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/479c302f-f49d-4694-8f6f-fbb8c458db29-etc-podinfo" (OuterVolumeSpecName: "etc-podinfo") pod "479c302f-f49d-4694-8f6f-fbb8c458db29" (UID: "479c302f-f49d-4694-8f6f-fbb8c458db29"). InnerVolumeSpecName "etc-podinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Feb 16 17:43:42.215281 master-0 kubenswrapper[4652]: I0216 17:43:42.212798 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "479c302f-f49d-4694-8f6f-fbb8c458db29" (UID: "479c302f-f49d-4694-8f6f-fbb8c458db29"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:42.271142 master-0 kubenswrapper[4652]: I0216 17:43:42.271092 4652 reconciler_common.go:293] "Volume detached for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/479c302f-f49d-4694-8f6f-fbb8c458db29-var-lib-ironic\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:42.271429 master-0 kubenswrapper[4652]: I0216 17:43:42.271413 4652 reconciler_common.go:293] "Volume detached for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/479c302f-f49d-4694-8f6f-fbb8c458db29-etc-podinfo\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:42.271534 master-0 kubenswrapper[4652]: I0216 17:43:42.271520 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:42.271640 master-0 kubenswrapper[4652]: I0216 17:43:42.271625 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:42.271731 master-0 kubenswrapper[4652]: I0216 17:43:42.271715 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8kpf\" (UniqueName: \"kubernetes.io/projected/479c302f-f49d-4694-8f6f-fbb8c458db29-kube-api-access-b8kpf\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:42.271824 master-0 kubenswrapper[4652]: I0216 17:43:42.271809 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/479c302f-f49d-4694-8f6f-fbb8c458db29-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:42.399060 master-0 kubenswrapper[4652]: I0216 17:43:42.398987 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"479c302f-f49d-4694-8f6f-fbb8c458db29","Type":"ContainerDied","Data":"297c6d554ccda40dd30b43d39eb53700e8b8830951a58aa0dbadcf5d74d20d61"} Feb 16 17:43:42.399060 master-0 kubenswrapper[4652]: I0216 17:43:42.399057 4652 scope.go:117] "RemoveContainer" containerID="b29503229dcf794985ec7bd63ba3aed098e215e433e2478025ba3541e90cdeb8" Feb 16 17:43:42.399330 master-0 kubenswrapper[4652]: I0216 17:43:42.399077 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 17:43:42.527567 master-0 kubenswrapper[4652]: I0216 17:43:42.526421 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 17:43:42.551153 master-0 kubenswrapper[4652]: I0216 17:43:42.551079 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 17:43:42.571561 master-0 kubenswrapper[4652]: I0216 17:43:42.571494 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 17:43:42.572223 master-0 kubenswrapper[4652]: E0216 17:43:42.572196 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="479c302f-f49d-4694-8f6f-fbb8c458db29" containerName="ironic-python-agent-init" Feb 16 17:43:42.572333 master-0 kubenswrapper[4652]: I0216 17:43:42.572224 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="479c302f-f49d-4694-8f6f-fbb8c458db29" containerName="ironic-python-agent-init" Feb 16 17:43:42.572603 master-0 kubenswrapper[4652]: I0216 17:43:42.572561 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="479c302f-f49d-4694-8f6f-fbb8c458db29" containerName="ironic-python-agent-init" Feb 16 17:43:42.577747 master-0 kubenswrapper[4652]: I0216 17:43:42.577597 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 17:43:42.582462 master-0 kubenswrapper[4652]: I0216 17:43:42.580864 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-transport-url-ironic-inspector-transport" Feb 16 17:43:42.582462 master-0 kubenswrapper[4652]: I0216 17:43:42.581032 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-scripts" Feb 16 17:43:42.584881 master-0 kubenswrapper[4652]: I0216 17:43:42.583302 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ironic-inspector-config-data" Feb 16 17:43:42.584881 master-0 kubenswrapper[4652]: I0216 17:43:42.583877 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-internal-svc" Feb 16 17:43:42.586601 master-0 kubenswrapper[4652]: I0216 17:43:42.585097 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ironic-inspector-public-svc" Feb 16 17:43:42.590487 master-0 kubenswrapper[4652]: I0216 17:43:42.588943 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 17:43:42.687743 master-0 kubenswrapper[4652]: I0216 17:43:42.687605 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-scripts\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.687743 master-0 kubenswrapper[4652]: I0216 17:43:42.687725 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-config\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.687978 master-0 kubenswrapper[4652]: I0216 17:43:42.687802 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: 
\"kubernetes.io/empty-dir/a028487d-3e90-44c6-b952-7a70e5f45480-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.687978 master-0 kubenswrapper[4652]: I0216 17:43:42.687920 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf8t6\" (UniqueName: \"kubernetes.io/projected/a028487d-3e90-44c6-b952-7a70e5f45480-kube-api-access-bf8t6\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.688043 master-0 kubenswrapper[4652]: I0216 17:43:42.687978 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.688192 master-0 kubenswrapper[4652]: I0216 17:43:42.688149 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/a028487d-3e90-44c6-b952-7a70e5f45480-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.688242 master-0 kubenswrapper[4652]: I0216 17:43:42.688230 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/a028487d-3e90-44c6-b952-7a70e5f45480-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.688299 master-0 kubenswrapper[4652]: I0216 17:43:42.688281 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.688512 master-0 kubenswrapper[4652]: I0216 17:43:42.688450 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.764592 master-0 kubenswrapper[4652]: I0216 17:43:42.764525 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="479c302f-f49d-4694-8f6f-fbb8c458db29" path="/var/lib/kubelet/pods/479c302f-f49d-4694-8f6f-fbb8c458db29/volumes" Feb 16 17:43:42.790021 master-0 kubenswrapper[4652]: I0216 17:43:42.789953 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/a028487d-3e90-44c6-b952-7a70e5f45480-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.790415 master-0 kubenswrapper[4652]: I0216 17:43:42.790375 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-internal-tls-certs\") 
pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.790695 master-0 kubenswrapper[4652]: I0216 17:43:42.790675 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.791000 master-0 kubenswrapper[4652]: I0216 17:43:42.790976 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-scripts\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.791196 master-0 kubenswrapper[4652]: I0216 17:43:42.791176 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-config\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.791429 master-0 kubenswrapper[4652]: I0216 17:43:42.791407 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/a028487d-3e90-44c6-b952-7a70e5f45480-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.791635 master-0 kubenswrapper[4652]: I0216 17:43:42.791620 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf8t6\" (UniqueName: \"kubernetes.io/projected/a028487d-3e90-44c6-b952-7a70e5f45480-kube-api-access-bf8t6\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.791780 master-0 kubenswrapper[4652]: I0216 17:43:42.791764 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.791981 master-0 kubenswrapper[4652]: I0216 17:43:42.791965 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/a028487d-3e90-44c6-b952-7a70e5f45480-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.792625 master-0 kubenswrapper[4652]: I0216 17:43:42.792603 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic\" (UniqueName: \"kubernetes.io/empty-dir/a028487d-3e90-44c6-b952-7a70e5f45480-var-lib-ironic\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.792750 master-0 kubenswrapper[4652]: I0216 17:43:42.792645 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-ironic-inspector-dhcp-hostsdir\" (UniqueName: \"kubernetes.io/empty-dir/a028487d-3e90-44c6-b952-7a70e5f45480-var-lib-ironic-inspector-dhcp-hostsdir\") pod \"ironic-inspector-0\" (UID: 
\"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.794985 master-0 kubenswrapper[4652]: I0216 17:43:42.793332 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-internal-tls-certs\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.795613 master-0 kubenswrapper[4652]: I0216 17:43:42.795580 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-podinfo\" (UniqueName: \"kubernetes.io/downward-api/a028487d-3e90-44c6-b952-7a70e5f45480-etc-podinfo\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.795704 master-0 kubenswrapper[4652]: I0216 17:43:42.795640 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-scripts\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.796660 master-0 kubenswrapper[4652]: I0216 17:43:42.796634 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-config\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.799815 master-0 kubenswrapper[4652]: I0216 17:43:42.799782 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-public-tls-certs\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.804084 master-0 kubenswrapper[4652]: I0216 17:43:42.804049 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a028487d-3e90-44c6-b952-7a70e5f45480-combined-ca-bundle\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.811483 master-0 kubenswrapper[4652]: I0216 17:43:42.811434 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf8t6\" (UniqueName: \"kubernetes.io/projected/a028487d-3e90-44c6-b952-7a70e5f45480-kube-api-access-bf8t6\") pod \"ironic-inspector-0\" (UID: \"a028487d-3e90-44c6-b952-7a70e5f45480\") " pod="openstack/ironic-inspector-0" Feb 16 17:43:42.935696 master-0 kubenswrapper[4652]: I0216 17:43:42.935466 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ironic-inspector-0" Feb 16 17:43:43.416084 master-0 kubenswrapper[4652]: I0216 17:43:43.416039 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:43:43.416084 master-0 kubenswrapper[4652]: I0216 17:43:43.416069 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:43:43.456585 master-0 kubenswrapper[4652]: I0216 17:43:43.456526 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:43.586049 master-0 kubenswrapper[4652]: I0216 17:43:43.584951 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-50e08-default-external-api-0" Feb 16 17:43:44.582557 master-0 kubenswrapper[4652]: I0216 17:43:44.582476 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:44.583397 master-0 kubenswrapper[4652]: I0216 17:43:44.582571 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:44.623786 master-0 kubenswrapper[4652]: I0216 17:43:44.623730 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:44.636719 master-0 kubenswrapper[4652]: I0216 17:43:44.636634 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:44.934776 master-0 kubenswrapper[4652]: W0216 17:43:44.934718 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda028487d_3e90_44c6_b952_7a70e5f45480.slice/crio-2beadfa5ec6f06f6aa1d2317fcb569de1489a7091bd030a6ee238c2b731a2c32 WatchSource:0}: Error finding container 2beadfa5ec6f06f6aa1d2317fcb569de1489a7091bd030a6ee238c2b731a2c32: Status 404 returned error can't find the container with id 2beadfa5ec6f06f6aa1d2317fcb569de1489a7091bd030a6ee238c2b731a2c32 Feb 16 17:43:45.120638 master-0 kubenswrapper[4652]: I0216 17:43:45.120556 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ironic-inspector-0"] Feb 16 17:43:45.440945 master-0 kubenswrapper[4652]: I0216 17:43:45.440801 4652 generic.go:334] "Generic (PLEG): container finished" podID="a028487d-3e90-44c6-b952-7a70e5f45480" containerID="c05276b0f6ee3ec645f3311bf9da340a387b8533d55e9d6bf68838bf1bf8c7db" exitCode=0 Feb 16 17:43:45.440945 master-0 kubenswrapper[4652]: I0216 17:43:45.440869 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"a028487d-3e90-44c6-b952-7a70e5f45480","Type":"ContainerDied","Data":"c05276b0f6ee3ec645f3311bf9da340a387b8533d55e9d6bf68838bf1bf8c7db"} Feb 16 17:43:45.440945 master-0 kubenswrapper[4652]: I0216 17:43:45.440957 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"a028487d-3e90-44c6-b952-7a70e5f45480","Type":"ContainerStarted","Data":"2beadfa5ec6f06f6aa1d2317fcb569de1489a7091bd030a6ee238c2b731a2c32"} Feb 16 17:43:45.442105 master-0 kubenswrapper[4652]: I0216 17:43:45.442077 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:45.442175 master-0 kubenswrapper[4652]: I0216 17:43:45.442114 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:46.454534 master-0 kubenswrapper[4652]: I0216 17:43:46.454474 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"a028487d-3e90-44c6-b952-7a70e5f45480","Type":"ContainerStarted","Data":"b5e9ac6f8750e0aedda7602eb70fa9dc8ebc50d2b8dd95ef8ca9b32a81b1d497"} Feb 16 17:43:47.466314 master-0 kubenswrapper[4652]: I0216 17:43:47.466232 4652 generic.go:334] "Generic (PLEG): container finished" podID="a028487d-3e90-44c6-b952-7a70e5f45480" containerID="b5e9ac6f8750e0aedda7602eb70fa9dc8ebc50d2b8dd95ef8ca9b32a81b1d497" exitCode=0 Feb 16 17:43:47.466314 master-0 kubenswrapper[4652]: I0216 17:43:47.466300 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"a028487d-3e90-44c6-b952-7a70e5f45480","Type":"ContainerDied","Data":"b5e9ac6f8750e0aedda7602eb70fa9dc8ebc50d2b8dd95ef8ca9b32a81b1d497"} Feb 16 17:43:47.491963 master-0 kubenswrapper[4652]: I0216 17:43:47.491918 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:47.492180 master-0 kubenswrapper[4652]: I0216 17:43:47.492024 4652 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 17:43:47.493325 master-0 kubenswrapper[4652]: I0216 17:43:47.493287 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-50e08-default-internal-api-0" Feb 16 17:43:47.863860 master-0 kubenswrapper[4652]: I0216 17:43:47.863804 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:43:48.481165 master-0 kubenswrapper[4652]: I0216 17:43:48.480981 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"a028487d-3e90-44c6-b952-7a70e5f45480","Type":"ContainerStarted","Data":"4bed3dd2c4c18ac4f57718ff2d8bfba7a222464233be1c56a5616bec4d20571c"} Feb 16 17:43:49.166281 master-0 kubenswrapper[4652]: I0216 17:43:49.163174 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ffcb9997-88bvh"] Feb 16 17:43:49.166281 master-0 kubenswrapper[4652]: I0216 17:43:49.163462 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" podUID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" containerName="dnsmasq-dns" containerID="cri-o://534daa907ffb97f56d8446b89ef6be36940f08e6a97d8e1503a7bc8b824b6aeb" gracePeriod=10 Feb 16 17:43:49.492782 master-0 kubenswrapper[4652]: I0216 17:43:49.492704 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"a028487d-3e90-44c6-b952-7a70e5f45480","Type":"ContainerStarted","Data":"9307123f2fbae9252a080e1d52907f1df5fdec5568f6c2ea299de74ce1443d7d"} Feb 16 17:43:50.506154 master-0 kubenswrapper[4652]: I0216 17:43:50.506088 4652 generic.go:334] "Generic (PLEG): container finished" podID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" containerID="534daa907ffb97f56d8446b89ef6be36940f08e6a97d8e1503a7bc8b824b6aeb" exitCode=0 Feb 16 17:43:50.506154 master-0 kubenswrapper[4652]: I0216 17:43:50.506143 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" event={"ID":"c8c13c9e-779c-40f4-b478-b6c3d5baf083","Type":"ContainerDied","Data":"534daa907ffb97f56d8446b89ef6be36940f08e6a97d8e1503a7bc8b824b6aeb"} Feb 16 17:43:51.150738 master-0 kubenswrapper[4652]: I0216 17:43:51.150699 4652 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:43:51.176648 master-0 kubenswrapper[4652]: I0216 17:43:51.172544 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-swift-storage-0\") pod \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " Feb 16 17:43:51.176648 master-0 kubenswrapper[4652]: I0216 17:43:51.172676 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-config\") pod \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " Feb 16 17:43:51.176648 master-0 kubenswrapper[4652]: I0216 17:43:51.173104 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-nb\") pod \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " Feb 16 17:43:51.176648 master-0 kubenswrapper[4652]: I0216 17:43:51.173328 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-sb\") pod \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " Feb 16 17:43:51.176648 master-0 kubenswrapper[4652]: I0216 17:43:51.173385 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-svc\") pod \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " Feb 16 17:43:51.176648 master-0 kubenswrapper[4652]: I0216 17:43:51.173451 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lbn5\" (UniqueName: \"kubernetes.io/projected/c8c13c9e-779c-40f4-b478-b6c3d5baf083-kube-api-access-7lbn5\") pod \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\" (UID: \"c8c13c9e-779c-40f4-b478-b6c3d5baf083\") " Feb 16 17:43:51.190879 master-0 kubenswrapper[4652]: I0216 17:43:51.187712 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c13c9e-779c-40f4-b478-b6c3d5baf083-kube-api-access-7lbn5" (OuterVolumeSpecName: "kube-api-access-7lbn5") pod "c8c13c9e-779c-40f4-b478-b6c3d5baf083" (UID: "c8c13c9e-779c-40f4-b478-b6c3d5baf083"). InnerVolumeSpecName "kube-api-access-7lbn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:51.235781 master-0 kubenswrapper[4652]: I0216 17:43:51.235629 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-config" (OuterVolumeSpecName: "config") pod "c8c13c9e-779c-40f4-b478-b6c3d5baf083" (UID: "c8c13c9e-779c-40f4-b478-b6c3d5baf083"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:51.237859 master-0 kubenswrapper[4652]: I0216 17:43:51.237801 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c8c13c9e-779c-40f4-b478-b6c3d5baf083" (UID: "c8c13c9e-779c-40f4-b478-b6c3d5baf083"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:51.245874 master-0 kubenswrapper[4652]: I0216 17:43:51.245808 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8c13c9e-779c-40f4-b478-b6c3d5baf083" (UID: "c8c13c9e-779c-40f4-b478-b6c3d5baf083"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:51.248061 master-0 kubenswrapper[4652]: I0216 17:43:51.247995 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8c13c9e-779c-40f4-b478-b6c3d5baf083" (UID: "c8c13c9e-779c-40f4-b478-b6c3d5baf083"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:51.257370 master-0 kubenswrapper[4652]: I0216 17:43:51.257314 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8c13c9e-779c-40f4-b478-b6c3d5baf083" (UID: "c8c13c9e-779c-40f4-b478-b6c3d5baf083"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:43:51.279262 master-0 kubenswrapper[4652]: I0216 17:43:51.279005 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:51.279262 master-0 kubenswrapper[4652]: I0216 17:43:51.279051 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:51.279262 master-0 kubenswrapper[4652]: I0216 17:43:51.279067 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lbn5\" (UniqueName: \"kubernetes.io/projected/c8c13c9e-779c-40f4-b478-b6c3d5baf083-kube-api-access-7lbn5\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:51.279262 master-0 kubenswrapper[4652]: I0216 17:43:51.279087 4652 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:51.279262 master-0 kubenswrapper[4652]: I0216 17:43:51.279102 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:51.279262 master-0 kubenswrapper[4652]: I0216 17:43:51.279113 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8c13c9e-779c-40f4-b478-b6c3d5baf083-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:51.606822 master-0 kubenswrapper[4652]: I0216 17:43:51.605315 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" Feb 16 17:43:51.607972 master-0 kubenswrapper[4652]: I0216 17:43:51.607915 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" event={"ID":"c8c13c9e-779c-40f4-b478-b6c3d5baf083","Type":"ContainerDied","Data":"18c1b82dace0d6289c60411d913b41368dd8611c6bef95229f063eb3fe54c4ba"} Feb 16 17:43:51.608096 master-0 kubenswrapper[4652]: I0216 17:43:51.608081 4652 scope.go:117] "RemoveContainer" containerID="534daa907ffb97f56d8446b89ef6be36940f08e6a97d8e1503a7bc8b824b6aeb" Feb 16 17:43:51.630473 master-0 kubenswrapper[4652]: I0216 17:43:51.630434 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"a028487d-3e90-44c6-b952-7a70e5f45480","Type":"ContainerStarted","Data":"f7240de9537a6369f3432fe7af51c603bd661cdf16a248b2647d7cd7cd565488"} Feb 16 17:43:51.630768 master-0 kubenswrapper[4652]: I0216 17:43:51.630755 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"a028487d-3e90-44c6-b952-7a70e5f45480","Type":"ContainerStarted","Data":"d6ea5e383b369b81e3b8cfe6bc0f581eadf4cb12e278a303aa602f9e734af719"} Feb 16 17:43:51.679211 master-0 kubenswrapper[4652]: I0216 17:43:51.679170 4652 scope.go:117] "RemoveContainer" containerID="d28138aa222c38b45c1131708c1b8b34c847f0a7d85745d32e2619d9a6bcba6e" Feb 16 17:43:51.691431 master-0 kubenswrapper[4652]: I0216 17:43:51.691359 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ffcb9997-88bvh"] Feb 16 17:43:51.730145 master-0 kubenswrapper[4652]: I0216 17:43:51.719929 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ffcb9997-88bvh"] Feb 16 17:43:52.645230 master-0 kubenswrapper[4652]: I0216 17:43:52.645189 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-inspector-0" event={"ID":"a028487d-3e90-44c6-b952-7a70e5f45480","Type":"ContainerStarted","Data":"aa145adc56f8d8015a94e18d48131b3dd7e6ee7b097c8110edb04fb30937687c"} Feb 16 17:43:52.645780 master-0 kubenswrapper[4652]: I0216 17:43:52.645764 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 16 17:43:52.760442 master-0 kubenswrapper[4652]: I0216 17:43:52.760380 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-inspector-0" podStartSLOduration=10.760364051 podStartE2EDuration="10.760364051s" podCreationTimestamp="2026-02-16 17:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:43:52.75658796 +0000 UTC m=+1190.144756506" watchObservedRunningTime="2026-02-16 17:43:52.760364051 +0000 UTC m=+1190.148532557" Feb 16 17:43:52.771258 master-0 kubenswrapper[4652]: I0216 17:43:52.771185 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" path="/var/lib/kubelet/pods/c8c13c9e-779c-40f4-b478-b6c3d5baf083/volumes" Feb 16 17:43:52.935677 master-0 kubenswrapper[4652]: I0216 17:43:52.935558 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 16 17:43:52.935677 master-0 kubenswrapper[4652]: I0216 17:43:52.935627 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 16 17:43:52.935677 master-0 kubenswrapper[4652]: I0216 17:43:52.935642 4652 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 16 17:43:52.935677 master-0 kubenswrapper[4652]: I0216 17:43:52.935653 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-inspector-0" Feb 16 17:43:52.967645 master-0 kubenswrapper[4652]: I0216 17:43:52.967586 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 16 17:43:52.968977 master-0 kubenswrapper[4652]: I0216 17:43:52.968943 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-inspector-0" Feb 16 17:43:53.655555 master-0 kubenswrapper[4652]: I0216 17:43:53.655518 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-inspector-0" Feb 16 17:43:53.666334 master-0 kubenswrapper[4652]: I0216 17:43:53.666296 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 16 17:43:53.667481 master-0 kubenswrapper[4652]: I0216 17:43:53.667442 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 16 17:43:54.709198 master-0 kubenswrapper[4652]: I0216 17:43:54.709159 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 16 17:43:55.674979 master-0 kubenswrapper[4652]: I0216 17:43:55.674920 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-inspector-0" Feb 16 17:43:55.891495 master-0 kubenswrapper[4652]: I0216 17:43:55.891401 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-85ffcb9997-88bvh" podUID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.200:5353: i/o timeout" Feb 16 17:43:56.685015 master-0 kubenswrapper[4652]: I0216 17:43:56.684962 4652 generic.go:334] "Generic (PLEG): container finished" podID="a6a250cc-de97-4949-8016-70a1eb0c64a4" containerID="90b0fe5e32687b9703b39979a42107cb0abf7b6484c8243fe2ce9c4e35307ce7" exitCode=0 Feb 16 17:43:56.685228 master-0 kubenswrapper[4652]: I0216 17:43:56.685047 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n4l2r" event={"ID":"a6a250cc-de97-4949-8016-70a1eb0c64a4","Type":"ContainerDied","Data":"90b0fe5e32687b9703b39979a42107cb0abf7b6484c8243fe2ce9c4e35307ce7"} Feb 16 17:43:58.186202 master-0 kubenswrapper[4652]: I0216 17:43:58.186153 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:58.317512 master-0 kubenswrapper[4652]: I0216 17:43:58.317348 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p5tk\" (UniqueName: \"kubernetes.io/projected/a6a250cc-de97-4949-8016-70a1eb0c64a4-kube-api-access-8p5tk\") pod \"a6a250cc-de97-4949-8016-70a1eb0c64a4\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " Feb 16 17:43:58.317512 master-0 kubenswrapper[4652]: I0216 17:43:58.317400 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-scripts\") pod \"a6a250cc-de97-4949-8016-70a1eb0c64a4\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " Feb 16 17:43:58.317512 master-0 kubenswrapper[4652]: I0216 17:43:58.317454 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-config-data\") pod \"a6a250cc-de97-4949-8016-70a1eb0c64a4\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " Feb 16 17:43:58.317512 master-0 kubenswrapper[4652]: I0216 17:43:58.317498 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-combined-ca-bundle\") pod \"a6a250cc-de97-4949-8016-70a1eb0c64a4\" (UID: \"a6a250cc-de97-4949-8016-70a1eb0c64a4\") " Feb 16 17:43:58.320465 master-0 kubenswrapper[4652]: I0216 17:43:58.320412 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-scripts" (OuterVolumeSpecName: "scripts") pod "a6a250cc-de97-4949-8016-70a1eb0c64a4" (UID: "a6a250cc-de97-4949-8016-70a1eb0c64a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:58.320827 master-0 kubenswrapper[4652]: I0216 17:43:58.320770 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a250cc-de97-4949-8016-70a1eb0c64a4-kube-api-access-8p5tk" (OuterVolumeSpecName: "kube-api-access-8p5tk") pod "a6a250cc-de97-4949-8016-70a1eb0c64a4" (UID: "a6a250cc-de97-4949-8016-70a1eb0c64a4"). InnerVolumeSpecName "kube-api-access-8p5tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:43:58.356715 master-0 kubenswrapper[4652]: I0216 17:43:58.356647 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-config-data" (OuterVolumeSpecName: "config-data") pod "a6a250cc-de97-4949-8016-70a1eb0c64a4" (UID: "a6a250cc-de97-4949-8016-70a1eb0c64a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:58.363744 master-0 kubenswrapper[4652]: I0216 17:43:58.363692 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6a250cc-de97-4949-8016-70a1eb0c64a4" (UID: "a6a250cc-de97-4949-8016-70a1eb0c64a4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:43:58.419856 master-0 kubenswrapper[4652]: I0216 17:43:58.419792 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p5tk\" (UniqueName: \"kubernetes.io/projected/a6a250cc-de97-4949-8016-70a1eb0c64a4-kube-api-access-8p5tk\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:58.419856 master-0 kubenswrapper[4652]: I0216 17:43:58.419834 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:58.419856 master-0 kubenswrapper[4652]: I0216 17:43:58.419847 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:58.419856 master-0 kubenswrapper[4652]: I0216 17:43:58.419859 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a250cc-de97-4949-8016-70a1eb0c64a4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:43:58.713610 master-0 kubenswrapper[4652]: I0216 17:43:58.713558 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n4l2r" event={"ID":"a6a250cc-de97-4949-8016-70a1eb0c64a4","Type":"ContainerDied","Data":"585682da8968fba9f6c66c8e509644fa4bda48fcc00a772d589fa4909171dfd3"} Feb 16 17:43:58.713610 master-0 kubenswrapper[4652]: I0216 17:43:58.713605 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="585682da8968fba9f6c66c8e509644fa4bda48fcc00a772d589fa4909171dfd3" Feb 16 17:43:58.714023 master-0 kubenswrapper[4652]: I0216 17:43:58.713991 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n4l2r" Feb 16 17:43:58.872823 master-0 kubenswrapper[4652]: I0216 17:43:58.872778 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 17:43:58.873589 master-0 kubenswrapper[4652]: E0216 17:43:58.873573 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" containerName="init" Feb 16 17:43:58.873675 master-0 kubenswrapper[4652]: I0216 17:43:58.873665 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" containerName="init" Feb 16 17:43:58.873745 master-0 kubenswrapper[4652]: E0216 17:43:58.873736 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" containerName="dnsmasq-dns" Feb 16 17:43:58.873805 master-0 kubenswrapper[4652]: I0216 17:43:58.873794 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" containerName="dnsmasq-dns" Feb 16 17:43:58.873879 master-0 kubenswrapper[4652]: E0216 17:43:58.873870 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a250cc-de97-4949-8016-70a1eb0c64a4" containerName="nova-cell0-conductor-db-sync" Feb 16 17:43:58.873945 master-0 kubenswrapper[4652]: I0216 17:43:58.873936 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a250cc-de97-4949-8016-70a1eb0c64a4" containerName="nova-cell0-conductor-db-sync" Feb 16 17:43:58.874212 master-0 kubenswrapper[4652]: I0216 17:43:58.874200 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c13c9e-779c-40f4-b478-b6c3d5baf083" containerName="dnsmasq-dns" Feb 16 17:43:58.874330 master-0 kubenswrapper[4652]: I0216 17:43:58.874319 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a250cc-de97-4949-8016-70a1eb0c64a4" containerName="nova-cell0-conductor-db-sync" Feb 16 17:43:58.875261 master-0 kubenswrapper[4652]: I0216 17:43:58.875205 4652 util.go:30] "No sandbox for pod can be found. 
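
[Editor's note] When nova-cell0-conductor-0 is admitted, the CPU and memory managers first drop per-container state still keyed to pods that no longer exist, here the dnsmasq and db-sync UIDs torn down just above; the E-severity lines are part of that routine sweep rather than failures. A toy model of such a stale-state sweep, with types and values invented for illustration and not taken from kubelet:

    package main

    import "fmt"

    // Drop resource assignments whose pod UID is no longer in the active set.
    type key struct{ podUID, container string }

    func removeStale(assignments map[key]string, active map[string]bool) {
        for k := range assignments {
            if !active[k.podUID] {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
                    k.podUID, k.container)
                delete(assignments, k) // deleting during range is safe in Go
            }
        }
    }

    func main() {
        assignments := map[key]string{
            {"c8c13c9e-779c-40f4-b478-b6c3d5baf083", "dnsmasq-dns"}:                  "cpuset 0-1",
            {"a6a250cc-de97-4949-8016-70a1eb0c64a4", "nova-cell0-conductor-db-sync"}: "cpuset 2",
        }
        removeStale(assignments, map[string]bool{}) // neither UID is active any more
    }
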
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:58.877786 master-0 kubenswrapper[4652]: I0216 17:43:58.877746 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 17:43:58.917941 master-0 kubenswrapper[4652]: I0216 17:43:58.884428 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 17:43:58.931110 master-0 kubenswrapper[4652]: I0216 17:43:58.931063 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqcvg\" (UniqueName: \"kubernetes.io/projected/6199776e-0e13-4d91-af54-27dcd4bb3f01-kube-api-access-tqcvg\") pod \"nova-cell0-conductor-0\" (UID: \"6199776e-0e13-4d91-af54-27dcd4bb3f01\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:58.931613 master-0 kubenswrapper[4652]: I0216 17:43:58.931592 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199776e-0e13-4d91-af54-27dcd4bb3f01-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6199776e-0e13-4d91-af54-27dcd4bb3f01\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:58.931750 master-0 kubenswrapper[4652]: I0216 17:43:58.931736 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199776e-0e13-4d91-af54-27dcd4bb3f01-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6199776e-0e13-4d91-af54-27dcd4bb3f01\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:59.034315 master-0 kubenswrapper[4652]: I0216 17:43:59.034180 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199776e-0e13-4d91-af54-27dcd4bb3f01-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6199776e-0e13-4d91-af54-27dcd4bb3f01\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:59.034701 master-0 kubenswrapper[4652]: I0216 17:43:59.034681 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqcvg\" (UniqueName: \"kubernetes.io/projected/6199776e-0e13-4d91-af54-27dcd4bb3f01-kube-api-access-tqcvg\") pod \"nova-cell0-conductor-0\" (UID: \"6199776e-0e13-4d91-af54-27dcd4bb3f01\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:59.034895 master-0 kubenswrapper[4652]: I0216 17:43:59.034879 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199776e-0e13-4d91-af54-27dcd4bb3f01-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6199776e-0e13-4d91-af54-27dcd4bb3f01\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:59.038164 master-0 kubenswrapper[4652]: I0216 17:43:59.038122 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6199776e-0e13-4d91-af54-27dcd4bb3f01-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"6199776e-0e13-4d91-af54-27dcd4bb3f01\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:59.040084 master-0 kubenswrapper[4652]: I0216 17:43:59.040043 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6199776e-0e13-4d91-af54-27dcd4bb3f01-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"6199776e-0e13-4d91-af54-27dcd4bb3f01\") " 
pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:59.052857 master-0 kubenswrapper[4652]: I0216 17:43:59.052818 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqcvg\" (UniqueName: \"kubernetes.io/projected/6199776e-0e13-4d91-af54-27dcd4bb3f01-kube-api-access-tqcvg\") pod \"nova-cell0-conductor-0\" (UID: \"6199776e-0e13-4d91-af54-27dcd4bb3f01\") " pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:59.239884 master-0 kubenswrapper[4652]: I0216 17:43:59.239836 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 17:43:59.793840 master-0 kubenswrapper[4652]: I0216 17:43:59.793772 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 17:43:59.800842 master-0 kubenswrapper[4652]: W0216 17:43:59.800785 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6199776e_0e13_4d91_af54_27dcd4bb3f01.slice/crio-d45508453f08962fadf09194f9ba139ede809d8926dc3a3a373156ab6a9d4c52 WatchSource:0}: Error finding container d45508453f08962fadf09194f9ba139ede809d8926dc3a3a373156ab6a9d4c52: Status 404 returned error can't find the container with id d45508453f08962fadf09194f9ba139ede809d8926dc3a3a373156ab6a9d4c52 Feb 16 17:44:00.744320 master-0 kubenswrapper[4652]: I0216 17:44:00.743325 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6199776e-0e13-4d91-af54-27dcd4bb3f01","Type":"ContainerStarted","Data":"a23a875c62b5c8fa244ac095cfe738f28db80f139745a38ebe8b1e5d1550e78b"} Feb 16 17:44:00.744320 master-0 kubenswrapper[4652]: I0216 17:44:00.743407 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"6199776e-0e13-4d91-af54-27dcd4bb3f01","Type":"ContainerStarted","Data":"d45508453f08962fadf09194f9ba139ede809d8926dc3a3a373156ab6a9d4c52"} Feb 16 17:44:00.744320 master-0 kubenswrapper[4652]: I0216 17:44:00.743667 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 17:44:00.852236 master-0 kubenswrapper[4652]: I0216 17:44:00.852153 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.8521350180000002 podStartE2EDuration="2.852135018s" podCreationTimestamp="2026-02-16 17:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:00.838656507 +0000 UTC m=+1198.226825043" watchObservedRunningTime="2026-02-16 17:44:00.852135018 +0000 UTC m=+1198.240303534" Feb 16 17:44:09.269356 master-0 kubenswrapper[4652]: I0216 17:44:09.269299 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 17:44:09.783122 master-0 kubenswrapper[4652]: I0216 17:44:09.783061 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-q4gq5"] Feb 16 17:44:09.784607 master-0 kubenswrapper[4652]: I0216 17:44:09.784446 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:09.789772 master-0 kubenswrapper[4652]: I0216 17:44:09.789544 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 17:44:09.790909 master-0 kubenswrapper[4652]: I0216 17:44:09.790318 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 17:44:09.825150 master-0 kubenswrapper[4652]: I0216 17:44:09.820246 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-q4gq5"] Feb 16 17:44:09.886410 master-0 kubenswrapper[4652]: I0216 17:44:09.886301 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 16 17:44:09.923192 master-0 kubenswrapper[4652]: I0216 17:44:09.920607 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:09.924096 master-0 kubenswrapper[4652]: I0216 17:44:09.924065 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-ironic-compute-config-data" Feb 16 17:44:09.925963 master-0 kubenswrapper[4652]: I0216 17:44:09.924530 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w62p\" (UniqueName: \"kubernetes.io/projected/209b2a48-903a-46dd-abc2-902650a6384c-kube-api-access-2w62p\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:09.925963 master-0 kubenswrapper[4652]: I0216 17:44:09.924711 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-scripts\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:09.925963 master-0 kubenswrapper[4652]: I0216 17:44:09.924817 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:09.925963 master-0 kubenswrapper[4652]: I0216 17:44:09.924990 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-config-data\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:09.942991 master-0 kubenswrapper[4652]: I0216 17:44:09.942929 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"] Feb 16 17:44:10.046837 master-0 kubenswrapper[4652]: I0216 17:44:10.046404 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:44:10.060446 master-0 kubenswrapper[4652]: I0216 17:44:10.060390 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:44:10.070200 master-0 kubenswrapper[4652]: I0216 17:44:10.070151 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:44:10.073612 master-0 kubenswrapper[4652]: I0216 17:44:10.073164 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxm46\" (UniqueName: \"kubernetes.io/projected/672cb018-c7eb-4e49-93bf-44f6613465a7-kube-api-access-pxm46\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"672cb018-c7eb-4e49-93bf-44f6613465a7\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:10.073612 master-0 kubenswrapper[4652]: I0216 17:44:10.073271 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-config-data\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:10.076776 master-0 kubenswrapper[4652]: I0216 17:44:10.076747 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w62p\" (UniqueName: \"kubernetes.io/projected/209b2a48-903a-46dd-abc2-902650a6384c-kube-api-access-2w62p\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:10.077062 master-0 kubenswrapper[4652]: I0216 17:44:10.077032 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/672cb018-c7eb-4e49-93bf-44f6613465a7-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"672cb018-c7eb-4e49-93bf-44f6613465a7\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:10.077275 master-0 kubenswrapper[4652]: I0216 17:44:10.077226 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-scripts\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:10.077545 master-0 kubenswrapper[4652]: I0216 17:44:10.077530 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:10.077803 master-0 kubenswrapper[4652]: I0216 17:44:10.077779 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672cb018-c7eb-4e49-93bf-44f6613465a7-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"672cb018-c7eb-4e49-93bf-44f6613465a7\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:10.095513 master-0 kubenswrapper[4652]: I0216 17:44:10.095146 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-scripts\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:10.095786 
master-0 kubenswrapper[4652]: I0216 17:44:10.095759 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:10.101968 master-0 kubenswrapper[4652]: I0216 17:44:10.101927 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-config-data\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:10.127385 master-0 kubenswrapper[4652]: I0216 17:44:10.127306 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:44:10.134353 master-0 kubenswrapper[4652]: I0216 17:44:10.133864 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w62p\" (UniqueName: \"kubernetes.io/projected/209b2a48-903a-46dd-abc2-902650a6384c-kube-api-access-2w62p\") pod \"nova-cell0-cell-mapping-q4gq5\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:10.176395 master-0 kubenswrapper[4652]: I0216 17:44:10.176344 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:44:10.178963 master-0 kubenswrapper[4652]: I0216 17:44:10.178140 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:44:10.181993 master-0 kubenswrapper[4652]: I0216 17:44:10.181939 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 17:44:10.194388 master-0 kubenswrapper[4652]: I0216 17:44:10.193897 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:44:10.202382 master-0 kubenswrapper[4652]: I0216 17:44:10.202339 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxm46\" (UniqueName: \"kubernetes.io/projected/672cb018-c7eb-4e49-93bf-44f6613465a7-kube-api-access-pxm46\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"672cb018-c7eb-4e49-93bf-44f6613465a7\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:10.202689 master-0 kubenswrapper[4652]: I0216 17:44:10.202666 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/672cb018-c7eb-4e49-93bf-44f6613465a7-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"672cb018-c7eb-4e49-93bf-44f6613465a7\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:10.202952 master-0 kubenswrapper[4652]: I0216 17:44:10.202933 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672cb018-c7eb-4e49-93bf-44f6613465a7-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"672cb018-c7eb-4e49-93bf-44f6613465a7\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:10.207116 master-0 kubenswrapper[4652]: I0216 17:44:10.206133 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxm46\" (UniqueName: \"kubernetes.io/projected/672cb018-c7eb-4e49-93bf-44f6613465a7-kube-api-access-pxm46\") 
pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"672cb018-c7eb-4e49-93bf-44f6613465a7\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:10.216000 master-0 kubenswrapper[4652]: I0216 17:44:10.215859 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/672cb018-c7eb-4e49-93bf-44f6613465a7-config-data\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"672cb018-c7eb-4e49-93bf-44f6613465a7\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:10.225584 master-0 kubenswrapper[4652]: I0216 17:44:10.225326 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:44:10.227719 master-0 kubenswrapper[4652]: I0216 17:44:10.227676 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:10.231071 master-0 kubenswrapper[4652]: I0216 17:44:10.230358 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 17:44:10.237101 master-0 kubenswrapper[4652]: I0216 17:44:10.237048 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/672cb018-c7eb-4e49-93bf-44f6613465a7-combined-ca-bundle\") pod \"nova-cell1-compute-ironic-compute-0\" (UID: \"672cb018-c7eb-4e49-93bf-44f6613465a7\") " pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:10.237497 master-0 kubenswrapper[4652]: I0216 17:44:10.237131 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:44:10.295414 master-0 kubenswrapper[4652]: I0216 17:44:10.295198 4652 util.go:30] "No sandbox for pod can be found. 
Feb 16 17:44:10.309970 master-0 kubenswrapper[4652]: I0216 17:44:10.309770 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vgmd\" (UniqueName: \"kubernetes.io/projected/2604753f-de68-498b-be82-6d8da2ce56d9-kube-api-access-4vgmd\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.310442 master-0 kubenswrapper[4652]: I0216 17:44:10.310363 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2604753f-de68-498b-be82-6d8da2ce56d9-logs\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.331889 master-0 kubenswrapper[4652]: I0216 17:44:10.330953 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-config-data\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.331889 master-0 kubenswrapper[4652]: I0216 17:44:10.331081 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.331889 master-0 kubenswrapper[4652]: I0216 17:44:10.331179 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0"
Feb 16 17:44:10.331889 master-0 kubenswrapper[4652]: I0216 17:44:10.331219 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 17:44:10.331889 master-0 kubenswrapper[4652]: I0216 17:44:10.331438 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2flg2\" (UniqueName: \"kubernetes.io/projected/7361f440-77f5-42e0-bdce-5bc776fa7f8d-kube-api-access-2flg2\") pod \"nova-scheduler-0\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0"
Feb 16 17:44:10.331889 master-0 kubenswrapper[4652]: I0216 17:44:10.331464 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 17:44:10.331889 master-0 kubenswrapper[4652]: I0216 17:44:10.331535 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-config-data\") pod \"nova-scheduler-0\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0"
\"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:10.331889 master-0 kubenswrapper[4652]: I0216 17:44:10.331683 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbt2f\" (UniqueName: \"kubernetes.io/projected/65aadd67-7869-439a-a571-b0827da937da-kube-api-access-kbt2f\") pod \"nova-cell1-novncproxy-0\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:10.342737 master-0 kubenswrapper[4652]: I0216 17:44:10.342646 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:10.347390 master-0 kubenswrapper[4652]: I0216 17:44:10.346763 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:44:10.352660 master-0 kubenswrapper[4652]: I0216 17:44:10.350800 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:44:10.393357 master-0 kubenswrapper[4652]: I0216 17:44:10.391463 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:10.434175 master-0 kubenswrapper[4652]: I0216 17:44:10.433771 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0" Feb 16 17:44:10.434175 master-0 kubenswrapper[4652]: I0216 17:44:10.433844 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wv7h\" (UniqueName: \"kubernetes.io/projected/79c35f71-3301-437c-a4c5-8243d973c58d-kube-api-access-7wv7h\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0" Feb 16 17:44:10.434175 master-0 kubenswrapper[4652]: I0216 17:44:10.433884 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:10.434175 master-0 kubenswrapper[4652]: I0216 17:44:10.433926 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:10.434175 master-0 kubenswrapper[4652]: I0216 17:44:10.434093 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2flg2\" (UniqueName: \"kubernetes.io/projected/7361f440-77f5-42e0-bdce-5bc776fa7f8d-kube-api-access-2flg2\") pod \"nova-scheduler-0\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:10.434175 master-0 kubenswrapper[4652]: I0216 17:44:10.434115 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:10.434175 master-0 
Feb 16 17:44:10.434175 master-0 kubenswrapper[4652]: I0216 17:44:10.434155 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-config-data\") pod \"nova-scheduler-0\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0"
Feb 16 17:44:10.434708 master-0 kubenswrapper[4652]: I0216 17:44:10.434227 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbt2f\" (UniqueName: \"kubernetes.io/projected/65aadd67-7869-439a-a571-b0827da937da-kube-api-access-kbt2f\") pod \"nova-cell1-novncproxy-0\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 17:44:10.434708 master-0 kubenswrapper[4652]: I0216 17:44:10.434273 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-config-data\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:10.434708 master-0 kubenswrapper[4652]: I0216 17:44:10.434346 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vgmd\" (UniqueName: \"kubernetes.io/projected/2604753f-de68-498b-be82-6d8da2ce56d9-kube-api-access-4vgmd\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.434708 master-0 kubenswrapper[4652]: I0216 17:44:10.434382 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:10.434708 master-0 kubenswrapper[4652]: I0216 17:44:10.434407 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79c35f71-3301-437c-a4c5-8243d973c58d-logs\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:10.434708 master-0 kubenswrapper[4652]: I0216 17:44:10.434480 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2604753f-de68-498b-be82-6d8da2ce56d9-logs\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.434708 master-0 kubenswrapper[4652]: I0216 17:44:10.434515 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-config-data\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.437639 master-0 kubenswrapper[4652]: I0216 17:44:10.437605 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q4gq5"
Feb 16 17:44:10.443124 master-0 kubenswrapper[4652]: I0216 17:44:10.443090 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0"
Feb 16 17:44:10.446360 master-0 kubenswrapper[4652]: I0216 17:44:10.446314 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2604753f-de68-498b-be82-6d8da2ce56d9-logs\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.447295 master-0 kubenswrapper[4652]: I0216 17:44:10.446937 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-87c86584f-whh65"]
Feb 16 17:44:10.447295 master-0 kubenswrapper[4652]: I0216 17:44:10.446964 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-config-data\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.457630 master-0 kubenswrapper[4652]: I0216 17:44:10.450895 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-config-data\") pod \"nova-scheduler-0\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0"
Feb 16 17:44:10.457630 master-0 kubenswrapper[4652]: I0216 17:44:10.453844 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87c86584f-whh65"
Feb 16 17:44:10.458250 master-0 kubenswrapper[4652]: I0216 17:44:10.458180 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 17:44:10.459464 master-0 kubenswrapper[4652]: I0216 17:44:10.459393 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.459580 master-0 kubenswrapper[4652]: I0216 17:44:10.459473 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 17:44:10.461017 master-0 kubenswrapper[4652]: I0216 17:44:10.460979 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87c86584f-whh65"]
Feb 16 17:44:10.463635 master-0 kubenswrapper[4652]: I0216 17:44:10.463598 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2flg2\" (UniqueName: \"kubernetes.io/projected/7361f440-77f5-42e0-bdce-5bc776fa7f8d-kube-api-access-2flg2\") pod \"nova-scheduler-0\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " pod="openstack/nova-scheduler-0"
Feb 16 17:44:10.467082 master-0 kubenswrapper[4652]: I0216 17:44:10.467023 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbt2f\" (UniqueName: \"kubernetes.io/projected/65aadd67-7869-439a-a571-b0827da937da-kube-api-access-kbt2f\") pod \"nova-cell1-novncproxy-0\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 17:44:10.474906 master-0 kubenswrapper[4652]: I0216 17:44:10.474814 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vgmd\" (UniqueName: \"kubernetes.io/projected/2604753f-de68-498b-be82-6d8da2ce56d9-kube-api-access-4vgmd\") pod \"nova-api-0\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " pod="openstack/nova-api-0"
Feb 16 17:44:10.505373 master-0 kubenswrapper[4652]: I0216 17:44:10.505093 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.537135 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-config\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65"
Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.537222 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-svc\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65"
Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.537315 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-config-data\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.537378 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjt6\" (UniqueName: \"kubernetes.io/projected/db988590-db79-495b-b490-541f70c6f907-kube-api-access-ccjt6\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65"
Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.538061 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.538117 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79c35f71-3301-437c-a4c5-8243d973c58d-logs\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.538274 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-nb\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65"
Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.538389 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wv7h\" (UniqueName: \"kubernetes.io/projected/79c35f71-3301-437c-a4c5-8243d973c58d-kube-api-access-7wv7h\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.538447 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-sb\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65"
\"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.538603 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-swift-storage-0\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.541117 master-0 kubenswrapper[4652]: I0216 17:44:10.538982 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79c35f71-3301-437c-a4c5-8243d973c58d-logs\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0" Feb 16 17:44:10.577536 master-0 kubenswrapper[4652]: I0216 17:44:10.544464 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0" Feb 16 17:44:10.577536 master-0 kubenswrapper[4652]: I0216 17:44:10.544821 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-config-data\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0" Feb 16 17:44:10.577536 master-0 kubenswrapper[4652]: I0216 17:44:10.558115 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wv7h\" (UniqueName: \"kubernetes.io/projected/79c35f71-3301-437c-a4c5-8243d973c58d-kube-api-access-7wv7h\") pod \"nova-metadata-0\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") " pod="openstack/nova-metadata-0" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.642087 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-nb\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.642206 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-sb\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.642298 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-swift-storage-0\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.642358 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-config\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " 
pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.642406 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-svc\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.642495 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccjt6\" (UniqueName: \"kubernetes.io/projected/db988590-db79-495b-b490-541f70c6f907-kube-api-access-ccjt6\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.643788 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-sb\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.643825 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-swift-storage-0\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.643978 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-svc\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.644720 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-nb\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.647429 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-config\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:10.649727 master-0 kubenswrapper[4652]: I0216 17:44:10.647680 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:44:10.740311 master-0 kubenswrapper[4652]: I0216 17:44:10.740058 4652 util.go:30] "No sandbox for pod can be found. 
Feb 16 17:44:10.743848 master-0 kubenswrapper[4652]: I0216 17:44:10.743804 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccjt6\" (UniqueName: \"kubernetes.io/projected/db988590-db79-495b-b490-541f70c6f907-kube-api-access-ccjt6\") pod \"dnsmasq-dns-87c86584f-whh65\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") " pod="openstack/dnsmasq-dns-87c86584f-whh65"
Feb 16 17:44:10.784821 master-0 kubenswrapper[4652]: I0216 17:44:10.784775 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 17:44:10.800726 master-0 kubenswrapper[4652]: I0216 17:44:10.800011 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87c86584f-whh65"
Feb 16 17:44:10.941278 master-0 kubenswrapper[4652]: I0216 17:44:10.937936 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rmx4f"]
Feb 16 17:44:10.941278 master-0 kubenswrapper[4652]: I0216 17:44:10.939934 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:10.942368 master-0 kubenswrapper[4652]: I0216 17:44:10.942324 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Feb 16 17:44:10.943551 master-0 kubenswrapper[4652]: I0216 17:44:10.943383 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 16 17:44:10.955055 master-0 kubenswrapper[4652]: I0216 17:44:10.955002 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rmx4f"]
Feb 16 17:44:11.053230 master-0 kubenswrapper[4652]: I0216 17:44:11.053143 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d49fp\" (UniqueName: \"kubernetes.io/projected/c2966848-b02c-4fef-8d49-df6a97604e12-kube-api-access-d49fp\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.053831 master-0 kubenswrapper[4652]: I0216 17:44:11.053773 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.054068 master-0 kubenswrapper[4652]: I0216 17:44:11.054019 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-scripts\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.054481 master-0 kubenswrapper[4652]: I0216 17:44:11.054404 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-config-data\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.088799 master-0 kubenswrapper[4652]: I0216 17:44:11.088724 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-compute-ironic-compute-0"]
Feb 16 17:44:11.163396 master-0 kubenswrapper[4652]: I0216 17:44:11.163348 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.163634 master-0 kubenswrapper[4652]: I0216 17:44:11.163440 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-scripts\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.163634 master-0 kubenswrapper[4652]: I0216 17:44:11.163539 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-config-data\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.163765 master-0 kubenswrapper[4652]: I0216 17:44:11.163740 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d49fp\" (UniqueName: \"kubernetes.io/projected/c2966848-b02c-4fef-8d49-df6a97604e12-kube-api-access-d49fp\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.168903 master-0 kubenswrapper[4652]: I0216 17:44:11.168658 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.173916 master-0 kubenswrapper[4652]: I0216 17:44:11.173887 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-config-data\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.186471 master-0 kubenswrapper[4652]: I0216 17:44:11.186233 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-scripts\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.187379 master-0 kubenswrapper[4652]: I0216 17:44:11.187309 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d49fp\" (UniqueName: \"kubernetes.io/projected/c2966848-b02c-4fef-8d49-df6a97604e12-kube-api-access-d49fp\") pod \"nova-cell1-conductor-db-sync-rmx4f\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.224884 master-0 kubenswrapper[4652]: W0216 17:44:11.224481 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2604753f_de68_498b_be82_6d8da2ce56d9.slice/crio-f396940e43cbccdbdaea3fd474ae3ab5151581394fd90e2ddcbf3b786d08b504 WatchSource:0}: Error finding container f396940e43cbccdbdaea3fd474ae3ab5151581394fd90e2ddcbf3b786d08b504: Status 404 returned error can't find the container with id f396940e43cbccdbdaea3fd474ae3ab5151581394fd90e2ddcbf3b786d08b504
Feb 16 17:44:11.260144 master-0 kubenswrapper[4652]: I0216 17:44:11.260075 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:44:11.261857 master-0 kubenswrapper[4652]: W0216 17:44:11.261829 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod209b2a48_903a_46dd_abc2_902650a6384c.slice/crio-b1eb813a566d168eee8fd117e8ac0e54575680852e8637ad43e12f8de604b147 WatchSource:0}: Error finding container b1eb813a566d168eee8fd117e8ac0e54575680852e8637ad43e12f8de604b147: Status 404 returned error can't find the container with id b1eb813a566d168eee8fd117e8ac0e54575680852e8637ad43e12f8de604b147
Feb 16 17:44:11.266596 master-0 kubenswrapper[4652]: I0216 17:44:11.266551 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rmx4f"
Feb 16 17:44:11.313266 master-0 kubenswrapper[4652]: I0216 17:44:11.312855 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-q4gq5"]
Feb 16 17:44:11.421065 master-0 kubenswrapper[4652]: I0216 17:44:11.421008 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 17:44:11.601500 master-0 kubenswrapper[4652]: I0216 17:44:11.596738 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 17:44:11.624757 master-0 kubenswrapper[4652]: I0216 17:44:11.623936 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 17:44:11.667188 master-0 kubenswrapper[4652]: I0216 17:44:11.667053 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-87c86584f-whh65"]
Feb 16 17:44:11.859718 master-0 kubenswrapper[4652]: I0216 17:44:11.859675 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rmx4f"]
Feb 16 17:44:11.943945 master-0 kubenswrapper[4652]: I0216 17:44:11.943883 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"672cb018-c7eb-4e49-93bf-44f6613465a7","Type":"ContainerStarted","Data":"79bfaccf3cc6950b895bc7bd0299922c2da71517fd34cc34bb1c25b7a2ce753b"}
Feb 16 17:44:11.945489 master-0 kubenswrapper[4652]: I0216 17:44:11.945458 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65aadd67-7869-439a-a571-b0827da937da","Type":"ContainerStarted","Data":"8bb565856842c2acd6b45889a66e24222e5379aac7eab038d2c7a9abab96bae9"}
Feb 16 17:44:11.948946 master-0 kubenswrapper[4652]: I0216 17:44:11.948919 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q4gq5" event={"ID":"209b2a48-903a-46dd-abc2-902650a6384c","Type":"ContainerStarted","Data":"cb3c6b770042c08c46f635a9753e80636cc9bd837b33ccdbb08f7294aebc316f"}
Feb 16 17:44:11.949108 master-0 kubenswrapper[4652]: I0216 17:44:11.949075 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q4gq5" event={"ID":"209b2a48-903a-46dd-abc2-902650a6384c","Type":"ContainerStarted","Data":"b1eb813a566d168eee8fd117e8ac0e54575680852e8637ad43e12f8de604b147"}
event={"ID":"209b2a48-903a-46dd-abc2-902650a6384c","Type":"ContainerStarted","Data":"b1eb813a566d168eee8fd117e8ac0e54575680852e8637ad43e12f8de604b147"} Feb 16 17:44:11.950952 master-0 kubenswrapper[4652]: I0216 17:44:11.950912 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79c35f71-3301-437c-a4c5-8243d973c58d","Type":"ContainerStarted","Data":"3bef1a7d567a55918a1d46567e8de848b2c025f9a16f07d92a6ca5f2a4a196bc"} Feb 16 17:44:11.954365 master-0 kubenswrapper[4652]: I0216 17:44:11.952661 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7361f440-77f5-42e0-bdce-5bc776fa7f8d","Type":"ContainerStarted","Data":"ee6917428d5750e40e5f877f23471413af83ef7ef9a28e08b9265d73e97a15ef"} Feb 16 17:44:11.954365 master-0 kubenswrapper[4652]: I0216 17:44:11.954172 4652 generic.go:334] "Generic (PLEG): container finished" podID="db988590-db79-495b-b490-541f70c6f907" containerID="71c63a0dfb2b044be659716270535b3478e8372a403d5c0f76450de0b248adeb" exitCode=0 Feb 16 17:44:11.954365 master-0 kubenswrapper[4652]: I0216 17:44:11.954240 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87c86584f-whh65" event={"ID":"db988590-db79-495b-b490-541f70c6f907","Type":"ContainerDied","Data":"71c63a0dfb2b044be659716270535b3478e8372a403d5c0f76450de0b248adeb"} Feb 16 17:44:11.954365 master-0 kubenswrapper[4652]: I0216 17:44:11.954284 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87c86584f-whh65" event={"ID":"db988590-db79-495b-b490-541f70c6f907","Type":"ContainerStarted","Data":"8fdaf81efffae40e389c254c59a6069edb78b6295828ee0fda5591e25181ed9c"} Feb 16 17:44:11.956309 master-0 kubenswrapper[4652]: I0216 17:44:11.956276 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2604753f-de68-498b-be82-6d8da2ce56d9","Type":"ContainerStarted","Data":"f396940e43cbccdbdaea3fd474ae3ab5151581394fd90e2ddcbf3b786d08b504"} Feb 16 17:44:11.957665 master-0 kubenswrapper[4652]: I0216 17:44:11.957628 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rmx4f" event={"ID":"c2966848-b02c-4fef-8d49-df6a97604e12","Type":"ContainerStarted","Data":"63b24906effc796c9a865ac54c14721c2a10a6edcf54b785209283bad4839f0e"} Feb 16 17:44:12.004281 master-0 kubenswrapper[4652]: I0216 17:44:12.001615 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-q4gq5" podStartSLOduration=3.001584668 podStartE2EDuration="3.001584668s" podCreationTimestamp="2026-02-16 17:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:11.96769488 +0000 UTC m=+1209.355863396" watchObservedRunningTime="2026-02-16 17:44:12.001584668 +0000 UTC m=+1209.389753214" Feb 16 17:44:12.972195 master-0 kubenswrapper[4652]: I0216 17:44:12.972131 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87c86584f-whh65" event={"ID":"db988590-db79-495b-b490-541f70c6f907","Type":"ContainerStarted","Data":"ea75e4bb1bb1c01a73060dda867f0d375ce0007c88800031bc92ef19bfadec27"} Feb 16 17:44:12.977356 master-0 kubenswrapper[4652]: I0216 17:44:12.976092 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:12.981677 master-0 kubenswrapper[4652]: I0216 17:44:12.981606 4652 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-cell1-conductor-db-sync-rmx4f" event={"ID":"c2966848-b02c-4fef-8d49-df6a97604e12","Type":"ContainerStarted","Data":"11cceeb018fa48ac3af539fb9856e4b2809d02dffb56c60cb56ce4813a6e6ac5"} Feb 16 17:44:13.004892 master-0 kubenswrapper[4652]: I0216 17:44:13.004762 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-87c86584f-whh65" podStartSLOduration=3.004742179 podStartE2EDuration="3.004742179s" podCreationTimestamp="2026-02-16 17:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:13.002759566 +0000 UTC m=+1210.390928082" watchObservedRunningTime="2026-02-16 17:44:13.004742179 +0000 UTC m=+1210.392910695" Feb 16 17:44:13.031866 master-0 kubenswrapper[4652]: I0216 17:44:13.031778 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-rmx4f" podStartSLOduration=3.031754053 podStartE2EDuration="3.031754053s" podCreationTimestamp="2026-02-16 17:44:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:13.01784401 +0000 UTC m=+1210.406012556" watchObservedRunningTime="2026-02-16 17:44:13.031754053 +0000 UTC m=+1210.419922579" Feb 16 17:44:14.381059 master-0 kubenswrapper[4652]: I0216 17:44:14.380988 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:14.413076 master-0 kubenswrapper[4652]: I0216 17:44:14.412970 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:44:15.055727 master-0 kubenswrapper[4652]: I0216 17:44:15.055673 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79c35f71-3301-437c-a4c5-8243d973c58d","Type":"ContainerStarted","Data":"25612cb123845c7d59562d166a35d440d1f5669dea4be97555a73519b6254f1b"} Feb 16 17:44:16.071779 master-0 kubenswrapper[4652]: I0216 17:44:16.071713 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2604753f-de68-498b-be82-6d8da2ce56d9","Type":"ContainerStarted","Data":"6051da26155a57e24177ea45e195b7f89a26c5e03a7967b8cf73583a6a781e32"} Feb 16 17:44:16.071779 master-0 kubenswrapper[4652]: I0216 17:44:16.071804 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2604753f-de68-498b-be82-6d8da2ce56d9","Type":"ContainerStarted","Data":"17c9dbb046c5217431166ffcdf8ea2149d86bc0845a792436299348fa92637c1"} Feb 16 17:44:16.077480 master-0 kubenswrapper[4652]: I0216 17:44:16.077384 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65aadd67-7869-439a-a571-b0827da937da","Type":"ContainerStarted","Data":"6aa91491057953c3de6a441828c55ea9bdc8b4f061e094a4ce80c379463e78d3"} Feb 16 17:44:16.077480 master-0 kubenswrapper[4652]: I0216 17:44:16.077428 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="65aadd67-7869-439a-a571-b0827da937da" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://6aa91491057953c3de6a441828c55ea9bdc8b4f061e094a4ce80c379463e78d3" gracePeriod=30 Feb 16 17:44:16.082046 master-0 kubenswrapper[4652]: I0216 17:44:16.082002 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"79c35f71-3301-437c-a4c5-8243d973c58d","Type":"ContainerStarted","Data":"3ed56c8f10db86ca8fce677b3d85da4c737c51a48d2ea52aa69912c7e5b262ec"} Feb 16 17:44:16.082208 master-0 kubenswrapper[4652]: I0216 17:44:16.082114 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="79c35f71-3301-437c-a4c5-8243d973c58d" containerName="nova-metadata-log" containerID="cri-o://25612cb123845c7d59562d166a35d440d1f5669dea4be97555a73519b6254f1b" gracePeriod=30 Feb 16 17:44:16.082297 master-0 kubenswrapper[4652]: I0216 17:44:16.082207 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="79c35f71-3301-437c-a4c5-8243d973c58d" containerName="nova-metadata-metadata" containerID="cri-o://3ed56c8f10db86ca8fce677b3d85da4c737c51a48d2ea52aa69912c7e5b262ec" gracePeriod=30 Feb 16 17:44:16.089505 master-0 kubenswrapper[4652]: I0216 17:44:16.088396 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7361f440-77f5-42e0-bdce-5bc776fa7f8d","Type":"ContainerStarted","Data":"c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937"} Feb 16 17:44:16.361565 master-0 kubenswrapper[4652]: I0216 17:44:16.353489 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.981493786 podStartE2EDuration="7.353471675s" podCreationTimestamp="2026-02-16 17:44:09 +0000 UTC" firstStartedPulling="2026-02-16 17:44:11.228356792 +0000 UTC m=+1208.616525318" lastFinishedPulling="2026-02-16 17:44:14.600334691 +0000 UTC m=+1211.988503207" observedRunningTime="2026-02-16 17:44:16.346525958 +0000 UTC m=+1213.734694474" watchObservedRunningTime="2026-02-16 17:44:16.353471675 +0000 UTC m=+1213.741640191" Feb 16 17:44:16.389275 master-0 kubenswrapper[4652]: I0216 17:44:16.380781 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.370256468 podStartE2EDuration="6.380744606s" podCreationTimestamp="2026-02-16 17:44:10 +0000 UTC" firstStartedPulling="2026-02-16 17:44:11.59647577 +0000 UTC m=+1208.984644286" lastFinishedPulling="2026-02-16 17:44:14.606963908 +0000 UTC m=+1211.995132424" observedRunningTime="2026-02-16 17:44:16.374072307 +0000 UTC m=+1213.762240833" watchObservedRunningTime="2026-02-16 17:44:16.380744606 +0000 UTC m=+1213.768913132" Feb 16 17:44:16.401354 master-0 kubenswrapper[4652]: I0216 17:44:16.400156 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.390802757 podStartE2EDuration="6.400137855s" podCreationTimestamp="2026-02-16 17:44:10 +0000 UTC" firstStartedPulling="2026-02-16 17:44:11.586653036 +0000 UTC m=+1208.974821552" lastFinishedPulling="2026-02-16 17:44:14.595988134 +0000 UTC m=+1211.984156650" observedRunningTime="2026-02-16 17:44:16.393858797 +0000 UTC m=+1213.782027313" watchObservedRunningTime="2026-02-16 17:44:16.400137855 +0000 UTC m=+1213.788306371" Feb 16 17:44:16.441297 master-0 kubenswrapper[4652]: I0216 17:44:16.437383 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.28867534 podStartE2EDuration="6.437357993s" podCreationTimestamp="2026-02-16 17:44:10 +0000 UTC" firstStartedPulling="2026-02-16 17:44:11.447790234 +0000 UTC m=+1208.835958750" lastFinishedPulling="2026-02-16 17:44:14.596472887 +0000 UTC m=+1211.984641403" 
Feb 16 17:44:17.102269 master-0 kubenswrapper[4652]: I0216 17:44:17.102172 4652 generic.go:334] "Generic (PLEG): container finished" podID="79c35f71-3301-437c-a4c5-8243d973c58d" containerID="3ed56c8f10db86ca8fce677b3d85da4c737c51a48d2ea52aa69912c7e5b262ec" exitCode=0
Feb 16 17:44:17.102269 master-0 kubenswrapper[4652]: I0216 17:44:17.102232 4652 generic.go:334] "Generic (PLEG): container finished" podID="79c35f71-3301-437c-a4c5-8243d973c58d" containerID="25612cb123845c7d59562d166a35d440d1f5669dea4be97555a73519b6254f1b" exitCode=143
Feb 16 17:44:17.103288 master-0 kubenswrapper[4652]: I0216 17:44:17.103229 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79c35f71-3301-437c-a4c5-8243d973c58d","Type":"ContainerDied","Data":"3ed56c8f10db86ca8fce677b3d85da4c737c51a48d2ea52aa69912c7e5b262ec"}
Feb 16 17:44:17.103352 master-0 kubenswrapper[4652]: I0216 17:44:17.103284 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79c35f71-3301-437c-a4c5-8243d973c58d","Type":"ContainerDied","Data":"25612cb123845c7d59562d166a35d440d1f5669dea4be97555a73519b6254f1b"}
Feb 16 17:44:17.103352 master-0 kubenswrapper[4652]: I0216 17:44:17.103303 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"79c35f71-3301-437c-a4c5-8243d973c58d","Type":"ContainerDied","Data":"3bef1a7d567a55918a1d46567e8de848b2c025f9a16f07d92a6ca5f2a4a196bc"}
Feb 16 17:44:17.103352 master-0 kubenswrapper[4652]: I0216 17:44:17.103314 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bef1a7d567a55918a1d46567e8de848b2c025f9a16f07d92a6ca5f2a4a196bc"
Feb 16 17:44:17.192860 master-0 kubenswrapper[4652]: I0216 17:44:17.192762 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 17:44:17.299915 master-0 kubenswrapper[4652]: I0216 17:44:17.299644 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wv7h\" (UniqueName: \"kubernetes.io/projected/79c35f71-3301-437c-a4c5-8243d973c58d-kube-api-access-7wv7h\") pod \"79c35f71-3301-437c-a4c5-8243d973c58d\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") "
Feb 16 17:44:17.300172 master-0 kubenswrapper[4652]: I0216 17:44:17.300076 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79c35f71-3301-437c-a4c5-8243d973c58d-logs\") pod \"79c35f71-3301-437c-a4c5-8243d973c58d\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") "
Feb 16 17:44:17.300434 master-0 kubenswrapper[4652]: I0216 17:44:17.300390 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-combined-ca-bundle\") pod \"79c35f71-3301-437c-a4c5-8243d973c58d\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") "
Feb 16 17:44:17.300503 master-0 kubenswrapper[4652]: I0216 17:44:17.300433 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-config-data\") pod \"79c35f71-3301-437c-a4c5-8243d973c58d\" (UID: \"79c35f71-3301-437c-a4c5-8243d973c58d\") "
Feb 16 17:44:17.300616 master-0 kubenswrapper[4652]: I0216 17:44:17.300563 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79c35f71-3301-437c-a4c5-8243d973c58d-logs" (OuterVolumeSpecName: "logs") pod "79c35f71-3301-437c-a4c5-8243d973c58d" (UID: "79c35f71-3301-437c-a4c5-8243d973c58d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:44:17.301020 master-0 kubenswrapper[4652]: I0216 17:44:17.300987 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/79c35f71-3301-437c-a4c5-8243d973c58d-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 17:44:17.316337 master-0 kubenswrapper[4652]: I0216 17:44:17.303324 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79c35f71-3301-437c-a4c5-8243d973c58d-kube-api-access-7wv7h" (OuterVolumeSpecName: "kube-api-access-7wv7h") pod "79c35f71-3301-437c-a4c5-8243d973c58d" (UID: "79c35f71-3301-437c-a4c5-8243d973c58d"). InnerVolumeSpecName "kube-api-access-7wv7h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:44:17.333101 master-0 kubenswrapper[4652]: I0216 17:44:17.333032 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79c35f71-3301-437c-a4c5-8243d973c58d" (UID: "79c35f71-3301-437c-a4c5-8243d973c58d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:44:17.346360 master-0 kubenswrapper[4652]: I0216 17:44:17.345987 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-config-data" (OuterVolumeSpecName: "config-data") pod "79c35f71-3301-437c-a4c5-8243d973c58d" (UID: "79c35f71-3301-437c-a4c5-8243d973c58d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:17.404985 master-0 kubenswrapper[4652]: I0216 17:44:17.404900 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:17.404985 master-0 kubenswrapper[4652]: I0216 17:44:17.404970 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wv7h\" (UniqueName: \"kubernetes.io/projected/79c35f71-3301-437c-a4c5-8243d973c58d-kube-api-access-7wv7h\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:17.404985 master-0 kubenswrapper[4652]: I0216 17:44:17.404990 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79c35f71-3301-437c-a4c5-8243d973c58d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:18.129014 master-0 kubenswrapper[4652]: I0216 17:44:18.128929 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:44:18.206276 master-0 kubenswrapper[4652]: I0216 17:44:18.206076 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:18.225734 master-0 kubenswrapper[4652]: I0216 17:44:18.224416 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:18.255915 master-0 kubenswrapper[4652]: I0216 17:44:18.255128 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:18.255915 master-0 kubenswrapper[4652]: E0216 17:44:18.255733 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c35f71-3301-437c-a4c5-8243d973c58d" containerName="nova-metadata-metadata" Feb 16 17:44:18.255915 master-0 kubenswrapper[4652]: I0216 17:44:18.255748 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c35f71-3301-437c-a4c5-8243d973c58d" containerName="nova-metadata-metadata" Feb 16 17:44:18.255915 master-0 kubenswrapper[4652]: E0216 17:44:18.255780 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79c35f71-3301-437c-a4c5-8243d973c58d" containerName="nova-metadata-log" Feb 16 17:44:18.255915 master-0 kubenswrapper[4652]: I0216 17:44:18.255786 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="79c35f71-3301-437c-a4c5-8243d973c58d" containerName="nova-metadata-log" Feb 16 17:44:18.256340 master-0 kubenswrapper[4652]: I0216 17:44:18.256030 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="79c35f71-3301-437c-a4c5-8243d973c58d" containerName="nova-metadata-metadata" Feb 16 17:44:18.256340 master-0 kubenswrapper[4652]: I0216 17:44:18.256055 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="79c35f71-3301-437c-a4c5-8243d973c58d" containerName="nova-metadata-log" Feb 16 17:44:18.257791 master-0 kubenswrapper[4652]: I0216 17:44:18.257763 4652 util.go:30] "No sandbox for pod can be found. 
Feb 16 17:44:18.260204 master-0 kubenswrapper[4652]: I0216 17:44:18.260129 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 17:44:18.260942 master-0 kubenswrapper[4652]: I0216 17:44:18.260898 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 17:44:18.280044 master-0 kubenswrapper[4652]: I0216 17:44:18.279994 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 17:44:18.323998 master-0 kubenswrapper[4652]: I0216 17:44:18.323919 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbdw2\" (UniqueName: \"kubernetes.io/projected/680b172d-b955-4048-b53b-6695c95d68a5-kube-api-access-sbdw2\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:18.324232 master-0 kubenswrapper[4652]: I0216 17:44:18.324056 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-config-data\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:18.324306 master-0 kubenswrapper[4652]: I0216 17:44:18.324282 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:18.324511 master-0 kubenswrapper[4652]: I0216 17:44:18.324484 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/680b172d-b955-4048-b53b-6695c95d68a5-logs\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:18.324561 master-0 kubenswrapper[4652]: I0216 17:44:18.324545 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:18.427214 master-0 kubenswrapper[4652]: I0216 17:44:18.426963 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:18.427587 master-0 kubenswrapper[4652]: I0216 17:44:18.427329 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbdw2\" (UniqueName: \"kubernetes.io/projected/680b172d-b955-4048-b53b-6695c95d68a5-kube-api-access-sbdw2\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0"
Feb 16 17:44:18.429077 master-0 kubenswrapper[4652]: I0216 17:44:18.428827 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-config-data\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0"
\"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-config-data\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0" Feb 16 17:44:18.429739 master-0 kubenswrapper[4652]: I0216 17:44:18.429698 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0" Feb 16 17:44:18.431179 master-0 kubenswrapper[4652]: I0216 17:44:18.431148 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/680b172d-b955-4048-b53b-6695c95d68a5-logs\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0" Feb 16 17:44:18.435670 master-0 kubenswrapper[4652]: I0216 17:44:18.431744 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/680b172d-b955-4048-b53b-6695c95d68a5-logs\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0" Feb 16 17:44:18.436750 master-0 kubenswrapper[4652]: I0216 17:44:18.436671 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0" Feb 16 17:44:18.436750 master-0 kubenswrapper[4652]: I0216 17:44:18.436739 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0" Feb 16 17:44:18.436878 master-0 kubenswrapper[4652]: I0216 17:44:18.436773 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-config-data\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0" Feb 16 17:44:18.444882 master-0 kubenswrapper[4652]: I0216 17:44:18.444835 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbdw2\" (UniqueName: \"kubernetes.io/projected/680b172d-b955-4048-b53b-6695c95d68a5-kube-api-access-sbdw2\") pod \"nova-metadata-0\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " pod="openstack/nova-metadata-0" Feb 16 17:44:18.582327 master-0 kubenswrapper[4652]: I0216 17:44:18.582201 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:44:18.765801 master-0 kubenswrapper[4652]: I0216 17:44:18.765682 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79c35f71-3301-437c-a4c5-8243d973c58d" path="/var/lib/kubelet/pods/79c35f71-3301-437c-a4c5-8243d973c58d/volumes" Feb 16 17:44:20.179393 master-0 kubenswrapper[4652]: I0216 17:44:20.179336 4652 generic.go:334] "Generic (PLEG): container finished" podID="209b2a48-903a-46dd-abc2-902650a6384c" containerID="cb3c6b770042c08c46f635a9753e80636cc9bd837b33ccdbb08f7294aebc316f" exitCode=0 Feb 16 17:44:20.179393 master-0 kubenswrapper[4652]: I0216 17:44:20.179378 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q4gq5" event={"ID":"209b2a48-903a-46dd-abc2-902650a6384c","Type":"ContainerDied","Data":"cb3c6b770042c08c46f635a9753e80636cc9bd837b33ccdbb08f7294aebc316f"} Feb 16 17:44:20.507355 master-0 kubenswrapper[4652]: I0216 17:44:20.507171 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:44:20.507355 master-0 kubenswrapper[4652]: I0216 17:44:20.507237 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:44:20.648474 master-0 kubenswrapper[4652]: I0216 17:44:20.648395 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 17:44:20.648821 master-0 kubenswrapper[4652]: I0216 17:44:20.648542 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 17:44:20.679734 master-0 kubenswrapper[4652]: I0216 17:44:20.679666 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 17:44:20.742011 master-0 kubenswrapper[4652]: I0216 17:44:20.741927 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:20.802718 master-0 kubenswrapper[4652]: I0216 17:44:20.802476 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-87c86584f-whh65" Feb 16 17:44:21.534191 master-0 kubenswrapper[4652]: I0216 17:44:21.534124 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 17:44:21.591141 master-0 kubenswrapper[4652]: I0216 17:44:21.590605 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:44:21.591141 master-0 kubenswrapper[4652]: I0216 17:44:21.590692 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.225:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:44:25.218617 master-0 kubenswrapper[4652]: I0216 17:44:25.215171 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7897cfb75c-d6qs4"] Feb 16 17:44:25.218617 master-0 kubenswrapper[4652]: I0216 17:44:25.215562 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" podUID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" 
containerName="dnsmasq-dns" containerID="cri-o://e27da12b5f25e7b046e89c7ef896ef258499d158b9da026bdab1d7b7c6121fb1" gracePeriod=10 Feb 16 17:44:25.400392 master-0 kubenswrapper[4652]: E0216 17:44:25.400319 4652 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6f438fb_b03b_4b1e_9334_6438bb21c7eb.slice/crio-e27da12b5f25e7b046e89c7ef896ef258499d158b9da026bdab1d7b7c6121fb1.scope\": RecentStats: unable to find data in memory cache]" Feb 16 17:44:25.870379 master-0 kubenswrapper[4652]: I0216 17:44:25.870334 4652 generic.go:334] "Generic (PLEG): container finished" podID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" containerID="e27da12b5f25e7b046e89c7ef896ef258499d158b9da026bdab1d7b7c6121fb1" exitCode=0 Feb 16 17:44:25.870977 master-0 kubenswrapper[4652]: I0216 17:44:25.870955 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" event={"ID":"a6f438fb-b03b-4b1e-9334-6438bb21c7eb","Type":"ContainerDied","Data":"e27da12b5f25e7b046e89c7ef896ef258499d158b9da026bdab1d7b7c6121fb1"} Feb 16 17:44:27.862615 master-0 kubenswrapper[4652]: I0216 17:44:27.862536 4652 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" podUID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.128.0.219:5353: connect: connection refused" Feb 16 17:44:27.895833 master-0 kubenswrapper[4652]: I0216 17:44:27.895778 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-q4gq5" event={"ID":"209b2a48-903a-46dd-abc2-902650a6384c","Type":"ContainerDied","Data":"b1eb813a566d168eee8fd117e8ac0e54575680852e8637ad43e12f8de604b147"} Feb 16 17:44:27.895833 master-0 kubenswrapper[4652]: I0216 17:44:27.895821 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1eb813a566d168eee8fd117e8ac0e54575680852e8637ad43e12f8de604b147" Feb 16 17:44:28.236216 master-0 kubenswrapper[4652]: I0216 17:44:28.236171 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:28.271380 master-0 kubenswrapper[4652]: I0216 17:44:28.271318 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-config-data\") pod \"209b2a48-903a-46dd-abc2-902650a6384c\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " Feb 16 17:44:28.271380 master-0 kubenswrapper[4652]: I0216 17:44:28.271378 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-combined-ca-bundle\") pod \"209b2a48-903a-46dd-abc2-902650a6384c\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " Feb 16 17:44:28.271675 master-0 kubenswrapper[4652]: I0216 17:44:28.271435 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-scripts\") pod \"209b2a48-903a-46dd-abc2-902650a6384c\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " Feb 16 17:44:28.271675 master-0 kubenswrapper[4652]: I0216 17:44:28.271555 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w62p\" (UniqueName: \"kubernetes.io/projected/209b2a48-903a-46dd-abc2-902650a6384c-kube-api-access-2w62p\") pod \"209b2a48-903a-46dd-abc2-902650a6384c\" (UID: \"209b2a48-903a-46dd-abc2-902650a6384c\") " Feb 16 17:44:28.280541 master-0 kubenswrapper[4652]: I0216 17:44:28.280485 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-scripts" (OuterVolumeSpecName: "scripts") pod "209b2a48-903a-46dd-abc2-902650a6384c" (UID: "209b2a48-903a-46dd-abc2-902650a6384c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:28.280760 master-0 kubenswrapper[4652]: I0216 17:44:28.280598 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209b2a48-903a-46dd-abc2-902650a6384c-kube-api-access-2w62p" (OuterVolumeSpecName: "kube-api-access-2w62p") pod "209b2a48-903a-46dd-abc2-902650a6384c" (UID: "209b2a48-903a-46dd-abc2-902650a6384c"). InnerVolumeSpecName "kube-api-access-2w62p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:44:28.321296 master-0 kubenswrapper[4652]: I0216 17:44:28.320862 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-config-data" (OuterVolumeSpecName: "config-data") pod "209b2a48-903a-46dd-abc2-902650a6384c" (UID: "209b2a48-903a-46dd-abc2-902650a6384c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:28.374287 master-0 kubenswrapper[4652]: I0216 17:44:28.369449 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "209b2a48-903a-46dd-abc2-902650a6384c" (UID: "209b2a48-903a-46dd-abc2-902650a6384c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:28.374555 master-0 kubenswrapper[4652]: I0216 17:44:28.374491 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.374555 master-0 kubenswrapper[4652]: I0216 17:44:28.374533 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.374555 master-0 kubenswrapper[4652]: I0216 17:44:28.374545 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/209b2a48-903a-46dd-abc2-902650a6384c-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.374672 master-0 kubenswrapper[4652]: I0216 17:44:28.374557 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w62p\" (UniqueName: \"kubernetes.io/projected/209b2a48-903a-46dd-abc2-902650a6384c-kube-api-access-2w62p\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.456287 master-0 kubenswrapper[4652]: I0216 17:44:28.454359 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:44:28.580343 master-0 kubenswrapper[4652]: I0216 17:44:28.580282 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-swift-storage-0\") pod \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " Feb 16 17:44:28.580601 master-0 kubenswrapper[4652]: I0216 17:44:28.580435 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-config\") pod \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " Feb 16 17:44:28.580601 master-0 kubenswrapper[4652]: I0216 17:44:28.580521 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdjh4\" (UniqueName: \"kubernetes.io/projected/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-kube-api-access-mdjh4\") pod \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " Feb 16 17:44:28.580601 master-0 kubenswrapper[4652]: I0216 17:44:28.580548 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-sb\") pod \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " Feb 16 17:44:28.580783 master-0 kubenswrapper[4652]: I0216 17:44:28.580647 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-svc\") pod \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\" (UID: \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " Feb 16 17:44:28.580783 master-0 kubenswrapper[4652]: I0216 17:44:28.580707 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-nb\") pod \"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\" (UID: 
\"a6f438fb-b03b-4b1e-9334-6438bb21c7eb\") " Feb 16 17:44:28.583585 master-0 kubenswrapper[4652]: W0216 17:44:28.583523 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod680b172d_b955_4048_b53b_6695c95d68a5.slice/crio-7f97b9e94206679ad7c5d81da4794601d5714254554d95d9e1f0a43a653acd81 WatchSource:0}: Error finding container 7f97b9e94206679ad7c5d81da4794601d5714254554d95d9e1f0a43a653acd81: Status 404 returned error can't find the container with id 7f97b9e94206679ad7c5d81da4794601d5714254554d95d9e1f0a43a653acd81 Feb 16 17:44:28.584669 master-0 kubenswrapper[4652]: I0216 17:44:28.584615 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-kube-api-access-mdjh4" (OuterVolumeSpecName: "kube-api-access-mdjh4") pod "a6f438fb-b03b-4b1e-9334-6438bb21c7eb" (UID: "a6f438fb-b03b-4b1e-9334-6438bb21c7eb"). InnerVolumeSpecName "kube-api-access-mdjh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:44:28.593138 master-0 kubenswrapper[4652]: I0216 17:44:28.592678 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:28.652962 master-0 kubenswrapper[4652]: I0216 17:44:28.650840 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a6f438fb-b03b-4b1e-9334-6438bb21c7eb" (UID: "a6f438fb-b03b-4b1e-9334-6438bb21c7eb"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:44:28.655465 master-0 kubenswrapper[4652]: I0216 17:44:28.653866 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a6f438fb-b03b-4b1e-9334-6438bb21c7eb" (UID: "a6f438fb-b03b-4b1e-9334-6438bb21c7eb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:44:28.661426 master-0 kubenswrapper[4652]: I0216 17:44:28.661351 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a6f438fb-b03b-4b1e-9334-6438bb21c7eb" (UID: "a6f438fb-b03b-4b1e-9334-6438bb21c7eb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:44:28.665411 master-0 kubenswrapper[4652]: I0216 17:44:28.665350 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a6f438fb-b03b-4b1e-9334-6438bb21c7eb" (UID: "a6f438fb-b03b-4b1e-9334-6438bb21c7eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:44:28.669659 master-0 kubenswrapper[4652]: I0216 17:44:28.669603 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-config" (OuterVolumeSpecName: "config") pod "a6f438fb-b03b-4b1e-9334-6438bb21c7eb" (UID: "a6f438fb-b03b-4b1e-9334-6438bb21c7eb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:44:28.696517 master-0 kubenswrapper[4652]: I0216 17:44:28.690566 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.696517 master-0 kubenswrapper[4652]: I0216 17:44:28.690634 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdjh4\" (UniqueName: \"kubernetes.io/projected/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-kube-api-access-mdjh4\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.696517 master-0 kubenswrapper[4652]: I0216 17:44:28.690651 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.696517 master-0 kubenswrapper[4652]: I0216 17:44:28.690665 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-svc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.696517 master-0 kubenswrapper[4652]: I0216 17:44:28.690678 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.696517 master-0 kubenswrapper[4652]: I0216 17:44:28.690694 4652 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a6f438fb-b03b-4b1e-9334-6438bb21c7eb-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:28.911294 master-0 kubenswrapper[4652]: I0216 17:44:28.911217 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"680b172d-b955-4048-b53b-6695c95d68a5","Type":"ContainerStarted","Data":"c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3"} Feb 16 17:44:28.911294 master-0 kubenswrapper[4652]: I0216 17:44:28.911284 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"680b172d-b955-4048-b53b-6695c95d68a5","Type":"ContainerStarted","Data":"7f97b9e94206679ad7c5d81da4794601d5714254554d95d9e1f0a43a653acd81"} Feb 16 17:44:28.914068 master-0 kubenswrapper[4652]: I0216 17:44:28.914024 4652 generic.go:334] "Generic (PLEG): container finished" podID="7d01cc93-4bd4-4091-92a8-1c9a7e035c3e" containerID="b48630d0a30601fcdecd0da702d94769e0b915a321f4b94dfc435e98b64c586e" exitCode=0 Feb 16 17:44:28.914183 master-0 kubenswrapper[4652]: I0216 17:44:28.914084 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerDied","Data":"b48630d0a30601fcdecd0da702d94769e0b915a321f4b94dfc435e98b64c586e"} Feb 16 17:44:28.916382 master-0 kubenswrapper[4652]: I0216 17:44:28.916115 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-compute-ironic-compute-0" event={"ID":"672cb018-c7eb-4e49-93bf-44f6613465a7","Type":"ContainerStarted","Data":"1c0d5878cc637a87dd4c2285a0e467dc9ab8d989fe1a52bd7a8d3113d99642a0"} Feb 16 17:44:28.917048 master-0 kubenswrapper[4652]: I0216 17:44:28.917013 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:28.921765 master-0 kubenswrapper[4652]: 
I0216 17:44:28.920578 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" event={"ID":"a6f438fb-b03b-4b1e-9334-6438bb21c7eb","Type":"ContainerDied","Data":"821b124a12eb132dcc64d6eca3bd3bf42d421a972e264aa6056f36c855fc44ad"} Feb 16 17:44:28.921765 master-0 kubenswrapper[4652]: I0216 17:44:28.920627 4652 scope.go:117] "RemoveContainer" containerID="e27da12b5f25e7b046e89c7ef896ef258499d158b9da026bdab1d7b7c6121fb1" Feb 16 17:44:28.921765 master-0 kubenswrapper[4652]: I0216 17:44:28.920673 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-q4gq5" Feb 16 17:44:28.921765 master-0 kubenswrapper[4652]: I0216 17:44:28.920697 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7897cfb75c-d6qs4" Feb 16 17:44:28.953280 master-0 kubenswrapper[4652]: I0216 17:44:28.951668 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-compute-ironic-compute-0" Feb 16 17:44:28.982505 master-0 kubenswrapper[4652]: I0216 17:44:28.982018 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-compute-ironic-compute-0" podStartSLOduration=3.1036999769999998 podStartE2EDuration="19.981634743s" podCreationTimestamp="2026-02-16 17:44:09 +0000 UTC" firstStartedPulling="2026-02-16 17:44:11.103840824 +0000 UTC m=+1208.492009340" lastFinishedPulling="2026-02-16 17:44:27.98177559 +0000 UTC m=+1225.369944106" observedRunningTime="2026-02-16 17:44:28.96885634 +0000 UTC m=+1226.357024866" watchObservedRunningTime="2026-02-16 17:44:28.981634743 +0000 UTC m=+1226.369803259" Feb 16 17:44:28.995600 master-0 kubenswrapper[4652]: I0216 17:44:28.995547 4652 scope.go:117] "RemoveContainer" containerID="6d2e4b65bfc30127f391b236cf89c05d5501422a565232ae207ba2ecc3bd8163" Feb 16 17:44:29.149350 master-0 kubenswrapper[4652]: I0216 17:44:29.148337 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7897cfb75c-d6qs4"] Feb 16 17:44:29.162474 master-0 kubenswrapper[4652]: I0216 17:44:29.162427 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7897cfb75c-d6qs4"] Feb 16 17:44:29.465291 master-0 kubenswrapper[4652]: I0216 17:44:29.459721 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:44:29.465291 master-0 kubenswrapper[4652]: I0216 17:44:29.460015 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-log" containerID="cri-o://17c9dbb046c5217431166ffcdf8ea2149d86bc0845a792436299348fa92637c1" gracePeriod=30 Feb 16 17:44:29.465291 master-0 kubenswrapper[4652]: I0216 17:44:29.460553 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-api" containerID="cri-o://6051da26155a57e24177ea45e195b7f89a26c5e03a7967b8cf73583a6a781e32" gracePeriod=30 Feb 16 17:44:29.485755 master-0 kubenswrapper[4652]: I0216 17:44:29.485543 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:44:29.485755 master-0 kubenswrapper[4652]: I0216 17:44:29.485743 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7361f440-77f5-42e0-bdce-5bc776fa7f8d" 
containerName="nova-scheduler-scheduler" containerID="cri-o://c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937" gracePeriod=30 Feb 16 17:44:29.522386 master-0 kubenswrapper[4652]: I0216 17:44:29.522316 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:29.942542 master-0 kubenswrapper[4652]: I0216 17:44:29.942500 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"680b172d-b955-4048-b53b-6695c95d68a5","Type":"ContainerStarted","Data":"e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a"} Feb 16 17:44:29.947324 master-0 kubenswrapper[4652]: I0216 17:44:29.947299 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerStarted","Data":"3db5f329476fe89b3812b0c4bff1674f82242d73188d867ca7a1379bb3c07b79"} Feb 16 17:44:29.951708 master-0 kubenswrapper[4652]: I0216 17:44:29.951626 4652 generic.go:334] "Generic (PLEG): container finished" podID="2604753f-de68-498b-be82-6d8da2ce56d9" containerID="17c9dbb046c5217431166ffcdf8ea2149d86bc0845a792436299348fa92637c1" exitCode=143 Feb 16 17:44:29.952008 master-0 kubenswrapper[4652]: I0216 17:44:29.951705 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2604753f-de68-498b-be82-6d8da2ce56d9","Type":"ContainerDied","Data":"17c9dbb046c5217431166ffcdf8ea2149d86bc0845a792436299348fa92637c1"} Feb 16 17:44:29.971412 master-0 kubenswrapper[4652]: I0216 17:44:29.971350 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=11.971331612 podStartE2EDuration="11.971331612s" podCreationTimestamp="2026-02-16 17:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:29.96455753 +0000 UTC m=+1227.352726046" watchObservedRunningTime="2026-02-16 17:44:29.971331612 +0000 UTC m=+1227.359500128" Feb 16 17:44:30.653870 master-0 kubenswrapper[4652]: E0216 17:44:30.653792 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 17:44:30.656194 master-0 kubenswrapper[4652]: E0216 17:44:30.656122 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 17:44:30.660585 master-0 kubenswrapper[4652]: E0216 17:44:30.660521 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 17:44:30.660585 master-0 kubenswrapper[4652]: E0216 17:44:30.660578 4652 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7361f440-77f5-42e0-bdce-5bc776fa7f8d" containerName="nova-scheduler-scheduler" Feb 16 17:44:30.761863 master-0 kubenswrapper[4652]: I0216 17:44:30.761693 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" path="/var/lib/kubelet/pods/a6f438fb-b03b-4b1e-9334-6438bb21c7eb/volumes" Feb 16 17:44:30.963908 master-0 kubenswrapper[4652]: I0216 17:44:30.963845 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerStarted","Data":"23a71890a1fecabcf9d3d08ab2c4408c90f7147421d027d8b0d383d641ef36d2"} Feb 16 17:44:30.963908 master-0 kubenswrapper[4652]: I0216 17:44:30.963888 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ironic-conductor-0" event={"ID":"7d01cc93-4bd4-4091-92a8-1c9a7e035c3e","Type":"ContainerStarted","Data":"fd4a38a8d743e1980a8d0cf443a86e13b326250feb3ff1f18bc5a4a8a2e6caa1"} Feb 16 17:44:30.964626 master-0 kubenswrapper[4652]: I0216 17:44:30.964055 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 16 17:44:30.966415 master-0 kubenswrapper[4652]: I0216 17:44:30.966369 4652 generic.go:334] "Generic (PLEG): container finished" podID="c2966848-b02c-4fef-8d49-df6a97604e12" containerID="11cceeb018fa48ac3af539fb9856e4b2809d02dffb56c60cb56ce4813a6e6ac5" exitCode=0 Feb 16 17:44:30.966490 master-0 kubenswrapper[4652]: I0216 17:44:30.966440 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rmx4f" event={"ID":"c2966848-b02c-4fef-8d49-df6a97604e12","Type":"ContainerDied","Data":"11cceeb018fa48ac3af539fb9856e4b2809d02dffb56c60cb56ce4813a6e6ac5"} Feb 16 17:44:30.966639 master-0 kubenswrapper[4652]: I0216 17:44:30.966600 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="680b172d-b955-4048-b53b-6695c95d68a5" containerName="nova-metadata-log" containerID="cri-o://c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3" gracePeriod=30 Feb 16 17:44:30.966710 master-0 kubenswrapper[4652]: I0216 17:44:30.966672 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="680b172d-b955-4048-b53b-6695c95d68a5" containerName="nova-metadata-metadata" containerID="cri-o://e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a" gracePeriod=30 Feb 16 17:44:31.036537 master-0 kubenswrapper[4652]: I0216 17:44:31.036455 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ironic-conductor-0" podStartSLOduration=61.618742451 podStartE2EDuration="1m37.036433553s" podCreationTimestamp="2026-02-16 17:42:54 +0000 UTC" firstStartedPulling="2026-02-16 17:43:04.550651768 +0000 UTC m=+1141.938820284" lastFinishedPulling="2026-02-16 17:43:39.96834287 +0000 UTC m=+1177.356511386" observedRunningTime="2026-02-16 17:44:31.001873737 +0000 UTC m=+1228.390042263" watchObservedRunningTime="2026-02-16 17:44:31.036433553 +0000 UTC m=+1228.424602069" Feb 16 17:44:31.710665 master-0 kubenswrapper[4652]: I0216 17:44:31.710632 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:44:31.873928 master-0 kubenswrapper[4652]: I0216 17:44:31.873784 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-nova-metadata-tls-certs\") pod \"680b172d-b955-4048-b53b-6695c95d68a5\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " Feb 16 17:44:31.874159 master-0 kubenswrapper[4652]: I0216 17:44:31.873952 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/680b172d-b955-4048-b53b-6695c95d68a5-logs\") pod \"680b172d-b955-4048-b53b-6695c95d68a5\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " Feb 16 17:44:31.874159 master-0 kubenswrapper[4652]: I0216 17:44:31.874011 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-combined-ca-bundle\") pod \"680b172d-b955-4048-b53b-6695c95d68a5\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " Feb 16 17:44:31.874159 master-0 kubenswrapper[4652]: I0216 17:44:31.874086 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-config-data\") pod \"680b172d-b955-4048-b53b-6695c95d68a5\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " Feb 16 17:44:31.874371 master-0 kubenswrapper[4652]: I0216 17:44:31.874184 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbdw2\" (UniqueName: \"kubernetes.io/projected/680b172d-b955-4048-b53b-6695c95d68a5-kube-api-access-sbdw2\") pod \"680b172d-b955-4048-b53b-6695c95d68a5\" (UID: \"680b172d-b955-4048-b53b-6695c95d68a5\") " Feb 16 17:44:31.874830 master-0 kubenswrapper[4652]: I0216 17:44:31.874749 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/680b172d-b955-4048-b53b-6695c95d68a5-logs" (OuterVolumeSpecName: "logs") pod "680b172d-b955-4048-b53b-6695c95d68a5" (UID: "680b172d-b955-4048-b53b-6695c95d68a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:44:31.877503 master-0 kubenswrapper[4652]: I0216 17:44:31.877442 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680b172d-b955-4048-b53b-6695c95d68a5-kube-api-access-sbdw2" (OuterVolumeSpecName: "kube-api-access-sbdw2") pod "680b172d-b955-4048-b53b-6695c95d68a5" (UID: "680b172d-b955-4048-b53b-6695c95d68a5"). InnerVolumeSpecName "kube-api-access-sbdw2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:44:31.907737 master-0 kubenswrapper[4652]: I0216 17:44:31.902938 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-config-data" (OuterVolumeSpecName: "config-data") pod "680b172d-b955-4048-b53b-6695c95d68a5" (UID: "680b172d-b955-4048-b53b-6695c95d68a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:31.908070 master-0 kubenswrapper[4652]: I0216 17:44:31.908006 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "680b172d-b955-4048-b53b-6695c95d68a5" (UID: "680b172d-b955-4048-b53b-6695c95d68a5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:31.941210 master-0 kubenswrapper[4652]: I0216 17:44:31.941140 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "680b172d-b955-4048-b53b-6695c95d68a5" (UID: "680b172d-b955-4048-b53b-6695c95d68a5"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:31.976953 master-0 kubenswrapper[4652]: I0216 17:44:31.976860 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbdw2\" (UniqueName: \"kubernetes.io/projected/680b172d-b955-4048-b53b-6695c95d68a5-kube-api-access-sbdw2\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:31.976953 master-0 kubenswrapper[4652]: I0216 17:44:31.976943 4652 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:31.977582 master-0 kubenswrapper[4652]: I0216 17:44:31.976978 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/680b172d-b955-4048-b53b-6695c95d68a5-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:31.977582 master-0 kubenswrapper[4652]: I0216 17:44:31.976992 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:31.977582 master-0 kubenswrapper[4652]: I0216 17:44:31.977004 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/680b172d-b955-4048-b53b-6695c95d68a5-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:31.979654 master-0 kubenswrapper[4652]: I0216 17:44:31.979601 4652 generic.go:334] "Generic (PLEG): container finished" podID="680b172d-b955-4048-b53b-6695c95d68a5" containerID="e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a" exitCode=0 Feb 16 17:44:31.979654 master-0 kubenswrapper[4652]: I0216 17:44:31.979649 4652 generic.go:334] "Generic (PLEG): container finished" podID="680b172d-b955-4048-b53b-6695c95d68a5" containerID="c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3" exitCode=143 Feb 16 17:44:31.979822 master-0 kubenswrapper[4652]: I0216 17:44:31.979666 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"680b172d-b955-4048-b53b-6695c95d68a5","Type":"ContainerDied","Data":"e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a"} Feb 16 17:44:31.979822 master-0 kubenswrapper[4652]: I0216 17:44:31.979696 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:44:31.979822 master-0 kubenswrapper[4652]: I0216 17:44:31.979714 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"680b172d-b955-4048-b53b-6695c95d68a5","Type":"ContainerDied","Data":"c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3"} Feb 16 17:44:31.979822 master-0 kubenswrapper[4652]: I0216 17:44:31.979725 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"680b172d-b955-4048-b53b-6695c95d68a5","Type":"ContainerDied","Data":"7f97b9e94206679ad7c5d81da4794601d5714254554d95d9e1f0a43a653acd81"} Feb 16 17:44:31.979822 master-0 kubenswrapper[4652]: I0216 17:44:31.979744 4652 scope.go:117] "RemoveContainer" containerID="e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a" Feb 16 17:44:31.981406 master-0 kubenswrapper[4652]: I0216 17:44:31.981378 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ironic-conductor-0" Feb 16 17:44:32.006851 master-0 kubenswrapper[4652]: I0216 17:44:32.006827 4652 scope.go:117] "RemoveContainer" containerID="c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3" Feb 16 17:44:32.028716 master-0 kubenswrapper[4652]: I0216 17:44:32.028650 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:32.041813 master-0 kubenswrapper[4652]: I0216 17:44:32.041715 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:32.045279 master-0 kubenswrapper[4652]: I0216 17:44:32.042638 4652 scope.go:117] "RemoveContainer" containerID="e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a" Feb 16 17:44:32.047084 master-0 kubenswrapper[4652]: E0216 17:44:32.045639 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a\": container with ID starting with e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a not found: ID does not exist" containerID="e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a" Feb 16 17:44:32.047084 master-0 kubenswrapper[4652]: I0216 17:44:32.045703 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a"} err="failed to get container status \"e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a\": rpc error: code = NotFound desc = could not find container \"e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a\": container with ID starting with e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a not found: ID does not exist" Feb 16 17:44:32.047084 master-0 kubenswrapper[4652]: I0216 17:44:32.045737 4652 scope.go:117] "RemoveContainer" containerID="c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3" Feb 16 17:44:32.047084 master-0 kubenswrapper[4652]: E0216 17:44:32.046225 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3\": container with ID starting with c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3 not found: ID does not exist" containerID="c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3" Feb 16 17:44:32.047084 master-0 
kubenswrapper[4652]: I0216 17:44:32.046291 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3"} err="failed to get container status \"c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3\": rpc error: code = NotFound desc = could not find container \"c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3\": container with ID starting with c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3 not found: ID does not exist" Feb 16 17:44:32.047084 master-0 kubenswrapper[4652]: I0216 17:44:32.046322 4652 scope.go:117] "RemoveContainer" containerID="e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a" Feb 16 17:44:32.047084 master-0 kubenswrapper[4652]: I0216 17:44:32.046662 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a"} err="failed to get container status \"e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a\": rpc error: code = NotFound desc = could not find container \"e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a\": container with ID starting with e1bf8b03d47664528e13a53e61f099b52b9c3a47b3b075e4d35c2c948ee0907a not found: ID does not exist" Feb 16 17:44:32.047084 master-0 kubenswrapper[4652]: I0216 17:44:32.046703 4652 scope.go:117] "RemoveContainer" containerID="c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3" Feb 16 17:44:32.047084 master-0 kubenswrapper[4652]: I0216 17:44:32.046957 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3"} err="failed to get container status \"c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3\": rpc error: code = NotFound desc = could not find container \"c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3\": container with ID starting with c480e04934fb36546fac98fccdb007f46ca0e9ffdff687bf54be6d8617fb99c3 not found: ID does not exist" Feb 16 17:44:32.076610 master-0 kubenswrapper[4652]: I0216 17:44:32.076550 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:32.077226 master-0 kubenswrapper[4652]: E0216 17:44:32.077196 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680b172d-b955-4048-b53b-6695c95d68a5" containerName="nova-metadata-metadata" Feb 16 17:44:32.077226 master-0 kubenswrapper[4652]: I0216 17:44:32.077221 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="680b172d-b955-4048-b53b-6695c95d68a5" containerName="nova-metadata-metadata" Feb 16 17:44:32.077342 master-0 kubenswrapper[4652]: E0216 17:44:32.077285 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680b172d-b955-4048-b53b-6695c95d68a5" containerName="nova-metadata-log" Feb 16 17:44:32.077342 master-0 kubenswrapper[4652]: I0216 17:44:32.077297 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="680b172d-b955-4048-b53b-6695c95d68a5" containerName="nova-metadata-log" Feb 16 17:44:32.077342 master-0 kubenswrapper[4652]: E0216 17:44:32.077318 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" containerName="dnsmasq-dns" Feb 16 17:44:32.077342 master-0 kubenswrapper[4652]: I0216 17:44:32.077326 4652 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" containerName="dnsmasq-dns" Feb 16 17:44:32.077464 master-0 kubenswrapper[4652]: E0216 17:44:32.077347 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="209b2a48-903a-46dd-abc2-902650a6384c" containerName="nova-manage" Feb 16 17:44:32.077464 master-0 kubenswrapper[4652]: I0216 17:44:32.077355 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="209b2a48-903a-46dd-abc2-902650a6384c" containerName="nova-manage" Feb 16 17:44:32.077464 master-0 kubenswrapper[4652]: E0216 17:44:32.077381 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" containerName="init" Feb 16 17:44:32.077464 master-0 kubenswrapper[4652]: I0216 17:44:32.077389 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" containerName="init" Feb 16 17:44:32.077658 master-0 kubenswrapper[4652]: I0216 17:44:32.077631 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="680b172d-b955-4048-b53b-6695c95d68a5" containerName="nova-metadata-metadata" Feb 16 17:44:32.077700 master-0 kubenswrapper[4652]: I0216 17:44:32.077663 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f438fb-b03b-4b1e-9334-6438bb21c7eb" containerName="dnsmasq-dns" Feb 16 17:44:32.077700 master-0 kubenswrapper[4652]: I0216 17:44:32.077687 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="209b2a48-903a-46dd-abc2-902650a6384c" containerName="nova-manage" Feb 16 17:44:32.077763 master-0 kubenswrapper[4652]: I0216 17:44:32.077708 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="680b172d-b955-4048-b53b-6695c95d68a5" containerName="nova-metadata-log" Feb 16 17:44:32.079761 master-0 kubenswrapper[4652]: I0216 17:44:32.079703 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:44:32.084785 master-0 kubenswrapper[4652]: I0216 17:44:32.084739 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:44:32.084936 master-0 kubenswrapper[4652]: I0216 17:44:32.084910 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 17:44:32.100649 master-0 kubenswrapper[4652]: I0216 17:44:32.100596 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:32.185145 master-0 kubenswrapper[4652]: I0216 17:44:32.185099 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-config-data\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.185145 master-0 kubenswrapper[4652]: I0216 17:44:32.185150 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a171bdac-1967-4caf-836e-4eac64a10fd6-logs\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.185546 master-0 kubenswrapper[4652]: I0216 17:44:32.185336 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.185546 master-0 kubenswrapper[4652]: I0216 17:44:32.185426 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v96g\" (UniqueName: \"kubernetes.io/projected/a171bdac-1967-4caf-836e-4eac64a10fd6-kube-api-access-5v96g\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.185546 master-0 kubenswrapper[4652]: I0216 17:44:32.185517 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.212147 master-0 kubenswrapper[4652]: I0216 17:44:32.211640 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ironic-conductor-0" Feb 16 17:44:32.288912 master-0 kubenswrapper[4652]: I0216 17:44:32.288851 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-config-data\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.288912 master-0 kubenswrapper[4652]: I0216 17:44:32.288909 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a171bdac-1967-4caf-836e-4eac64a10fd6-logs\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.289212 master-0 kubenswrapper[4652]: I0216 17:44:32.289022 4652 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.289212 master-0 kubenswrapper[4652]: I0216 17:44:32.289091 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v96g\" (UniqueName: \"kubernetes.io/projected/a171bdac-1967-4caf-836e-4eac64a10fd6-kube-api-access-5v96g\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.289212 master-0 kubenswrapper[4652]: I0216 17:44:32.289155 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.289391 master-0 kubenswrapper[4652]: I0216 17:44:32.289365 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a171bdac-1967-4caf-836e-4eac64a10fd6-logs\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.301681 master-0 kubenswrapper[4652]: I0216 17:44:32.293104 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-config-data\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.301681 master-0 kubenswrapper[4652]: I0216 17:44:32.293866 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.301681 master-0 kubenswrapper[4652]: I0216 17:44:32.298872 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.307320 master-0 kubenswrapper[4652]: I0216 17:44:32.305338 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v96g\" (UniqueName: \"kubernetes.io/projected/a171bdac-1967-4caf-836e-4eac64a10fd6-kube-api-access-5v96g\") pod \"nova-metadata-0\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") " pod="openstack/nova-metadata-0" Feb 16 17:44:32.407305 master-0 kubenswrapper[4652]: I0216 17:44:32.407270 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rmx4f" Feb 16 17:44:32.462651 master-0 kubenswrapper[4652]: I0216 17:44:32.462587 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:44:32.492370 master-0 kubenswrapper[4652]: I0216 17:44:32.492240 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d49fp\" (UniqueName: \"kubernetes.io/projected/c2966848-b02c-4fef-8d49-df6a97604e12-kube-api-access-d49fp\") pod \"c2966848-b02c-4fef-8d49-df6a97604e12\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " Feb 16 17:44:32.492583 master-0 kubenswrapper[4652]: I0216 17:44:32.492435 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-combined-ca-bundle\") pod \"c2966848-b02c-4fef-8d49-df6a97604e12\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " Feb 16 17:44:32.492583 master-0 kubenswrapper[4652]: I0216 17:44:32.492497 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-scripts\") pod \"c2966848-b02c-4fef-8d49-df6a97604e12\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " Feb 16 17:44:32.492694 master-0 kubenswrapper[4652]: I0216 17:44:32.492667 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-config-data\") pod \"c2966848-b02c-4fef-8d49-df6a97604e12\" (UID: \"c2966848-b02c-4fef-8d49-df6a97604e12\") " Feb 16 17:44:32.495764 master-0 kubenswrapper[4652]: I0216 17:44:32.495718 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2966848-b02c-4fef-8d49-df6a97604e12-kube-api-access-d49fp" (OuterVolumeSpecName: "kube-api-access-d49fp") pod "c2966848-b02c-4fef-8d49-df6a97604e12" (UID: "c2966848-b02c-4fef-8d49-df6a97604e12"). InnerVolumeSpecName "kube-api-access-d49fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:44:32.496275 master-0 kubenswrapper[4652]: I0216 17:44:32.496163 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-scripts" (OuterVolumeSpecName: "scripts") pod "c2966848-b02c-4fef-8d49-df6a97604e12" (UID: "c2966848-b02c-4fef-8d49-df6a97604e12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:32.528630 master-0 kubenswrapper[4652]: I0216 17:44:32.528576 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-config-data" (OuterVolumeSpecName: "config-data") pod "c2966848-b02c-4fef-8d49-df6a97604e12" (UID: "c2966848-b02c-4fef-8d49-df6a97604e12"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:32.530630 master-0 kubenswrapper[4652]: I0216 17:44:32.530597 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2966848-b02c-4fef-8d49-df6a97604e12" (UID: "c2966848-b02c-4fef-8d49-df6a97604e12"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:32.598262 master-0 kubenswrapper[4652]: I0216 17:44:32.598168 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:32.598262 master-0 kubenswrapper[4652]: I0216 17:44:32.598232 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d49fp\" (UniqueName: \"kubernetes.io/projected/c2966848-b02c-4fef-8d49-df6a97604e12-kube-api-access-d49fp\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:32.598262 master-0 kubenswrapper[4652]: I0216 17:44:32.598268 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:32.598262 master-0 kubenswrapper[4652]: I0216 17:44:32.598281 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2966848-b02c-4fef-8d49-df6a97604e12-scripts\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:32.773294 master-0 kubenswrapper[4652]: I0216 17:44:32.773232 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="680b172d-b955-4048-b53b-6695c95d68a5" path="/var/lib/kubelet/pods/680b172d-b955-4048-b53b-6695c95d68a5/volumes" Feb 16 17:44:32.934285 master-0 kubenswrapper[4652]: I0216 17:44:32.934209 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:44:32.991179 master-0 kubenswrapper[4652]: I0216 17:44:32.991117 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a171bdac-1967-4caf-836e-4eac64a10fd6","Type":"ContainerStarted","Data":"29063fd5fbef9ecadbce514303ac005671282ae1343faf4f83aa1c1dbad2d10d"} Feb 16 17:44:32.993558 master-0 kubenswrapper[4652]: I0216 17:44:32.993475 4652 generic.go:334] "Generic (PLEG): container finished" podID="2604753f-de68-498b-be82-6d8da2ce56d9" containerID="6051da26155a57e24177ea45e195b7f89a26c5e03a7967b8cf73583a6a781e32" exitCode=0 Feb 16 17:44:32.993558 master-0 kubenswrapper[4652]: I0216 17:44:32.993523 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2604753f-de68-498b-be82-6d8da2ce56d9","Type":"ContainerDied","Data":"6051da26155a57e24177ea45e195b7f89a26c5e03a7967b8cf73583a6a781e32"} Feb 16 17:44:32.995542 master-0 kubenswrapper[4652]: I0216 17:44:32.995520 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rmx4f" event={"ID":"c2966848-b02c-4fef-8d49-df6a97604e12","Type":"ContainerDied","Data":"63b24906effc796c9a865ac54c14721c2a10a6edcf54b785209283bad4839f0e"} Feb 16 17:44:32.995641 master-0 kubenswrapper[4652]: I0216 17:44:32.995546 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63b24906effc796c9a865ac54c14721c2a10a6edcf54b785209283bad4839f0e" Feb 16 17:44:32.995641 master-0 kubenswrapper[4652]: I0216 17:44:32.995602 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rmx4f" Feb 16 17:44:33.160308 master-0 kubenswrapper[4652]: I0216 17:44:33.160238 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 17:44:33.160762 master-0 kubenswrapper[4652]: E0216 17:44:33.160738 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2966848-b02c-4fef-8d49-df6a97604e12" containerName="nova-cell1-conductor-db-sync" Feb 16 17:44:33.160762 master-0 kubenswrapper[4652]: I0216 17:44:33.160757 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2966848-b02c-4fef-8d49-df6a97604e12" containerName="nova-cell1-conductor-db-sync" Feb 16 17:44:33.161646 master-0 kubenswrapper[4652]: I0216 17:44:33.160964 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2966848-b02c-4fef-8d49-df6a97604e12" containerName="nova-cell1-conductor-db-sync" Feb 16 17:44:33.161828 master-0 kubenswrapper[4652]: I0216 17:44:33.161805 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.163765 master-0 kubenswrapper[4652]: I0216 17:44:33.163744 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 17:44:33.188286 master-0 kubenswrapper[4652]: I0216 17:44:33.187658 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 17:44:33.220085 master-0 kubenswrapper[4652]: I0216 17:44:33.220015 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3300c3df-961b-4d03-9260-764620b49489-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3300c3df-961b-4d03-9260-764620b49489\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.220085 master-0 kubenswrapper[4652]: I0216 17:44:33.220086 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkbmr\" (UniqueName: \"kubernetes.io/projected/3300c3df-961b-4d03-9260-764620b49489-kube-api-access-mkbmr\") pod \"nova-cell1-conductor-0\" (UID: \"3300c3df-961b-4d03-9260-764620b49489\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.220592 master-0 kubenswrapper[4652]: I0216 17:44:33.220554 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3300c3df-961b-4d03-9260-764620b49489-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3300c3df-961b-4d03-9260-764620b49489\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.310010 master-0 kubenswrapper[4652]: I0216 17:44:33.309967 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:44:33.322446 master-0 kubenswrapper[4652]: I0216 17:44:33.322396 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3300c3df-961b-4d03-9260-764620b49489-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3300c3df-961b-4d03-9260-764620b49489\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.322648 master-0 kubenswrapper[4652]: I0216 17:44:33.322572 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3300c3df-961b-4d03-9260-764620b49489-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3300c3df-961b-4d03-9260-764620b49489\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.322648 master-0 kubenswrapper[4652]: I0216 17:44:33.322604 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkbmr\" (UniqueName: \"kubernetes.io/projected/3300c3df-961b-4d03-9260-764620b49489-kube-api-access-mkbmr\") pod \"nova-cell1-conductor-0\" (UID: \"3300c3df-961b-4d03-9260-764620b49489\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.326701 master-0 kubenswrapper[4652]: I0216 17:44:33.326658 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3300c3df-961b-4d03-9260-764620b49489-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3300c3df-961b-4d03-9260-764620b49489\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.327036 master-0 kubenswrapper[4652]: I0216 17:44:33.327005 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3300c3df-961b-4d03-9260-764620b49489-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3300c3df-961b-4d03-9260-764620b49489\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.343150 master-0 kubenswrapper[4652]: I0216 17:44:33.343101 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkbmr\" (UniqueName: \"kubernetes.io/projected/3300c3df-961b-4d03-9260-764620b49489-kube-api-access-mkbmr\") pod \"nova-cell1-conductor-0\" (UID: \"3300c3df-961b-4d03-9260-764620b49489\") " pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.421969 master-0 kubenswrapper[4652]: I0216 17:44:33.421904 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:33.430811 master-0 kubenswrapper[4652]: I0216 17:44:33.430755 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-config-data\") pod \"2604753f-de68-498b-be82-6d8da2ce56d9\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " Feb 16 17:44:33.431207 master-0 kubenswrapper[4652]: I0216 17:44:33.430891 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2604753f-de68-498b-be82-6d8da2ce56d9-logs\") pod \"2604753f-de68-498b-be82-6d8da2ce56d9\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " Feb 16 17:44:33.431207 master-0 kubenswrapper[4652]: I0216 17:44:33.431124 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-combined-ca-bundle\") pod \"2604753f-de68-498b-be82-6d8da2ce56d9\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " Feb 16 17:44:33.431398 master-0 kubenswrapper[4652]: I0216 17:44:33.431235 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vgmd\" (UniqueName: \"kubernetes.io/projected/2604753f-de68-498b-be82-6d8da2ce56d9-kube-api-access-4vgmd\") pod \"2604753f-de68-498b-be82-6d8da2ce56d9\" (UID: \"2604753f-de68-498b-be82-6d8da2ce56d9\") " Feb 16 17:44:33.432543 master-0 kubenswrapper[4652]: I0216 17:44:33.432503 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2604753f-de68-498b-be82-6d8da2ce56d9-logs" (OuterVolumeSpecName: "logs") pod "2604753f-de68-498b-be82-6d8da2ce56d9" (UID: "2604753f-de68-498b-be82-6d8da2ce56d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:44:33.435605 master-0 kubenswrapper[4652]: I0216 17:44:33.435559 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2604753f-de68-498b-be82-6d8da2ce56d9-kube-api-access-4vgmd" (OuterVolumeSpecName: "kube-api-access-4vgmd") pod "2604753f-de68-498b-be82-6d8da2ce56d9" (UID: "2604753f-de68-498b-be82-6d8da2ce56d9"). InnerVolumeSpecName "kube-api-access-4vgmd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:44:33.468658 master-0 kubenswrapper[4652]: I0216 17:44:33.468583 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2604753f-de68-498b-be82-6d8da2ce56d9" (UID: "2604753f-de68-498b-be82-6d8da2ce56d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:33.470409 master-0 kubenswrapper[4652]: I0216 17:44:33.470380 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-config-data" (OuterVolumeSpecName: "config-data") pod "2604753f-de68-498b-be82-6d8da2ce56d9" (UID: "2604753f-de68-498b-be82-6d8da2ce56d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:33.537346 master-0 kubenswrapper[4652]: I0216 17:44:33.534306 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vgmd\" (UniqueName: \"kubernetes.io/projected/2604753f-de68-498b-be82-6d8da2ce56d9-kube-api-access-4vgmd\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:33.537346 master-0 kubenswrapper[4652]: I0216 17:44:33.534345 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:33.537346 master-0 kubenswrapper[4652]: I0216 17:44:33.534357 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2604753f-de68-498b-be82-6d8da2ce56d9-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:33.537346 master-0 kubenswrapper[4652]: I0216 17:44:33.534368 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2604753f-de68-498b-be82-6d8da2ce56d9-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:33.589790 master-0 kubenswrapper[4652]: I0216 17:44:33.589739 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ironic-conductor-0" Feb 16 17:44:33.908651 master-0 kubenswrapper[4652]: I0216 17:44:33.908582 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 17:44:33.912657 master-0 kubenswrapper[4652]: W0216 17:44:33.912604 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3300c3df_961b_4d03_9260_764620b49489.slice/crio-40ef60e7d6733f29c3605a9cb5a4f6033ecabc34ae6629bd5efd9cccf795803b WatchSource:0}: Error finding container 40ef60e7d6733f29c3605a9cb5a4f6033ecabc34ae6629bd5efd9cccf795803b: Status 404 returned error can't find the container with id 40ef60e7d6733f29c3605a9cb5a4f6033ecabc34ae6629bd5efd9cccf795803b Feb 16 17:44:34.014172 master-0 kubenswrapper[4652]: I0216 17:44:34.014114 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a171bdac-1967-4caf-836e-4eac64a10fd6","Type":"ContainerStarted","Data":"ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea"} Feb 16 17:44:34.014172 master-0 kubenswrapper[4652]: I0216 17:44:34.014177 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a171bdac-1967-4caf-836e-4eac64a10fd6","Type":"ContainerStarted","Data":"b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd"} Feb 16 17:44:34.018347 master-0 kubenswrapper[4652]: I0216 17:44:34.018289 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2604753f-de68-498b-be82-6d8da2ce56d9","Type":"ContainerDied","Data":"f396940e43cbccdbdaea3fd474ae3ab5151581394fd90e2ddcbf3b786d08b504"} Feb 16 17:44:34.018616 master-0 kubenswrapper[4652]: I0216 17:44:34.018370 4652 scope.go:117] "RemoveContainer" containerID="6051da26155a57e24177ea45e195b7f89a26c5e03a7967b8cf73583a6a781e32" Feb 16 17:44:34.018616 master-0 kubenswrapper[4652]: I0216 17:44:34.018557 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:44:34.022367 master-0 kubenswrapper[4652]: I0216 17:44:34.022319 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3300c3df-961b-4d03-9260-764620b49489","Type":"ContainerStarted","Data":"40ef60e7d6733f29c3605a9cb5a4f6033ecabc34ae6629bd5efd9cccf795803b"} Feb 16 17:44:34.050848 master-0 kubenswrapper[4652]: I0216 17:44:34.050763 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.050742734 podStartE2EDuration="2.050742734s" podCreationTimestamp="2026-02-16 17:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:34.04460536 +0000 UTC m=+1231.432773886" watchObservedRunningTime="2026-02-16 17:44:34.050742734 +0000 UTC m=+1231.438911260" Feb 16 17:44:34.063874 master-0 kubenswrapper[4652]: I0216 17:44:34.063839 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 16 17:44:34.087651 master-0 kubenswrapper[4652]: I0216 17:44:34.087621 4652 scope.go:117] "RemoveContainer" containerID="17c9dbb046c5217431166ffcdf8ea2149d86bc0845a792436299348fa92637c1" Feb 16 17:44:34.088325 master-0 kubenswrapper[4652]: I0216 17:44:34.088293 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:44:34.101532 master-0 kubenswrapper[4652]: I0216 17:44:34.101486 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:44:34.126499 master-0 kubenswrapper[4652]: I0216 17:44:34.126445 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:44:34.126984 master-0 kubenswrapper[4652]: E0216 17:44:34.126957 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-log" Feb 16 17:44:34.126984 master-0 kubenswrapper[4652]: I0216 17:44:34.126976 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-log" Feb 16 17:44:34.127110 master-0 kubenswrapper[4652]: E0216 17:44:34.127088 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-api" Feb 16 17:44:34.127110 master-0 kubenswrapper[4652]: I0216 17:44:34.127102 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-api" Feb 16 17:44:34.127373 master-0 kubenswrapper[4652]: I0216 17:44:34.127345 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-log" Feb 16 17:44:34.127373 master-0 kubenswrapper[4652]: I0216 17:44:34.127363 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" containerName="nova-api-api" Feb 16 17:44:34.128706 master-0 kubenswrapper[4652]: I0216 17:44:34.128679 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:44:34.131336 master-0 kubenswrapper[4652]: I0216 17:44:34.131297 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:44:34.198766 master-0 kubenswrapper[4652]: I0216 17:44:34.198642 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:44:34.355773 master-0 kubenswrapper[4652]: I0216 17:44:34.355678 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p99pc\" (UniqueName: \"kubernetes.io/projected/8915bb9d-caab-44fd-b00a-2426c5e1fad4-kube-api-access-p99pc\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.356051 master-0 kubenswrapper[4652]: I0216 17:44:34.355973 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.356616 master-0 kubenswrapper[4652]: I0216 17:44:34.356556 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-config-data\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.357025 master-0 kubenswrapper[4652]: I0216 17:44:34.356994 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8915bb9d-caab-44fd-b00a-2426c5e1fad4-logs\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.459725 master-0 kubenswrapper[4652]: I0216 17:44:34.459668 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.459999 master-0 kubenswrapper[4652]: I0216 17:44:34.459769 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-config-data\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.459999 master-0 kubenswrapper[4652]: I0216 17:44:34.459834 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8915bb9d-caab-44fd-b00a-2426c5e1fad4-logs\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.459999 master-0 kubenswrapper[4652]: I0216 17:44:34.459908 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p99pc\" (UniqueName: \"kubernetes.io/projected/8915bb9d-caab-44fd-b00a-2426c5e1fad4-kube-api-access-p99pc\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.461262 master-0 kubenswrapper[4652]: I0216 17:44:34.461197 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8915bb9d-caab-44fd-b00a-2426c5e1fad4-logs\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.473464 master-0 kubenswrapper[4652]: I0216 17:44:34.473420 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-config-data\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.474202 master-0 kubenswrapper[4652]: I0216 17:44:34.474173 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.476598 master-0 kubenswrapper[4652]: I0216 17:44:34.476530 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p99pc\" (UniqueName: \"kubernetes.io/projected/8915bb9d-caab-44fd-b00a-2426c5e1fad4-kube-api-access-p99pc\") pod \"nova-api-0\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " pod="openstack/nova-api-0" Feb 16 17:44:34.713400 master-0 kubenswrapper[4652]: I0216 17:44:34.713240 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:44:34.770458 master-0 kubenswrapper[4652]: I0216 17:44:34.768741 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-config-data\") pod \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " Feb 16 17:44:34.770458 master-0 kubenswrapper[4652]: I0216 17:44:34.768917 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2flg2\" (UniqueName: \"kubernetes.io/projected/7361f440-77f5-42e0-bdce-5bc776fa7f8d-kube-api-access-2flg2\") pod \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " Feb 16 17:44:34.770458 master-0 kubenswrapper[4652]: I0216 17:44:34.769101 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-combined-ca-bundle\") pod \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\" (UID: \"7361f440-77f5-42e0-bdce-5bc776fa7f8d\") " Feb 16 17:44:34.776285 master-0 kubenswrapper[4652]: I0216 17:44:34.773802 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2604753f-de68-498b-be82-6d8da2ce56d9" path="/var/lib/kubelet/pods/2604753f-de68-498b-be82-6d8da2ce56d9/volumes" Feb 16 17:44:34.776285 master-0 kubenswrapper[4652]: I0216 17:44:34.775822 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:44:34.813893 master-0 kubenswrapper[4652]: I0216 17:44:34.813314 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7361f440-77f5-42e0-bdce-5bc776fa7f8d-kube-api-access-2flg2" (OuterVolumeSpecName: "kube-api-access-2flg2") pod "7361f440-77f5-42e0-bdce-5bc776fa7f8d" (UID: "7361f440-77f5-42e0-bdce-5bc776fa7f8d"). InnerVolumeSpecName "kube-api-access-2flg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:44:34.821747 master-0 kubenswrapper[4652]: I0216 17:44:34.820552 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7361f440-77f5-42e0-bdce-5bc776fa7f8d" (UID: "7361f440-77f5-42e0-bdce-5bc776fa7f8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:34.836900 master-0 kubenswrapper[4652]: I0216 17:44:34.836819 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-config-data" (OuterVolumeSpecName: "config-data") pod "7361f440-77f5-42e0-bdce-5bc776fa7f8d" (UID: "7361f440-77f5-42e0-bdce-5bc776fa7f8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:34.886222 master-0 kubenswrapper[4652]: I0216 17:44:34.886157 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2flg2\" (UniqueName: \"kubernetes.io/projected/7361f440-77f5-42e0-bdce-5bc776fa7f8d-kube-api-access-2flg2\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:34.886222 master-0 kubenswrapper[4652]: I0216 17:44:34.886211 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:34.886222 master-0 kubenswrapper[4652]: I0216 17:44:34.886224 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7361f440-77f5-42e0-bdce-5bc776fa7f8d-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:35.038130 master-0 kubenswrapper[4652]: I0216 17:44:35.038073 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3300c3df-961b-4d03-9260-764620b49489","Type":"ContainerStarted","Data":"d55d10a4254bacc2c646be609ef2428e823ba01f37a0b5a904cd1d224adca82a"} Feb 16 17:44:35.039765 master-0 kubenswrapper[4652]: I0216 17:44:35.039732 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:35.045420 master-0 kubenswrapper[4652]: I0216 17:44:35.045374 4652 generic.go:334] "Generic (PLEG): container finished" podID="7361f440-77f5-42e0-bdce-5bc776fa7f8d" containerID="c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937" exitCode=0 Feb 16 17:44:35.045539 master-0 kubenswrapper[4652]: I0216 17:44:35.045461 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7361f440-77f5-42e0-bdce-5bc776fa7f8d","Type":"ContainerDied","Data":"c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937"} Feb 16 17:44:35.045539 master-0 kubenswrapper[4652]: I0216 17:44:35.045529 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7361f440-77f5-42e0-bdce-5bc776fa7f8d","Type":"ContainerDied","Data":"ee6917428d5750e40e5f877f23471413af83ef7ef9a28e08b9265d73e97a15ef"} Feb 16 17:44:35.045633 master-0 kubenswrapper[4652]: I0216 17:44:35.045549 4652 scope.go:117] "RemoveContainer" containerID="c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937" Feb 16 17:44:35.045751 master-0 kubenswrapper[4652]: I0216 17:44:35.045721 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:44:35.059092 master-0 kubenswrapper[4652]: I0216 17:44:35.059042 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ironic-conductor-0" Feb 16 17:44:35.068435 master-0 kubenswrapper[4652]: I0216 17:44:35.068365 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.068227569 podStartE2EDuration="2.068227569s" podCreationTimestamp="2026-02-16 17:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:35.067196281 +0000 UTC m=+1232.455364817" watchObservedRunningTime="2026-02-16 17:44:35.068227569 +0000 UTC m=+1232.456396075" Feb 16 17:44:35.097769 master-0 kubenswrapper[4652]: I0216 17:44:35.096794 4652 scope.go:117] "RemoveContainer" containerID="c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937" Feb 16 17:44:35.097769 master-0 kubenswrapper[4652]: E0216 17:44:35.097333 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937\": container with ID starting with c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937 not found: ID does not exist" containerID="c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937" Feb 16 17:44:35.097769 master-0 kubenswrapper[4652]: I0216 17:44:35.097371 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937"} err="failed to get container status \"c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937\": rpc error: code = NotFound desc = could not find container \"c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937\": container with ID starting with c385b333f7d4617d9eb21f80f7e356cb4b1fc4f7f9eab268eb7186bf7dba1937 not found: ID does not exist" Feb 16 17:44:35.289801 master-0 kubenswrapper[4652]: I0216 17:44:35.288167 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:44:35.329970 master-0 kubenswrapper[4652]: I0216 17:44:35.329896 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:44:35.364489 master-0 kubenswrapper[4652]: I0216 17:44:35.363777 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:44:35.379329 master-0 kubenswrapper[4652]: I0216 17:44:35.379243 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:44:35.379864 master-0 kubenswrapper[4652]: E0216 17:44:35.379833 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7361f440-77f5-42e0-bdce-5bc776fa7f8d" containerName="nova-scheduler-scheduler" Feb 16 17:44:35.379864 master-0 kubenswrapper[4652]: I0216 17:44:35.379854 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="7361f440-77f5-42e0-bdce-5bc776fa7f8d" containerName="nova-scheduler-scheduler" Feb 16 17:44:35.380168 master-0 kubenswrapper[4652]: I0216 17:44:35.380085 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="7361f440-77f5-42e0-bdce-5bc776fa7f8d" containerName="nova-scheduler-scheduler" Feb 16 17:44:35.380881 master-0 kubenswrapper[4652]: I0216 17:44:35.380845 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:44:35.383703 master-0 kubenswrapper[4652]: I0216 17:44:35.383667 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 17:44:35.396310 master-0 kubenswrapper[4652]: I0216 17:44:35.396151 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:44:35.402315 master-0 kubenswrapper[4652]: I0216 17:44:35.402263 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:35.402630 master-0 kubenswrapper[4652]: I0216 17:44:35.402423 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-config-data\") pod \"nova-scheduler-0\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:35.402630 master-0 kubenswrapper[4652]: I0216 17:44:35.402451 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7n9w\" (UniqueName: \"kubernetes.io/projected/40ff8fb5-61f0-42a8-8b16-8571f3305785-kube-api-access-v7n9w\") pod \"nova-scheduler-0\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:35.504965 master-0 kubenswrapper[4652]: I0216 17:44:35.504894 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:35.505180 master-0 kubenswrapper[4652]: I0216 17:44:35.505073 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-config-data\") pod \"nova-scheduler-0\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:35.505180 master-0 kubenswrapper[4652]: I0216 17:44:35.505099 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7n9w\" (UniqueName: \"kubernetes.io/projected/40ff8fb5-61f0-42a8-8b16-8571f3305785-kube-api-access-v7n9w\") pod \"nova-scheduler-0\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:35.518115 master-0 kubenswrapper[4652]: I0216 17:44:35.509188 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:35.518115 master-0 kubenswrapper[4652]: I0216 17:44:35.510207 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-config-data\") pod \"nova-scheduler-0\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:35.522463 master-0 kubenswrapper[4652]: I0216 17:44:35.522388 
4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7n9w\" (UniqueName: \"kubernetes.io/projected/40ff8fb5-61f0-42a8-8b16-8571f3305785-kube-api-access-v7n9w\") pod \"nova-scheduler-0\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " pod="openstack/nova-scheduler-0" Feb 16 17:44:35.761463 master-0 kubenswrapper[4652]: I0216 17:44:35.758298 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:44:36.080569 master-0 kubenswrapper[4652]: I0216 17:44:36.078888 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8915bb9d-caab-44fd-b00a-2426c5e1fad4","Type":"ContainerStarted","Data":"3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e"} Feb 16 17:44:36.080569 master-0 kubenswrapper[4652]: I0216 17:44:36.078929 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8915bb9d-caab-44fd-b00a-2426c5e1fad4","Type":"ContainerStarted","Data":"85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870"} Feb 16 17:44:36.080569 master-0 kubenswrapper[4652]: I0216 17:44:36.078937 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8915bb9d-caab-44fd-b00a-2426c5e1fad4","Type":"ContainerStarted","Data":"8dfbf486ce949e8332e54209e849dc1d78d29631d63a17c73c4ed68acee22297"} Feb 16 17:44:36.117115 master-0 kubenswrapper[4652]: I0216 17:44:36.115836 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.115817261 podStartE2EDuration="2.115817261s" podCreationTimestamp="2026-02-16 17:44:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:36.101300512 +0000 UTC m=+1233.489469028" watchObservedRunningTime="2026-02-16 17:44:36.115817261 +0000 UTC m=+1233.503985777" Feb 16 17:44:36.512564 master-0 kubenswrapper[4652]: I0216 17:44:36.512506 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:44:36.763514 master-0 kubenswrapper[4652]: I0216 17:44:36.763455 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7361f440-77f5-42e0-bdce-5bc776fa7f8d" path="/var/lib/kubelet/pods/7361f440-77f5-42e0-bdce-5bc776fa7f8d/volumes" Feb 16 17:44:37.092864 master-0 kubenswrapper[4652]: I0216 17:44:37.092730 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40ff8fb5-61f0-42a8-8b16-8571f3305785","Type":"ContainerStarted","Data":"7962b043b53fa81b163601b9f78aad11933825c698b1f294370923aeea1d30d2"} Feb 16 17:44:37.092864 master-0 kubenswrapper[4652]: I0216 17:44:37.092811 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40ff8fb5-61f0-42a8-8b16-8571f3305785","Type":"ContainerStarted","Data":"9cf98f29077c410fce99a0a982dbc21f8506dedfebc56d328182d077fa78b0fa"} Feb 16 17:44:37.462926 master-0 kubenswrapper[4652]: I0216 17:44:37.462846 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:44:37.462926 master-0 kubenswrapper[4652]: I0216 17:44:37.462910 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:44:40.762595 master-0 kubenswrapper[4652]: I0216 17:44:40.762442 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-scheduler-0" Feb 16 17:44:42.468081 master-0 kubenswrapper[4652]: I0216 17:44:42.468021 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:44:42.471220 master-0 kubenswrapper[4652]: I0216 17:44:42.471175 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:44:43.476090 master-0 kubenswrapper[4652]: I0216 17:44:43.476033 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 17:44:43.480536 master-0 kubenswrapper[4652]: I0216 17:44:43.480463 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:44:43.480799 master-0 kubenswrapper[4652]: I0216 17:44:43.480482 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.0.232:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:44:43.506421 master-0 kubenswrapper[4652]: I0216 17:44:43.506306 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=8.506285859 podStartE2EDuration="8.506285859s" podCreationTimestamp="2026-02-16 17:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:37.127934361 +0000 UTC m=+1234.516102877" watchObservedRunningTime="2026-02-16 17:44:43.506285859 +0000 UTC m=+1240.894454375" Feb 16 17:44:44.777098 master-0 kubenswrapper[4652]: I0216 17:44:44.777022 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:44:44.777098 master-0 kubenswrapper[4652]: I0216 17:44:44.777089 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:44:45.761646 master-0 kubenswrapper[4652]: I0216 17:44:45.761586 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 17:44:45.792098 master-0 kubenswrapper[4652]: I0216 17:44:45.792021 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 17:44:45.864541 master-0 kubenswrapper[4652]: I0216 17:44:45.864460 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.128.0.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:44:45.864770 master-0 kubenswrapper[4652]: I0216 17:44:45.864533 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.128.0.234:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 17:44:46.203678 master-0 kubenswrapper[4652]: I0216 17:44:46.203617 4652 generic.go:334] "Generic (PLEG): container finished" 
podID="65aadd67-7869-439a-a571-b0827da937da" containerID="6aa91491057953c3de6a441828c55ea9bdc8b4f061e094a4ce80c379463e78d3" exitCode=137 Feb 16 17:44:46.203904 master-0 kubenswrapper[4652]: I0216 17:44:46.203690 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65aadd67-7869-439a-a571-b0827da937da","Type":"ContainerDied","Data":"6aa91491057953c3de6a441828c55ea9bdc8b4f061e094a4ce80c379463e78d3"} Feb 16 17:44:46.236131 master-0 kubenswrapper[4652]: I0216 17:44:46.236065 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 17:44:46.539292 master-0 kubenswrapper[4652]: I0216 17:44:46.539200 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:46.621318 master-0 kubenswrapper[4652]: I0216 17:44:46.621239 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-config-data\") pod \"65aadd67-7869-439a-a571-b0827da937da\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " Feb 16 17:44:46.621563 master-0 kubenswrapper[4652]: I0216 17:44:46.621544 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-combined-ca-bundle\") pod \"65aadd67-7869-439a-a571-b0827da937da\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " Feb 16 17:44:46.621748 master-0 kubenswrapper[4652]: I0216 17:44:46.621716 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbt2f\" (UniqueName: \"kubernetes.io/projected/65aadd67-7869-439a-a571-b0827da937da-kube-api-access-kbt2f\") pod \"65aadd67-7869-439a-a571-b0827da937da\" (UID: \"65aadd67-7869-439a-a571-b0827da937da\") " Feb 16 17:44:46.629754 master-0 kubenswrapper[4652]: I0216 17:44:46.629707 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65aadd67-7869-439a-a571-b0827da937da-kube-api-access-kbt2f" (OuterVolumeSpecName: "kube-api-access-kbt2f") pod "65aadd67-7869-439a-a571-b0827da937da" (UID: "65aadd67-7869-439a-a571-b0827da937da"). InnerVolumeSpecName "kube-api-access-kbt2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:44:46.656827 master-0 kubenswrapper[4652]: I0216 17:44:46.656768 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-config-data" (OuterVolumeSpecName: "config-data") pod "65aadd67-7869-439a-a571-b0827da937da" (UID: "65aadd67-7869-439a-a571-b0827da937da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:46.702625 master-0 kubenswrapper[4652]: I0216 17:44:46.697292 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65aadd67-7869-439a-a571-b0827da937da" (UID: "65aadd67-7869-439a-a571-b0827da937da"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:44:46.725274 master-0 kubenswrapper[4652]: I0216 17:44:46.724854 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:46.725274 master-0 kubenswrapper[4652]: I0216 17:44:46.724904 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbt2f\" (UniqueName: \"kubernetes.io/projected/65aadd67-7869-439a-a571-b0827da937da-kube-api-access-kbt2f\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:46.725274 master-0 kubenswrapper[4652]: I0216 17:44:46.724920 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65aadd67-7869-439a-a571-b0827da937da-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:44:47.218170 master-0 kubenswrapper[4652]: I0216 17:44:47.218125 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.218170 master-0 kubenswrapper[4652]: I0216 17:44:47.218146 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65aadd67-7869-439a-a571-b0827da937da","Type":"ContainerDied","Data":"8bb565856842c2acd6b45889a66e24222e5379aac7eab038d2c7a9abab96bae9"} Feb 16 17:44:47.218750 master-0 kubenswrapper[4652]: I0216 17:44:47.218227 4652 scope.go:117] "RemoveContainer" containerID="6aa91491057953c3de6a441828c55ea9bdc8b4f061e094a4ce80c379463e78d3" Feb 16 17:44:47.269293 master-0 kubenswrapper[4652]: I0216 17:44:47.267754 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:44:47.288713 master-0 kubenswrapper[4652]: I0216 17:44:47.288650 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:44:47.313421 master-0 kubenswrapper[4652]: I0216 17:44:47.312570 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:44:47.313421 master-0 kubenswrapper[4652]: E0216 17:44:47.313099 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65aadd67-7869-439a-a571-b0827da937da" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 17:44:47.313421 master-0 kubenswrapper[4652]: I0216 17:44:47.313113 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="65aadd67-7869-439a-a571-b0827da937da" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 17:44:47.313421 master-0 kubenswrapper[4652]: I0216 17:44:47.313359 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="65aadd67-7869-439a-a571-b0827da937da" containerName="nova-cell1-novncproxy-novncproxy" Feb 16 17:44:47.314108 master-0 kubenswrapper[4652]: I0216 17:44:47.314081 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.318816 master-0 kubenswrapper[4652]: I0216 17:44:47.318752 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 17:44:47.319616 master-0 kubenswrapper[4652]: I0216 17:44:47.319378 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 16 17:44:47.319616 master-0 kubenswrapper[4652]: I0216 17:44:47.319434 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 16 17:44:47.330694 master-0 kubenswrapper[4652]: I0216 17:44:47.330646 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:44:47.442450 master-0 kubenswrapper[4652]: I0216 17:44:47.442396 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ndd\" (UniqueName: \"kubernetes.io/projected/3bf904ad-b600-4caf-9766-ee4db8199c0f-kube-api-access-d5ndd\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.442662 master-0 kubenswrapper[4652]: I0216 17:44:47.442461 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.442662 master-0 kubenswrapper[4652]: I0216 17:44:47.442583 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.442662 master-0 kubenswrapper[4652]: I0216 17:44:47.442609 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.442783 master-0 kubenswrapper[4652]: I0216 17:44:47.442745 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.544443 master-0 kubenswrapper[4652]: I0216 17:44:47.544259 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.544894 master-0 kubenswrapper[4652]: I0216 17:44:47.544832 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-combined-ca-bundle\") 
pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.545274 master-0 kubenswrapper[4652]: I0216 17:44:47.545227 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.545365 master-0 kubenswrapper[4652]: I0216 17:44:47.545349 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5ndd\" (UniqueName: \"kubernetes.io/projected/3bf904ad-b600-4caf-9766-ee4db8199c0f-kube-api-access-d5ndd\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.545637 master-0 kubenswrapper[4652]: I0216 17:44:47.545435 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.549031 master-0 kubenswrapper[4652]: I0216 17:44:47.548764 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.549432 master-0 kubenswrapper[4652]: I0216 17:44:47.549413 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.549544 master-0 kubenswrapper[4652]: I0216 17:44:47.549421 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.549886 master-0 kubenswrapper[4652]: I0216 17:44:47.549840 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bf904ad-b600-4caf-9766-ee4db8199c0f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.563061 master-0 kubenswrapper[4652]: I0216 17:44:47.563016 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5ndd\" (UniqueName: \"kubernetes.io/projected/3bf904ad-b600-4caf-9766-ee4db8199c0f-kube-api-access-d5ndd\") pod \"nova-cell1-novncproxy-0\" (UID: \"3bf904ad-b600-4caf-9766-ee4db8199c0f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:47.656953 master-0 kubenswrapper[4652]: I0216 17:44:47.656864 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:48.114369 master-0 kubenswrapper[4652]: W0216 17:44:48.114323 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3bf904ad_b600_4caf_9766_ee4db8199c0f.slice/crio-d833dc88fc83f479e1841115c284eb2769598666156abaa5b5eabf15224b7cd7 WatchSource:0}: Error finding container d833dc88fc83f479e1841115c284eb2769598666156abaa5b5eabf15224b7cd7: Status 404 returned error can't find the container with id d833dc88fc83f479e1841115c284eb2769598666156abaa5b5eabf15224b7cd7 Feb 16 17:44:48.117016 master-0 kubenswrapper[4652]: I0216 17:44:48.116961 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 17:44:48.230212 master-0 kubenswrapper[4652]: I0216 17:44:48.230137 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3bf904ad-b600-4caf-9766-ee4db8199c0f","Type":"ContainerStarted","Data":"d833dc88fc83f479e1841115c284eb2769598666156abaa5b5eabf15224b7cd7"} Feb 16 17:44:48.785424 master-0 kubenswrapper[4652]: I0216 17:44:48.785120 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65aadd67-7869-439a-a571-b0827da937da" path="/var/lib/kubelet/pods/65aadd67-7869-439a-a571-b0827da937da/volumes" Feb 16 17:44:49.250058 master-0 kubenswrapper[4652]: I0216 17:44:49.249924 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3bf904ad-b600-4caf-9766-ee4db8199c0f","Type":"ContainerStarted","Data":"f2e2800ea9b7109b1b978fb6f744d425a97101e8627987dc1303c02051bc24bb"} Feb 16 17:44:49.365306 master-0 kubenswrapper[4652]: I0216 17:44:49.365211 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.365187932 podStartE2EDuration="2.365187932s" podCreationTimestamp="2026-02-16 17:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:49.352206914 +0000 UTC m=+1246.740375430" watchObservedRunningTime="2026-02-16 17:44:49.365187932 +0000 UTC m=+1246.753356458" Feb 16 17:44:52.468668 master-0 kubenswrapper[4652]: I0216 17:44:52.468608 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:44:52.469434 master-0 kubenswrapper[4652]: I0216 17:44:52.469120 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:44:52.474388 master-0 kubenswrapper[4652]: I0216 17:44:52.474346 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:44:52.657781 master-0 kubenswrapper[4652]: I0216 17:44:52.657718 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:53.307067 master-0 kubenswrapper[4652]: I0216 17:44:53.306986 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:44:54.783174 master-0 kubenswrapper[4652]: I0216 17:44:54.780201 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:44:54.783174 master-0 kubenswrapper[4652]: I0216 17:44:54.781087 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 17:44:54.785548 
master-0 kubenswrapper[4652]: I0216 17:44:54.784414 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 17:44:54.788468 master-0 kubenswrapper[4652]: I0216 17:44:54.788417 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:44:55.320274 master-0 kubenswrapper[4652]: I0216 17:44:55.320194 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 17:44:55.324042 master-0 kubenswrapper[4652]: I0216 17:44:55.323988 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 17:44:55.666865 master-0 kubenswrapper[4652]: I0216 17:44:55.666788 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f8bc5cb7-rfh9j"] Feb 16 17:44:55.669576 master-0 kubenswrapper[4652]: I0216 17:44:55.669525 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.707270 master-0 kubenswrapper[4652]: I0216 17:44:55.707177 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f8bc5cb7-rfh9j"] Feb 16 17:44:55.784748 master-0 kubenswrapper[4652]: I0216 17:44:55.784422 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-config\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.784748 master-0 kubenswrapper[4652]: I0216 17:44:55.784636 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-ovsdbserver-sb\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.784748 master-0 kubenswrapper[4652]: I0216 17:44:55.784727 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw7tv\" (UniqueName: \"kubernetes.io/projected/26d48aee-ca82-4694-aa9c-388f5f0582dd-kube-api-access-nw7tv\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.785578 master-0 kubenswrapper[4652]: I0216 17:44:55.784967 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-ovsdbserver-nb\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.785578 master-0 kubenswrapper[4652]: I0216 17:44:55.785074 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-dns-svc\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.785578 master-0 kubenswrapper[4652]: I0216 17:44:55.785214 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-dns-swift-storage-0\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.888037 master-0 kubenswrapper[4652]: I0216 17:44:55.887922 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-ovsdbserver-nb\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.888724 master-0 kubenswrapper[4652]: I0216 17:44:55.888700 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-dns-svc\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.888981 master-0 kubenswrapper[4652]: I0216 17:44:55.888963 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-dns-swift-storage-0\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.889451 master-0 kubenswrapper[4652]: I0216 17:44:55.889434 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-config\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.889580 master-0 kubenswrapper[4652]: I0216 17:44:55.889567 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-ovsdbserver-sb\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.889694 master-0 kubenswrapper[4652]: I0216 17:44:55.889677 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw7tv\" (UniqueName: \"kubernetes.io/projected/26d48aee-ca82-4694-aa9c-388f5f0582dd-kube-api-access-nw7tv\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.891355 master-0 kubenswrapper[4652]: I0216 17:44:55.891301 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-dns-swift-storage-0\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.891355 master-0 kubenswrapper[4652]: I0216 17:44:55.891344 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-ovsdbserver-nb\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.893052 master-0 kubenswrapper[4652]: I0216 17:44:55.892999 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-dns-svc\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.893052 master-0 kubenswrapper[4652]: I0216 17:44:55.893017 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-config\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.893268 master-0 kubenswrapper[4652]: I0216 17:44:55.893060 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/26d48aee-ca82-4694-aa9c-388f5f0582dd-ovsdbserver-sb\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:55.912178 master-0 kubenswrapper[4652]: I0216 17:44:55.912121 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw7tv\" (UniqueName: \"kubernetes.io/projected/26d48aee-ca82-4694-aa9c-388f5f0582dd-kube-api-access-nw7tv\") pod \"dnsmasq-dns-85f8bc5cb7-rfh9j\" (UID: \"26d48aee-ca82-4694-aa9c-388f5f0582dd\") " pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:56.040057 master-0 kubenswrapper[4652]: I0216 17:44:56.040001 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:56.565473 master-0 kubenswrapper[4652]: W0216 17:44:56.565411 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26d48aee_ca82_4694_aa9c_388f5f0582dd.slice/crio-7620cf04ffd5f1e396fe2c82caeaa1cc991e6a06cee751f1ae0489a7957f021f WatchSource:0}: Error finding container 7620cf04ffd5f1e396fe2c82caeaa1cc991e6a06cee751f1ae0489a7957f021f: Status 404 returned error can't find the container with id 7620cf04ffd5f1e396fe2c82caeaa1cc991e6a06cee751f1ae0489a7957f021f Feb 16 17:44:56.566307 master-0 kubenswrapper[4652]: I0216 17:44:56.566287 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f8bc5cb7-rfh9j"] Feb 16 17:44:57.355117 master-0 kubenswrapper[4652]: I0216 17:44:57.355040 4652 generic.go:334] "Generic (PLEG): container finished" podID="26d48aee-ca82-4694-aa9c-388f5f0582dd" containerID="4b46d7b704cbb0e1b17a30c89c83d85a6550a767f65a89e86ac235e3c8d4f9f2" exitCode=0 Feb 16 17:44:57.355764 master-0 kubenswrapper[4652]: I0216 17:44:57.355150 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" event={"ID":"26d48aee-ca82-4694-aa9c-388f5f0582dd","Type":"ContainerDied","Data":"4b46d7b704cbb0e1b17a30c89c83d85a6550a767f65a89e86ac235e3c8d4f9f2"} Feb 16 17:44:57.355764 master-0 kubenswrapper[4652]: I0216 17:44:57.355213 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" event={"ID":"26d48aee-ca82-4694-aa9c-388f5f0582dd","Type":"ContainerStarted","Data":"7620cf04ffd5f1e396fe2c82caeaa1cc991e6a06cee751f1ae0489a7957f021f"} Feb 16 17:44:57.657763 master-0 kubenswrapper[4652]: I0216 17:44:57.657681 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:57.676610 master-0 kubenswrapper[4652]: I0216 17:44:57.676551 4652 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:58.375084 master-0 kubenswrapper[4652]: I0216 17:44:58.374951 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" event={"ID":"26d48aee-ca82-4694-aa9c-388f5f0582dd","Type":"ContainerStarted","Data":"d2a34f56c89d249f25a6c9eedce992fbe8a4d044deb1c7b4446036fd0ae3648b"} Feb 16 17:44:58.378515 master-0 kubenswrapper[4652]: I0216 17:44:58.376359 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" Feb 16 17:44:58.397491 master-0 kubenswrapper[4652]: I0216 17:44:58.397425 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 17:44:58.407466 master-0 kubenswrapper[4652]: I0216 17:44:58.407408 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j" podStartSLOduration=3.407389694 podStartE2EDuration="3.407389694s" podCreationTimestamp="2026-02-16 17:44:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:44:58.404858396 +0000 UTC m=+1255.793026932" watchObservedRunningTime="2026-02-16 17:44:58.407389694 +0000 UTC m=+1255.795558220" Feb 16 17:44:58.485923 master-0 kubenswrapper[4652]: I0216 17:44:58.485872 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:44:58.487793 master-0 kubenswrapper[4652]: I0216 17:44:58.486540 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-log" containerID="cri-o://85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870" gracePeriod=30 Feb 16 17:44:58.489754 master-0 kubenswrapper[4652]: I0216 17:44:58.488127 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-api" containerID="cri-o://3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e" gracePeriod=30 Feb 16 17:44:58.703758 master-0 kubenswrapper[4652]: I0216 17:44:58.703575 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-9l2b8"] Feb 16 17:44:58.705316 master-0 kubenswrapper[4652]: I0216 17:44:58.705272 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.708379 master-0 kubenswrapper[4652]: I0216 17:44:58.708321 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 17:44:58.708966 master-0 kubenswrapper[4652]: I0216 17:44:58.708926 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 17:44:58.776064 master-0 kubenswrapper[4652]: I0216 17:44:58.776012 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-host-discover-jgglr"] Feb 16 17:44:58.778461 master-0 kubenswrapper[4652]: I0216 17:44:58.778428 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.779306 master-0 kubenswrapper[4652]: I0216 17:44:58.779279 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9l2b8"] Feb 16 17:44:58.802906 master-0 kubenswrapper[4652]: I0216 17:44:58.802849 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-jgglr"] Feb 16 17:44:58.884220 master-0 kubenswrapper[4652]: I0216 17:44:58.884119 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-config-data\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.884220 master-0 kubenswrapper[4652]: I0216 17:44:58.884199 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-scripts\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.884509 master-0 kubenswrapper[4652]: I0216 17:44:58.884369 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjtgf\" (UniqueName: \"kubernetes.io/projected/82015a7e-8945-4748-bb16-db5b284117a6-kube-api-access-mjtgf\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.885596 master-0 kubenswrapper[4652]: I0216 17:44:58.885529 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.885657 master-0 kubenswrapper[4652]: I0216 17:44:58.885602 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cbrq\" (UniqueName: \"kubernetes.io/projected/63d158ef-2e1c-4eff-be36-c0ab68bedebc-kube-api-access-7cbrq\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.885657 master-0 kubenswrapper[4652]: I0216 17:44:58.885642 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-config-data\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.885728 master-0 kubenswrapper[4652]: I0216 17:44:58.885667 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-scripts\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.885728 master-0 kubenswrapper[4652]: I0216 17:44:58.885699 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-combined-ca-bundle\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.988334 master-0 kubenswrapper[4652]: I0216 17:44:58.988272 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-config-data\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.988334 master-0 kubenswrapper[4652]: I0216 17:44:58.988322 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-scripts\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.988646 master-0 kubenswrapper[4652]: I0216 17:44:58.988375 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjtgf\" (UniqueName: \"kubernetes.io/projected/82015a7e-8945-4748-bb16-db5b284117a6-kube-api-access-mjtgf\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.988646 master-0 kubenswrapper[4652]: I0216 17:44:58.988494 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.988755 master-0 kubenswrapper[4652]: I0216 17:44:58.988653 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cbrq\" (UniqueName: \"kubernetes.io/projected/63d158ef-2e1c-4eff-be36-c0ab68bedebc-kube-api-access-7cbrq\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.988755 master-0 kubenswrapper[4652]: I0216 17:44:58.988700 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-config-data\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.988755 master-0 kubenswrapper[4652]: I0216 17:44:58.988727 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-scripts\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.988898 master-0 kubenswrapper[4652]: I0216 17:44:58.988762 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-combined-ca-bundle\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.993855 master-0 kubenswrapper[4652]: I0216 17:44:58.993806 4652 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-config-data\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.994049 master-0 kubenswrapper[4652]: I0216 17:44:58.993893 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-combined-ca-bundle\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:58.994049 master-0 kubenswrapper[4652]: I0216 17:44:58.993897 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:58.998722 master-0 kubenswrapper[4652]: I0216 17:44:58.996971 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-config-data\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:59.001522 master-0 kubenswrapper[4652]: I0216 17:44:59.001494 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-scripts\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:59.006550 master-0 kubenswrapper[4652]: I0216 17:44:59.006474 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-scripts\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:59.006731 master-0 kubenswrapper[4652]: I0216 17:44:59.006477 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cbrq\" (UniqueName: \"kubernetes.io/projected/63d158ef-2e1c-4eff-be36-c0ab68bedebc-kube-api-access-7cbrq\") pod \"nova-cell1-host-discover-jgglr\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") " pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:59.006731 master-0 kubenswrapper[4652]: I0216 17:44:59.006567 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjtgf\" (UniqueName: \"kubernetes.io/projected/82015a7e-8945-4748-bb16-db5b284117a6-kube-api-access-mjtgf\") pod \"nova-cell1-cell-mapping-9l2b8\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") " pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:59.032316 master-0 kubenswrapper[4652]: I0216 17:44:59.032223 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9l2b8" Feb 16 17:44:59.101400 master-0 kubenswrapper[4652]: I0216 17:44:59.101331 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-host-discover-jgglr" Feb 16 17:44:59.401488 master-0 kubenswrapper[4652]: I0216 17:44:59.401445 4652 generic.go:334] "Generic (PLEG): container finished" podID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerID="85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870" exitCode=143 Feb 16 17:44:59.402099 master-0 kubenswrapper[4652]: I0216 17:44:59.401515 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8915bb9d-caab-44fd-b00a-2426c5e1fad4","Type":"ContainerDied","Data":"85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870"} Feb 16 17:44:59.537437 master-0 kubenswrapper[4652]: I0216 17:44:59.537323 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9l2b8"] Feb 16 17:44:59.668393 master-0 kubenswrapper[4652]: W0216 17:44:59.668345 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63d158ef_2e1c_4eff_be36_c0ab68bedebc.slice/crio-311fdf72ac4615a5d1e8a75bb7881aeefc9f40aaa853be46d864e55bb3cb1213 WatchSource:0}: Error finding container 311fdf72ac4615a5d1e8a75bb7881aeefc9f40aaa853be46d864e55bb3cb1213: Status 404 returned error can't find the container with id 311fdf72ac4615a5d1e8a75bb7881aeefc9f40aaa853be46d864e55bb3cb1213 Feb 16 17:44:59.675495 master-0 kubenswrapper[4652]: I0216 17:44:59.675414 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-host-discover-jgglr"] Feb 16 17:45:00.235082 master-0 kubenswrapper[4652]: I0216 17:45:00.235022 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4"] Feb 16 17:45:00.237006 master-0 kubenswrapper[4652]: I0216 17:45:00.236971 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.238669 master-0 kubenswrapper[4652]: I0216 17:45:00.238624 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 17:45:00.239969 master-0 kubenswrapper[4652]: I0216 17:45:00.239927 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-4vsn8" Feb 16 17:45:00.269651 master-0 kubenswrapper[4652]: I0216 17:45:00.269600 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4"] Feb 16 17:45:00.330634 master-0 kubenswrapper[4652]: I0216 17:45:00.328604 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/299571e8-c349-4dc4-a1f6-89adeea76ed5-config-volume\") pod \"collect-profiles-29521065-mzpb4\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.330634 master-0 kubenswrapper[4652]: I0216 17:45:00.328794 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfl25\" (UniqueName: \"kubernetes.io/projected/299571e8-c349-4dc4-a1f6-89adeea76ed5-kube-api-access-sfl25\") pod \"collect-profiles-29521065-mzpb4\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.330634 master-0 kubenswrapper[4652]: I0216 17:45:00.328831 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/299571e8-c349-4dc4-a1f6-89adeea76ed5-secret-volume\") pod \"collect-profiles-29521065-mzpb4\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.427065 master-0 kubenswrapper[4652]: I0216 17:45:00.427000 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9l2b8" event={"ID":"82015a7e-8945-4748-bb16-db5b284117a6","Type":"ContainerStarted","Data":"e70c2c4c38426bb55d159ae45a6066ed66d3879926ac2a4ef8b0e71dae74848b"} Feb 16 17:45:00.427065 master-0 kubenswrapper[4652]: I0216 17:45:00.427058 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9l2b8" event={"ID":"82015a7e-8945-4748-bb16-db5b284117a6","Type":"ContainerStarted","Data":"399d6bf60965fa7b9db0ee1e88f22ff3927f7e8ae2d6d31400b308b3d7a72968"} Feb 16 17:45:00.430149 master-0 kubenswrapper[4652]: I0216 17:45:00.430090 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-jgglr" event={"ID":"63d158ef-2e1c-4eff-be36-c0ab68bedebc","Type":"ContainerStarted","Data":"d962f888d0c71075e9b41847da7dd2c5d651a8e212db804a3e12ced2dcb8dbed"} Feb 16 17:45:00.430296 master-0 kubenswrapper[4652]: I0216 17:45:00.430236 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-jgglr" event={"ID":"63d158ef-2e1c-4eff-be36-c0ab68bedebc","Type":"ContainerStarted","Data":"311fdf72ac4615a5d1e8a75bb7881aeefc9f40aaa853be46d864e55bb3cb1213"} Feb 16 17:45:00.430296 master-0 kubenswrapper[4652]: I0216 17:45:00.430108 4652 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-sfl25\" (UniqueName: \"kubernetes.io/projected/299571e8-c349-4dc4-a1f6-89adeea76ed5-kube-api-access-sfl25\") pod \"collect-profiles-29521065-mzpb4\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.431994 master-0 kubenswrapper[4652]: I0216 17:45:00.431960 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/299571e8-c349-4dc4-a1f6-89adeea76ed5-secret-volume\") pod \"collect-profiles-29521065-mzpb4\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.433335 master-0 kubenswrapper[4652]: I0216 17:45:00.432745 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/299571e8-c349-4dc4-a1f6-89adeea76ed5-config-volume\") pod \"collect-profiles-29521065-mzpb4\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.433890 master-0 kubenswrapper[4652]: I0216 17:45:00.433857 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/299571e8-c349-4dc4-a1f6-89adeea76ed5-config-volume\") pod \"collect-profiles-29521065-mzpb4\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.448397 master-0 kubenswrapper[4652]: I0216 17:45:00.448357 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/299571e8-c349-4dc4-a1f6-89adeea76ed5-secret-volume\") pod \"collect-profiles-29521065-mzpb4\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.451299 master-0 kubenswrapper[4652]: I0216 17:45:00.451209 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfl25\" (UniqueName: \"kubernetes.io/projected/299571e8-c349-4dc4-a1f6-89adeea76ed5-kube-api-access-sfl25\") pod \"collect-profiles-29521065-mzpb4\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:00.470159 master-0 kubenswrapper[4652]: I0216 17:45:00.470069 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-9l2b8" podStartSLOduration=2.470047136 podStartE2EDuration="2.470047136s" podCreationTimestamp="2026-02-16 17:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:45:00.456908294 +0000 UTC m=+1257.845076820" watchObservedRunningTime="2026-02-16 17:45:00.470047136 +0000 UTC m=+1257.858215672" Feb 16 17:45:00.498780 master-0 kubenswrapper[4652]: I0216 17:45:00.498600 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-host-discover-jgglr" podStartSLOduration=2.498572461 podStartE2EDuration="2.498572461s" podCreationTimestamp="2026-02-16 17:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:45:00.48513518 +0000 UTC m=+1257.873303696" 
watchObservedRunningTime="2026-02-16 17:45:00.498572461 +0000 UTC m=+1257.886740987" Feb 16 17:45:00.555530 master-0 kubenswrapper[4652]: I0216 17:45:00.555479 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" Feb 16 17:45:01.047439 master-0 kubenswrapper[4652]: I0216 17:45:01.047363 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4"] Feb 16 17:45:01.449485 master-0 kubenswrapper[4652]: I0216 17:45:01.449437 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" event={"ID":"299571e8-c349-4dc4-a1f6-89adeea76ed5","Type":"ContainerStarted","Data":"8aae895245b48535bc8231e2c7ba149bee0de39d6e7272906f3d4656bbaf4f6d"} Feb 16 17:45:01.450002 master-0 kubenswrapper[4652]: I0216 17:45:01.449491 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" event={"ID":"299571e8-c349-4dc4-a1f6-89adeea76ed5","Type":"ContainerStarted","Data":"f7c0ec96180f4927bbb446e65ad6f349cc97163baf64c832d6e89e54a2fee4fc"} Feb 16 17:45:01.481525 master-0 kubenswrapper[4652]: I0216 17:45:01.481305 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" podStartSLOduration=1.481281683 podStartE2EDuration="1.481281683s" podCreationTimestamp="2026-02-16 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:45:01.473023982 +0000 UTC m=+1258.861192498" watchObservedRunningTime="2026-02-16 17:45:01.481281683 +0000 UTC m=+1258.869450199" Feb 16 17:45:02.229839 master-0 kubenswrapper[4652]: I0216 17:45:02.229709 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:45:02.315413 master-0 kubenswrapper[4652]: I0216 17:45:02.310456 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-combined-ca-bundle\") pod \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " Feb 16 17:45:02.315413 master-0 kubenswrapper[4652]: I0216 17:45:02.310548 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p99pc\" (UniqueName: \"kubernetes.io/projected/8915bb9d-caab-44fd-b00a-2426c5e1fad4-kube-api-access-p99pc\") pod \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " Feb 16 17:45:02.315413 master-0 kubenswrapper[4652]: I0216 17:45:02.310702 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-config-data\") pod \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " Feb 16 17:45:02.315413 master-0 kubenswrapper[4652]: I0216 17:45:02.310789 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8915bb9d-caab-44fd-b00a-2426c5e1fad4-logs\") pod \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\" (UID: \"8915bb9d-caab-44fd-b00a-2426c5e1fad4\") " Feb 16 17:45:02.315413 master-0 kubenswrapper[4652]: I0216 17:45:02.311615 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8915bb9d-caab-44fd-b00a-2426c5e1fad4-logs" (OuterVolumeSpecName: "logs") pod "8915bb9d-caab-44fd-b00a-2426c5e1fad4" (UID: "8915bb9d-caab-44fd-b00a-2426c5e1fad4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 17:45:02.343180 master-0 kubenswrapper[4652]: I0216 17:45:02.343100 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8915bb9d-caab-44fd-b00a-2426c5e1fad4-kube-api-access-p99pc" (OuterVolumeSpecName: "kube-api-access-p99pc") pod "8915bb9d-caab-44fd-b00a-2426c5e1fad4" (UID: "8915bb9d-caab-44fd-b00a-2426c5e1fad4"). InnerVolumeSpecName "kube-api-access-p99pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:45:02.351943 master-0 kubenswrapper[4652]: I0216 17:45:02.351820 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-config-data" (OuterVolumeSpecName: "config-data") pod "8915bb9d-caab-44fd-b00a-2426c5e1fad4" (UID: "8915bb9d-caab-44fd-b00a-2426c5e1fad4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:45:02.374828 master-0 kubenswrapper[4652]: I0216 17:45:02.374769 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8915bb9d-caab-44fd-b00a-2426c5e1fad4" (UID: "8915bb9d-caab-44fd-b00a-2426c5e1fad4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:45:02.414641 master-0 kubenswrapper[4652]: I0216 17:45:02.414531 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:02.414641 master-0 kubenswrapper[4652]: I0216 17:45:02.414587 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p99pc\" (UniqueName: \"kubernetes.io/projected/8915bb9d-caab-44fd-b00a-2426c5e1fad4-kube-api-access-p99pc\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:02.414641 master-0 kubenswrapper[4652]: I0216 17:45:02.414599 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8915bb9d-caab-44fd-b00a-2426c5e1fad4-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:02.414641 master-0 kubenswrapper[4652]: I0216 17:45:02.414608 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8915bb9d-caab-44fd-b00a-2426c5e1fad4-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:02.473759 master-0 kubenswrapper[4652]: I0216 17:45:02.472130 4652 generic.go:334] "Generic (PLEG): container finished" podID="299571e8-c349-4dc4-a1f6-89adeea76ed5" containerID="8aae895245b48535bc8231e2c7ba149bee0de39d6e7272906f3d4656bbaf4f6d" exitCode=0 Feb 16 17:45:02.473759 master-0 kubenswrapper[4652]: I0216 17:45:02.472213 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" event={"ID":"299571e8-c349-4dc4-a1f6-89adeea76ed5","Type":"ContainerDied","Data":"8aae895245b48535bc8231e2c7ba149bee0de39d6e7272906f3d4656bbaf4f6d"} Feb 16 17:45:02.477355 master-0 kubenswrapper[4652]: I0216 17:45:02.476169 4652 generic.go:334] "Generic (PLEG): container finished" podID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerID="3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e" exitCode=0 Feb 16 17:45:02.477355 master-0 kubenswrapper[4652]: I0216 17:45:02.476233 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8915bb9d-caab-44fd-b00a-2426c5e1fad4","Type":"ContainerDied","Data":"3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e"} Feb 16 17:45:02.477355 master-0 kubenswrapper[4652]: I0216 17:45:02.476297 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8915bb9d-caab-44fd-b00a-2426c5e1fad4","Type":"ContainerDied","Data":"8dfbf486ce949e8332e54209e849dc1d78d29631d63a17c73c4ed68acee22297"} Feb 16 17:45:02.477355 master-0 kubenswrapper[4652]: I0216 17:45:02.476293 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:45:02.477355 master-0 kubenswrapper[4652]: I0216 17:45:02.476354 4652 scope.go:117] "RemoveContainer" containerID="3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e" Feb 16 17:45:02.514885 master-0 kubenswrapper[4652]: I0216 17:45:02.507834 4652 scope.go:117] "RemoveContainer" containerID="85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870" Feb 16 17:45:02.548557 master-0 kubenswrapper[4652]: I0216 17:45:02.548501 4652 scope.go:117] "RemoveContainer" containerID="3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e" Feb 16 17:45:02.549688 master-0 kubenswrapper[4652]: E0216 17:45:02.549371 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e\": container with ID starting with 3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e not found: ID does not exist" containerID="3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e" Feb 16 17:45:02.549688 master-0 kubenswrapper[4652]: I0216 17:45:02.549418 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e"} err="failed to get container status \"3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e\": rpc error: code = NotFound desc = could not find container \"3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e\": container with ID starting with 3a21a2e6e596bbb26a3dd551d79809f286ba60a3f7c1aaa29173ad97cf09dd7e not found: ID does not exist" Feb 16 17:45:02.549688 master-0 kubenswrapper[4652]: I0216 17:45:02.549450 4652 scope.go:117] "RemoveContainer" containerID="85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870" Feb 16 17:45:02.549688 master-0 kubenswrapper[4652]: E0216 17:45:02.549683 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870\": container with ID starting with 85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870 not found: ID does not exist" containerID="85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870" Feb 16 17:45:02.549867 master-0 kubenswrapper[4652]: I0216 17:45:02.549704 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870"} err="failed to get container status \"85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870\": rpc error: code = NotFound desc = could not find container \"85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870\": container with ID starting with 85d1ae6a7531eb3470be6b71fd149cc0c79cbd89aef8bded6349d2d6558c2870 not found: ID does not exist" Feb 16 17:45:02.572410 master-0 kubenswrapper[4652]: I0216 17:45:02.572326 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:45:02.602235 master-0 kubenswrapper[4652]: I0216 17:45:02.602048 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:45:02.628884 master-0 kubenswrapper[4652]: I0216 17:45:02.628821 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 17:45:02.629449 master-0 kubenswrapper[4652]: E0216 17:45:02.629414 4652 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-api" Feb 16 17:45:02.629449 master-0 kubenswrapper[4652]: I0216 17:45:02.629434 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-api" Feb 16 17:45:02.629449 master-0 kubenswrapper[4652]: E0216 17:45:02.629451 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-log" Feb 16 17:45:02.629631 master-0 kubenswrapper[4652]: I0216 17:45:02.629459 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-log" Feb 16 17:45:02.629737 master-0 kubenswrapper[4652]: I0216 17:45:02.629707 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-log" Feb 16 17:45:02.629737 master-0 kubenswrapper[4652]: I0216 17:45:02.629725 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" containerName="nova-api-api" Feb 16 17:45:02.631766 master-0 kubenswrapper[4652]: I0216 17:45:02.631088 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 17:45:02.643346 master-0 kubenswrapper[4652]: I0216 17:45:02.643178 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 17:45:02.643585 master-0 kubenswrapper[4652]: I0216 17:45:02.643500 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 17:45:02.643585 master-0 kubenswrapper[4652]: I0216 17:45:02.643595 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 17:45:02.650042 master-0 kubenswrapper[4652]: I0216 17:45:02.648466 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 17:45:02.723509 master-0 kubenswrapper[4652]: I0216 17:45:02.723440 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-config-data\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.723728 master-0 kubenswrapper[4652]: I0216 17:45:02.723526 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b27724d-b99b-41de-b116-cc6217074c20-logs\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.723728 master-0 kubenswrapper[4652]: I0216 17:45:02.723677 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.723845 master-0 kubenswrapper[4652]: I0216 17:45:02.723823 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfhft\" (UniqueName: \"kubernetes.io/projected/4b27724d-b99b-41de-b116-cc6217074c20-kube-api-access-lfhft\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 
16 17:45:02.723913 master-0 kubenswrapper[4652]: I0216 17:45:02.723873 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.724087 master-0 kubenswrapper[4652]: I0216 17:45:02.724032 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.766942 master-0 kubenswrapper[4652]: I0216 17:45:02.766894 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8915bb9d-caab-44fd-b00a-2426c5e1fad4" path="/var/lib/kubelet/pods/8915bb9d-caab-44fd-b00a-2426c5e1fad4/volumes" Feb 16 17:45:02.826733 master-0 kubenswrapper[4652]: I0216 17:45:02.826671 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.826970 master-0 kubenswrapper[4652]: I0216 17:45:02.826836 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-config-data\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.826970 master-0 kubenswrapper[4652]: I0216 17:45:02.826873 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b27724d-b99b-41de-b116-cc6217074c20-logs\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.827083 master-0 kubenswrapper[4652]: I0216 17:45:02.826981 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.827083 master-0 kubenswrapper[4652]: I0216 17:45:02.827029 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfhft\" (UniqueName: \"kubernetes.io/projected/4b27724d-b99b-41de-b116-cc6217074c20-kube-api-access-lfhft\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.827379 master-0 kubenswrapper[4652]: I0216 17:45:02.827092 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" Feb 16 17:45:02.828132 master-0 kubenswrapper[4652]: I0216 17:45:02.827771 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b27724d-b99b-41de-b116-cc6217074c20-logs\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0" 
Feb 16 17:45:02.829826 master-0 kubenswrapper[4652]: I0216 17:45:02.829789 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 16 17:45:02.831333 master-0 kubenswrapper[4652]: I0216 17:45:02.830227 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 16 17:45:02.831333 master-0 kubenswrapper[4652]: I0216 17:45:02.831141 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 17:45:02.832029 master-0 kubenswrapper[4652]: I0216 17:45:02.831986 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0"
Feb 16 17:45:02.852679 master-0 kubenswrapper[4652]: I0216 17:45:02.847285 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-public-tls-certs\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0"
Feb 16 17:45:02.852679 master-0 kubenswrapper[4652]: I0216 17:45:02.848114 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-config-data\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0"
Feb 16 17:45:02.852679 master-0 kubenswrapper[4652]: I0216 17:45:02.848122 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfhft\" (UniqueName: \"kubernetes.io/projected/4b27724d-b99b-41de-b116-cc6217074c20-kube-api-access-lfhft\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0"
Feb 16 17:45:02.852679 master-0 kubenswrapper[4652]: I0216 17:45:02.849117 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") " pod="openstack/nova-api-0"
Feb 16 17:45:02.961283 master-0 kubenswrapper[4652]: I0216 17:45:02.961202 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 17:45:03.437033 master-0 kubenswrapper[4652]: I0216 17:45:03.436965 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:45:03.493857 master-0 kubenswrapper[4652]: I0216 17:45:03.493801 4652 generic.go:334] "Generic (PLEG): container finished" podID="63d158ef-2e1c-4eff-be36-c0ab68bedebc" containerID="d962f888d0c71075e9b41847da7dd2c5d651a8e212db804a3e12ced2dcb8dbed" exitCode=0
Feb 16 17:45:03.493857 master-0 kubenswrapper[4652]: I0216 17:45:03.493855 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-jgglr" event={"ID":"63d158ef-2e1c-4eff-be36-c0ab68bedebc","Type":"ContainerDied","Data":"d962f888d0c71075e9b41847da7dd2c5d651a8e212db804a3e12ced2dcb8dbed"}
Feb 16 17:45:03.501175 master-0 kubenswrapper[4652]: I0216 17:45:03.499894 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b27724d-b99b-41de-b116-cc6217074c20","Type":"ContainerStarted","Data":"ad87da93377ceab3e97717e6c36e3b58e02a9f583402ee553f10ce30b752e63a"}
Feb 16 17:45:04.014611 master-0 kubenswrapper[4652]: I0216 17:45:04.014538 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4"
Feb 16 17:45:04.168961 master-0 kubenswrapper[4652]: I0216 17:45:04.167758 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfl25\" (UniqueName: \"kubernetes.io/projected/299571e8-c349-4dc4-a1f6-89adeea76ed5-kube-api-access-sfl25\") pod \"299571e8-c349-4dc4-a1f6-89adeea76ed5\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") "
Feb 16 17:45:04.168961 master-0 kubenswrapper[4652]: I0216 17:45:04.167906 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/299571e8-c349-4dc4-a1f6-89adeea76ed5-config-volume\") pod \"299571e8-c349-4dc4-a1f6-89adeea76ed5\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") "
Feb 16 17:45:04.168961 master-0 kubenswrapper[4652]: I0216 17:45:04.168123 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/299571e8-c349-4dc4-a1f6-89adeea76ed5-secret-volume\") pod \"299571e8-c349-4dc4-a1f6-89adeea76ed5\" (UID: \"299571e8-c349-4dc4-a1f6-89adeea76ed5\") "
Feb 16 17:45:04.169716 master-0 kubenswrapper[4652]: I0216 17:45:04.169410 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/299571e8-c349-4dc4-a1f6-89adeea76ed5-config-volume" (OuterVolumeSpecName: "config-volume") pod "299571e8-c349-4dc4-a1f6-89adeea76ed5" (UID: "299571e8-c349-4dc4-a1f6-89adeea76ed5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:45:04.172132 master-0 kubenswrapper[4652]: I0216 17:45:04.172101 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/299571e8-c349-4dc4-a1f6-89adeea76ed5-kube-api-access-sfl25" (OuterVolumeSpecName: "kube-api-access-sfl25") pod "299571e8-c349-4dc4-a1f6-89adeea76ed5" (UID: "299571e8-c349-4dc4-a1f6-89adeea76ed5"). InnerVolumeSpecName "kube-api-access-sfl25". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:45:04.172317 master-0 kubenswrapper[4652]: I0216 17:45:04.172285 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/299571e8-c349-4dc4-a1f6-89adeea76ed5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "299571e8-c349-4dc4-a1f6-89adeea76ed5" (UID: "299571e8-c349-4dc4-a1f6-89adeea76ed5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:04.271042 master-0 kubenswrapper[4652]: I0216 17:45:04.270997 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfl25\" (UniqueName: \"kubernetes.io/projected/299571e8-c349-4dc4-a1f6-89adeea76ed5-kube-api-access-sfl25\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:04.271574 master-0 kubenswrapper[4652]: I0216 17:45:04.271541 4652 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/299571e8-c349-4dc4-a1f6-89adeea76ed5-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:04.271574 master-0 kubenswrapper[4652]: I0216 17:45:04.271557 4652 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/299571e8-c349-4dc4-a1f6-89adeea76ed5-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:04.518089 master-0 kubenswrapper[4652]: I0216 17:45:04.517559 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4" event={"ID":"299571e8-c349-4dc4-a1f6-89adeea76ed5","Type":"ContainerDied","Data":"f7c0ec96180f4927bbb446e65ad6f349cc97163baf64c832d6e89e54a2fee4fc"}
Feb 16 17:45:04.518089 master-0 kubenswrapper[4652]: I0216 17:45:04.517615 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7c0ec96180f4927bbb446e65ad6f349cc97163baf64c832d6e89e54a2fee4fc"
Feb 16 17:45:04.518819 master-0 kubenswrapper[4652]: I0216 17:45:04.518141 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521065-mzpb4"
Feb 16 17:45:04.522673 master-0 kubenswrapper[4652]: I0216 17:45:04.522631 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b27724d-b99b-41de-b116-cc6217074c20","Type":"ContainerStarted","Data":"a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a"}
Feb 16 17:45:04.522805 master-0 kubenswrapper[4652]: I0216 17:45:04.522684 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b27724d-b99b-41de-b116-cc6217074c20","Type":"ContainerStarted","Data":"665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603"}
Feb 16 17:45:04.581714 master-0 kubenswrapper[4652]: I0216 17:45:04.581582 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.581554739 podStartE2EDuration="2.581554739s" podCreationTimestamp="2026-02-16 17:45:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:45:04.563174746 +0000 UTC m=+1261.951343262" watchObservedRunningTime="2026-02-16 17:45:04.581554739 +0000 UTC m=+1261.969723255"
Feb 16 17:45:04.606959 master-0 kubenswrapper[4652]: I0216 17:45:04.606795 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"]
Feb 16 17:45:04.624083 master-0 kubenswrapper[4652]: I0216 17:45:04.620883 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521020-mtpvf"]
Feb 16 17:45:04.773589 master-0 kubenswrapper[4652]: I0216 17:45:04.773445 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869cd4c8-bf00-427c-84f0-5c39517f2d27" path="/var/lib/kubelet/pods/869cd4c8-bf00-427c-84f0-5c39517f2d27/volumes"
Feb 16 17:45:05.125550 master-0 kubenswrapper[4652]: I0216 17:45:05.125488 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-jgglr"
Feb 16 17:45:05.298904 master-0 kubenswrapper[4652]: I0216 17:45:05.298783 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-scripts\") pod \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") "
Feb 16 17:45:05.299149 master-0 kubenswrapper[4652]: I0216 17:45:05.298963 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-combined-ca-bundle\") pod \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") "
Feb 16 17:45:05.299149 master-0 kubenswrapper[4652]: I0216 17:45:05.299031 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-config-data\") pod \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") "
Feb 16 17:45:05.299149 master-0 kubenswrapper[4652]: I0216 17:45:05.299093 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cbrq\" (UniqueName: \"kubernetes.io/projected/63d158ef-2e1c-4eff-be36-c0ab68bedebc-kube-api-access-7cbrq\") pod \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\" (UID: \"63d158ef-2e1c-4eff-be36-c0ab68bedebc\") "
Feb 16 17:45:05.302951 master-0 kubenswrapper[4652]: I0216 17:45:05.302797 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-scripts" (OuterVolumeSpecName: "scripts") pod "63d158ef-2e1c-4eff-be36-c0ab68bedebc" (UID: "63d158ef-2e1c-4eff-be36-c0ab68bedebc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:05.303052 master-0 kubenswrapper[4652]: I0216 17:45:05.302966 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d158ef-2e1c-4eff-be36-c0ab68bedebc-kube-api-access-7cbrq" (OuterVolumeSpecName: "kube-api-access-7cbrq") pod "63d158ef-2e1c-4eff-be36-c0ab68bedebc" (UID: "63d158ef-2e1c-4eff-be36-c0ab68bedebc"). InnerVolumeSpecName "kube-api-access-7cbrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:45:05.329459 master-0 kubenswrapper[4652]: I0216 17:45:05.329414 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63d158ef-2e1c-4eff-be36-c0ab68bedebc" (UID: "63d158ef-2e1c-4eff-be36-c0ab68bedebc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:05.330586 master-0 kubenswrapper[4652]: I0216 17:45:05.330548 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-config-data" (OuterVolumeSpecName: "config-data") pod "63d158ef-2e1c-4eff-be36-c0ab68bedebc" (UID: "63d158ef-2e1c-4eff-be36-c0ab68bedebc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:05.403451 master-0 kubenswrapper[4652]: I0216 17:45:05.403406 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:05.403732 master-0 kubenswrapper[4652]: I0216 17:45:05.403711 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:05.403831 master-0 kubenswrapper[4652]: I0216 17:45:05.403819 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d158ef-2e1c-4eff-be36-c0ab68bedebc-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:05.403942 master-0 kubenswrapper[4652]: I0216 17:45:05.403926 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cbrq\" (UniqueName: \"kubernetes.io/projected/63d158ef-2e1c-4eff-be36-c0ab68bedebc-kube-api-access-7cbrq\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:05.536183 master-0 kubenswrapper[4652]: I0216 17:45:05.536133 4652 generic.go:334] "Generic (PLEG): container finished" podID="82015a7e-8945-4748-bb16-db5b284117a6" containerID="e70c2c4c38426bb55d159ae45a6066ed66d3879926ac2a4ef8b0e71dae74848b" exitCode=0
Feb 16 17:45:05.536788 master-0 kubenswrapper[4652]: I0216 17:45:05.536290 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9l2b8" event={"ID":"82015a7e-8945-4748-bb16-db5b284117a6","Type":"ContainerDied","Data":"e70c2c4c38426bb55d159ae45a6066ed66d3879926ac2a4ef8b0e71dae74848b"}
Feb 16 17:45:05.539447 master-0 kubenswrapper[4652]: I0216 17:45:05.539391 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-host-discover-jgglr" event={"ID":"63d158ef-2e1c-4eff-be36-c0ab68bedebc","Type":"ContainerDied","Data":"311fdf72ac4615a5d1e8a75bb7881aeefc9f40aaa853be46d864e55bb3cb1213"}
Feb 16 17:45:05.539590 master-0 kubenswrapper[4652]: I0216 17:45:05.539453 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-host-discover-jgglr"
Feb 16 17:45:05.539590 master-0 kubenswrapper[4652]: I0216 17:45:05.539459 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="311fdf72ac4615a5d1e8a75bb7881aeefc9f40aaa853be46d864e55bb3cb1213"
Feb 16 17:45:06.041451 master-0 kubenswrapper[4652]: I0216 17:45:06.041381 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f8bc5cb7-rfh9j"
Feb 16 17:45:06.167278 master-0 kubenswrapper[4652]: I0216 17:45:06.159269 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87c86584f-whh65"]
Feb 16 17:45:06.167278 master-0 kubenswrapper[4652]: I0216 17:45:06.159654 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-87c86584f-whh65" podUID="db988590-db79-495b-b490-541f70c6f907" containerName="dnsmasq-dns" containerID="cri-o://ea75e4bb1bb1c01a73060dda867f0d375ce0007c88800031bc92ef19bfadec27" gracePeriod=10
Feb 16 17:45:06.241509 master-0 kubenswrapper[4652]: E0216 17:45:06.241408 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e83cc2b9d59101af52883eb85a000c7d49ff8d2c7af0b49ff2cc02ae1e2e3af\": container with ID starting with 3e83cc2b9d59101af52883eb85a000c7d49ff8d2c7af0b49ff2cc02ae1e2e3af not found: ID does not exist" containerID="3e83cc2b9d59101af52883eb85a000c7d49ff8d2c7af0b49ff2cc02ae1e2e3af"
Feb 16 17:45:06.241750 master-0 kubenswrapper[4652]: I0216 17:45:06.241551 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="3e83cc2b9d59101af52883eb85a000c7d49ff8d2c7af0b49ff2cc02ae1e2e3af" err="rpc error: code = NotFound desc = could not find container \"3e83cc2b9d59101af52883eb85a000c7d49ff8d2c7af0b49ff2cc02ae1e2e3af\": container with ID starting with 3e83cc2b9d59101af52883eb85a000c7d49ff8d2c7af0b49ff2cc02ae1e2e3af not found: ID does not exist"
Feb 16 17:45:06.570958 master-0 kubenswrapper[4652]: I0216 17:45:06.570840 4652 generic.go:334] "Generic (PLEG): container finished" podID="db988590-db79-495b-b490-541f70c6f907" containerID="ea75e4bb1bb1c01a73060dda867f0d375ce0007c88800031bc92ef19bfadec27" exitCode=0
Feb 16 17:45:06.571566 master-0 kubenswrapper[4652]: I0216 17:45:06.571097 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87c86584f-whh65" event={"ID":"db988590-db79-495b-b490-541f70c6f907","Type":"ContainerDied","Data":"ea75e4bb1bb1c01a73060dda867f0d375ce0007c88800031bc92ef19bfadec27"}
Feb 16 17:45:06.846600 master-0 kubenswrapper[4652]: I0216 17:45:06.846447 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87c86584f-whh65"
Feb 16 17:45:06.966559 master-0 kubenswrapper[4652]: I0216 17:45:06.966483 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-sb\") pod \"db988590-db79-495b-b490-541f70c6f907\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") "
Feb 16 17:45:06.966559 master-0 kubenswrapper[4652]: I0216 17:45:06.966534 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-nb\") pod \"db988590-db79-495b-b490-541f70c6f907\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") "
Feb 16 17:45:06.966874 master-0 kubenswrapper[4652]: I0216 17:45:06.966657 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccjt6\" (UniqueName: \"kubernetes.io/projected/db988590-db79-495b-b490-541f70c6f907-kube-api-access-ccjt6\") pod \"db988590-db79-495b-b490-541f70c6f907\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") "
Feb 16 17:45:06.966874 master-0 kubenswrapper[4652]: I0216 17:45:06.966752 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-config\") pod \"db988590-db79-495b-b490-541f70c6f907\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") "
Feb 16 17:45:06.966874 master-0 kubenswrapper[4652]: I0216 17:45:06.966797 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-swift-storage-0\") pod \"db988590-db79-495b-b490-541f70c6f907\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") "
Feb 16 17:45:06.966874 master-0 kubenswrapper[4652]: I0216 17:45:06.966817 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-svc\") pod \"db988590-db79-495b-b490-541f70c6f907\" (UID: \"db988590-db79-495b-b490-541f70c6f907\") "
Feb 16 17:45:06.971126 master-0 kubenswrapper[4652]: I0216 17:45:06.971067 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db988590-db79-495b-b490-541f70c6f907-kube-api-access-ccjt6" (OuterVolumeSpecName: "kube-api-access-ccjt6") pod "db988590-db79-495b-b490-541f70c6f907" (UID: "db988590-db79-495b-b490-541f70c6f907"). InnerVolumeSpecName "kube-api-access-ccjt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:45:07.025417 master-0 kubenswrapper[4652]: I0216 17:45:07.025200 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "db988590-db79-495b-b490-541f70c6f907" (UID: "db988590-db79-495b-b490-541f70c6f907"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:45:07.027496 master-0 kubenswrapper[4652]: I0216 17:45:07.027440 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "db988590-db79-495b-b490-541f70c6f907" (UID: "db988590-db79-495b-b490-541f70c6f907"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:45:07.027582 master-0 kubenswrapper[4652]: I0216 17:45:07.027507 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-config" (OuterVolumeSpecName: "config") pod "db988590-db79-495b-b490-541f70c6f907" (UID: "db988590-db79-495b-b490-541f70c6f907"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:45:07.036100 master-0 kubenswrapper[4652]: I0216 17:45:07.036043 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "db988590-db79-495b-b490-541f70c6f907" (UID: "db988590-db79-495b-b490-541f70c6f907"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:45:07.039656 master-0 kubenswrapper[4652]: I0216 17:45:07.039409 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "db988590-db79-495b-b490-541f70c6f907" (UID: "db988590-db79-495b-b490-541f70c6f907"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 17:45:07.071901 master-0 kubenswrapper[4652]: I0216 17:45:07.071856 4652 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-config\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.071901 master-0 kubenswrapper[4652]: I0216 17:45:07.071897 4652 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-swift-storage-0\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.072043 master-0 kubenswrapper[4652]: I0216 17:45:07.071907 4652 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-dns-svc\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.072043 master-0 kubenswrapper[4652]: I0216 17:45:07.071917 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-sb\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.072043 master-0 kubenswrapper[4652]: I0216 17:45:07.071926 4652 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/db988590-db79-495b-b490-541f70c6f907-ovsdbserver-nb\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.072043 master-0 kubenswrapper[4652]: I0216 17:45:07.071938 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccjt6\" (UniqueName: \"kubernetes.io/projected/db988590-db79-495b-b490-541f70c6f907-kube-api-access-ccjt6\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.099845 master-0 kubenswrapper[4652]: I0216 17:45:07.099686 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9l2b8"
Feb 16 17:45:07.173836 master-0 kubenswrapper[4652]: I0216 17:45:07.173774 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-config-data\") pod \"82015a7e-8945-4748-bb16-db5b284117a6\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") "
Feb 16 17:45:07.174071 master-0 kubenswrapper[4652]: I0216 17:45:07.173859 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-combined-ca-bundle\") pod \"82015a7e-8945-4748-bb16-db5b284117a6\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") "
Feb 16 17:45:07.174071 master-0 kubenswrapper[4652]: I0216 17:45:07.173922 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjtgf\" (UniqueName: \"kubernetes.io/projected/82015a7e-8945-4748-bb16-db5b284117a6-kube-api-access-mjtgf\") pod \"82015a7e-8945-4748-bb16-db5b284117a6\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") "
Feb 16 17:45:07.174071 master-0 kubenswrapper[4652]: I0216 17:45:07.173973 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-scripts\") pod \"82015a7e-8945-4748-bb16-db5b284117a6\" (UID: \"82015a7e-8945-4748-bb16-db5b284117a6\") "
Feb 16 17:45:07.177865 master-0 kubenswrapper[4652]: I0216 17:45:07.177769 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-scripts" (OuterVolumeSpecName: "scripts") pod "82015a7e-8945-4748-bb16-db5b284117a6" (UID: "82015a7e-8945-4748-bb16-db5b284117a6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:07.178742 master-0 kubenswrapper[4652]: I0216 17:45:07.178676 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82015a7e-8945-4748-bb16-db5b284117a6-kube-api-access-mjtgf" (OuterVolumeSpecName: "kube-api-access-mjtgf") pod "82015a7e-8945-4748-bb16-db5b284117a6" (UID: "82015a7e-8945-4748-bb16-db5b284117a6"). InnerVolumeSpecName "kube-api-access-mjtgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:45:07.208513 master-0 kubenswrapper[4652]: I0216 17:45:07.208455 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82015a7e-8945-4748-bb16-db5b284117a6" (UID: "82015a7e-8945-4748-bb16-db5b284117a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:07.209759 master-0 kubenswrapper[4652]: I0216 17:45:07.209679 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-config-data" (OuterVolumeSpecName: "config-data") pod "82015a7e-8945-4748-bb16-db5b284117a6" (UID: "82015a7e-8945-4748-bb16-db5b284117a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:07.278062 master-0 kubenswrapper[4652]: I0216 17:45:07.277981 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjtgf\" (UniqueName: \"kubernetes.io/projected/82015a7e-8945-4748-bb16-db5b284117a6-kube-api-access-mjtgf\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.278062 master-0 kubenswrapper[4652]: I0216 17:45:07.278041 4652 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-scripts\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.278062 master-0 kubenswrapper[4652]: I0216 17:45:07.278058 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.278062 master-0 kubenswrapper[4652]: I0216 17:45:07.278068 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82015a7e-8945-4748-bb16-db5b284117a6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:07.582658 master-0 kubenswrapper[4652]: I0216 17:45:07.582600 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9l2b8"
Feb 16 17:45:07.583266 master-0 kubenswrapper[4652]: I0216 17:45:07.582602 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9l2b8" event={"ID":"82015a7e-8945-4748-bb16-db5b284117a6","Type":"ContainerDied","Data":"399d6bf60965fa7b9db0ee1e88f22ff3927f7e8ae2d6d31400b308b3d7a72968"}
Feb 16 17:45:07.583266 master-0 kubenswrapper[4652]: I0216 17:45:07.582727 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="399d6bf60965fa7b9db0ee1e88f22ff3927f7e8ae2d6d31400b308b3d7a72968"
Feb 16 17:45:07.584550 master-0 kubenswrapper[4652]: I0216 17:45:07.584503 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-87c86584f-whh65" event={"ID":"db988590-db79-495b-b490-541f70c6f907","Type":"ContainerDied","Data":"8fdaf81efffae40e389c254c59a6069edb78b6295828ee0fda5591e25181ed9c"}
Feb 16 17:45:07.584625 master-0 kubenswrapper[4652]: I0216 17:45:07.584562 4652 scope.go:117] "RemoveContainer" containerID="ea75e4bb1bb1c01a73060dda867f0d375ce0007c88800031bc92ef19bfadec27"
Feb 16 17:45:07.584668 master-0 kubenswrapper[4652]: I0216 17:45:07.584604 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-87c86584f-whh65"
Feb 16 17:45:07.639201 master-0 kubenswrapper[4652]: I0216 17:45:07.638920 4652 scope.go:117] "RemoveContainer" containerID="71c63a0dfb2b044be659716270535b3478e8372a403d5c0f76450de0b248adeb"
Feb 16 17:45:07.709269 master-0 kubenswrapper[4652]: I0216 17:45:07.705357 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-87c86584f-whh65"]
Feb 16 17:45:07.727958 master-0 kubenswrapper[4652]: I0216 17:45:07.723889 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-87c86584f-whh65"]
Feb 16 17:45:07.865612 master-0 kubenswrapper[4652]: I0216 17:45:07.865347 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:45:07.865936 master-0 kubenswrapper[4652]: I0216 17:45:07.865631 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b27724d-b99b-41de-b116-cc6217074c20" containerName="nova-api-log" containerID="cri-o://665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603" gracePeriod=30
Feb 16 17:45:07.866223 master-0 kubenswrapper[4652]: I0216 17:45:07.866165 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b27724d-b99b-41de-b116-cc6217074c20" containerName="nova-api-api" containerID="cri-o://a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a" gracePeriod=30
Feb 16 17:45:07.881298 master-0 kubenswrapper[4652]: I0216 17:45:07.880977 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 17:45:07.881298 master-0 kubenswrapper[4652]: I0216 17:45:07.881208 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="40ff8fb5-61f0-42a8-8b16-8571f3305785" containerName="nova-scheduler-scheduler" containerID="cri-o://7962b043b53fa81b163601b9f78aad11933825c698b1f294370923aeea1d30d2" gracePeriod=30
Feb 16 17:45:07.912999 master-0 kubenswrapper[4652]: I0216 17:45:07.912918 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 17:45:07.913399 master-0 kubenswrapper[4652]: I0216 17:45:07.913217 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-log" containerID="cri-o://b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd" gracePeriod=30
Feb 16 17:45:07.913487 master-0 kubenswrapper[4652]: I0216 17:45:07.913405 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-metadata" containerID="cri-o://ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea" gracePeriod=30
Feb 16 17:45:08.543329 master-0 kubenswrapper[4652]: I0216 17:45:08.543242 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 17:45:08.606136 master-0 kubenswrapper[4652]: I0216 17:45:08.605665 4652 generic.go:334] "Generic (PLEG): container finished" podID="4b27724d-b99b-41de-b116-cc6217074c20" containerID="a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a" exitCode=0
Feb 16 17:45:08.606136 master-0 kubenswrapper[4652]: I0216 17:45:08.605698 4652 generic.go:334] "Generic (PLEG): container finished" podID="4b27724d-b99b-41de-b116-cc6217074c20" containerID="665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603" exitCode=143
Feb 16 17:45:08.606136 master-0 kubenswrapper[4652]: I0216 17:45:08.605722 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 17:45:08.606136 master-0 kubenswrapper[4652]: I0216 17:45:08.605745 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b27724d-b99b-41de-b116-cc6217074c20","Type":"ContainerDied","Data":"a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a"}
Feb 16 17:45:08.606136 master-0 kubenswrapper[4652]: I0216 17:45:08.605769 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b27724d-b99b-41de-b116-cc6217074c20","Type":"ContainerDied","Data":"665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603"}
Feb 16 17:45:08.606136 master-0 kubenswrapper[4652]: I0216 17:45:08.605778 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b27724d-b99b-41de-b116-cc6217074c20","Type":"ContainerDied","Data":"ad87da93377ceab3e97717e6c36e3b58e02a9f583402ee553f10ce30b752e63a"}
Feb 16 17:45:08.606136 master-0 kubenswrapper[4652]: I0216 17:45:08.605792 4652 scope.go:117] "RemoveContainer" containerID="a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a"
Feb 16 17:45:08.610739 master-0 kubenswrapper[4652]: I0216 17:45:08.610646 4652 generic.go:334] "Generic (PLEG): container finished" podID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerID="b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd" exitCode=143
Feb 16 17:45:08.610793 master-0 kubenswrapper[4652]: I0216 17:45:08.610748 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a171bdac-1967-4caf-836e-4eac64a10fd6","Type":"ContainerDied","Data":"b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd"}
Feb 16 17:45:08.629257 master-0 kubenswrapper[4652]: I0216 17:45:08.629208 4652 scope.go:117] "RemoveContainer" containerID="665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603"
Feb 16 17:45:08.649707 master-0 kubenswrapper[4652]: I0216 17:45:08.649672 4652 scope.go:117] "RemoveContainer" containerID="a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a"
Feb 16 17:45:08.650213 master-0 kubenswrapper[4652]: E0216 17:45:08.650139 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a\": container with ID starting with a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a not found: ID does not exist" containerID="a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a"
Feb 16 17:45:08.650397 master-0 kubenswrapper[4652]: I0216 17:45:08.650196 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a"} err="failed to get container status \"a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a\": rpc error: code = NotFound desc = could not find container \"a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a\": container with ID starting with a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a not found: ID does not exist"
Feb 16 17:45:08.650397 master-0 kubenswrapper[4652]: I0216 17:45:08.650238 4652 scope.go:117] "RemoveContainer" containerID="665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603"
Feb 16 17:45:08.650743 master-0 kubenswrapper[4652]: E0216 17:45:08.650690 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603\": container with ID starting with 665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603 not found: ID does not exist" containerID="665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603"
Feb 16 17:45:08.650862 master-0 kubenswrapper[4652]: I0216 17:45:08.650745 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603"} err="failed to get container status \"665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603\": rpc error: code = NotFound desc = could not find container \"665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603\": container with ID starting with 665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603 not found: ID does not exist"
Feb 16 17:45:08.650862 master-0 kubenswrapper[4652]: I0216 17:45:08.650768 4652 scope.go:117] "RemoveContainer" containerID="a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a"
Feb 16 17:45:08.651178 master-0 kubenswrapper[4652]: I0216 17:45:08.651111 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a"} err="failed to get container status \"a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a\": rpc error: code = NotFound desc = could not find container \"a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a\": container with ID starting with a796199c5a00e4255430e9b29e7919725a26c17b889fedd198c07f074103fb8a not found: ID does not exist"
Feb 16 17:45:08.651178 master-0 kubenswrapper[4652]: I0216 17:45:08.651151 4652 scope.go:117] "RemoveContainer" containerID="665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603"
Feb 16 17:45:08.651458 master-0 kubenswrapper[4652]: I0216 17:45:08.651382 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603"} err="failed to get container status \"665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603\": rpc error: code = NotFound desc = could not find container \"665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603\": container with ID starting with 665a69158a82893b63835c0b0e657108b6541c88e57b33c8820c3b7ea0344603 not found: ID does not exist"
Feb 16 17:45:08.710183 master-0 kubenswrapper[4652]: I0216 17:45:08.709731 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-combined-ca-bundle\") pod \"4b27724d-b99b-41de-b116-cc6217074c20\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") "
Feb 16 17:45:08.710183 master-0 kubenswrapper[4652]: I0216 17:45:08.710078 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-config-data\") pod \"4b27724d-b99b-41de-b116-cc6217074c20\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") "
Feb 16 17:45:08.711446 master-0 kubenswrapper[4652]: I0216 17:45:08.710510 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-internal-tls-certs\") pod \"4b27724d-b99b-41de-b116-cc6217074c20\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") "
Feb 16 17:45:08.711446 master-0 kubenswrapper[4652]: I0216 17:45:08.710547 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b27724d-b99b-41de-b116-cc6217074c20-logs\") pod \"4b27724d-b99b-41de-b116-cc6217074c20\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") "
Feb 16 17:45:08.711446 master-0 kubenswrapper[4652]: I0216 17:45:08.710566 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-public-tls-certs\") pod \"4b27724d-b99b-41de-b116-cc6217074c20\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") "
Feb 16 17:45:08.711446 master-0 kubenswrapper[4652]: I0216 17:45:08.710582 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfhft\" (UniqueName: \"kubernetes.io/projected/4b27724d-b99b-41de-b116-cc6217074c20-kube-api-access-lfhft\") pod \"4b27724d-b99b-41de-b116-cc6217074c20\" (UID: \"4b27724d-b99b-41de-b116-cc6217074c20\") "
Feb 16 17:45:08.711446 master-0 kubenswrapper[4652]: I0216 17:45:08.711341 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b27724d-b99b-41de-b116-cc6217074c20-logs" (OuterVolumeSpecName: "logs") pod "4b27724d-b99b-41de-b116-cc6217074c20" (UID: "4b27724d-b99b-41de-b116-cc6217074c20"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:45:08.714374 master-0 kubenswrapper[4652]: I0216 17:45:08.714316 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b27724d-b99b-41de-b116-cc6217074c20-kube-api-access-lfhft" (OuterVolumeSpecName: "kube-api-access-lfhft") pod "4b27724d-b99b-41de-b116-cc6217074c20" (UID: "4b27724d-b99b-41de-b116-cc6217074c20"). InnerVolumeSpecName "kube-api-access-lfhft". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:45:08.740496 master-0 kubenswrapper[4652]: I0216 17:45:08.740439 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b27724d-b99b-41de-b116-cc6217074c20" (UID: "4b27724d-b99b-41de-b116-cc6217074c20"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:08.744808 master-0 kubenswrapper[4652]: I0216 17:45:08.744776 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-config-data" (OuterVolumeSpecName: "config-data") pod "4b27724d-b99b-41de-b116-cc6217074c20" (UID: "4b27724d-b99b-41de-b116-cc6217074c20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:08.759765 master-0 kubenswrapper[4652]: I0216 17:45:08.759702 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db988590-db79-495b-b490-541f70c6f907" path="/var/lib/kubelet/pods/db988590-db79-495b-b490-541f70c6f907/volumes"
Feb 16 17:45:08.766756 master-0 kubenswrapper[4652]: I0216 17:45:08.766711 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4b27724d-b99b-41de-b116-cc6217074c20" (UID: "4b27724d-b99b-41de-b116-cc6217074c20"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:08.771010 master-0 kubenswrapper[4652]: I0216 17:45:08.770953 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b27724d-b99b-41de-b116-cc6217074c20" (UID: "4b27724d-b99b-41de-b116-cc6217074c20"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 17:45:08.814161 master-0 kubenswrapper[4652]: I0216 17:45:08.814083 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:08.814161 master-0 kubenswrapper[4652]: I0216 17:45:08.814151 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:08.814161 master-0 kubenswrapper[4652]: I0216 17:45:08.814165 4652 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-internal-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:08.814519 master-0 kubenswrapper[4652]: I0216 17:45:08.814176 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b27724d-b99b-41de-b116-cc6217074c20-logs\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:08.814519 master-0 kubenswrapper[4652]: I0216 17:45:08.814189 4652 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b27724d-b99b-41de-b116-cc6217074c20-public-tls-certs\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:08.814519 master-0 kubenswrapper[4652]: I0216 17:45:08.814201 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfhft\" (UniqueName: \"kubernetes.io/projected/4b27724d-b99b-41de-b116-cc6217074c20-kube-api-access-lfhft\") on node \"master-0\" DevicePath \"\""
Feb 16 17:45:08.956385 master-0 kubenswrapper[4652]: I0216 17:45:08.952781 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:45:09.012353 master-0 kubenswrapper[4652]: I0216 17:45:09.011739 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: I0216 17:45:09.031575 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: E0216 17:45:09.033050 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b27724d-b99b-41de-b116-cc6217074c20" containerName="nova-api-log"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: I0216 17:45:09.033081 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b27724d-b99b-41de-b116-cc6217074c20" containerName="nova-api-log"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: E0216 17:45:09.033101 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b27724d-b99b-41de-b116-cc6217074c20" containerName="nova-api-api"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: I0216 17:45:09.033112 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b27724d-b99b-41de-b116-cc6217074c20" containerName="nova-api-api"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: E0216 17:45:09.033129 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db988590-db79-495b-b490-541f70c6f907" containerName="dnsmasq-dns"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: I0216 17:45:09.033138 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="db988590-db79-495b-b490-541f70c6f907" containerName="dnsmasq-dns"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: E0216 17:45:09.033160 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db988590-db79-495b-b490-541f70c6f907" containerName="init"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: I0216 17:45:09.033168 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="db988590-db79-495b-b490-541f70c6f907" containerName="init"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: E0216 17:45:09.033183 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82015a7e-8945-4748-bb16-db5b284117a6" containerName="nova-manage"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: I0216 17:45:09.033191 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="82015a7e-8945-4748-bb16-db5b284117a6" containerName="nova-manage"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: E0216 17:45:09.033217 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d158ef-2e1c-4eff-be36-c0ab68bedebc" containerName="nova-manage"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: I0216 17:45:09.033225 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d158ef-2e1c-4eff-be36-c0ab68bedebc" containerName="nova-manage"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: E0216 17:45:09.033260 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="299571e8-c349-4dc4-a1f6-89adeea76ed5" containerName="collect-profiles"
Feb 16 17:45:09.033327 master-0 kubenswrapper[4652]: I0216 17:45:09.033269 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="299571e8-c349-4dc4-a1f6-89adeea76ed5" containerName="collect-profiles"
Feb 16 17:45:09.034355 master-0 kubenswrapper[4652]: I0216 17:45:09.033601 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="db988590-db79-495b-b490-541f70c6f907" containerName="dnsmasq-dns"
Feb 16 17:45:09.034355 master-0 kubenswrapper[4652]: I0216 17:45:09.033640 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="299571e8-c349-4dc4-a1f6-89adeea76ed5" containerName="collect-profiles"
Feb 16 17:45:09.034355 master-0 kubenswrapper[4652]: I0216 17:45:09.033659 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d158ef-2e1c-4eff-be36-c0ab68bedebc" containerName="nova-manage"
Feb 16 17:45:09.034355 master-0 kubenswrapper[4652]: I0216 17:45:09.033674 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="82015a7e-8945-4748-bb16-db5b284117a6" containerName="nova-manage"
Feb 16 17:45:09.034355 master-0 kubenswrapper[4652]: I0216 17:45:09.033688 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b27724d-b99b-41de-b116-cc6217074c20" containerName="nova-api-log"
Feb 16 17:45:09.034355 master-0 kubenswrapper[4652]: I0216 17:45:09.033704 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b27724d-b99b-41de-b116-cc6217074c20" containerName="nova-api-api"
Feb 16 17:45:09.037035 master-0 kubenswrapper[4652]: I0216 17:45:09.036991 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 17:45:09.038969 master-0 kubenswrapper[4652]: I0216 17:45:09.038938 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Feb 16 17:45:09.039043 master-0 kubenswrapper[4652]: I0216 17:45:09.038981 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Feb 16 17:45:09.043241 master-0 kubenswrapper[4652]: I0216 17:45:09.043162 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Feb 16 17:45:09.049400 master-0 kubenswrapper[4652]: I0216 17:45:09.049338 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:45:09.120664 master-0 kubenswrapper[4652]: I0216 17:45:09.120606 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.120892 master-0 kubenswrapper[4652]: I0216 17:45:09.120672 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18af013d-322b-4c38-9601-9afd8fb70bf1-logs\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.120892 master-0 kubenswrapper[4652]: I0216 17:45:09.120727 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-public-tls-certs\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.120892 master-0 kubenswrapper[4652]: I0216 17:45:09.120791 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxj8\" (UniqueName: \"kubernetes.io/projected/18af013d-322b-4c38-9601-9afd8fb70bf1-kube-api-access-cwxj8\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.120892 master-0 kubenswrapper[4652]: I0216 17:45:09.120857 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.121089 master-0 kubenswrapper[4652]: I0216 17:45:09.121055 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-config-data\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.222759 master-0 kubenswrapper[4652]: I0216 17:45:09.222694 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxj8\" (UniqueName: \"kubernetes.io/projected/18af013d-322b-4c38-9601-9afd8fb70bf1-kube-api-access-cwxj8\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.223034 master-0 kubenswrapper[4652]: I0216 17:45:09.222780 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.223034 master-0 kubenswrapper[4652]: I0216 17:45:09.222900 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-config-data\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.223157 master-0 kubenswrapper[4652]: I0216 17:45:09.223046 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.223632 master-0 kubenswrapper[4652]: I0216 17:45:09.223595 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18af013d-322b-4c38-9601-9afd8fb70bf1-logs\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.223723 master-0 kubenswrapper[4652]: I0216 17:45:09.223640 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-public-tls-certs\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.224078 master-0 kubenswrapper[4652]: I0216 17:45:09.224037 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18af013d-322b-4c38-9601-9afd8fb70bf1-logs\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.226734 master-0 kubenswrapper[4652]: I0216 17:45:09.226694 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.226813 master-0 kubenswrapper[4652]: I0216 17:45:09.226765 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.227432 master-0 kubenswrapper[4652]: I0216 17:45:09.227395 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-config-data\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.228799 master-0 kubenswrapper[4652]: I0216 17:45:09.228761 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/18af013d-322b-4c38-9601-9afd8fb70bf1-public-tls-certs\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.243077 master-0 kubenswrapper[4652]: I0216 17:45:09.243010 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxj8\" (UniqueName: \"kubernetes.io/projected/18af013d-322b-4c38-9601-9afd8fb70bf1-kube-api-access-cwxj8\") pod \"nova-api-0\" (UID: \"18af013d-322b-4c38-9601-9afd8fb70bf1\") " pod="openstack/nova-api-0"
Feb 16 17:45:09.354334 master-0 kubenswrapper[4652]: I0216 17:45:09.354124 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 17:45:09.822629 master-0 kubenswrapper[4652]: I0216 17:45:09.822570 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Feb 16 17:45:09.825978 master-0 kubenswrapper[4652]: W0216 17:45:09.825917 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18af013d_322b_4c38_9601_9afd8fb70bf1.slice/crio-b607aebf87b0b785b9eb8fcb6eaedecfc271a12ff319ac7c06ff43252aad314b WatchSource:0}: Error finding container b607aebf87b0b785b9eb8fcb6eaedecfc271a12ff319ac7c06ff43252aad314b: Status 404 returned error can't find the container with id b607aebf87b0b785b9eb8fcb6eaedecfc271a12ff319ac7c06ff43252aad314b
Feb 16 17:45:10.658350 master-0 kubenswrapper[4652]: I0216 17:45:10.658272 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18af013d-322b-4c38-9601-9afd8fb70bf1","Type":"ContainerStarted","Data":"f8563767096a2c616d92490d0645ea22a0953d438f60db0c2acd1ffb2ff35993"}
Feb 16 17:45:10.658350 master-0 kubenswrapper[4652]: I0216 17:45:10.658324 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18af013d-322b-4c38-9601-9afd8fb70bf1","Type":"ContainerStarted","Data":"7d0c01a75439b2205ea16dee16bbcc6281dc97d097e2b727e3c1be5ada7ade46"}
Feb 16 17:45:10.658350 master-0 kubenswrapper[4652]: I0216 17:45:10.658335 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18af013d-322b-4c38-9601-9afd8fb70bf1","Type":"ContainerStarted","Data":"b607aebf87b0b785b9eb8fcb6eaedecfc271a12ff319ac7c06ff43252aad314b"}
Feb 16 17:45:10.688174 master-0 kubenswrapper[4652]: I0216 17:45:10.688093 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.688070579 podStartE2EDuration="2.688070579s" podCreationTimestamp="2026-02-16 17:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:45:10.68287234 +0000 UTC m=+1268.071040876" watchObservedRunningTime="2026-02-16 17:45:10.688070579 +0000 UTC m=+1268.076239095"
Feb 16 17:45:10.759004 master-0 kubenswrapper[4652]: I0216 17:45:10.758961 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b27724d-b99b-41de-b116-cc6217074c20" path="/var/lib/kubelet/pods/4b27724d-b99b-41de-b116-cc6217074c20/volumes"
Feb 16 17:45:10.763513 master-0 kubenswrapper[4652]: E0216 17:45:10.763425 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7962b043b53fa81b163601b9f78aad11933825c698b1f294370923aeea1d30d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 16 17:45:10.765199 master-0 kubenswrapper[4652]: E0216 17:45:10.765152 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7962b043b53fa81b163601b9f78aad11933825c698b1f294370923aeea1d30d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 16 17:45:10.767231 master-0 kubenswrapper[4652]: E0216 17:45:10.767188 4652 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="7962b043b53fa81b163601b9f78aad11933825c698b1f294370923aeea1d30d2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Feb 16 17:45:10.767430 master-0 kubenswrapper[4652]: E0216 17:45:10.767238 4652 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="40ff8fb5-61f0-42a8-8b16-8571f3305785" containerName="nova-scheduler-scheduler"
Feb 16 17:45:11.594358 master-0 kubenswrapper[4652]: I0216 17:45:11.594320 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 17:45:11.690419 master-0 kubenswrapper[4652]: I0216 17:45:11.687999 4652 generic.go:334] "Generic (PLEG): container finished" podID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerID="ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea" exitCode=0
Feb 16 17:45:11.690419 master-0 kubenswrapper[4652]: I0216 17:45:11.689084 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 17:45:11.690419 master-0 kubenswrapper[4652]: I0216 17:45:11.689884 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a171bdac-1967-4caf-836e-4eac64a10fd6","Type":"ContainerDied","Data":"ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea"}
Feb 16 17:45:11.690419 master-0 kubenswrapper[4652]: I0216 17:45:11.689918 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a171bdac-1967-4caf-836e-4eac64a10fd6","Type":"ContainerDied","Data":"29063fd5fbef9ecadbce514303ac005671282ae1343faf4f83aa1c1dbad2d10d"}
Feb 16 17:45:11.690419 master-0 kubenswrapper[4652]: I0216 17:45:11.689938 4652 scope.go:117] "RemoveContainer" containerID="ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea"
Feb 16 17:45:11.733930 master-0 kubenswrapper[4652]: I0216 17:45:11.732717 4652 scope.go:117] "RemoveContainer" containerID="b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd"
Feb 16 17:45:11.758824 master-0 kubenswrapper[4652]: I0216 17:45:11.758566 4652 scope.go:117] "RemoveContainer" containerID="ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea"
Feb 16 17:45:11.764239 master-0 kubenswrapper[4652]: E0216 17:45:11.764189 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea\": container with ID starting with ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea not found: ID does not exist" containerID="ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea"
Feb 16 17:45:11.764413 master-0 kubenswrapper[4652]: I0216 17:45:11.764244 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea"} err="failed to get container status \"ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea\": rpc error: code = NotFound desc = could not find container \"ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea\": container with ID starting with ab45afcfa669303e07706ee11dbfe501ba09d4099b47d61441e3029e0a808bea not found: ID does not exist"
Feb 16 17:45:11.764413 master-0 kubenswrapper[4652]: I0216 17:45:11.764328 4652 scope.go:117] "RemoveContainer" containerID="b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd"
Feb 16 17:45:11.764784 master-0 kubenswrapper[4652]: E0216 17:45:11.764748 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd\": container with ID starting with b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd not found: ID does not exist" containerID="b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd"
Feb 16 17:45:11.764850 master-0 kubenswrapper[4652]: I0216 17:45:11.764790 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd"} err="failed to get container status \"b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd\": rpc error: code = NotFound desc = could not find container \"b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd\": container with ID starting with b75dc0f0c79469729ab35074e7abfc4db660faf3fca665b9f45168f6f1176dbd not found: ID does not exist"
Feb 16 17:45:11.782533 master-0 kubenswrapper[4652]: I0216 17:45:11.782491 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-combined-ca-bundle\") pod \"a171bdac-1967-4caf-836e-4eac64a10fd6\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") "
Feb 16 17:45:11.782789 master-0 kubenswrapper[4652]: I0216 17:45:11.782575 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-nova-metadata-tls-certs\") pod \"a171bdac-1967-4caf-836e-4eac64a10fd6\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") "
Feb 16 17:45:11.782789 master-0 kubenswrapper[4652]: I0216 17:45:11.782682 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v96g\" (UniqueName: \"kubernetes.io/projected/a171bdac-1967-4caf-836e-4eac64a10fd6-kube-api-access-5v96g\") pod \"a171bdac-1967-4caf-836e-4eac64a10fd6\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") "
Feb 16 17:45:11.782865 master-0 kubenswrapper[4652]: I0216 17:45:11.782835 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a171bdac-1967-4caf-836e-4eac64a10fd6-logs\") pod \"a171bdac-1967-4caf-836e-4eac64a10fd6\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") "
Feb 16 17:45:11.783299 master-0 kubenswrapper[4652]: I0216 17:45:11.782926 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-config-data\") pod \"a171bdac-1967-4caf-836e-4eac64a10fd6\" (UID: \"a171bdac-1967-4caf-836e-4eac64a10fd6\") "
Feb 16 17:45:11.788848 master-0 kubenswrapper[4652]: I0216 17:45:11.788807 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a171bdac-1967-4caf-836e-4eac64a10fd6-logs" (OuterVolumeSpecName: "logs") pod "a171bdac-1967-4caf-836e-4eac64a10fd6" (UID: "a171bdac-1967-4caf-836e-4eac64a10fd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 17:45:11.800576 master-0 kubenswrapper[4652]: I0216 17:45:11.800519 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a171bdac-1967-4caf-836e-4eac64a10fd6-kube-api-access-5v96g" (OuterVolumeSpecName: "kube-api-access-5v96g") pod "a171bdac-1967-4caf-836e-4eac64a10fd6" (UID: "a171bdac-1967-4caf-836e-4eac64a10fd6"). InnerVolumeSpecName "kube-api-access-5v96g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 17:45:11.816384 master-0 kubenswrapper[4652]: I0216 17:45:11.816334 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a171bdac-1967-4caf-836e-4eac64a10fd6" (UID: "a171bdac-1967-4caf-836e-4eac64a10fd6"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:45:11.821784 master-0 kubenswrapper[4652]: I0216 17:45:11.821739 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-config-data" (OuterVolumeSpecName: "config-data") pod "a171bdac-1967-4caf-836e-4eac64a10fd6" (UID: "a171bdac-1967-4caf-836e-4eac64a10fd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:45:11.850877 master-0 kubenswrapper[4652]: I0216 17:45:11.850817 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a171bdac-1967-4caf-836e-4eac64a10fd6" (UID: "a171bdac-1967-4caf-836e-4eac64a10fd6"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:45:11.885610 master-0 kubenswrapper[4652]: I0216 17:45:11.885556 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:11.885610 master-0 kubenswrapper[4652]: I0216 17:45:11.885598 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:11.885610 master-0 kubenswrapper[4652]: I0216 17:45:11.885609 4652 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a171bdac-1967-4caf-836e-4eac64a10fd6-nova-metadata-tls-certs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:11.885610 master-0 kubenswrapper[4652]: I0216 17:45:11.885619 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v96g\" (UniqueName: \"kubernetes.io/projected/a171bdac-1967-4caf-836e-4eac64a10fd6-kube-api-access-5v96g\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:11.885947 master-0 kubenswrapper[4652]: I0216 17:45:11.885630 4652 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a171bdac-1967-4caf-836e-4eac64a10fd6-logs\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:12.047967 master-0 kubenswrapper[4652]: I0216 17:45:12.047884 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:45:12.063812 master-0 kubenswrapper[4652]: I0216 17:45:12.062816 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:45:12.098103 master-0 kubenswrapper[4652]: I0216 17:45:12.098036 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:45:12.098583 master-0 kubenswrapper[4652]: E0216 17:45:12.098556 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-log" Feb 16 17:45:12.098583 master-0 kubenswrapper[4652]: I0216 17:45:12.098575 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-log" Feb 16 17:45:12.098676 master-0 kubenswrapper[4652]: E0216 17:45:12.098590 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-metadata" Feb 16 17:45:12.098676 
master-0 kubenswrapper[4652]: I0216 17:45:12.098597 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-metadata" Feb 16 17:45:12.098832 master-0 kubenswrapper[4652]: I0216 17:45:12.098809 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-log" Feb 16 17:45:12.098877 master-0 kubenswrapper[4652]: I0216 17:45:12.098849 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" containerName="nova-metadata-metadata" Feb 16 17:45:12.100763 master-0 kubenswrapper[4652]: I0216 17:45:12.100440 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:45:12.106281 master-0 kubenswrapper[4652]: I0216 17:45:12.105514 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 17:45:12.106281 master-0 kubenswrapper[4652]: I0216 17:45:12.105619 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 17:45:12.132665 master-0 kubenswrapper[4652]: I0216 17:45:12.125753 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:45:12.192950 master-0 kubenswrapper[4652]: I0216 17:45:12.192858 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2612398d-4b62-4ef3-8b89-f3107362c6b0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.192950 master-0 kubenswrapper[4652]: I0216 17:45:12.192919 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2612398d-4b62-4ef3-8b89-f3107362c6b0-logs\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.193391 master-0 kubenswrapper[4652]: I0216 17:45:12.193024 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2612398d-4b62-4ef3-8b89-f3107362c6b0-config-data\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.193391 master-0 kubenswrapper[4652]: I0216 17:45:12.193186 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2612398d-4b62-4ef3-8b89-f3107362c6b0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.193391 master-0 kubenswrapper[4652]: I0216 17:45:12.193357 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpm74\" (UniqueName: \"kubernetes.io/projected/2612398d-4b62-4ef3-8b89-f3107362c6b0-kube-api-access-gpm74\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.295781 master-0 kubenswrapper[4652]: I0216 17:45:12.295631 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2612398d-4b62-4ef3-8b89-f3107362c6b0-config-data\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.295781 master-0 kubenswrapper[4652]: I0216 17:45:12.295692 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2612398d-4b62-4ef3-8b89-f3107362c6b0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.295781 master-0 kubenswrapper[4652]: I0216 17:45:12.295728 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpm74\" (UniqueName: \"kubernetes.io/projected/2612398d-4b62-4ef3-8b89-f3107362c6b0-kube-api-access-gpm74\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.296071 master-0 kubenswrapper[4652]: I0216 17:45:12.295869 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2612398d-4b62-4ef3-8b89-f3107362c6b0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.296071 master-0 kubenswrapper[4652]: I0216 17:45:12.295891 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2612398d-4b62-4ef3-8b89-f3107362c6b0-logs\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.296350 master-0 kubenswrapper[4652]: I0216 17:45:12.296324 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2612398d-4b62-4ef3-8b89-f3107362c6b0-logs\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.300069 master-0 kubenswrapper[4652]: I0216 17:45:12.300033 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2612398d-4b62-4ef3-8b89-f3107362c6b0-config-data\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.300201 master-0 kubenswrapper[4652]: I0216 17:45:12.300135 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2612398d-4b62-4ef3-8b89-f3107362c6b0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.301405 master-0 kubenswrapper[4652]: I0216 17:45:12.301354 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2612398d-4b62-4ef3-8b89-f3107362c6b0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 17:45:12.320064 master-0 kubenswrapper[4652]: I0216 17:45:12.319990 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpm74\" (UniqueName: \"kubernetes.io/projected/2612398d-4b62-4ef3-8b89-f3107362c6b0-kube-api-access-gpm74\") pod \"nova-metadata-0\" (UID: \"2612398d-4b62-4ef3-8b89-f3107362c6b0\") " pod="openstack/nova-metadata-0" Feb 16 
17:45:12.435318 master-0 kubenswrapper[4652]: I0216 17:45:12.434786 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 17:45:12.763704 master-0 kubenswrapper[4652]: I0216 17:45:12.763632 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a171bdac-1967-4caf-836e-4eac64a10fd6" path="/var/lib/kubelet/pods/a171bdac-1967-4caf-836e-4eac64a10fd6/volumes" Feb 16 17:45:13.039657 master-0 kubenswrapper[4652]: I0216 17:45:13.039615 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 17:45:13.727589 master-0 kubenswrapper[4652]: I0216 17:45:13.727522 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2612398d-4b62-4ef3-8b89-f3107362c6b0","Type":"ContainerStarted","Data":"0e15cf1942ac6bb4ccc651f6c0893d499bd07320ad1b6e27561244e099dc276d"} Feb 16 17:45:13.727589 master-0 kubenswrapper[4652]: I0216 17:45:13.727576 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2612398d-4b62-4ef3-8b89-f3107362c6b0","Type":"ContainerStarted","Data":"d3eb2ff36bd2a733220764a3019382d4ebbecef2f9dc4d78b1fcc60e7d4efae1"} Feb 16 17:45:13.727589 master-0 kubenswrapper[4652]: I0216 17:45:13.727588 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2612398d-4b62-4ef3-8b89-f3107362c6b0","Type":"ContainerStarted","Data":"4d7d9c4639996b1f0d7ed9ed0d5d2a3c5c073605c362527592b4df37e67d600a"} Feb 16 17:45:13.761286 master-0 kubenswrapper[4652]: I0216 17:45:13.761188 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.7611655659999998 podStartE2EDuration="1.761165566s" podCreationTimestamp="2026-02-16 17:45:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:45:13.751005083 +0000 UTC m=+1271.139173599" watchObservedRunningTime="2026-02-16 17:45:13.761165566 +0000 UTC m=+1271.149334082" Feb 16 17:45:14.745062 master-0 kubenswrapper[4652]: I0216 17:45:14.745010 4652 generic.go:334] "Generic (PLEG): container finished" podID="40ff8fb5-61f0-42a8-8b16-8571f3305785" containerID="7962b043b53fa81b163601b9f78aad11933825c698b1f294370923aeea1d30d2" exitCode=0 Feb 16 17:45:14.773805 master-0 kubenswrapper[4652]: I0216 17:45:14.773710 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40ff8fb5-61f0-42a8-8b16-8571f3305785","Type":"ContainerDied","Data":"7962b043b53fa81b163601b9f78aad11933825c698b1f294370923aeea1d30d2"} Feb 16 17:45:15.048466 master-0 kubenswrapper[4652]: I0216 17:45:15.048418 4652 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:45:15.169048 master-0 kubenswrapper[4652]: I0216 17:45:15.168960 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-config-data\") pod \"40ff8fb5-61f0-42a8-8b16-8571f3305785\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " Feb 16 17:45:15.169716 master-0 kubenswrapper[4652]: I0216 17:45:15.169574 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-combined-ca-bundle\") pod \"40ff8fb5-61f0-42a8-8b16-8571f3305785\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " Feb 16 17:45:15.169797 master-0 kubenswrapper[4652]: I0216 17:45:15.169759 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7n9w\" (UniqueName: \"kubernetes.io/projected/40ff8fb5-61f0-42a8-8b16-8571f3305785-kube-api-access-v7n9w\") pod \"40ff8fb5-61f0-42a8-8b16-8571f3305785\" (UID: \"40ff8fb5-61f0-42a8-8b16-8571f3305785\") " Feb 16 17:45:15.173567 master-0 kubenswrapper[4652]: I0216 17:45:15.173226 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ff8fb5-61f0-42a8-8b16-8571f3305785-kube-api-access-v7n9w" (OuterVolumeSpecName: "kube-api-access-v7n9w") pod "40ff8fb5-61f0-42a8-8b16-8571f3305785" (UID: "40ff8fb5-61f0-42a8-8b16-8571f3305785"). InnerVolumeSpecName "kube-api-access-v7n9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:45:15.198421 master-0 kubenswrapper[4652]: I0216 17:45:15.198291 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-config-data" (OuterVolumeSpecName: "config-data") pod "40ff8fb5-61f0-42a8-8b16-8571f3305785" (UID: "40ff8fb5-61f0-42a8-8b16-8571f3305785"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:45:15.218837 master-0 kubenswrapper[4652]: I0216 17:45:15.218518 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40ff8fb5-61f0-42a8-8b16-8571f3305785" (UID: "40ff8fb5-61f0-42a8-8b16-8571f3305785"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:45:15.272933 master-0 kubenswrapper[4652]: I0216 17:45:15.272879 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-combined-ca-bundle\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:15.272933 master-0 kubenswrapper[4652]: I0216 17:45:15.272927 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7n9w\" (UniqueName: \"kubernetes.io/projected/40ff8fb5-61f0-42a8-8b16-8571f3305785-kube-api-access-v7n9w\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:15.272933 master-0 kubenswrapper[4652]: I0216 17:45:15.272939 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40ff8fb5-61f0-42a8-8b16-8571f3305785-config-data\") on node \"master-0\" DevicePath \"\"" Feb 16 17:45:15.758417 master-0 kubenswrapper[4652]: I0216 17:45:15.758334 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"40ff8fb5-61f0-42a8-8b16-8571f3305785","Type":"ContainerDied","Data":"9cf98f29077c410fce99a0a982dbc21f8506dedfebc56d328182d077fa78b0fa"} Feb 16 17:45:15.758417 master-0 kubenswrapper[4652]: I0216 17:45:15.758414 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:45:15.759527 master-0 kubenswrapper[4652]: I0216 17:45:15.758424 4652 scope.go:117] "RemoveContainer" containerID="7962b043b53fa81b163601b9f78aad11933825c698b1f294370923aeea1d30d2" Feb 16 17:45:15.805531 master-0 kubenswrapper[4652]: I0216 17:45:15.805464 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:45:15.819770 master-0 kubenswrapper[4652]: I0216 17:45:15.819704 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:45:15.852596 master-0 kubenswrapper[4652]: I0216 17:45:15.848504 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:45:15.852596 master-0 kubenswrapper[4652]: E0216 17:45:15.849268 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ff8fb5-61f0-42a8-8b16-8571f3305785" containerName="nova-scheduler-scheduler" Feb 16 17:45:15.852596 master-0 kubenswrapper[4652]: I0216 17:45:15.849288 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ff8fb5-61f0-42a8-8b16-8571f3305785" containerName="nova-scheduler-scheduler" Feb 16 17:45:15.852596 master-0 kubenswrapper[4652]: I0216 17:45:15.849795 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ff8fb5-61f0-42a8-8b16-8571f3305785" containerName="nova-scheduler-scheduler" Feb 16 17:45:15.852596 master-0 kubenswrapper[4652]: I0216 17:45:15.851147 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:45:15.863890 master-0 kubenswrapper[4652]: I0216 17:45:15.863684 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 17:45:15.877439 master-0 kubenswrapper[4652]: I0216 17:45:15.874416 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:45:15.995007 master-0 kubenswrapper[4652]: I0216 17:45:15.994958 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b889ce-c105-495e-90de-f454a7bacea9-config-data\") pod \"nova-scheduler-0\" (UID: \"f7b889ce-c105-495e-90de-f454a7bacea9\") " pod="openstack/nova-scheduler-0" Feb 16 17:45:15.995007 master-0 kubenswrapper[4652]: I0216 17:45:15.995014 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b889ce-c105-495e-90de-f454a7bacea9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f7b889ce-c105-495e-90de-f454a7bacea9\") " pod="openstack/nova-scheduler-0" Feb 16 17:45:15.995846 master-0 kubenswrapper[4652]: I0216 17:45:15.995785 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6fqt\" (UniqueName: \"kubernetes.io/projected/f7b889ce-c105-495e-90de-f454a7bacea9-kube-api-access-h6fqt\") pod \"nova-scheduler-0\" (UID: \"f7b889ce-c105-495e-90de-f454a7bacea9\") " pod="openstack/nova-scheduler-0" Feb 16 17:45:16.099650 master-0 kubenswrapper[4652]: I0216 17:45:16.099463 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b889ce-c105-495e-90de-f454a7bacea9-config-data\") pod \"nova-scheduler-0\" (UID: \"f7b889ce-c105-495e-90de-f454a7bacea9\") " pod="openstack/nova-scheduler-0" Feb 16 17:45:16.099650 master-0 kubenswrapper[4652]: I0216 17:45:16.099561 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b889ce-c105-495e-90de-f454a7bacea9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f7b889ce-c105-495e-90de-f454a7bacea9\") " pod="openstack/nova-scheduler-0" Feb 16 17:45:16.099990 master-0 kubenswrapper[4652]: I0216 17:45:16.099871 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6fqt\" (UniqueName: \"kubernetes.io/projected/f7b889ce-c105-495e-90de-f454a7bacea9-kube-api-access-h6fqt\") pod \"nova-scheduler-0\" (UID: \"f7b889ce-c105-495e-90de-f454a7bacea9\") " pod="openstack/nova-scheduler-0" Feb 16 17:45:16.105988 master-0 kubenswrapper[4652]: I0216 17:45:16.104383 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7b889ce-c105-495e-90de-f454a7bacea9-config-data\") pod \"nova-scheduler-0\" (UID: \"f7b889ce-c105-495e-90de-f454a7bacea9\") " pod="openstack/nova-scheduler-0" Feb 16 17:45:16.108450 master-0 kubenswrapper[4652]: I0216 17:45:16.106770 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7b889ce-c105-495e-90de-f454a7bacea9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f7b889ce-c105-495e-90de-f454a7bacea9\") " pod="openstack/nova-scheduler-0" Feb 16 17:45:16.118853 master-0 kubenswrapper[4652]: I0216 17:45:16.118825 
4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6fqt\" (UniqueName: \"kubernetes.io/projected/f7b889ce-c105-495e-90de-f454a7bacea9-kube-api-access-h6fqt\") pod \"nova-scheduler-0\" (UID: \"f7b889ce-c105-495e-90de-f454a7bacea9\") " pod="openstack/nova-scheduler-0" Feb 16 17:45:16.334074 master-0 kubenswrapper[4652]: I0216 17:45:16.334022 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 17:45:16.760810 master-0 kubenswrapper[4652]: I0216 17:45:16.760751 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ff8fb5-61f0-42a8-8b16-8571f3305785" path="/var/lib/kubelet/pods/40ff8fb5-61f0-42a8-8b16-8571f3305785/volumes" Feb 16 17:45:16.803859 master-0 kubenswrapper[4652]: I0216 17:45:16.803610 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 17:45:17.435312 master-0 kubenswrapper[4652]: I0216 17:45:17.435233 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:45:17.435312 master-0 kubenswrapper[4652]: I0216 17:45:17.435313 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 17:45:17.786806 master-0 kubenswrapper[4652]: I0216 17:45:17.786608 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f7b889ce-c105-495e-90de-f454a7bacea9","Type":"ContainerStarted","Data":"a1c2a97a40ce0e7445fb0bfcd281cefeb03ee187696dc770a8bb5e340e447594"} Feb 16 17:45:17.786806 master-0 kubenswrapper[4652]: I0216 17:45:17.786657 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f7b889ce-c105-495e-90de-f454a7bacea9","Type":"ContainerStarted","Data":"6e1b339de2352774b7e5eb5fb018d8ae2127c66aa79502e9d8fc6043015ce048"} Feb 16 17:45:17.813506 master-0 kubenswrapper[4652]: I0216 17:45:17.813007 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.812982649 podStartE2EDuration="2.812982649s" podCreationTimestamp="2026-02-16 17:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:45:17.810825631 +0000 UTC m=+1275.198994177" watchObservedRunningTime="2026-02-16 17:45:17.812982649 +0000 UTC m=+1275.201151165" Feb 16 17:45:19.355099 master-0 kubenswrapper[4652]: I0216 17:45:19.355004 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:45:19.355099 master-0 kubenswrapper[4652]: I0216 17:45:19.355095 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 17:45:20.360639 master-0 kubenswrapper[4652]: I0216 17:45:20.360553 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="18af013d-322b-4c38-9601-9afd8fb70bf1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.128.0.242:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:45:20.367423 master-0 kubenswrapper[4652]: I0216 17:45:20.367372 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="18af013d-322b-4c38-9601-9afd8fb70bf1" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.128.0.242:8774/\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Feb 16 17:45:21.335388 master-0 kubenswrapper[4652]: I0216 17:45:21.335227 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 17:45:22.435321 master-0 kubenswrapper[4652]: I0216 17:45:22.435283 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:45:22.435876 master-0 kubenswrapper[4652]: I0216 17:45:22.435862 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 17:45:23.448670 master-0 kubenswrapper[4652]: I0216 17:45:23.448593 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2612398d-4b62-4ef3-8b89-f3107362c6b0" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.128.0.243:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:45:23.448670 master-0 kubenswrapper[4652]: I0216 17:45:23.448621 4652 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2612398d-4b62-4ef3-8b89-f3107362c6b0" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.128.0.243:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 17:45:26.334284 master-0 kubenswrapper[4652]: I0216 17:45:26.334230 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 17:45:26.369093 master-0 kubenswrapper[4652]: I0216 17:45:26.369062 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 17:45:26.923588 master-0 kubenswrapper[4652]: I0216 17:45:26.923505 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 17:45:29.362071 master-0 kubenswrapper[4652]: I0216 17:45:29.361991 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:45:29.362987 master-0 kubenswrapper[4652]: I0216 17:45:29.362552 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 17:45:29.365032 master-0 kubenswrapper[4652]: I0216 17:45:29.364984 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 16 17:45:29.369688 master-0 kubenswrapper[4652]: I0216 17:45:29.369636 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 17:45:29.929277 master-0 kubenswrapper[4652]: I0216 17:45:29.928720 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 16 17:45:29.944103 master-0 kubenswrapper[4652]: I0216 17:45:29.944059 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 16 17:45:32.449964 master-0 kubenswrapper[4652]: I0216 17:45:32.449864 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:45:32.451358 master-0 kubenswrapper[4652]: I0216 17:45:32.451330 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 17:45:32.457000 master-0 kubenswrapper[4652]: I0216 17:45:32.456946 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:45:32.971730 master-0 kubenswrapper[4652]: I0216 17:45:32.971656 
4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 17:45:59.384030 master-0 kubenswrapper[4652]: I0216 17:45:59.383938 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-skfh4"] Feb 16 17:45:59.384707 master-0 kubenswrapper[4652]: I0216 17:45:59.384291 4652 kuberuntime_container.go:808] "Killing container with a grace period" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" podUID="95f052f3-eab9-49a0-b95f-51722af6f1f9" containerName="sushy-emulator" containerID="cri-o://6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a" gracePeriod=30 Feb 16 17:46:00.092006 master-0 kubenswrapper[4652]: I0216 17:46:00.091967 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:46:00.225493 master-0 kubenswrapper[4652]: I0216 17:46:00.225424 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-mkltd"] Feb 16 17:46:00.226231 master-0 kubenswrapper[4652]: E0216 17:46:00.226193 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95f052f3-eab9-49a0-b95f-51722af6f1f9" containerName="sushy-emulator" Feb 16 17:46:00.226231 master-0 kubenswrapper[4652]: I0216 17:46:00.226222 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="95f052f3-eab9-49a0-b95f-51722af6f1f9" containerName="sushy-emulator" Feb 16 17:46:00.226800 master-0 kubenswrapper[4652]: I0216 17:46:00.226578 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="95f052f3-eab9-49a0-b95f-51722af6f1f9" containerName="sushy-emulator" Feb 16 17:46:00.228065 master-0 kubenswrapper[4652]: I0216 17:46:00.228035 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.236038 master-0 kubenswrapper[4652]: I0216 17:46:00.235940 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/95f052f3-eab9-49a0-b95f-51722af6f1f9-os-client-config\") pod \"95f052f3-eab9-49a0-b95f-51722af6f1f9\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " Feb 16 17:46:00.236382 master-0 kubenswrapper[4652]: I0216 17:46:00.236366 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/95f052f3-eab9-49a0-b95f-51722af6f1f9-sushy-emulator-config\") pod \"95f052f3-eab9-49a0-b95f-51722af6f1f9\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " Feb 16 17:46:00.236591 master-0 kubenswrapper[4652]: I0216 17:46:00.236577 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnkp8\" (UniqueName: \"kubernetes.io/projected/95f052f3-eab9-49a0-b95f-51722af6f1f9-kube-api-access-xnkp8\") pod \"95f052f3-eab9-49a0-b95f-51722af6f1f9\" (UID: \"95f052f3-eab9-49a0-b95f-51722af6f1f9\") " Feb 16 17:46:00.240378 master-0 kubenswrapper[4652]: I0216 17:46:00.240328 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-mkltd"] Feb 16 17:46:00.241488 master-0 kubenswrapper[4652]: I0216 17:46:00.241468 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95f052f3-eab9-49a0-b95f-51722af6f1f9-kube-api-access-xnkp8" (OuterVolumeSpecName: "kube-api-access-xnkp8") pod "95f052f3-eab9-49a0-b95f-51722af6f1f9" (UID: "95f052f3-eab9-49a0-b95f-51722af6f1f9"). InnerVolumeSpecName "kube-api-access-xnkp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 17:46:00.241963 master-0 kubenswrapper[4652]: I0216 17:46:00.241944 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95f052f3-eab9-49a0-b95f-51722af6f1f9-sushy-emulator-config" (OuterVolumeSpecName: "sushy-emulator-config") pod "95f052f3-eab9-49a0-b95f-51722af6f1f9" (UID: "95f052f3-eab9-49a0-b95f-51722af6f1f9"). InnerVolumeSpecName "sushy-emulator-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 17:46:00.257603 master-0 kubenswrapper[4652]: I0216 17:46:00.257539 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95f052f3-eab9-49a0-b95f-51722af6f1f9-os-client-config" (OuterVolumeSpecName: "os-client-config") pod "95f052f3-eab9-49a0-b95f-51722af6f1f9" (UID: "95f052f3-eab9-49a0-b95f-51722af6f1f9"). InnerVolumeSpecName "os-client-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 17:46:00.276326 master-0 kubenswrapper[4652]: I0216 17:46:00.276276 4652 generic.go:334] "Generic (PLEG): container finished" podID="95f052f3-eab9-49a0-b95f-51722af6f1f9" containerID="6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a" exitCode=0 Feb 16 17:46:00.276326 master-0 kubenswrapper[4652]: I0216 17:46:00.276313 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" event={"ID":"95f052f3-eab9-49a0-b95f-51722af6f1f9","Type":"ContainerDied","Data":"6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a"} Feb 16 17:46:00.276582 master-0 kubenswrapper[4652]: I0216 17:46:00.276366 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" event={"ID":"95f052f3-eab9-49a0-b95f-51722af6f1f9","Type":"ContainerDied","Data":"a5ef3681793fc53179c226419e2596e4276becca6648d8a7b613a5707a73217b"} Feb 16 17:46:00.276582 master-0 kubenswrapper[4652]: I0216 17:46:00.276384 4652 scope.go:117] "RemoveContainer" containerID="6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a" Feb 16 17:46:00.276582 master-0 kubenswrapper[4652]: I0216 17:46:00.276335 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-58f4c9b998-skfh4" Feb 16 17:46:00.339860 master-0 kubenswrapper[4652]: I0216 17:46:00.339787 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/fc2a8864-8d07-485a-a50a-93251cdf5715-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-mkltd\" (UID: \"fc2a8864-8d07-485a-a50a-93251cdf5715\") " pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.340150 master-0 kubenswrapper[4652]: I0216 17:46:00.339912 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/fc2a8864-8d07-485a-a50a-93251cdf5715-os-client-config\") pod \"sushy-emulator-64488c485f-mkltd\" (UID: \"fc2a8864-8d07-485a-a50a-93251cdf5715\") " pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.340150 master-0 kubenswrapper[4652]: I0216 17:46:00.339961 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9djzn\" (UniqueName: \"kubernetes.io/projected/fc2a8864-8d07-485a-a50a-93251cdf5715-kube-api-access-9djzn\") pod \"sushy-emulator-64488c485f-mkltd\" (UID: \"fc2a8864-8d07-485a-a50a-93251cdf5715\") " pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.340150 master-0 kubenswrapper[4652]: I0216 17:46:00.340023 4652 reconciler_common.go:293] "Volume detached for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/95f052f3-eab9-49a0-b95f-51722af6f1f9-os-client-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:46:00.340150 master-0 kubenswrapper[4652]: I0216 17:46:00.340036 4652 reconciler_common.go:293] "Volume detached for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/95f052f3-eab9-49a0-b95f-51722af6f1f9-sushy-emulator-config\") on node \"master-0\" DevicePath \"\"" Feb 16 17:46:00.340150 master-0 kubenswrapper[4652]: I0216 17:46:00.340046 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnkp8\" (UniqueName: \"kubernetes.io/projected/95f052f3-eab9-49a0-b95f-51722af6f1f9-kube-api-access-xnkp8\") 
on node \"master-0\" DevicePath \"\"" Feb 16 17:46:00.376434 master-0 kubenswrapper[4652]: I0216 17:46:00.376086 4652 scope.go:117] "RemoveContainer" containerID="6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a" Feb 16 17:46:00.381864 master-0 kubenswrapper[4652]: E0216 17:46:00.381809 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a\": container with ID starting with 6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a not found: ID does not exist" containerID="6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a" Feb 16 17:46:00.382012 master-0 kubenswrapper[4652]: I0216 17:46:00.381863 4652 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a"} err="failed to get container status \"6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a\": rpc error: code = NotFound desc = could not find container \"6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a\": container with ID starting with 6ed14ddf510a7493478e5ee5d442fd9c3dc90cca3c4f62db941894cc3a6caa0a not found: ID does not exist" Feb 16 17:46:00.387207 master-0 kubenswrapper[4652]: I0216 17:46:00.387143 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-skfh4"] Feb 16 17:46:00.400938 master-0 kubenswrapper[4652]: I0216 17:46:00.400867 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["sushy-emulator/sushy-emulator-58f4c9b998-skfh4"] Feb 16 17:46:00.442118 master-0 kubenswrapper[4652]: I0216 17:46:00.442061 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/fc2a8864-8d07-485a-a50a-93251cdf5715-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-mkltd\" (UID: \"fc2a8864-8d07-485a-a50a-93251cdf5715\") " pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.442470 master-0 kubenswrapper[4652]: I0216 17:46:00.442155 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-client-config\" (UniqueName: \"kubernetes.io/secret/fc2a8864-8d07-485a-a50a-93251cdf5715-os-client-config\") pod \"sushy-emulator-64488c485f-mkltd\" (UID: \"fc2a8864-8d07-485a-a50a-93251cdf5715\") " pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.442470 master-0 kubenswrapper[4652]: I0216 17:46:00.442202 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9djzn\" (UniqueName: \"kubernetes.io/projected/fc2a8864-8d07-485a-a50a-93251cdf5715-kube-api-access-9djzn\") pod \"sushy-emulator-64488c485f-mkltd\" (UID: \"fc2a8864-8d07-485a-a50a-93251cdf5715\") " pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.443421 master-0 kubenswrapper[4652]: I0216 17:46:00.443364 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sushy-emulator-config\" (UniqueName: \"kubernetes.io/configmap/fc2a8864-8d07-485a-a50a-93251cdf5715-sushy-emulator-config\") pod \"sushy-emulator-64488c485f-mkltd\" (UID: \"fc2a8864-8d07-485a-a50a-93251cdf5715\") " pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.448137 master-0 kubenswrapper[4652]: I0216 17:46:00.447884 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-client-config\" (UniqueName: 
\"kubernetes.io/secret/fc2a8864-8d07-485a-a50a-93251cdf5715-os-client-config\") pod \"sushy-emulator-64488c485f-mkltd\" (UID: \"fc2a8864-8d07-485a-a50a-93251cdf5715\") " pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.458073 master-0 kubenswrapper[4652]: I0216 17:46:00.458014 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9djzn\" (UniqueName: \"kubernetes.io/projected/fc2a8864-8d07-485a-a50a-93251cdf5715-kube-api-access-9djzn\") pod \"sushy-emulator-64488c485f-mkltd\" (UID: \"fc2a8864-8d07-485a-a50a-93251cdf5715\") " pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.653316 master-0 kubenswrapper[4652]: I0216 17:46:00.653158 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:00.761573 master-0 kubenswrapper[4652]: I0216 17:46:00.761481 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95f052f3-eab9-49a0-b95f-51722af6f1f9" path="/var/lib/kubelet/pods/95f052f3-eab9-49a0-b95f-51722af6f1f9/volumes" Feb 16 17:46:01.189589 master-0 kubenswrapper[4652]: I0216 17:46:01.189178 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["sushy-emulator/sushy-emulator-64488c485f-mkltd"] Feb 16 17:46:01.205024 master-0 kubenswrapper[4652]: W0216 17:46:01.204789 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc2a8864_8d07_485a_a50a_93251cdf5715.slice/crio-f675fcd2b6f22336ab82094702c003cf7efbfb57c9a22ea01bad468933dec6f1 WatchSource:0}: Error finding container f675fcd2b6f22336ab82094702c003cf7efbfb57c9a22ea01bad468933dec6f1: Status 404 returned error can't find the container with id f675fcd2b6f22336ab82094702c003cf7efbfb57c9a22ea01bad468933dec6f1 Feb 16 17:46:01.291613 master-0 kubenswrapper[4652]: I0216 17:46:01.291553 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" event={"ID":"fc2a8864-8d07-485a-a50a-93251cdf5715","Type":"ContainerStarted","Data":"f675fcd2b6f22336ab82094702c003cf7efbfb57c9a22ea01bad468933dec6f1"} Feb 16 17:46:02.314676 master-0 kubenswrapper[4652]: I0216 17:46:02.314610 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" event={"ID":"fc2a8864-8d07-485a-a50a-93251cdf5715","Type":"ContainerStarted","Data":"4cca11cc84263625f74ef84ae0081ea50097bea7df10eb369eb16184ff79db5c"} Feb 16 17:46:02.345064 master-0 kubenswrapper[4652]: I0216 17:46:02.344970 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" podStartSLOduration=2.344943367 podStartE2EDuration="2.344943367s" podCreationTimestamp="2026-02-16 17:46:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 17:46:02.332977929 +0000 UTC m=+1319.721146465" watchObservedRunningTime="2026-02-16 17:46:02.344943367 +0000 UTC m=+1319.733111883" Feb 16 17:46:10.654364 master-0 kubenswrapper[4652]: I0216 17:46:10.654285 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:10.654364 master-0 kubenswrapper[4652]: I0216 17:46:10.654354 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="sushy-emulator/sushy-emulator-64488c485f-mkltd" Feb 16 17:46:10.668013 master-0 
kubenswrapper[4652]: I0216 17:46:10.667954 4652 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="sushy-emulator/sushy-emulator-64488c485f-mkltd"
Feb 16 17:46:11.418149 master-0 kubenswrapper[4652]: I0216 17:46:11.418063 4652 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="sushy-emulator/sushy-emulator-64488c485f-mkltd"
Feb 16 17:47:06.376005 master-0 kubenswrapper[4652]: I0216 17:47:06.375912 4652 scope.go:117] "RemoveContainer" containerID="1839cd71cd19eea96a2900ce4cc01260807919f87b5e182e15d2072d8fc1af58"
Feb 16 17:47:06.419704 master-0 kubenswrapper[4652]: I0216 17:47:06.419633 4652 scope.go:117] "RemoveContainer" containerID="247d34bdc7a39167eecd49732fae3a98f316666dedf586aa933f262c0ce848b1"
Feb 16 17:48:06.549456 master-0 kubenswrapper[4652]: I0216 17:48:06.548288 4652 scope.go:117] "RemoveContainer" containerID="56e38ccc40c2cb0eb7ce018278579a1136e729c03eba950e9876c0cac1576931"
Feb 16 17:48:06.580621 master-0 kubenswrapper[4652]: I0216 17:48:06.580587 4652 scope.go:117] "RemoveContainer" containerID="8c4bd38b59d17d0c89cd7c5883431356086cba2c97e1fb9be66cc23ec4b985ef"
Feb 16 17:48:06.629557 master-0 kubenswrapper[4652]: I0216 17:48:06.629513 4652 scope.go:117] "RemoveContainer" containerID="9e97e6351dfa9e9574d588b51453a711c4449b16480af391fab93cf8e4c34d6b"
Feb 16 17:48:06.697996 master-0 kubenswrapper[4652]: I0216 17:48:06.697964 4652 scope.go:117] "RemoveContainer" containerID="9b1ab2cc2be412b1cbab860d6f4a20ed0a04d71c2e4100dc2a0fec9ba8add898"
Feb 16 17:48:26.782593 master-0 kubenswrapper[4652]: I0216 17:48:26.782525 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-c34a6-scheduler-0" podUID="0f6e10ee-00f9-4c6e-b67e-3f631e8c7363" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.128.0.204:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:48:27.786476 master-0 kubenswrapper[4652]: I0216 17:48:27.786413 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-c34a6-backup-0" podUID="580471cb-c8d4-40ec-86ed-7062a15b7d24" containerName="cinder-backup" probeResult="failure" output="Get \"http://10.128.0.205:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:48:28.506566 master-0 kubenswrapper[4652]: I0216 17:48:28.506495 4652 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-c34a6-volume-lvm-iscsi-0" podUID="c5ba513b-d2ca-4d7c-b419-6e8009ebe299" containerName="cinder-volume" probeResult="failure" output="Get \"http://10.128.0.202:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Feb 16 17:50:18.646369 master-0 kubenswrapper[4652]: E0216 17:50:18.646215 4652 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:52206->192.168.32.10:38707: write tcp 192.168.32.10:52206->192.168.32.10:38707: write: broken pipe
Feb 16 17:51:06.881311 master-0 kubenswrapper[4652]: I0216 17:51:06.881222 4652 scope.go:117] "RemoveContainer" containerID="25612cb123845c7d59562d166a35d440d1f5669dea4be97555a73519b6254f1b"
Feb 16 17:51:06.901162 master-0 kubenswrapper[4652]: I0216 17:51:06.901122 4652 scope.go:117] "RemoveContainer" containerID="3ed56c8f10db86ca8fce677b3d85da4c737c51a48d2ea52aa69912c7e5b262ec"
Feb 16 17:51:32.082549 master-0 kubenswrapper[4652]: I0216 17:51:32.079534 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-0c92-account-create-update-vkjtr"]
Feb 16 17:51:32.092743 master-0 kubenswrapper[4652]: I0216 17:51:32.092667 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6tchx"]
Feb 16 17:51:32.103329 master-0 kubenswrapper[4652]: I0216 17:51:32.103274 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-0c92-account-create-update-vkjtr"]
Feb 16 17:51:32.116557 master-0 kubenswrapper[4652]: I0216 17:51:32.116477 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6tchx"]
Feb 16 17:51:32.758094 master-0 kubenswrapper[4652]: I0216 17:51:32.757998 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="535576c7-2541-496c-be4e-75714ddcb6de" path="/var/lib/kubelet/pods/535576c7-2541-496c-be4e-75714ddcb6de/volumes"
Feb 16 17:51:32.758869 master-0 kubenswrapper[4652]: I0216 17:51:32.758840 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e389adde-527c-4092-adb1-8a9f5bab0a35" path="/var/lib/kubelet/pods/e389adde-527c-4092-adb1-8a9f5bab0a35/volumes"
Feb 16 17:51:33.041192 master-0 kubenswrapper[4652]: I0216 17:51:33.041066 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-npbng"]
Feb 16 17:51:33.060273 master-0 kubenswrapper[4652]: I0216 17:51:33.060171 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f9e4-account-create-update-xch88"]
Feb 16 17:51:33.073945 master-0 kubenswrapper[4652]: I0216 17:51:33.072982 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-npbng"]
Feb 16 17:51:33.082458 master-0 kubenswrapper[4652]: I0216 17:51:33.082403 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f9e4-account-create-update-xch88"]
Feb 16 17:51:34.765574 master-0 kubenswrapper[4652]: I0216 17:51:34.765211 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ccccd6a-a0cd-48cf-b8f9-234e97c490be" path="/var/lib/kubelet/pods/2ccccd6a-a0cd-48cf-b8f9-234e97c490be/volumes"
Feb 16 17:51:34.767843 master-0 kubenswrapper[4652]: I0216 17:51:34.765787 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0fa216-316f-4f38-9522-e08e6741d57e" path="/var/lib/kubelet/pods/2e0fa216-316f-4f38-9522-e08e6741d57e/volumes"
Feb 16 17:51:36.043407 master-0 kubenswrapper[4652]: I0216 17:51:36.043349 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-p5x2l"]
Feb 16 17:51:36.053853 master-0 kubenswrapper[4652]: I0216 17:51:36.053800 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-78fa-account-create-update-prrd4"]
Feb 16 17:51:36.064593 master-0 kubenswrapper[4652]: I0216 17:51:36.064542 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-p5x2l"]
Feb 16 17:51:36.074509 master-0 kubenswrapper[4652]: I0216 17:51:36.074455 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-78fa-account-create-update-prrd4"]
Feb 16 17:51:36.769797 master-0 kubenswrapper[4652]: I0216 17:51:36.769714 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268bd521-18a0-44a2-94af-c8b0d5fc62de" path="/var/lib/kubelet/pods/268bd521-18a0-44a2-94af-c8b0d5fc62de/volumes"
Feb 16 17:51:36.772286 master-0 kubenswrapper[4652]: I0216 17:51:36.772226 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953baf72-0755-451c-a083-0088cb99c43a" path="/var/lib/kubelet/pods/953baf72-0755-451c-a083-0088cb99c43a/volumes"
Feb 16 17:51:58.065685 master-0 kubenswrapper[4652]: I0216 17:51:58.065595 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-7cwql"]
Feb 16 17:51:58.088204 master-0 kubenswrapper[4652]: I0216 17:51:58.087902 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d565-account-create-update-s2grp"]
Feb 16 17:51:58.101481 master-0 kubenswrapper[4652]: I0216 17:51:58.101160 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bb42-account-create-update-cf2b2"]
Feb 16 17:51:58.114082 master-0 kubenswrapper[4652]: I0216 17:51:58.113998 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tvnfc"]
Feb 16 17:51:58.125310 master-0 kubenswrapper[4652]: I0216 17:51:58.125239 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-lkt9c"]
Feb 16 17:51:58.136435 master-0 kubenswrapper[4652]: I0216 17:51:58.136400 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-7cwql"]
Feb 16 17:51:58.147612 master-0 kubenswrapper[4652]: I0216 17:51:58.147581 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d565-account-create-update-s2grp"]
Feb 16 17:51:58.160472 master-0 kubenswrapper[4652]: I0216 17:51:58.160429 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-lkt9c"]
Feb 16 17:51:58.175087 master-0 kubenswrapper[4652]: I0216 17:51:58.175022 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bb42-account-create-update-cf2b2"]
Feb 16 17:51:58.191205 master-0 kubenswrapper[4652]: I0216 17:51:58.191150 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tvnfc"]
Feb 16 17:51:58.762856 master-0 kubenswrapper[4652]: I0216 17:51:58.762789 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21f35c66-58aa-4320-9ef0-80dfa90c72af" path="/var/lib/kubelet/pods/21f35c66-58aa-4320-9ef0-80dfa90c72af/volumes"
Feb 16 17:51:58.763913 master-0 kubenswrapper[4652]: I0216 17:51:58.763657 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="940575f8-c708-470d-9674-9363119cc8e2" path="/var/lib/kubelet/pods/940575f8-c708-470d-9674-9363119cc8e2/volumes"
Feb 16 17:51:58.764480 master-0 kubenswrapper[4652]: I0216 17:51:58.764439 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95923bef-659a-4898-9f22-fde581751f95" path="/var/lib/kubelet/pods/95923bef-659a-4898-9f22-fde581751f95/volumes"
Feb 16 17:51:58.765211 master-0 kubenswrapper[4652]: I0216 17:51:58.765171 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef8dd61-0c05-4c03-8d95-c5cc00267a2a" path="/var/lib/kubelet/pods/cef8dd61-0c05-4c03-8d95-c5cc00267a2a/volumes"
Feb 16 17:51:58.766479 master-0 kubenswrapper[4652]: I0216 17:51:58.766437 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24217ad-6ba4-4280-8a72-de8b7543fef0" path="/var/lib/kubelet/pods/e24217ad-6ba4-4280-8a72-de8b7543fef0/volumes"
Feb 16 17:52:04.053173 master-0 kubenswrapper[4652]: I0216 17:52:04.053090 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xgxgv"]
Feb 16 17:52:04.068497 master-0 kubenswrapper[4652]: I0216 17:52:04.068395 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xgxgv"]
Feb 16 17:52:04.084095 master-0 kubenswrapper[4652]: I0216 17:52:04.084018 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-fd8th"]
Feb 16 17:52:04.095640 master-0 kubenswrapper[4652]: I0216 17:52:04.095590 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-fd8th"]
Feb 16 17:52:04.765980 master-0 kubenswrapper[4652]: I0216 17:52:04.765932 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f51de53-2fd8-4d8b-95f4-8f4d4504333c" path="/var/lib/kubelet/pods/3f51de53-2fd8-4d8b-95f4-8f4d4504333c/volumes"
Feb 16 17:52:04.767485 master-0 kubenswrapper[4652]: I0216 17:52:04.767458 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f977e20f-2501-45b2-b8d1-2dc333899a52" path="/var/lib/kubelet/pods/f977e20f-2501-45b2-b8d1-2dc333899a52/volumes"
Feb 16 17:52:06.968981 master-0 kubenswrapper[4652]: I0216 17:52:06.968905 4652 scope.go:117] "RemoveContainer" containerID="a26f6715eede58d2f8eec8b61ca4ee9de53451a05d637a6fd89ff07b4410eac9"
Feb 16 17:52:06.995258 master-0 kubenswrapper[4652]: I0216 17:52:06.995207 4652 scope.go:117] "RemoveContainer" containerID="383febbeb5471695843d225f9bf3bdf02dd637c459910174d1d7b92ad33c0022"
Feb 16 17:52:07.055727 master-0 kubenswrapper[4652]: I0216 17:52:07.055634 4652 scope.go:117] "RemoveContainer" containerID="ccb2f7dbf5adf707b23a829ae07aae2ff8b742016d1bd0843b76248a6a0c6e93"
Feb 16 17:52:07.110050 master-0 kubenswrapper[4652]: I0216 17:52:07.109959 4652 scope.go:117] "RemoveContainer" containerID="d5abfec442eb135ff0ec6d82048ab5c29b3c64d3baf753d58fb30f22037510c8"
Feb 16 17:52:07.160794 master-0 kubenswrapper[4652]: I0216 17:52:07.160758 4652 scope.go:117] "RemoveContainer" containerID="332c56a94cb37366abc854c5d9674bff116df312edc0c8e589b4de616160edce"
Feb 16 17:52:07.227041 master-0 kubenswrapper[4652]: I0216 17:52:07.226895 4652 scope.go:117] "RemoveContainer" containerID="22cf9db35594d5cb2c5ec28d9923c7b2e2fe31d7c012f8c0c3bb3893ffd8444f"
Feb 16 17:52:07.265527 master-0 kubenswrapper[4652]: I0216 17:52:07.265319 4652 scope.go:117] "RemoveContainer" containerID="f23fd87bd6bfb449dfd9cdc3a276a3b29513ce67988a3f3db93ed7e2a571aaf0"
Feb 16 17:52:07.304345 master-0 kubenswrapper[4652]: I0216 17:52:07.304281 4652 scope.go:117] "RemoveContainer" containerID="52ee62a4d9d8c1c4bbfa090ef576438b85370449ee8ea1632a4ffc10ca9f4340"
Feb 16 17:52:07.322849 master-0 kubenswrapper[4652]: I0216 17:52:07.322800 4652 scope.go:117] "RemoveContainer" containerID="22dc1e621340728afa4c9358b2ee5db9131c1d37e8f8fe6999c83186e6a1c644"
Feb 16 17:52:07.346140 master-0 kubenswrapper[4652]: I0216 17:52:07.346104 4652 scope.go:117] "RemoveContainer" containerID="d8b2da7cf9a42f65f6dd3885917972be14314db250e4a9f48c5b9c6cd2313771"
Feb 16 17:52:07.366436 master-0 kubenswrapper[4652]: I0216 17:52:07.366399 4652 scope.go:117] "RemoveContainer" containerID="5f08947e894cc41aea4212be0354c1e08940bfd9e3e0f7c04175380fcaaed9f9"
Feb 16 17:52:07.386326 master-0 kubenswrapper[4652]: I0216 17:52:07.386263 4652 scope.go:117] "RemoveContainer" containerID="40077a0d99c67a4ca04132c2eb6b2a3b207b8ec9ea6e60e94db4e36517923258"
Feb 16 17:52:07.416612 master-0 kubenswrapper[4652]: I0216 17:52:07.416568 4652 scope.go:117] "RemoveContainer" containerID="9f6b3448d15bdec2fb609b4a3a9b75649fc464a26f0d68d9fd7604ff52cbf160"
Feb 16 17:52:10.065412 master-0 kubenswrapper[4652]: I0216 17:52:10.065368 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-create-x89lf"]
Feb 16 17:52:10.075569 master-0 kubenswrapper[4652]: I0216 17:52:10.075498 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-c255-account-create-update-ttmxj"]
Feb 16 17:52:10.085220 master-0 kubenswrapper[4652]: I0216 17:52:10.085182 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-create-x89lf"]
Feb 16 17:52:10.094841 master-0 kubenswrapper[4652]: I0216 17:52:10.094787 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-c255-account-create-update-ttmxj"]
Feb 16 17:52:10.762090 master-0 kubenswrapper[4652]: I0216 17:52:10.762033 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607e6833-eab2-4429-ac81-a161c3525702" path="/var/lib/kubelet/pods/607e6833-eab2-4429-ac81-a161c3525702/volumes"
Feb 16 17:52:10.762743 master-0 kubenswrapper[4652]: I0216 17:52:10.762708 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7068e65-9057-4efb-a478-53734617a8fe" path="/var/lib/kubelet/pods/d7068e65-9057-4efb-a478-53734617a8fe/volumes"
Feb 16 17:52:24.077506 master-0 kubenswrapper[4652]: I0216 17:52:24.077394 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-mw67q"]
Feb 16 17:52:24.094768 master-0 kubenswrapper[4652]: I0216 17:52:24.094676 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-9w7qn"]
Feb 16 17:52:24.114536 master-0 kubenswrapper[4652]: I0216 17:52:24.114445 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-mw67q"]
Feb 16 17:52:24.125645 master-0 kubenswrapper[4652]: I0216 17:52:24.125578 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-9w7qn"]
Feb 16 17:52:24.818163 master-0 kubenswrapper[4652]: I0216 17:52:24.818076 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a" path="/var/lib/kubelet/pods/4ed898d1-cc7e-4a2c-b70f-d19d289f5e8a/volumes"
Feb 16 17:52:24.820873 master-0 kubenswrapper[4652]: I0216 17:52:24.820795 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d550d3-576a-460e-9595-6ade1d630c47" path="/var/lib/kubelet/pods/70d550d3-576a-460e-9595-6ade1d630c47/volumes"
Feb 16 17:52:38.056085 master-0 kubenswrapper[4652]: I0216 17:52:38.056003 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c34a6-db-sync-5mcjg"]
Feb 16 17:52:38.066714 master-0 kubenswrapper[4652]: I0216 17:52:38.066657 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c34a6-db-sync-5mcjg"]
Feb 16 17:52:38.759675 master-0 kubenswrapper[4652]: I0216 17:52:38.759617 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9405c7d-2ad3-46cf-b8e4-4c91feead991" path="/var/lib/kubelet/pods/c9405c7d-2ad3-46cf-b8e4-4c91feead991/volumes"
Feb 16 17:52:43.840202 master-0 kubenswrapper[4652]: E0216 17:52:43.840153 4652 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:42082->192.168.32.10:38707: write tcp 192.168.32.10:42082->192.168.32.10:38707: write: broken pipe
Feb 16 17:52:44.064957 master-0 kubenswrapper[4652]: I0216 17:52:44.064835 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-74cn5"]
Feb 16 17:52:44.083945 master-0 kubenswrapper[4652]: I0216 17:52:44.083873 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-74cn5"]
Feb 16 17:52:44.771841 master-0 kubenswrapper[4652]: I0216 17:52:44.770967 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9aa2fd7-c127-4b90-973d-67f8be387ef6" path="/var/lib/kubelet/pods/a9aa2fd7-c127-4b90-973d-67f8be387ef6/volumes"
Feb 16 17:52:53.067981 master-0 kubenswrapper[4652]: I0216 17:52:53.067913 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-db-sync-ndjf5"]
Feb 16 17:52:53.083153 master-0 kubenswrapper[4652]: I0216 17:52:53.083092 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-db-sync-ndjf5"]
Feb 16 17:52:54.779562 master-0 kubenswrapper[4652]: I0216 17:52:54.779493 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5005365-36c1-44e2-be02-84737aa7a60a" path="/var/lib/kubelet/pods/e5005365-36c1-44e2-be02-84737aa7a60a/volumes"
Feb 16 17:53:00.044998 master-0 kubenswrapper[4652]: I0216 17:53:00.044907 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-create-m4w4d"]
Feb 16 17:53:00.064367 master-0 kubenswrapper[4652]: I0216 17:53:00.064292 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-create-m4w4d"]
Feb 16 17:53:00.767877 master-0 kubenswrapper[4652]: I0216 17:53:00.767805 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="109e606b-e77f-4512-957a-77f228cd55ed" path="/var/lib/kubelet/pods/109e606b-e77f-4512-957a-77f228cd55ed/volumes"
Feb 16 17:53:01.063655 master-0 kubenswrapper[4652]: I0216 17:53:01.063488 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-e5ec-account-create-update-nr7fv"]
Feb 16 17:53:01.082006 master-0 kubenswrapper[4652]: I0216 17:53:01.081951 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-e5ec-account-create-update-nr7fv"]
Feb 16 17:53:02.763271 master-0 kubenswrapper[4652]: I0216 17:53:02.763152 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="205faff9-2936-475b-9fce-f11ab722187e" path="/var/lib/kubelet/pods/205faff9-2936-475b-9fce-f11ab722187e/volumes"
Feb 16 17:53:07.676823 master-0 kubenswrapper[4652]: I0216 17:53:07.676779 4652 scope.go:117] "RemoveContainer" containerID="7e9cf091a5f27ffb6fa78bcb63dc31fe17bd7187ecf9743206c305750a710cf4"
Feb 16 17:53:07.731696 master-0 kubenswrapper[4652]: I0216 17:53:07.731652 4652 scope.go:117] "RemoveContainer" containerID="7861e37166c61949429d4e6033714e675c5f7aaea61e8b91765d0c35d244cfb6"
Feb 16 17:53:07.766442 master-0 kubenswrapper[4652]: I0216 17:53:07.766388 4652 scope.go:117] "RemoveContainer" containerID="93dc276590cb13f099e03549689a110e1c330753736978dd58323e394f463f8a"
Feb 16 17:53:07.810568 master-0 kubenswrapper[4652]: I0216 17:53:07.810537 4652 scope.go:117] "RemoveContainer" containerID="66c99613ddcc757ca3e590f3a5ebfd9250c3e880cf12ba1c7adc8a58b754987a"
Feb 16 17:53:07.890522 master-0 kubenswrapper[4652]: I0216 17:53:07.890481 4652 scope.go:117] "RemoveContainer" containerID="e412aa24735fea9f02dc30f13130564eec6490783445260c3a7795c839754a9a"
Feb 16 17:53:07.916202 master-0 kubenswrapper[4652]: I0216 17:53:07.916136 4652 scope.go:117] "RemoveContainer" containerID="31b99946c6bee2983a671733289b9481b3866d105c9a507d1cd3c9ad117064f7"
Feb 16 17:53:07.979010 master-0 kubenswrapper[4652]: I0216 17:53:07.978953 4652 scope.go:117] "RemoveContainer" containerID="355f2f1cf3208235cfa2edd5deb6abf54dc635abacb4d7f7bf980e853bb6a8b4"
Feb 16 17:53:08.031389 master-0 kubenswrapper[4652]: I0216 17:53:08.030918 4652 scope.go:117] "RemoveContainer" containerID="bb340dd4be40ed1f61a3df17b5ea8cdc7837337166aa9d8be591accb8b17c863"
Feb 16 17:53:08.072086 master-0 kubenswrapper[4652]: I0216 17:53:08.072050 4652 scope.go:117] "RemoveContainer" containerID="3d455baac623cdb0d7613eacaf1b287025e2b8ec11b914b50afcc83dea9e618c"
Feb 16 17:53:08.097760 master-0 kubenswrapper[4652]: I0216 17:53:08.097196 4652 scope.go:117] "RemoveContainer" containerID="c1c837aa92bbf8d10ed9025bffe0c521bc1a557d9b0e1ef931701d4432c0a8af"
Feb 16 17:53:26.063487 master-0 kubenswrapper[4652]: I0216 17:53:26.063376 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c6c9-account-create-update-xdl2v"]
Feb 16 17:53:26.081218 master-0 kubenswrapper[4652]: I0216 17:53:26.081131 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9rpkr"]
Feb 16 17:53:26.101786 master-0 kubenswrapper[4652]: I0216 17:53:26.101730 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-gp6kb"]
Feb 16 17:53:26.112853 master-0 kubenswrapper[4652]: I0216 17:53:26.112782 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c6c9-account-create-update-xdl2v"]
Feb 16 17:53:26.125454 master-0 kubenswrapper[4652]: I0216 17:53:26.125384 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-nrzvp"]
Feb 16 17:53:26.135672 master-0 kubenswrapper[4652]: I0216 17:53:26.135613 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b802-account-create-update-mqckv"]
Feb 16 17:53:26.148137 master-0 kubenswrapper[4652]: I0216 17:53:26.148079 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d9f2-account-create-update-r7xjk"]
Feb 16 17:53:26.159979 master-0 kubenswrapper[4652]: I0216 17:53:26.159903 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9rpkr"]
Feb 16 17:53:26.177022 master-0 kubenswrapper[4652]: I0216 17:53:26.176960 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-gp6kb"]
Feb 16 17:53:26.190098 master-0 kubenswrapper[4652]: I0216 17:53:26.190023 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-nrzvp"]
Feb 16 17:53:26.202133 master-0 kubenswrapper[4652]: I0216 17:53:26.201825 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b802-account-create-update-mqckv"]
Feb 16 17:53:26.213911 master-0 kubenswrapper[4652]: I0216 17:53:26.213809 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d9f2-account-create-update-r7xjk"]
Feb 16 17:53:26.773271 master-0 kubenswrapper[4652]: I0216 17:53:26.773185 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="089b8594-a539-4435-9573-6d904bce3901" path="/var/lib/kubelet/pods/089b8594-a539-4435-9573-6d904bce3901/volumes"
Feb 16 17:53:26.774780 master-0 kubenswrapper[4652]: I0216 17:53:26.774670 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10a6eddf-9a5a-450c-b4f9-45fc556526dc" path="/var/lib/kubelet/pods/10a6eddf-9a5a-450c-b4f9-45fc556526dc/volumes"
Feb 16 17:53:26.776088 master-0 kubenswrapper[4652]: I0216 17:53:26.776030 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e120e62-7cc3-4d7e-8c1c-92d3f06302f1" path="/var/lib/kubelet/pods/3e120e62-7cc3-4d7e-8c1c-92d3f06302f1/volumes"
Feb 16 17:53:26.777564 master-0 kubenswrapper[4652]: I0216 17:53:26.777498 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56eeae89-abbe-4bff-b750-6ad05532d328" path="/var/lib/kubelet/pods/56eeae89-abbe-4bff-b750-6ad05532d328/volumes"
Feb 16 17:53:26.780141 master-0 kubenswrapper[4652]: I0216 17:53:26.780071 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="923ed270-e930-4edb-bd2e-8db0412c5334" path="/var/lib/kubelet/pods/923ed270-e930-4edb-bd2e-8db0412c5334/volumes"
Feb 16 17:53:26.781718 master-0 kubenswrapper[4652]: I0216 17:53:26.781656 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbfc1e76-e972-465d-bed5-92eb603c32a6" path="/var/lib/kubelet/pods/cbfc1e76-e972-465d-bed5-92eb603c32a6/volumes"
Feb 16 17:53:30.040288 master-0 kubenswrapper[4652]: I0216 17:53:30.035409 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ironic-inspector-db-sync-v5nmj"]
Feb 16 17:53:30.050343 master-0 kubenswrapper[4652]: I0216 17:53:30.049774 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ironic-inspector-db-sync-v5nmj"]
Feb 16 17:53:30.758078 master-0 kubenswrapper[4652]: I0216 17:53:30.758027 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f793d28-5e22-4c0a-8f87-dabf1e4031a2" path="/var/lib/kubelet/pods/9f793d28-5e22-4c0a-8f87-dabf1e4031a2/volumes"
Feb 16 17:53:58.062972 master-0 kubenswrapper[4652]: I0216 17:53:58.062914 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n4l2r"]
Feb 16 17:53:58.078136 master-0 kubenswrapper[4652]: I0216 17:53:58.078080 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n4l2r"]
Feb 16 17:53:58.760618 master-0 kubenswrapper[4652]: I0216 17:53:58.760520 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a250cc-de97-4949-8016-70a1eb0c64a4" path="/var/lib/kubelet/pods/a6a250cc-de97-4949-8016-70a1eb0c64a4/volumes"
Feb 16 17:54:08.319936 master-0 kubenswrapper[4652]: I0216 17:54:08.319864 4652 scope.go:117] "RemoveContainer" containerID="90b0fe5e32687b9703b39979a42107cb0abf7b6484c8243fe2ce9c4e35307ce7"
Feb 16 17:54:08.376949 master-0 kubenswrapper[4652]: I0216 17:54:08.376859 4652 scope.go:117] "RemoveContainer" containerID="13fd43bd77d322847a52aa6bd1fa5f58c81f6717a66b3e4283779760f2e6091e"
Feb 16 17:54:08.400029 master-0 kubenswrapper[4652]: I0216 17:54:08.399711 4652 scope.go:117] "RemoveContainer" containerID="65a6743d2b5e994c1cccbc9246e093e1359151d732aaad73c40d5f184edebb8e"
Feb 16 17:54:08.450083 master-0 kubenswrapper[4652]: I0216 17:54:08.450039 4652 scope.go:117] "RemoveContainer" containerID="f30bd4b174a0d986708740453f187fe3428c4634c9adccae928a5d0f3e8b57c4"
Feb 16 17:54:08.523406 master-0 kubenswrapper[4652]: I0216 17:54:08.523329 4652 scope.go:117] "RemoveContainer" containerID="658c9b050a86d087bb3f165bfc7fa711923a531e37c12b0731f5919b44523936"
Feb 16 17:54:08.565274 master-0 kubenswrapper[4652]: I0216 17:54:08.565222 4652 scope.go:117] "RemoveContainer" containerID="ee1c12beb7a58edbe0f602211c575efc6aaaab58bf36d615acb9b0a7cee901cd"
Feb 16 17:54:08.591407 master-0 kubenswrapper[4652]: I0216 17:54:08.591369 4652 scope.go:117] "RemoveContainer" containerID="dba8d483125c52c368490a4b7a3f33ff3333d8d2c9f967bcfb2939b4aaa6eb45"
Feb 16 17:54:08.613960 master-0 kubenswrapper[4652]: I0216 17:54:08.613917 4652 scope.go:117] "RemoveContainer" containerID="73c336da51a1da01bedbe205fe291028451df89ebbbe16806dcb57e0436fa393"
Feb 16 17:54:29.065667 master-0 kubenswrapper[4652]: I0216 17:54:29.065600 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-q4gq5"]
Feb 16 17:54:29.077219 master-0 kubenswrapper[4652]: I0216 17:54:29.077159 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-q4gq5"]
Feb 16 17:54:30.757958 master-0 kubenswrapper[4652]: I0216 17:54:30.757901 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="209b2a48-903a-46dd-abc2-902650a6384c" path="/var/lib/kubelet/pods/209b2a48-903a-46dd-abc2-902650a6384c/volumes"
Feb 16 17:54:33.043016 master-0 kubenswrapper[4652]: I0216 17:54:33.042907 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rmx4f"]
Feb 16 17:54:33.061225 master-0 kubenswrapper[4652]: I0216 17:54:33.061128 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rmx4f"]
Feb 16 17:54:34.769554 master-0 kubenswrapper[4652]: I0216 17:54:34.769499 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2966848-b02c-4fef-8d49-df6a97604e12" path="/var/lib/kubelet/pods/c2966848-b02c-4fef-8d49-df6a97604e12/volumes"
Feb 16 17:55:05.060600 master-0 kubenswrapper[4652]: I0216 17:55:05.060498 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-host-discover-jgglr"]
Feb 16 17:55:05.072155 master-0 kubenswrapper[4652]: I0216 17:55:05.072091 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-host-discover-jgglr"]
Feb 16 17:55:06.771952 master-0 kubenswrapper[4652]: I0216 17:55:06.771844 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d158ef-2e1c-4eff-be36-c0ab68bedebc" path="/var/lib/kubelet/pods/63d158ef-2e1c-4eff-be36-c0ab68bedebc/volumes"
Feb 16 17:55:07.036412 master-0 kubenswrapper[4652]: I0216 17:55:07.036149 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-9l2b8"]
Feb 16 17:55:07.048991 master-0 kubenswrapper[4652]: I0216 17:55:07.048924 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-9l2b8"]
Feb 16 17:55:08.762156 master-0 kubenswrapper[4652]: I0216 17:55:08.762091 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82015a7e-8945-4748-bb16-db5b284117a6" path="/var/lib/kubelet/pods/82015a7e-8945-4748-bb16-db5b284117a6/volumes"
Feb 16 17:55:08.812552 master-0 kubenswrapper[4652]: I0216 17:55:08.812466 4652 scope.go:117] "RemoveContainer" containerID="d962f888d0c71075e9b41847da7dd2c5d651a8e212db804a3e12ced2dcb8dbed"
Feb 16 17:55:08.853212 master-0 kubenswrapper[4652]: I0216 17:55:08.853151 4652 scope.go:117] "RemoveContainer" containerID="cb3c6b770042c08c46f635a9753e80636cc9bd837b33ccdbb08f7294aebc316f"
Feb 16 17:55:08.937686 master-0 kubenswrapper[4652]: I0216 17:55:08.937372 4652 scope.go:117] "RemoveContainer" containerID="e70c2c4c38426bb55d159ae45a6066ed66d3879926ac2a4ef8b0e71dae74848b"
Feb 16 17:55:08.996605 master-0 kubenswrapper[4652]: I0216 17:55:08.996532 4652 scope.go:117] "RemoveContainer" containerID="11cceeb018fa48ac3af539fb9856e4b2809d02dffb56c60cb56ce4813a6e6ac5"
Feb 16 17:58:21.752904 master-0 kubenswrapper[4652]: E0216 17:58:21.752802 4652 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:57106->192.168.32.10:38707: write tcp 192.168.32.10:57106->192.168.32.10:38707: write: broken pipe
Feb 16 18:00:00.285810 master-0 kubenswrapper[4652]: I0216 18:00:00.285747 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"]
Feb 16 18:00:00.287606 master-0 kubenswrapper[4652]: I0216 18:00:00.287416 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.295499 master-0 kubenswrapper[4652]: I0216 18:00:00.295451 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 18:00:00.295717 master-0 kubenswrapper[4652]: I0216 18:00:00.295681 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-4vsn8"
Feb 16 18:00:00.338500 master-0 kubenswrapper[4652]: I0216 18:00:00.333302 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"]
Feb 16 18:00:00.343544 master-0 kubenswrapper[4652]: I0216 18:00:00.343520 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtr6m\" (UniqueName: \"kubernetes.io/projected/6fe7e81a-597e-45f2-83d3-de8532fb855f-kube-api-access-vtr6m\") pod \"collect-profiles-29521080-cmp2n\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.343741 master-0 kubenswrapper[4652]: I0216 18:00:00.343726 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe7e81a-597e-45f2-83d3-de8532fb855f-config-volume\") pod \"collect-profiles-29521080-cmp2n\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.343880 master-0 kubenswrapper[4652]: I0216 18:00:00.343867 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fe7e81a-597e-45f2-83d3-de8532fb855f-secret-volume\") pod \"collect-profiles-29521080-cmp2n\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.446400 master-0 kubenswrapper[4652]: I0216 18:00:00.446334 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe7e81a-597e-45f2-83d3-de8532fb855f-config-volume\") pod \"collect-profiles-29521080-cmp2n\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.446716 master-0 kubenswrapper[4652]: I0216 18:00:00.446469 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fe7e81a-597e-45f2-83d3-de8532fb855f-secret-volume\") pod \"collect-profiles-29521080-cmp2n\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.446716 master-0 kubenswrapper[4652]: I0216 18:00:00.446625 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtr6m\" (UniqueName: \"kubernetes.io/projected/6fe7e81a-597e-45f2-83d3-de8532fb855f-kube-api-access-vtr6m\") pod \"collect-profiles-29521080-cmp2n\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.447203 master-0 kubenswrapper[4652]: I0216 18:00:00.447163 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe7e81a-597e-45f2-83d3-de8532fb855f-config-volume\") pod \"collect-profiles-29521080-cmp2n\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.454780 master-0 kubenswrapper[4652]: I0216 18:00:00.454748 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fe7e81a-597e-45f2-83d3-de8532fb855f-secret-volume\") pod \"collect-profiles-29521080-cmp2n\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.461611 master-0 kubenswrapper[4652]: I0216 18:00:00.461582 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtr6m\" (UniqueName: \"kubernetes.io/projected/6fe7e81a-597e-45f2-83d3-de8532fb855f-kube-api-access-vtr6m\") pod \"collect-profiles-29521080-cmp2n\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:00.672286 master-0 kubenswrapper[4652]: I0216 18:00:00.672108 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:01.166884 master-0 kubenswrapper[4652]: I0216 18:00:01.166809 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"]
Feb 16 18:00:01.169947 master-0 kubenswrapper[4652]: W0216 18:00:01.169895 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe7e81a_597e_45f2_83d3_de8532fb855f.slice/crio-347efa39fb07e377f8653f072717c87d1fbad2fd058daabc39a79a54e42ca893 WatchSource:0}: Error finding container 347efa39fb07e377f8653f072717c87d1fbad2fd058daabc39a79a54e42ca893: Status 404 returned error can't find the container with id 347efa39fb07e377f8653f072717c87d1fbad2fd058daabc39a79a54e42ca893
Feb 16 18:00:01.382086 master-0 kubenswrapper[4652]: I0216 18:00:01.381997 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n" event={"ID":"6fe7e81a-597e-45f2-83d3-de8532fb855f","Type":"ContainerStarted","Data":"ef578a928a92ba63671929fb4c84246dc4543f78a211dcc27979b14decb4e733"}
Feb 16 18:00:01.382086 master-0 kubenswrapper[4652]: I0216 18:00:01.382042 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n" event={"ID":"6fe7e81a-597e-45f2-83d3-de8532fb855f","Type":"ContainerStarted","Data":"347efa39fb07e377f8653f072717c87d1fbad2fd058daabc39a79a54e42ca893"}
Feb 16 18:00:01.408496 master-0 kubenswrapper[4652]: I0216 18:00:01.405385 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n" podStartSLOduration=1.405364919 podStartE2EDuration="1.405364919s" podCreationTimestamp="2026-02-16 18:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 18:00:01.401807215 +0000 UTC m=+2158.789975731" watchObservedRunningTime="2026-02-16 18:00:01.405364919 +0000 UTC m=+2158.793533445"
Feb 16 18:00:02.394609 master-0 kubenswrapper[4652]: I0216 18:00:02.394558 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n" event={"ID":"6fe7e81a-597e-45f2-83d3-de8532fb855f","Type":"ContainerDied","Data":"ef578a928a92ba63671929fb4c84246dc4543f78a211dcc27979b14decb4e733"}
Feb 16 18:00:02.395154 master-0 kubenswrapper[4652]: I0216 18:00:02.394483 4652 generic.go:334] "Generic (PLEG): container finished" podID="6fe7e81a-597e-45f2-83d3-de8532fb855f" containerID="ef578a928a92ba63671929fb4c84246dc4543f78a211dcc27979b14decb4e733" exitCode=0
Feb 16 18:00:03.840734 master-0 kubenswrapper[4652]: I0216 18:00:03.840698 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:03.939866 master-0 kubenswrapper[4652]: I0216 18:00:03.939731 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtr6m\" (UniqueName: \"kubernetes.io/projected/6fe7e81a-597e-45f2-83d3-de8532fb855f-kube-api-access-vtr6m\") pod \"6fe7e81a-597e-45f2-83d3-de8532fb855f\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") "
Feb 16 18:00:03.939866 master-0 kubenswrapper[4652]: I0216 18:00:03.939897 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe7e81a-597e-45f2-83d3-de8532fb855f-config-volume\") pod \"6fe7e81a-597e-45f2-83d3-de8532fb855f\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") "
Feb 16 18:00:03.940154 master-0 kubenswrapper[4652]: I0216 18:00:03.939938 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fe7e81a-597e-45f2-83d3-de8532fb855f-secret-volume\") pod \"6fe7e81a-597e-45f2-83d3-de8532fb855f\" (UID: \"6fe7e81a-597e-45f2-83d3-de8532fb855f\") "
Feb 16 18:00:03.940503 master-0 kubenswrapper[4652]: I0216 18:00:03.940454 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fe7e81a-597e-45f2-83d3-de8532fb855f-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fe7e81a-597e-45f2-83d3-de8532fb855f" (UID: "6fe7e81a-597e-45f2-83d3-de8532fb855f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 18:00:03.941167 master-0 kubenswrapper[4652]: I0216 18:00:03.941138 4652 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fe7e81a-597e-45f2-83d3-de8532fb855f-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 18:00:03.970950 master-0 kubenswrapper[4652]: I0216 18:00:03.970889 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fe7e81a-597e-45f2-83d3-de8532fb855f-kube-api-access-vtr6m" (OuterVolumeSpecName: "kube-api-access-vtr6m") pod "6fe7e81a-597e-45f2-83d3-de8532fb855f" (UID: "6fe7e81a-597e-45f2-83d3-de8532fb855f"). InnerVolumeSpecName "kube-api-access-vtr6m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:00:03.972744 master-0 kubenswrapper[4652]: I0216 18:00:03.972679 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fe7e81a-597e-45f2-83d3-de8532fb855f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6fe7e81a-597e-45f2-83d3-de8532fb855f" (UID: "6fe7e81a-597e-45f2-83d3-de8532fb855f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:00:04.042502 master-0 kubenswrapper[4652]: I0216 18:00:04.042391 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtr6m\" (UniqueName: \"kubernetes.io/projected/6fe7e81a-597e-45f2-83d3-de8532fb855f-kube-api-access-vtr6m\") on node \"master-0\" DevicePath \"\""
Feb 16 18:00:04.042502 master-0 kubenswrapper[4652]: I0216 18:00:04.042428 4652 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fe7e81a-597e-45f2-83d3-de8532fb855f-secret-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 18:00:04.430377 master-0 kubenswrapper[4652]: I0216 18:00:04.428942 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n" event={"ID":"6fe7e81a-597e-45f2-83d3-de8532fb855f","Type":"ContainerDied","Data":"347efa39fb07e377f8653f072717c87d1fbad2fd058daabc39a79a54e42ca893"}
Feb 16 18:00:04.430377 master-0 kubenswrapper[4652]: I0216 18:00:04.428998 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347efa39fb07e377f8653f072717c87d1fbad2fd058daabc39a79a54e42ca893"
Feb 16 18:00:04.430377 master-0 kubenswrapper[4652]: I0216 18:00:04.429068 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521080-cmp2n"
Feb 16 18:00:04.523978 master-0 kubenswrapper[4652]: I0216 18:00:04.523912 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r"]
Feb 16 18:00:04.537038 master-0 kubenswrapper[4652]: I0216 18:00:04.536963 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521035-zdh6r"]
Feb 16 18:00:04.759878 master-0 kubenswrapper[4652]: I0216 18:00:04.759753 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f18a5c41-3a62-4c14-88f5-cc9c09e81d38" path="/var/lib/kubelet/pods/f18a5c41-3a62-4c14-88f5-cc9c09e81d38/volumes"
Feb 16 18:00:09.290598 master-0 kubenswrapper[4652]: E0216 18:00:09.290509 4652 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f30fc4dc43f5bff7934dfed48168650156eb7932764e6f71956e1dd3d5d703e\": container with ID starting with 7f30fc4dc43f5bff7934dfed48168650156eb7932764e6f71956e1dd3d5d703e not found: ID does not exist" containerID="7f30fc4dc43f5bff7934dfed48168650156eb7932764e6f71956e1dd3d5d703e"
Feb 16 18:00:09.290598 master-0 kubenswrapper[4652]: I0216 18:00:09.290566 4652 kuberuntime_gc.go:361] "Error getting ContainerStatus for containerID" containerID="7f30fc4dc43f5bff7934dfed48168650156eb7932764e6f71956e1dd3d5d703e" err="rpc error: code = NotFound desc = could not find container \"7f30fc4dc43f5bff7934dfed48168650156eb7932764e6f71956e1dd3d5d703e\": container with ID starting with 7f30fc4dc43f5bff7934dfed48168650156eb7932764e6f71956e1dd3d5d703e not found: ID does not exist"
Feb 16 18:01:00.207395 master-0 kubenswrapper[4652]: I0216 18:01:00.207302 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29521081-cj8hg"]
Feb 16 18:01:00.208385 master-0 kubenswrapper[4652]: E0216 18:01:00.207943 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fe7e81a-597e-45f2-83d3-de8532fb855f" containerName="collect-profiles"
Feb 16 18:01:00.208385 master-0 kubenswrapper[4652]: I0216 18:01:00.207963 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fe7e81a-597e-45f2-83d3-de8532fb855f" containerName="collect-profiles"
Feb 16 18:01:00.208385 master-0 kubenswrapper[4652]: I0216 18:01:00.208292 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fe7e81a-597e-45f2-83d3-de8532fb855f" containerName="collect-profiles"
Feb 16 18:01:00.209316 master-0 kubenswrapper[4652]: I0216 18:01:00.209272 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.224367 master-0 kubenswrapper[4652]: I0216 18:01:00.224275 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521081-cj8hg"]
Feb 16 18:01:00.341276 master-0 kubenswrapper[4652]: I0216 18:01:00.339623 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-combined-ca-bundle\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.341276 master-0 kubenswrapper[4652]: I0216 18:01:00.339786 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8qqg\" (UniqueName: \"kubernetes.io/projected/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-kube-api-access-f8qqg\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.341276 master-0 kubenswrapper[4652]: I0216 18:01:00.339814 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-fernet-keys\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.341276 master-0 kubenswrapper[4652]: I0216 18:01:00.339848 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-config-data\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.444692 master-0 kubenswrapper[4652]: I0216 18:01:00.444605 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-combined-ca-bundle\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.444991 master-0 kubenswrapper[4652]: I0216 18:01:00.444718 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8qqg\" (UniqueName: \"kubernetes.io/projected/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-kube-api-access-f8qqg\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.444991 master-0 kubenswrapper[4652]: I0216 18:01:00.444738 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-fernet-keys\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.444991 master-0 kubenswrapper[4652]: I0216 18:01:00.444761 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-config-data\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.455272 master-0 kubenswrapper[4652]: I0216 18:01:00.449639 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-combined-ca-bundle\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.469284 master-0 kubenswrapper[4652]: I0216 18:01:00.469134 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-config-data\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.469833 master-0 kubenswrapper[4652]: I0216 18:01:00.469739 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-fernet-keys\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.484227 master-0 kubenswrapper[4652]: I0216 18:01:00.484180 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8qqg\" (UniqueName: \"kubernetes.io/projected/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-kube-api-access-f8qqg\") pod \"keystone-cron-29521081-cj8hg\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") " pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:00.549328 master-0 kubenswrapper[4652]: I0216 18:01:00.549260 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:01.013582 master-0 kubenswrapper[4652]: I0216 18:01:01.013512 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29521081-cj8hg"]
Feb 16 18:01:01.020764 master-0 kubenswrapper[4652]: W0216 18:01:01.020670 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf61f9cab_fe10_4829_bf14_3f7ddafa53ef.slice/crio-2892cd3833d926eb587a62b56564d755f9885da03f5f477b641b6e087d0df6a9 WatchSource:0}: Error finding container 2892cd3833d926eb587a62b56564d755f9885da03f5f477b641b6e087d0df6a9: Status 404 returned error can't find the container with id 2892cd3833d926eb587a62b56564d755f9885da03f5f477b641b6e087d0df6a9
Feb 16 18:01:01.117942 master-0 kubenswrapper[4652]: I0216 18:01:01.117893 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521081-cj8hg" event={"ID":"f61f9cab-fe10-4829-bf14-3f7ddafa53ef","Type":"ContainerStarted","Data":"2892cd3833d926eb587a62b56564d755f9885da03f5f477b641b6e087d0df6a9"}
Feb 16 18:01:02.137093 master-0 kubenswrapper[4652]: I0216 18:01:02.136994 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521081-cj8hg" event={"ID":"f61f9cab-fe10-4829-bf14-3f7ddafa53ef","Type":"ContainerStarted","Data":"5430ed812554e9204610f27d8417f322a485cf4480be95caee4d53eb2f0b36f7"}
Feb 16 18:01:02.159995 master-0 kubenswrapper[4652]: I0216 18:01:02.159861 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29521081-cj8hg" podStartSLOduration=2.159754479 podStartE2EDuration="2.159754479s" podCreationTimestamp="2026-02-16 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 18:01:02.156136123 +0000 UTC m=+2219.544304669" watchObservedRunningTime="2026-02-16 18:01:02.159754479 +0000 UTC m=+2219.547923035"
Feb 16 18:01:04.157431 master-0 kubenswrapper[4652]: I0216 18:01:04.157372 4652 generic.go:334] "Generic (PLEG): container finished" podID="f61f9cab-fe10-4829-bf14-3f7ddafa53ef" containerID="5430ed812554e9204610f27d8417f322a485cf4480be95caee4d53eb2f0b36f7" exitCode=0
Feb 16 18:01:04.157431 master-0 kubenswrapper[4652]: I0216 18:01:04.157421 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521081-cj8hg" event={"ID":"f61f9cab-fe10-4829-bf14-3f7ddafa53ef","Type":"ContainerDied","Data":"5430ed812554e9204610f27d8417f322a485cf4480be95caee4d53eb2f0b36f7"}
Feb 16 18:01:05.644553 master-0 kubenswrapper[4652]: I0216 18:01:05.644499 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:05.790839 master-0 kubenswrapper[4652]: I0216 18:01:05.790669 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-fernet-keys\") pod \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") "
Feb 16 18:01:05.790839 master-0 kubenswrapper[4652]: I0216 18:01:05.790717 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-config-data\") pod \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") "
Feb 16 18:01:05.790839 master-0 kubenswrapper[4652]: I0216 18:01:05.790740 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8qqg\" (UniqueName: \"kubernetes.io/projected/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-kube-api-access-f8qqg\") pod \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") "
Feb 16 18:01:05.791304 master-0 kubenswrapper[4652]: I0216 18:01:05.790950 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-combined-ca-bundle\") pod \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\" (UID: \"f61f9cab-fe10-4829-bf14-3f7ddafa53ef\") "
Feb 16 18:01:05.793720 master-0 kubenswrapper[4652]: I0216 18:01:05.793669 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-kube-api-access-f8qqg" (OuterVolumeSpecName: "kube-api-access-f8qqg") pod "f61f9cab-fe10-4829-bf14-3f7ddafa53ef" (UID: "f61f9cab-fe10-4829-bf14-3f7ddafa53ef"). InnerVolumeSpecName "kube-api-access-f8qqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:01:05.794082 master-0 kubenswrapper[4652]: I0216 18:01:05.794042 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f61f9cab-fe10-4829-bf14-3f7ddafa53ef" (UID: "f61f9cab-fe10-4829-bf14-3f7ddafa53ef"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:01:05.818744 master-0 kubenswrapper[4652]: I0216 18:01:05.818673 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f61f9cab-fe10-4829-bf14-3f7ddafa53ef" (UID: "f61f9cab-fe10-4829-bf14-3f7ddafa53ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:01:05.842850 master-0 kubenswrapper[4652]: I0216 18:01:05.842776 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-config-data" (OuterVolumeSpecName: "config-data") pod "f61f9cab-fe10-4829-bf14-3f7ddafa53ef" (UID: "f61f9cab-fe10-4829-bf14-3f7ddafa53ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 18:01:05.895205 master-0 kubenswrapper[4652]: I0216 18:01:05.895143 4652 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-config-data\") on node \"master-0\" DevicePath \"\""
Feb 16 18:01:05.895205 master-0 kubenswrapper[4652]: I0216 18:01:05.895181 4652 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-fernet-keys\") on node \"master-0\" DevicePath \"\""
Feb 16 18:01:05.895205 master-0 kubenswrapper[4652]: I0216 18:01:05.895196 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8qqg\" (UniqueName: \"kubernetes.io/projected/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-kube-api-access-f8qqg\") on node \"master-0\" DevicePath \"\""
Feb 16 18:01:05.895205 master-0 kubenswrapper[4652]: I0216 18:01:05.895209 4652 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61f9cab-fe10-4829-bf14-3f7ddafa53ef-combined-ca-bundle\") on node \"master-0\" DevicePath \"\""
Feb 16 18:01:06.197223 master-0 kubenswrapper[4652]: I0216 18:01:06.197145 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29521081-cj8hg" event={"ID":"f61f9cab-fe10-4829-bf14-3f7ddafa53ef","Type":"ContainerDied","Data":"2892cd3833d926eb587a62b56564d755f9885da03f5f477b641b6e087d0df6a9"}
Feb 16 18:01:06.197223 master-0 kubenswrapper[4652]: I0216 18:01:06.197182 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29521081-cj8hg"
Feb 16 18:01:06.197223 master-0 kubenswrapper[4652]: I0216 18:01:06.197193 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2892cd3833d926eb587a62b56564d755f9885da03f5f477b641b6e087d0df6a9"
Feb 16 18:09:50.902210 master-0 kubenswrapper[4652]: E0216 18:09:50.902121 4652 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.32.10:33296->192.168.32.10:38707: read tcp 192.168.32.10:33296->192.168.32.10:38707: read: connection reset by peer
Feb 16 18:10:38.805023 master-0 kubenswrapper[4652]: E0216 18:10:38.804961 4652 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.32.10:34492->192.168.32.10:38707: write tcp 192.168.32.10:34492->192.168.32.10:38707: write: connection reset by peer
Feb 16 18:15:00.208419 master-0 kubenswrapper[4652]: I0216 18:15:00.208347 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"]
Feb 16 18:15:00.209191 master-0 kubenswrapper[4652]: E0216 18:15:00.208897 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61f9cab-fe10-4829-bf14-3f7ddafa53ef" containerName="keystone-cron"
Feb 16 18:15:00.209191 master-0 kubenswrapper[4652]: I0216 18:15:00.208914 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61f9cab-fe10-4829-bf14-3f7ddafa53ef" containerName="keystone-cron"
Feb 16 18:15:00.209191 master-0 kubenswrapper[4652]: I0216 18:15:00.209116 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61f9cab-fe10-4829-bf14-3f7ddafa53ef" containerName="keystone-cron"
Feb 16 18:15:00.209912 master-0 kubenswrapper[4652]: I0216 18:15:00.209887 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.214190 master-0 kubenswrapper[4652]: I0216 18:15:00.214144 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 18:15:00.216061 master-0 kubenswrapper[4652]: I0216 18:15:00.215987 4652 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-4vsn8"
Feb 16 18:15:00.224454 master-0 kubenswrapper[4652]: I0216 18:15:00.224381 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"]
Feb 16 18:15:00.356130 master-0 kubenswrapper[4652]: I0216 18:15:00.356065 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-config-volume\") pod \"collect-profiles-29521095-r4m7r\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.356130 master-0 kubenswrapper[4652]: I0216 18:15:00.356123 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkspx\" (UniqueName: \"kubernetes.io/projected/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-kube-api-access-vkspx\") pod \"collect-profiles-29521095-r4m7r\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.356535 master-0 kubenswrapper[4652]: I0216 18:15:00.356487 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-secret-volume\") pod \"collect-profiles-29521095-r4m7r\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.459063 master-0 kubenswrapper[4652]: I0216 18:15:00.458922 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkspx\" (UniqueName: \"kubernetes.io/projected/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-kube-api-access-vkspx\") pod \"collect-profiles-29521095-r4m7r\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.459063 master-0 kubenswrapper[4652]: I0216 18:15:00.458985 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-config-volume\") pod \"collect-profiles-29521095-r4m7r\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.459355 master-0 kubenswrapper[4652]: I0216 18:15:00.459083 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-secret-volume\") pod \"collect-profiles-29521095-r4m7r\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.460862 master-0 kubenswrapper[4652]: I0216 18:15:00.460802 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-config-volume\") pod \"collect-profiles-29521095-r4m7r\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.464444 master-0 kubenswrapper[4652]: I0216 18:15:00.464383 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-secret-volume\") pod \"collect-profiles-29521095-r4m7r\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.474385 master-0 kubenswrapper[4652]: I0216 18:15:00.474332 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkspx\" (UniqueName: \"kubernetes.io/projected/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-kube-api-access-vkspx\") pod \"collect-profiles-29521095-r4m7r\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:00.532865 master-0 kubenswrapper[4652]: I0216 18:15:00.532788 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:01.087645 master-0 kubenswrapper[4652]: I0216 18:15:01.087586 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"]
Feb 16 18:15:01.898571 master-0 kubenswrapper[4652]: I0216 18:15:01.898422 4652 generic.go:334] "Generic (PLEG): container finished" podID="ece3baf4-c8f2-45ef-b9d7-b1992a217c9b" containerID="190b776ffb72fabc2dfe46dfed8230e8c8dffb6653ebd088e68686aac5572a01" exitCode=0
Feb 16 18:15:01.898571 master-0 kubenswrapper[4652]: I0216 18:15:01.898487 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r" event={"ID":"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b","Type":"ContainerDied","Data":"190b776ffb72fabc2dfe46dfed8230e8c8dffb6653ebd088e68686aac5572a01"}
Feb 16 18:15:01.898571 master-0 kubenswrapper[4652]: I0216 18:15:01.898527 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r" event={"ID":"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b","Type":"ContainerStarted","Data":"200fcdecbe0c0333226bd75578c72c10a57cf8c01cc43fe4d7464fe4f16c1137"}
Feb 16 18:15:03.381487 master-0 kubenswrapper[4652]: I0216 18:15:03.381419 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r"
Feb 16 18:15:03.440760 master-0 kubenswrapper[4652]: I0216 18:15:03.440709 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-config-volume\") pod \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") "
Feb 16 18:15:03.440967 master-0 kubenswrapper[4652]: I0216 18:15:03.440890 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-secret-volume\") pod \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") "
Feb 16 18:15:03.441005 master-0 kubenswrapper[4652]: I0216 18:15:03.440981 4652 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkspx\" (UniqueName: \"kubernetes.io/projected/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-kube-api-access-vkspx\") pod \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\" (UID: \"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b\") "
Feb 16 18:15:03.441654 master-0 kubenswrapper[4652]: I0216 18:15:03.441592 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-config-volume" (OuterVolumeSpecName: "config-volume") pod "ece3baf4-c8f2-45ef-b9d7-b1992a217c9b" (UID: "ece3baf4-c8f2-45ef-b9d7-b1992a217c9b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 18:15:03.442064 master-0 kubenswrapper[4652]: I0216 18:15:03.442028 4652 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-config-volume\") on node \"master-0\" DevicePath \"\""
Feb 16 18:15:03.443915 master-0 kubenswrapper[4652]: I0216 18:15:03.443876 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-kube-api-access-vkspx" (OuterVolumeSpecName: "kube-api-access-vkspx") pod "ece3baf4-c8f2-45ef-b9d7-b1992a217c9b" (UID: "ece3baf4-c8f2-45ef-b9d7-b1992a217c9b"). InnerVolumeSpecName "kube-api-access-vkspx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 18:15:03.444487 master-0 kubenswrapper[4652]: I0216 18:15:03.444456 4652 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ece3baf4-c8f2-45ef-b9d7-b1992a217c9b" (UID: "ece3baf4-c8f2-45ef-b9d7-b1992a217c9b"). InnerVolumeSpecName "secret-volume".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 18:15:03.544661 master-0 kubenswrapper[4652]: I0216 18:15:03.544483 4652 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-secret-volume\") on node \"master-0\" DevicePath \"\"" Feb 16 18:15:03.544661 master-0 kubenswrapper[4652]: I0216 18:15:03.544550 4652 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkspx\" (UniqueName: \"kubernetes.io/projected/ece3baf4-c8f2-45ef-b9d7-b1992a217c9b-kube-api-access-vkspx\") on node \"master-0\" DevicePath \"\"" Feb 16 18:15:03.931843 master-0 kubenswrapper[4652]: I0216 18:15:03.931775 4652 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r" Feb 16 18:15:03.932428 master-0 kubenswrapper[4652]: I0216 18:15:03.932319 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29521095-r4m7r" event={"ID":"ece3baf4-c8f2-45ef-b9d7-b1992a217c9b","Type":"ContainerDied","Data":"200fcdecbe0c0333226bd75578c72c10a57cf8c01cc43fe4d7464fe4f16c1137"} Feb 16 18:15:03.932573 master-0 kubenswrapper[4652]: I0216 18:15:03.932444 4652 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="200fcdecbe0c0333226bd75578c72c10a57cf8c01cc43fe4d7464fe4f16c1137" Feb 16 18:15:04.485886 master-0 kubenswrapper[4652]: I0216 18:15:04.485808 4652 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk"] Feb 16 18:15:04.501769 master-0 kubenswrapper[4652]: I0216 18:15:04.501684 4652 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29521050-dpzjk"] Feb 16 18:15:04.766806 master-0 kubenswrapper[4652]: I0216 18:15:04.766668 4652 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42c6e30f-bdad-470f-b310-f1c4ad117dc9" path="/var/lib/kubelet/pods/42c6e30f-bdad-470f-b310-f1c4ad117dc9/volumes" Feb 16 18:15:09.779716 master-0 kubenswrapper[4652]: I0216 18:15:09.779652 4652 scope.go:117] "RemoveContainer" containerID="00e536e5ac15c166f17381f2d12f1b92c8875c94578bdd07182180b4c3006573" Feb 16 18:20:46.170220 master-0 kubenswrapper[4652]: I0216 18:20:46.169742 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ldpdw/must-gather-dg46c"] Feb 16 18:20:46.170994 master-0 kubenswrapper[4652]: E0216 18:20:46.170549 4652 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece3baf4-c8f2-45ef-b9d7-b1992a217c9b" containerName="collect-profiles" Feb 16 18:20:46.170994 master-0 kubenswrapper[4652]: I0216 18:20:46.170576 4652 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece3baf4-c8f2-45ef-b9d7-b1992a217c9b" containerName="collect-profiles" Feb 16 18:20:46.170994 master-0 kubenswrapper[4652]: I0216 18:20:46.170929 4652 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece3baf4-c8f2-45ef-b9d7-b1992a217c9b" containerName="collect-profiles" Feb 16 18:20:46.172928 master-0 kubenswrapper[4652]: I0216 18:20:46.172877 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldpdw/must-gather-dg46c" Feb 16 18:20:46.179714 master-0 kubenswrapper[4652]: I0216 18:20:46.178319 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ldpdw"/"openshift-service-ca.crt" Feb 16 18:20:46.179714 master-0 kubenswrapper[4652]: I0216 18:20:46.178670 4652 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ldpdw"/"kube-root-ca.crt" Feb 16 18:20:46.179714 master-0 kubenswrapper[4652]: I0216 18:20:46.179087 4652 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ldpdw/must-gather-l7p2h"] Feb 16 18:20:46.182856 master-0 kubenswrapper[4652]: I0216 18:20:46.182668 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldpdw/must-gather-l7p2h" Feb 16 18:20:46.193965 master-0 kubenswrapper[4652]: I0216 18:20:46.193910 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ldpdw/must-gather-l7p2h"] Feb 16 18:20:46.204383 master-0 kubenswrapper[4652]: I0216 18:20:46.204332 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ldpdw/must-gather-dg46c"] Feb 16 18:20:46.310836 master-0 kubenswrapper[4652]: I0216 18:20:46.310763 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4b296be-58f8-4c3e-b828-41f8d1fd7a43-must-gather-output\") pod \"must-gather-dg46c\" (UID: \"c4b296be-58f8-4c3e-b828-41f8d1fd7a43\") " pod="openshift-must-gather-ldpdw/must-gather-dg46c" Feb 16 18:20:46.310836 master-0 kubenswrapper[4652]: I0216 18:20:46.310815 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kf6h\" (UniqueName: \"kubernetes.io/projected/c4b296be-58f8-4c3e-b828-41f8d1fd7a43-kube-api-access-2kf6h\") pod \"must-gather-dg46c\" (UID: \"c4b296be-58f8-4c3e-b828-41f8d1fd7a43\") " pod="openshift-must-gather-ldpdw/must-gather-dg46c" Feb 16 18:20:46.311140 master-0 kubenswrapper[4652]: I0216 18:20:46.310956 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc45m\" (UniqueName: \"kubernetes.io/projected/2bd7d940-c99c-4dfc-afd2-e20fa799a066-kube-api-access-cc45m\") pod \"must-gather-l7p2h\" (UID: \"2bd7d940-c99c-4dfc-afd2-e20fa799a066\") " pod="openshift-must-gather-ldpdw/must-gather-l7p2h" Feb 16 18:20:46.311140 master-0 kubenswrapper[4652]: I0216 18:20:46.311071 4652 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2bd7d940-c99c-4dfc-afd2-e20fa799a066-must-gather-output\") pod \"must-gather-l7p2h\" (UID: \"2bd7d940-c99c-4dfc-afd2-e20fa799a066\") " pod="openshift-must-gather-ldpdw/must-gather-l7p2h" Feb 16 18:20:46.413718 master-0 kubenswrapper[4652]: I0216 18:20:46.413665 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc45m\" (UniqueName: \"kubernetes.io/projected/2bd7d940-c99c-4dfc-afd2-e20fa799a066-kube-api-access-cc45m\") pod \"must-gather-l7p2h\" (UID: \"2bd7d940-c99c-4dfc-afd2-e20fa799a066\") " pod="openshift-must-gather-ldpdw/must-gather-l7p2h" Feb 16 18:20:46.413954 master-0 kubenswrapper[4652]: I0216 18:20:46.413820 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/2bd7d940-c99c-4dfc-afd2-e20fa799a066-must-gather-output\") pod \"must-gather-l7p2h\" (UID: \"2bd7d940-c99c-4dfc-afd2-e20fa799a066\") " pod="openshift-must-gather-ldpdw/must-gather-l7p2h" Feb 16 18:20:46.413954 master-0 kubenswrapper[4652]: I0216 18:20:46.413868 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4b296be-58f8-4c3e-b828-41f8d1fd7a43-must-gather-output\") pod \"must-gather-dg46c\" (UID: \"c4b296be-58f8-4c3e-b828-41f8d1fd7a43\") " pod="openshift-must-gather-ldpdw/must-gather-dg46c" Feb 16 18:20:46.413954 master-0 kubenswrapper[4652]: I0216 18:20:46.413898 4652 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kf6h\" (UniqueName: \"kubernetes.io/projected/c4b296be-58f8-4c3e-b828-41f8d1fd7a43-kube-api-access-2kf6h\") pod \"must-gather-dg46c\" (UID: \"c4b296be-58f8-4c3e-b828-41f8d1fd7a43\") " pod="openshift-must-gather-ldpdw/must-gather-dg46c" Feb 16 18:20:46.414363 master-0 kubenswrapper[4652]: I0216 18:20:46.414344 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/2bd7d940-c99c-4dfc-afd2-e20fa799a066-must-gather-output\") pod \"must-gather-l7p2h\" (UID: \"2bd7d940-c99c-4dfc-afd2-e20fa799a066\") " pod="openshift-must-gather-ldpdw/must-gather-l7p2h" Feb 16 18:20:46.414456 master-0 kubenswrapper[4652]: I0216 18:20:46.414422 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c4b296be-58f8-4c3e-b828-41f8d1fd7a43-must-gather-output\") pod \"must-gather-dg46c\" (UID: \"c4b296be-58f8-4c3e-b828-41f8d1fd7a43\") " pod="openshift-must-gather-ldpdw/must-gather-dg46c" Feb 16 18:20:46.430189 master-0 kubenswrapper[4652]: I0216 18:20:46.430044 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc45m\" (UniqueName: \"kubernetes.io/projected/2bd7d940-c99c-4dfc-afd2-e20fa799a066-kube-api-access-cc45m\") pod \"must-gather-l7p2h\" (UID: \"2bd7d940-c99c-4dfc-afd2-e20fa799a066\") " pod="openshift-must-gather-ldpdw/must-gather-l7p2h" Feb 16 18:20:46.432152 master-0 kubenswrapper[4652]: I0216 18:20:46.432106 4652 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kf6h\" (UniqueName: \"kubernetes.io/projected/c4b296be-58f8-4c3e-b828-41f8d1fd7a43-kube-api-access-2kf6h\") pod \"must-gather-dg46c\" (UID: \"c4b296be-58f8-4c3e-b828-41f8d1fd7a43\") " pod="openshift-must-gather-ldpdw/must-gather-dg46c" Feb 16 18:20:46.504517 master-0 kubenswrapper[4652]: I0216 18:20:46.504445 4652 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ldpdw/must-gather-dg46c" Feb 16 18:20:46.527270 master-0 kubenswrapper[4652]: I0216 18:20:46.527204 4652 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ldpdw/must-gather-l7p2h" Feb 16 18:20:47.003450 master-0 kubenswrapper[4652]: I0216 18:20:47.003292 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ldpdw/must-gather-dg46c"] Feb 16 18:20:47.018207 master-0 kubenswrapper[4652]: I0216 18:20:47.018184 4652 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 18:20:47.197152 master-0 kubenswrapper[4652]: I0216 18:20:47.197054 4652 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ldpdw/must-gather-l7p2h"] Feb 16 18:20:47.201727 master-0 kubenswrapper[4652]: W0216 18:20:47.201642 4652 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bd7d940_c99c_4dfc_afd2_e20fa799a066.slice/crio-20ed566f56876c24d94a0572a75a825e4748e3a00785b0e594b29832caf06bb6 WatchSource:0}: Error finding container 20ed566f56876c24d94a0572a75a825e4748e3a00785b0e594b29832caf06bb6: Status 404 returned error can't find the container with id 20ed566f56876c24d94a0572a75a825e4748e3a00785b0e594b29832caf06bb6 Feb 16 18:20:47.552588 master-0 kubenswrapper[4652]: I0216 18:20:47.552498 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldpdw/must-gather-dg46c" event={"ID":"c4b296be-58f8-4c3e-b828-41f8d1fd7a43","Type":"ContainerStarted","Data":"c26e0da823ce030db7e36b4d1a3aa27571ad359166c9abc2efea977acf28352e"} Feb 16 18:20:47.554741 master-0 kubenswrapper[4652]: I0216 18:20:47.554691 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldpdw/must-gather-l7p2h" event={"ID":"2bd7d940-c99c-4dfc-afd2-e20fa799a066","Type":"ContainerStarted","Data":"20ed566f56876c24d94a0572a75a825e4748e3a00785b0e594b29832caf06bb6"} Feb 16 18:20:48.570731 master-0 kubenswrapper[4652]: I0216 18:20:48.570663 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldpdw/must-gather-l7p2h" event={"ID":"2bd7d940-c99c-4dfc-afd2-e20fa799a066","Type":"ContainerStarted","Data":"37ece8acdf8fe401360c88c54898a44d11898c7094d905a3a2ae0dda52211828"} Feb 16 18:20:48.927274 master-0 kubenswrapper[4652]: I0216 18:20:48.927220 4652 ???:1] "http: TLS handshake error from 192.168.32.10:39134: EOF" Feb 16 18:20:49.584620 master-0 kubenswrapper[4652]: I0216 18:20:49.583504 4652 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ldpdw/must-gather-l7p2h" event={"ID":"2bd7d940-c99c-4dfc-afd2-e20fa799a066","Type":"ContainerStarted","Data":"8f316beacc18a4145d659ea8ff24f0ca6714b0a2be3ba7ad12c3cce926a8112a"} Feb 16 18:20:49.624271 master-0 kubenswrapper[4652]: I0216 18:20:49.622384 4652 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ldpdw/must-gather-l7p2h" podStartSLOduration=2.646660299 podStartE2EDuration="3.622364934s" podCreationTimestamp="2026-02-16 18:20:46 +0000 UTC" firstStartedPulling="2026-02-16 18:20:47.203942639 +0000 UTC m=+3404.592111165" lastFinishedPulling="2026-02-16 18:20:48.179647274 +0000 UTC m=+3405.567815800" observedRunningTime="2026-02-16 18:20:49.615062301 +0000 UTC m=+3407.003230817" watchObservedRunningTime="2026-02-16 18:20:49.622364934 +0000 UTC m=+3407.010533450" Feb 16 18:20:49.810974 master-0 kubenswrapper[4652]: I0216 18:20:49.810918 4652 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-version_cluster-version-operator-649c4f5445-vt6wb_b6ad958f-25e4-40cb-89ec-5da9cb6395c7/cluster-version-operator/6.log" Feb 16 18:20:54.106765 master-0 kubenswrapper[4652]: I0216 18:20:54.106657 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-l25gm_723775ad-ae81-4016-b1df-4cb8d44df7fa/nmstate-console-plugin/0.log" Feb 16 18:20:54.176483 master-0 kubenswrapper[4652]: I0216 18:20:54.175087 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-pwpz5_3e3aaef8-af2b-403e-b884-e9052dc6642a/nmstate-handler/0.log" Feb 16 18:20:54.195873 master-0 kubenswrapper[4652]: I0216 18:20:54.195614 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-th2nx_514984df-7910-433f-ad1e-b5761b23473f/controller/0.log" Feb 16 18:20:54.217424 master-0 kubenswrapper[4652]: I0216 18:20:54.213205 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-xsplv_e6ac7b0a-388f-45dc-b367-4067ea181a77/nmstate-metrics/0.log" Feb 16 18:20:54.217424 master-0 kubenswrapper[4652]: I0216 18:20:54.213405 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-th2nx_514984df-7910-433f-ad1e-b5761b23473f/kube-rbac-proxy/0.log" Feb 16 18:20:54.224659 master-0 kubenswrapper[4652]: I0216 18:20:54.224612 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-xsplv_e6ac7b0a-388f-45dc-b367-4067ea181a77/kube-rbac-proxy/0.log" Feb 16 18:20:54.263286 master-0 kubenswrapper[4652]: I0216 18:20:54.258394 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/etcdctl/4.log" Feb 16 18:20:54.263286 master-0 kubenswrapper[4652]: I0216 18:20:54.261053 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tldzg_e6c3fe44-4380-4dbc-8e61-6f85a1820c82/controller/0.log" Feb 16 18:20:54.263286 master-0 kubenswrapper[4652]: I0216 18:20:54.261222 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-wgbrw_a49613eb-7dcb-4f5d-9f77-eb36f7929112/nmstate-operator/0.log" Feb 16 18:20:54.296310 master-0 kubenswrapper[4652]: I0216 18:20:54.293835 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-jhjp9_847810b1-5d52-414e-8c6e-46bfca98393a/nmstate-webhook/0.log" Feb 16 18:20:54.667237 master-0 kubenswrapper[4652]: I0216 18:20:54.667133 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/etcd/2.log" Feb 16 18:20:54.700473 master-0 kubenswrapper[4652]: I0216 18:20:54.700446 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/etcd-metrics/2.log" Feb 16 18:20:54.746817 master-0 kubenswrapper[4652]: I0216 18:20:54.746764 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/etcd-readyz/2.log" Feb 16 18:20:54.774974 master-0 kubenswrapper[4652]: I0216 18:20:54.774121 4652 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/etcd-rev/2.log" Feb 16 18:20:54.802201 master-0 kubenswrapper[4652]: I0216 18:20:54.802153 4652 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_7adecad495595c43c57c30abd350e987/setup/2.log"